Researchers at The Ohio State Wexner Medical Center are testing Microsoft's Dragon Ambient eXperience (DAX) Copilot application, which uses conversational, ambient, and generative AI to create clinical notes that are entered into the patient's electronic medical record.
Artificial intelligence (AI) permeates everyday life, from social media algorithms to smart home appliances. According to a recent nationwide survey conducted on behalf of The Ohio State University Wexner Medical Center, the majority of Americans, with some exceptions, believe AI could also improve healthcare.
The national poll of 1,006 people found:
- 75% of respondents think using AI to reduce human error is critical.
- 71% of people want AI to shorten wait times.
- 70% of people feel at ease with AI taking notes during meetings.
- 66% think AI should help healthcare professionals balance their work and personal lives better.
To address a few of these priorities, The Ohio State Wexner Medical Center is testing the Microsoft Dragon Ambient eXperience (DAX) Copilot application. It securely listens to a provider-patient visit and uses conversational, ambient, and generative AI to draft clinical notes, which are then entered into the patient's electronic medical record. Instead of typing notes during the visit, the doctor can concentrate on the patient and review and edit the notes afterward.
The Ohio State Wexner Medical Center's Chief Health Information Officer, Ravi Tripathi, MD, oversaw the pilot program. Between mid-January and mid-March, around 24 doctors and advanced practice providers in cardiology, obstetrics, gynecology, and primary care tested the technology during outpatient clinic visits. After obtaining the patient's consent, the physician uses the AI application to record the visit. Within a minute of the visit ending, the notes are organized and ready for review.
We found it saved up to four minutes per visit. That's time the physician can use to connect with the patient, provide education, and make sure they understand the plan going forward. A few clinicians preferred their old workflow but, overall, 80% completed the pilot. In fact, we allowed them to keep using the AI solution afterward because it had significantly impacted their practices in the eight weeks of testing.
Ravi Tripathi, MD, Chief Health Information Officer and Pilot Program Lead, Ohio State Wexner Medical Center
Harrison Jackson, MD, an internist, was one of the pilot participants. Jackson expressed frustration about the amount of typing required during each patient visit.
Documentation is necessary, but it takes away from the quality of patient interaction during a visit. I even apologize. I say, ‘I’m sorry, I know I'm making more eye contact with the computer than with you.’
Harrison Jackson, MD, Internist and Pilot Program Participant, Ohio State Wexner Medical Center
After testing the AI documentation, Jackson noted a few sporadic errors, such as wrong pronouns or misspelled words, all of which he says were quickly corrected during his chart review. Jackson supports the continued use of AI in healthcare.
I’m spending as much if not more time with each patient, and it’s higher quality time with more eye contact. I often mention aspects of a physical exam out loud for the AI program to capture, and it prompts a good conversation with my patient. I’ve also let our residents use the technology under my supervision, and we’ve noticed the quality of their patient interactions and the quality of plans they present have improved.
Harrison Jackson, MD, Internist and Pilot Program Participant, Ohio State Wexner Medical Center
Even though most Americans believe AI can improve healthcare, more than half (56%) still find it frightening, and 70% are worried about data privacy.
“I know patients are concerned about the privacy and the security of their data, but we hold the artificial intelligence and this technology to the same standards that we hold our electronic medical record,” said Tripathi.
Ohio State has since made ambient documentation available to all providers in outpatient settings. Patient satisfaction scores have improved, with patients reporting that conversations with their physicians feel more valuable, and in the first two weeks of expanded use, 100 clinicians gained back 64 hours.
Survey Methodology
SSRS used its Opinion Panel Omnibus platform to conduct this study on behalf of The Ohio State University Wexner Medical Center. The SSRS Opinion Panel Omnibus is a probability-based, twice-monthly nationwide survey. Data collection on a sample of 1,006 respondents took place between May 17 and May 20, 2024.
The survey was administered in English and was completed by phone (n = 32) and online (n = 974). At the 95% confidence level, the margin of error for all respondents is +/- 3.5 percentage points. All SSRS Opinion Panel Omnibus data are weighted to represent the target population of American adults ages 18 and older.