Imagine this. You’ve finally mustered up the courage to see your family doctor about an embarrassing problem. You sit down. Your family doctor says:
“Before we start, I use a computer to record my visits. It’s AI – it will write a summary of the notes and a letter to the specialist. Is that OK?”
Wait – AI writes our medical records? Why would we want that?
Documentation is essential for safe and effective healthcare. Doctors must keep good records to maintain their registration. Health services must maintain good record-keeping systems for accreditation. Records are also legal documents: they may be essential in an insurance claim or legal action.
But writing things down (or dictating notes and letters) takes time. During visits, doctors have to split their attention between keeping good records and communicating well with the patient. Sometimes doctors have to catch up on records after hours, at the end of an already long day.
So the excitement among healthcare professionals of all kinds about “ambient artificial intelligence” or “digital scribes” is understandable.
What are digital scribes?
These aren’t old-fashioned transcription programs, where you dictate a letter and the program transcribes it word for word.
Digital scribes are different. They use AI – large language models with generative capabilities – much like the ones behind ChatGPT (and sometimes GPT-4 itself).
The app silently records the conversation between doctor and patient (via a phone, tablet or computer microphone, or a dedicated sensitive microphone). The AI converts the recording into a word-for-word transcript.
The AI system then uses the transcript, together with the instructions it has been given, to write a clinical note and/or letters for other clinicians, ready for the treating clinician to review.
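To make that pipeline concrete, here is a minimal Python sketch of the two steps: speech-to-text, then note generation. Everything in it is an assumption for illustration – the OpenAI-style client and the model names (“whisper-1”, “gpt-4o”) stand in for whatever a real scribe product uses behind the scenes.

```python
# Minimal sketch of the two-step scribe pipeline described above.
# All names here are illustrative assumptions, not a real product:
# actual digital scribes are proprietary and differ in detail.
import openai


def scribe_visit(audio_path: str) -> str:
    client = openai.OpenAI()

    # Step 1: speech-to-text. Turn the recorded consultation into
    # a word-for-word transcript.
    with open(audio_path, "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",  # assumed transcription model
            file=audio,
        ).text

    # Step 2: generation. Ask a large language model to draft a
    # structured clinical note from the transcript.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed generative model
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a structured clinical note (history, "
                    "examination, plan) from this doctor-patient "
                    "transcript."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )

    # The output is a draft for the clinician to review - not a
    # finished medical record.
    return response.choices[0].message.content
```

The important design point is the last line: the system produces a draft, and responsibility for checking it stays with the clinician.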
Most clinicians know little about these technologies: they are experts in their specialty, not in AI. Marketing materials promise to “let AI take care of your clinical notes so you can spend more time with your patients.”
Put yourself in the clinician’s shoes. You might well say, “Yes, please!”
How are they regulated?
Recently, the Australian Health Practitioner Regulation Agency published a code of practice covering the use of digital scribes, and the Royal Australian College of General Practitioners published a fact sheet. Both warn doctors that they remain responsible for the content of their medical records.
Some AI applications are regulated as medical devices, but many digital scribes are not. So it is often up to health services or individual doctors to work out whether scribes are safe and effective.
What does the research say so far?
Real-world evidence on how well digital scribes perform is very limited.
In a large Californian hospital system, researchers tracked 9,000 doctors for ten weeks in a pilot test of a digital scribe.
Some doctors liked the scribe: their working hours went down and they communicated better with patients. Others never even started using it.
And the scribe made mistakes – for example, recording the wrong diagnosis, or noting that a test had been done when it still needed to be done.
So what should we make of digital scribes?
The recommendations of the first Australian National Citizens’ Jury on AI in Healthcare show what Australians expect from AI in healthcare, and they provide a good starting point.
Building on these recommendations, here are some things to keep in mind about digital scribes the next time you go to the clinic or emergency room:
1) You should be informed if a digital scribe is used.
2) Only scribes designed for healthcare should be used in healthcare. Ordinary, publicly available generative AI tools (such as ChatGPT or Google Gemini) should not be used in clinical care.
3) You should be able to give or refuse consent to the use of a digital scribe. Any relevant risks should be explained to you, and you should be free to agree or decline.
4) Digital scribes used for clinical purposes must meet strict privacy standards. You have a right to privacy and confidentiality in healthcare. The full record of a visit can contain far more detail than a clinical note. So ask:
- Are the transcripts and summaries of your visit processed in Australia, or in another country?
- How are they protected and secured (e.g. are they encrypted)?
- Who has access to them?
- How are they used (e.g. are they used to train AI systems)?
- Does the scribe have access to other data from your record to make the summary? If so, is that data ever shared?
Is human supervision sufficient?
Generative AI systems can make things up, get things wrong, or misunderstand some patients’ accents. But they often communicate these errors in a way that sounds very convincing. This means close human review is essential.
Tech and insurance companies tell doctors they must check every summary or letter (and they must). But it’s not that simple. Busy clinicians can become over-reliant on the scribe and simply accept its summaries. Tired or inexperienced clinicians may assume their memory must be wrong and the AI must be right (this is known as automation bias).
Some people have suggested these scribes should also be able to create summaries for patients. We don’t own our medical records, but we usually have the right to access them. Knowing that a digital scribe is in use may motivate consumers to review what’s in their records.
Doctors have always written notes about our embarrassing problems and have always been responsible for those notes. Privacy, security, confidentiality and quality of those records have always been essential.
Perhaps one day, digital scribes will mean better records and better interactions with our clinicians. But right now, we need good evidence that these tools can work in real-world clinics without compromising quality, safety, or ethics.