In the year since generative artificial intelligence (AI) software first emerged for general use, the staggering pace and breadth of development have compressed years of growth and change into months and weeks. Among the settings where these tools may prove most directly relevant is private medical practice.
Last month’s column on the basics of AI sparked some interesting questions regarding the various generative algorithms and their usefulness to us in medicine. A multitude of generative AI products with potential medical applications are now available, with new ones appearing almost weekly. (As always, I have no financial interest in any product or service mentioned in this column.)
In that column, I discussed ChatGPT, the best-known AI algorithm, and some of its applications in clinical practice, such as generating website, video, and blog content. ChatGPT can also provide rapid and concise answers to general medical questions, much like a search engine, but with more natural language processing and contextual understanding. Additionally, the algorithm can draft generic medical documents, including templates for after-visit summaries, postprocedure instructions, referrals, prior authorization appeal letters, and educational handouts.
Another useful feature of ChatGPT is its ability to provide accurate and conversational language translations, thus serving as an interpreter during clinic visits when a human translator is not available. It also has potential uses in clinical research by finding resources, formulating hypotheses, drafting study protocols, and collecting large amounts of data in short periods of time. Other possibilities include survey administration, clinical trial recruitment, and automated medication monitoring.
GPT-4, the latest model behind ChatGPT, is reported to have greater problem-solving abilities and an even broader knowledge base. Among its claimed skills are the ability to find the latest literature in a given area, to write a discharge summary for a patient after uncomplicated surgery, and to analyze images and identify the objects they contain. GPT-4 has been praised as having "the potential to help drive medical innovation, from aiding with patient discharge notes, summarizing recent clinical trials, providing information on ethical guidelines, and much more."
Bard, an AI chatbot introduced by Google earlier this year, is intended to leverage Google’s enormous database to compete with ChatGPT in answering medical questions. Google also hopes Bard will play a pivotal role in expanding telemedicine and remote care via its secure connections and access to patient records and medical history, and will "facilitate seamless communication through appointment scheduling, messaging, and sharing medical images," according to Packt, a website for IT professionals. Bard’s integration of AI and machine learning capabilities will serve to elevate health care efficiency and patient outcomes, Packt claims, and "the platform’s AI system quickly and accurately analyzes patient records, identifies patterns and trends, and aids medical professionals in developing effective treatment plans."
Doximity has introduced an AI engine called DocsGPT, an encrypted, HIPAA-compliant writing assistant that, the company says, can draft any form of professional correspondence, including prior authorization letters, insurance appeals, patient support letters, and patient education materials. The service is available at no charge to all U.S. physicians and medical students through their Doximity accounts.
Microsoft has introduced several AI products. BioGPT is a language model specifically designed for health care. Compared with GPT models trained on more general text data, BioGPT is purported to have a deeper understanding of the language used in biomedical research and to generate more accurate and relevant outputs for biomedical tasks such as drug discovery, disease classification, and clinical decision support. Fabric, another health care–specific offering, is a data and analytics platform the company announced in May. It can combine data from sources such as electronic health records, images, lab systems, medical devices, and claims systems so hospitals and offices can standardize the data and access it in one place. Microsoft said the new tools will help eliminate the "time-consuming" process of searching through these sources one by one. Microsoft will also offer a generative AI chatbot called the Azure Health Bot, which can pull information from a health organization’s own internal data as well as reputable external sources such as the Food and Drug Administration and the National Institutes of Health.
Several other AI products are available for clinicians. Tana served as an administrative aid and a clinical helper during the height of the COVID-19 pandemic, answering frequently asked questions, facilitating appointment management, and gathering preliminary medical information prior to teleconsultations. Dougall GPT is another AI chatbot tailored for health care professionals. It provides clinicians with AI-tuned answers to their queries, augmented by links to relevant, up-to-date, authoritative resources. It also assists in drafting patient instructions, consultation summaries, speeches, and professional correspondence. Wang has created Clinical Camel, an open-source health care–focused chatbot that assembles medical data with a combination of user-shared conversations and synthetic conversations derived from curated clinical articles. The Chinese company Baidu has rolled out Ernie as a potential rival to ChatGPT. You get the idea.
Of course, the inherent drawbacks of AI, such as producing false or biased information, perpetuating harmful stereotypes, and presenting information that is outdated or has since been shown to be inaccurate, must always be kept in mind. These algorithms have been criticized for giving wrong answers, as their training datasets are generally culled from information published in 2021 or earlier. Several of them have been shown to fabricate information – a phenomenon labeled "artificial hallucinations" in one article. "The scientific community must be vigilant in verifying the accuracy and reliability of the information provided by AI tools," wrote the authors of that paper. "Researchers should use AI as an aid rather than a replacement for critical thinking and fact-checking."
Dr. Eastern practices dermatology and dermatologic surgery in Belleville, N.J. He is the author of numerous articles and textbook chapters, and is a longtime monthly columnist for Dermatology News. Write to him at dermnews@mdedge.com.
This article originally appeared on MDedge.com, part of the Medscape Professional Network.