Reviewed by Lexie Corner, Sep 19, 2024
A survey conducted by Uppsala University in Sweden and published in BMJ Health and Care Informatics found that many UK general practitioners (GPs) are incorporating generative artificial intelligence (AI) tools like ChatGPT into their clinical workflows.
The findings highlight the rapidly expanding role of AI in healthcare. While this technology has the potential to transform patient care, it is also raising significant ethical and safety concerns.
“While there is much talk about the hype of AI, our study suggests that the use of AI in healthcare is not just on the horizon—it’s happening now. Doctors are deriving value from these tools. The medical community must act swiftly to address the ethical and practical challenges for patients that generative AI brings.”
Dr. Charlotte Blease, Study Lead Researcher and Associate Professor, Uppsala University
According to the study, 20% of GPs use generative AI tools in their practice, with ChatGPT being the most popular. Conducted in collaboration with Harvard Medical School and the University of Basel, it is the most comprehensive study of generative AI in clinical practice since the launch of ChatGPT in November 2022.
The study was conducted in February 2024 as part of a monthly omnibus survey. It included 1,006 GPs from various regions across the UK, surveyed through Doctors.net.uk, the largest professional network for UK doctors. It aimed to assess the use of AI-powered chatbots by GPs in clinical settings and to understand how these tools are being integrated into their practice. With the rise of large language models (LLMs), there has been significant interest in their potential to assist with tasks ranging from documentation to differential diagnosis.
In addition to finding that 20% of GPs use AI tools in their practice, the study revealed that 29% of users employed these tools to generate documentation after patient appointments and 28% to support differential diagnosis.
These findings suggest that AI chatbots are becoming valuable tools in medical practice, particularly for reducing administrative burdens and supporting clinical decision-making. However, their use also presents significant risks.
AI has the potential to introduce errors (hallucinations), amplify biases, and compromise patient privacy. As these tools continue to evolve, the healthcare industry must develop robust guidelines and training programs to ensure their safe and effective use.
Blease concluded, “This study underscores the growing reliance on AI tools by UK GPs, despite the lack of formal training and guidance and the potential risks involved. As the healthcare sector and regulatory authorities continue to grapple with these challenges, the need to train doctors to be 21st-century physicians is more pressing than ever.”
The study was funded by the Research Council on Health, Working Life, and Welfare’s “Beyond Implementation of eHealth” project (2020-0122) and the University of Basel.
Journal Reference:
Blease, C. R., et al. (2024) Generative artificial intelligence in primary care: an online survey of UK general practitioners. BMJ Health and Care Informatics. doi.org/10.1136/bmjhci-2024-101102