Artificial intelligence (AI) has been used in healthcare for years, assisting with radiology dictation, lab result analysis, and EKG interpretation. Newer AI tools could revolutionize clinical workflows, alleviate documentation burden, and elevate clinical decision-making, offering substantial benefits for patients, health system operations, and even clinicians suffering from burnout.
Widespread adoption of AI in healthcare will be gradual, however, as there are several substantial barriers to overcome. Clinicians are anxious about accuracy, administrators are worried about the cost of implementing AI and whether their clinicians will use the tool, and patients are just plain skeptical. (In recent polling, fewer than half of adults said they believe AI is safe and secure.) Data security and bias also are significant concerns.
U.S. policymakers, including those in the White House, also have questions. The Biden administration just issued a far-reaching executive order on AI, which will require federal health agencies to establish a task force and, within 365 days, produce a strategic plan that could include new standards for AI-enabled tools used in hospitals, insurance companies, and other healthcare organizations. The president also called for the establishment of a national tracking system for AI-related harms and unsafe practices, which will be used to develop best practices to avoid these harms.
Healthcare tech investors and innovators, along with those of us who believe in the promise of AI, will need to work hard to establish this tool's trustworthiness and to demonstrate the good it can do for patients and providers. Stringent regulations and protection of health information will play key roles in making this a reality.
AI Will Be a Game Changer for Patients and Clinicians
As physicians, one area where we see the most potential for AI is in reducing clinician burnout and improving patient-doctor relationships.
Physicians spend a substantial share of their time on administrative tasks, including heightened documentation demands and onerous prior authorization processes. These distractions from direct patient care can lead us to burn out, reduce our work hours, or leave the profession entirely. The application of AI to create efficiencies has yielded promising results, particularly for tasks such as clinical note documentation, work shift scheduling, and medical billing and coding.
The University of Pittsburgh Medical Center, the University of Kansas Health System, University of California San Diego Health, and others also are piloting AI tools to reduce documentation burdens.
Additionally, AI tools have effectively organized scattered records from various sources into concise patient history timelines, giving providers a more accurate view of what a patient might need and what solutions have already been tried. As the American Hospital Association has noted, "The use of AI has advanced patient safety by evaluating data to produce insights, improve decision-making and optimize health outcomes. Systems that incorporate AI can improve error detection, stratify patients and manage drug delivery."
Valid Concerns About Bias, Equity, and Security
Despite this potential, other storylines have emerged: AI could perpetuate racial biases and may put sensitive health data at risk.
Indeed, recent studies have confirmed worries that AI can be biased, but this phenomenon aligns with the fact that we all possess implicit biases. The technology operates based on the inputs and algorithms created by fallible humans. As an article published by the World Economic Forum states, "Bias is an inherent human trait and can be reflected and embedded in everything we create, particularly when it comes to technology." There are potential fixes, however, and the article suggests that using open-source data might be one solution.
The point is that we need to solve these problems, and we can. We should not abandon the use of AI in healthcare simply because we have not yet tackled this important issue.
Because AI draws data from various platforms, there also is growing apprehension regarding the safety of intellectual property and other confidential information. For instance, if one of us were to seek assistance crafting a corporate announcement involving a trade secret set to be revealed in 6 months and input this data into ChatGPT, could others gain access to it? The ownership of the data becomes a pertinent question, and the confidentiality of the secret may be compromised. Patients share similar concerns about the privacy of their health information and whether it may be inadvertently shared.
These dual concerns demand careful consideration in the evolving landscape of AI. So, what are the solutions?
Public Policy: AI Needs to Be Regulated
The industry needs clear regulatory guidelines, and Americans agree. In fact, a large majority of U.S. adults believe there should be regulations ensuring AI is safe and secure.
Numerous companies have appealed to congressional leaders, urging the establishment of regulatory standards to safeguard patients, businesses, and innovators. A less-discussed issue is the uncertainty that a lack of regulation creates for innovators and investors. Investors recognize regulation is inevitable. Consequently, investing in products or technologies that could face stringent regulation in the future raises concerns: What if a product we have poured money into is later regulated out of existence?
Regulation provides a necessary framework and level playing field for investors and innovators alike. The Biden administration's executive order is a good start, but Congress's input will be essential.
The World Health Organization recently outlined six key areas for regulation of AI for health: transparency and documentation; risk management; validating data and intended use; data quality; privacy and data protection; and collaboration among regulatory bodies, patients, healthcare professionals, industry representatives, and government partners.
This is a useful framework for U.S. policymakers to consider as they go forward. In the slow-moving world of public policy, lawmakers need to act with haste.
Healthcare Systems: Investment in Data Protection
Government actors cannot solve all the problems related to AI, of course. As with any new IT integration, healthcare systems must implement the data protection measures critical to building trust and encouraging adoption.
These protection measures include comprehensive response plans for potential data breaches and ransomware attacks. Additionally, health systems should implement access and encryption controls, leveraging strong multi-factor authentication. These measures add layers of security and can limit the likelihood of unauthorized access. At the same time, health systems must avoid inadvertently injecting further complexity into workflows and processes, which could limit timely access for those who should be using the system. Working regularly with technology developers and vendors to proactively address cyber threats and minimize the impact of potential incidents is essential.
Finally, hospitals and health systems must ensure clinicians are trained on new systems and included in any design or implementation decisions. Educating staff on cybersecurity best practices is crucial to preventing data breaches.
As with any new technology, industry leaders and healthcare institutions must engage in meaningful dialogue with end users. Initiating discussions at the outset about the potential risks and apprehensions associated with AI is key to preempting future setbacks. Moreover, the active involvement of practicing clinicians at every stage of development and implementation will help ensure success.
These steps are not exhaustive but will help build trust in AI.
N. Adam Brown, MD, MBA, is a practicing emergency physician, entrepreneur, and healthcare executive. He is the founder of ABIG Health, a healthcare growth strategy firm, and a professor at the University of North Carolina's Kenan-Flagler Business School. His co-author is a practicing physician and expert in public policy and health technology; she is Chief Clinical Officer at health AI company Abridge and a former senior advisor to U.S. Surgeon General Vivek Murthy.