A report from GAO and the National Academy of Medicine assesses the AI healthcare landscape and signals regulatory oversight may emerge down the road.
Once the realm of science fiction, artificial intelligence is making inroads into many aspects of healthcare, with one recent report from UnivDatos predicting the technology will attain a market value of $26.6 billion by 2025. Some may fear AI has entered a Wild West phase, unfettered by proper policies and oversight.
A report issued by the U.S. Government Accountability Office (GAO), in concert with the National Academy of Medicine (NAM), signals that the government and healthcare professionals are taking a closer look at the AI phenomenon, and that regulatory oversight may emerge down the road.
Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care, a 106-page report to "congressional requesters," addresses the promise of AI in healthcare, as well as the need to exercise caution.
In a jointly signed introductory letter to the report, Karen L. Howard, PhD, director of Science, Technology Assessment, and Analytics for the GAO, and J. Michael McGinnis, MD, MA, MPP, the Leonard D. Schaeffer Executive Officer and Executive Director of the NAM Leadership Consortium, said "AI has promising applications in health care, including in augmenting patient care. For example," they continued, "it may have the potential to improve treatment, reduce burden on providers, and generally increase the efficiency with which health care facilities and providers use resources, resulting in potential cost savings or health gains. However, as might be expected with a tool with such broad potential use in health and health care decision-making, applying AI tools for health and health care also raises ethical, legal, economic, and social questions."
The report assesses current and emerging AI tools designed to improve patient care, outlining their benefits and challenges as well as policy options.
Among the challenges cited:
- Data access: Developers experience difficulties obtaining the high-quality data needed to create effective AI tools.
- Bias: Limitations and bias in data used to develop AI tools can reduce their safety and effectiveness for different groups of patients, leading to treatment disparities.
- Scaling and integration: AI tools can be challenging to scale up and integrate into new settings because of differences among institutions and patient populations.
- Lack of transparency: AI tools sometimes lack transparency—in part because of the inherent difficulty of determining how some of them work, but also because of more controllable factors, such as the paucity of evaluations in clinical settings.
- Privacy: As more AI systems are developed, large quantities of data will be in the hands of more people and organizations, adding to privacy risks and concerns.
- Uncertainty over liability: The multiplicity of parties involved in developing, deploying, and using AI tools is one of several factors that have rendered liability associated with the use of AI tools uncertain. This may slow adoption and impede innovation.
The report also offers the following opportunities for policy development:
- Encouraging interdisciplinary collaboration between developers and health care providers
- Developing or expanding high-quality data access mechanisms
- Establishing best practices (such as standards) for the development, implementation, and use of AI technologies
- Creating opportunities for more workers to develop interdisciplinary skills
- Collaborating with relevant stakeholders to clarify appropriate oversight mechanisms
- Maintaining the status quo
“ … Applying AI tools for health and health care also raises ethical, legal, economic, and social questions.”
GAO and NAM officials in their introductory letter to the report, Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care
Mandy Roth is the innovations editor at HealthLeaders.
- AI may have the potential to improve treatment, reduce burden on providers, and increase efficiency.
- The technology poses ethical, legal, economic, and social concerns.
- Potential policies could address interdisciplinary collaboration, data access, best practices, and oversight mechanisms.