Medical AI regulators should learn from the global financial crisis


In regulating AI in medicine, the FDA must exercise caution in implementing a principles-based framework.

Artificial intelligence (AI) has the potential to make healthcare more effective and efficient in the United States and beyond, if it is developed responsibly and regulated appropriately. The Food and Drug Administration (FDA), which reviews most medical AI, only approves AI in a “locked” or “frozen” form, meaning it can no longer learn and adapt as it interacts with patients and providers. This important guarantee ensures that AI does not become less safe or effective over time, but it also limits some of AI’s potential benefits.

To promote the benefits while managing the risks of AI, the FDA has proposed a new program to oversee “unlocked” AI products that can learn and change over time.

The FDA proposal calls for a complex and collaborative approach to regulation that is quite similar to how financial regulators in the UK operated before the global financial crisis of 2007–2008. Financial regulators in the UK used a principles-based framework, which establishes broad principles instead of more detailed rules.

This style of regulation failed during the financial crisis, but that doesn’t mean the FDA’s AI proposal is doomed. It does mean, however, that regulators and Congress should learn a few lessons from where things went wrong during the financial crash, including ensuring that regulators remain independent of the companies they oversee and putting fairness at the very heart of the regulatory equation.

Taking a step back, AI promises breakthroughs in several areas of medicine. For example, providers can use AI to diagnose patients by examining their medical imagery or even photos of their skin. Recent evidence suggests that AI may make accurate diagnoses as well as or better than trained radiologists or dermatologists, so using such software could reduce the cost and time of getting a diagnosis in routine cases.

Despite its potential, the risks of medical AI have yet to be fully managed. Unfortunately, it’s not always easy to determine how an AI system reaches a diagnosis or medical recommendation, as AI often cannot explain its decision-making.

AI software doesn’t “know” what a human body or a disease is; instead, it recognizes patterns in the pictures, words, or numbers it “sees.” This limitation raises real questions about whether an AI system is making the correct diagnosis or recommendation and how it came to that conclusion.

Big problems can result from AI systems that analyze data with bias. The American medical system has a long history of racism and other forms of marginalization, which can be reflected in medical data. When AI systems learn using biased data, they can contribute to worse health outcomes for marginalized patient groups. Regulation to prevent these unfair outcomes is essential.

Many applications of medical AI would fall under the FDA’s authority to regulate medical devices. The 21st Century Cures Act excludes certain types of “low-risk” AI from FDA review, such as some algorithms designed to support physicians’ decisions. But many AI products would still need to pass FDA review, giving the agency an opportunity to take their risks seriously.

In April 2019, the FDA released a white paper outlining an innovative regulatory plan for unlocked AI software. The FDA’s AI proposal builds on its earlier idea of “pre-certification,” under which the agency would regulate developers as a whole, rather than their individual software products, using general principles such as “clinical accountability” and “proactive culture.”

The 2019 AI proposal adds to this framework by asking developers to describe how they expect their software to change over time and how they will manage the risks associated with those changes. The FDA, with the developers’ help, would then monitor actual results in the clinic and could require additional regulatory review if the software changes too much.

The net effect is a regulatory system in which the FDA approves a software developer’s plans for self-regulating its AI development, then uses the principles of pre-certification to assess regulatory outcomes and determine whether they are consistent with public policy objectives. This type of system could be called principles-based regulation, an approach that UK financial regulators used before the global financial crisis and that others continue to use today.

Earlier in 2021, the FDA announced plans to move the proposal forward and respond to stakeholder comments. If the FDA continues with its proposed principles-based plan, there are important lessons to be learned from the global financial crisis.

First, although regulators can learn and adapt to new and complicated situations or technologies by working with the companies they oversee, regulators must always maintain their independence from those companies.

An essential part of this lesson is that regulators must have a large enough budget to effectively oversee the industry. Regulators need sufficient resources to monitor companies and develop their own internal expertise on the technologies and markets they regulate, so that they do not depend too heavily on companies for expertise and expose themselves to the risk of regulatory capture.

Second, regulatory failures can often hurt marginalized groups the most, as the financial crash showed once again. Already, peer-reviewed reports have documented incidents of algorithmic bias leading to worse medical care for Black patients, meaning AI may be more or less safe and effective for different groups of patients. Failure to regulate this aspect of AI could result in unacceptable and unfair harm to health.

To address these issues, policymakers should consider at least two actions. The FDA should ask Congress for a long-term budget increase when the agency calls for new legislation to implement the AI plan, and Congress must be prepared to respond to the agency’s request. In addition, “health equity” should be adopted as a stand-alone principle or outcome by which the agency measures business performance and real-world results. These and other changes to the FDA’s AI proposal could allow regulators to better protect all patients.

Congress and civil society groups should also monitor this complex area of policy and regulation to ensure that AI in medicine makes society healthier, safer and more equitable.

Walter G. Johnson is a doctoral researcher at the School of Regulation and Global Governance (RegNet) at the Australian National University.

