For hospitals, the “AI revolution” has barely begun. AAMI Committee experts explain why that is and whether a future for AI in healthcare is guaranteed.
Media Relations Manager, AAMI
You’re out hiking when you feel a sharp, near-icy pain in the back of your left calf; it’s a bite from a snake coiled less than a foot away. You snap a picture of the snake with your smartphone and upload the photo to an app, which uses an algorithm to produce the result: “Dangerous. Seek help.”
Minutes later, help arrives with the high-pitched whir of a drone. Its cargo is a small box with bold, red lettering: “Antivenom.”
This scenario remains something of a pipe dream, for now, says Pat Baird, regulatory head of global software standards at Philips. But the technologies that can make it happen are already very real.
“It’s a little off the beaten path from the typical diagnostic imaging kind of use case that people talk about,” Baird said, “but wow, wouldn’t that be so cool?”
Baird is the co-chair of the Association for the Advancement of Medical Instrumentation (AAMI) standards committee focused on artificial intelligence (AI) and machine learning (ML) in healthcare. He explains that outside the healthcare space, AI and the ML algorithms that drive it are everywhere.
Facial recognition, Google image searches, “smart” home devices and their associated digital assistants, and even your curated social media feed are all possible thanks to increasingly sophisticated AI. There are also image recognition apps designed for identifying birds and, yes, even snakes, although without the added benefit of express-delivering the right antivenom.
AI’s future in healthcare
So, what about applying this amazing technology toward saving lives?
“You know, there are only a handful of FDA-approved devices that use AI,” said AAMI AI committee co-chair Jesse Ehrenfeld, M.D., M.P.H. “The first was for detection of diabetic retinopathy, which is a really important problem.”
Ehrenfeld, immediate past chair of the American Medical Association Board of Trustees and a professor of anesthesiology at the Medical College of Wisconsin, explains that one of the technology's strongest benefits is not that AI-based devices will replace clinicians, but that they augment the capabilities and scale of medical staff. In areas where manpower is lacking, such as patient monitoring and data analysis, AI can compensate.
“We can’t make ophthalmologists fast enough to screen every patient in America with diabetes, but there’s now a device that you can park in a corner of a drugstore,” he added. With the device’s aid, a high school-trained technician can screen and diagnose a patient “with a very high degree of certainty.”
Programs that can assist in diagnoses are some of the most highly anticipated uses of AI/ML in healthcare. The program known as GI Genius, for instance, recently became the first AI-based device to be granted premarket authorization by the U.S. Food and Drug Administration (FDA) for detecting colon cancer. In a study assessed by the FDA, colonoscopy paired with the AI program proved to be about 15 percent more effective at identifying lab-confirmed adenomas or carcinomas when compared with traditional colonoscopy practices.
The application of image recognition systems is being explored in the radiology space as well, while pathology anticipates a surge in algorithms for identifying infection or predicting the chances of a pathogen’s spread.
Trusting artificial intelligence
However, some clinicians are hesitant to accept these new technologies. After all, the consequences of an AI making a mistake in a medical setting are far greater than those of a digital assistant misunderstanding a request to turn on the lights.
For instance, in the case of Baird’s antivenom example, what would be considered an acceptable margin of error? If the wrong antivenom is delivered, a patient may die. And if a patient’s medical history is not correctly taken into account, the treatment itself may trigger a severe allergic reaction.
“Also, how do we think about reliance on the technology?” Ehrenfeld asked. “My residents in training today have never documented an anesthetic for a surgical case on paper. They’ve always had electronic health records. So what happens when an AI algorithm isn’t working, and prompts aren’t there to remind caretakers to readjust the antibiotics? Will they be prepared for this?
“It reminds me of this story of this couple who drove directly into a lake because their GPS told them to,” he adds. Most of the time, mistakes like this can be avoided “because common sense kicks in.”
But Baird warns that when the ML algorithm guiding a program is extremely sophisticated, understanding why it reaches a specific conclusion can be difficult. “When you don’t understand how a technology is working, it’s a black box. There’s a risk of mistakes not being identified.”
Solving the “black box” problem
“One of the nice abilities for these ML systems is they can continue to learn even after the product is launched,” Baird explained, adding that the program “gains experience” as its algorithm is exposed to more data specific to a particular hospital or patient group.
However, the consequence of this is that the decision-making behavior of an AI/ML system may not stay the same, creating a moving target for safety checks and regulation.
“The FDA’s traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and machine learning technologies,” the agency stated. “Under the FDA’s current approach to software modifications … many of these artificial intelligence and machine learning-driven software changes to a device may need premarket review.”
Naturally, accounting for changes that have yet to even occur in a program is a tricky proposition. That’s why the FDA is working with Baird’s and Ehrenfeld’s AI committee to establish a new Good Machine Learning Practice (GMLP) for the development of medical devices, an important aspect of the agency’s first action plan outlining steps for regulating AI/ML-based software as a medical device.
What’s more, the AAMI AI committee and a mirroring BSI AI committee are pooling experts to draft risk management guidance for AI/ML in the healthcare space. This new document will repurpose key lessons from an existing internationally known standard, while leveraging the joint drafting committees’ AI expertise.
“A lot of people think that machine learning is mysterious — that it requires completely new ways of doing things,” said Baird, who has also had leadership roles on AI-related projects with the World Health Organization, the International Organization for Standardization, and the International Electrotechnical Commission. “I don’t think that’s necessarily true. I don’t think we need to reinvent the wheel … The only difference is that ML is going to fail in slightly different ways than how software typically fails.”
Personally, Ehrenfeld is eager to see AI become trustworthy enough to introduce into his own day job, freeing up more meaningful face time with his patients.
“I actually think AI is going to help us re-humanize healthcare. It’s making it so caregivers have more time to give care,” he said. “To me, that’s the payoff of this technology.”