
Artificial Intelligence: Finding Meaning in the Data

January 02, 2020

By Kavya and Neeyanth Kopparapu

Ever since the term artificial intelligence (AI) was coined by computer scientist John McCarthy more than 60 years ago, researchers have applied the technology to a wide range of problems, including many in health care. Despite some obstacles, the partnership between health care and AI holds great potential. The abundance of patient data, from doctors' notes and MRI scans to gene sequences, has allowed AI to swoop in and help doctors and researchers make predictions with a high degree of accuracy.

But for patients, something is still missing. If a doctor determined you had terminal brain cancer, your first question would probably be “Why?” Unfortunately, because most powerful AI models are unable to explain their decisions, your doctor would be stuck saying, “because a computer told me.”

Recent advances in AI offer, if not yet a full explanation, at least a kind of interpretation: models that can indicate which parts of the input data mattered most to their decisions.

As the creators of some of these AI models, we have seen firsthand that the additional step of interpretability adds to an AI's usefulness. For example, when we created a model to detect early signs of Parkinson's disease, we picked up on problems with it only after we looked at how the model interpreted MRI scans. The model originally claimed that random points inside and outside the brain were important to its decision-making process. Once we realized this, we were able to fix the model to detect early signs of the disease more accurately.
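To give a concrete sense of what this kind of interpretation can look like, here is a minimal sketch of one common technique, a gradient-based saliency map, written in Python with PyTorch. It is an illustration only, not the actual code behind our model; the names model, scan, and target_class are assumptions for the example.

    import torch

    # A minimal sketch of a gradient-based saliency map. Assumes a trained
    # PyTorch classifier `model` and a preprocessed MRI slice `scan` shaped
    # (1, channels, height, width); these names are illustrative, not from
    # any specific published model.
    def saliency_map(model, scan, target_class):
        model.eval()
        scan = scan.clone().requires_grad_(True)  # track gradients w.r.t. input pixels
        scores = model(scan)                      # forward pass: one score per class
        scores[0, target_class].backward()        # gradient of the chosen class score
        # A pixel's importance is taken as the magnitude of the gradient
        # at that pixel, maximized across channels.
        return scan.grad.abs().max(dim=1)[0].squeeze(0)

Overlaying the resulting map on the scan highlights the pixels that most influenced the prediction. It is exactly this kind of view that can expose a model attending to regions that should not matter, such as points outside the brain.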

Most important, interpretability may build trust by offering insight into how an AI reaches its conclusions. It brings us closer to a future of human/machine teams, in which doctors can begin to understand why the AI made its decision.

And we need both: Without AI, we lose the ability to draw on vast amounts of medical data that could lead to more accurate diagnoses. Without human doctors, we lose empathy, reliability, and a natural patient experience. Integrating interpretable results into AI systems will be an important step toward making AI a more accurate, affordable, and accessible tool in the doctor's office.



