Artificial intelligence (AI) is all the rage in 2024, with every indication it will continue to play a key role in people’s lives even as it rapidly evolves. In health care specifically, much is already happening with AI. Some of it is good and promising, while some of it is concerning, as is the case with almost everything involving AI. In this article we will look not so much at how you as a health care consumer might use AI, but at how health care providers are using AI in ways that can affect you.

How You Might “Encounter” AI in Your Colorado Doctor’s Office

The Associated Press has examined some of the ways physicians are using AI in the routine communications that are part of their office visits with patients. The AP found that doctors are, for example, using AI to answer patient messages and to take notes during exams. The AP cited the case of a 70-year-old Denver patient who received a friendly audio message from his doctor that sounded exactly like him; the message ended with a disclosure that it had been automatically generated and edited by the doctor. The patient found that disclosure to be very important, but the AP says that in Colorado such a disclosure is not mandatory. The AP also noted that, according to the head of technology innovations at UCHealth, approximately 250 doctors and staff in that system use an AI tool to draft messages to patients, which are then delivered through the UCHealth patient portal.

Doctors are also recording what is said during patient exams, typically with an AI-powered smartphone app “that listens, documents and instantly organizes everything into a note you can read later.” One plus, for both the physician’s office and potentially for you, is that such a recording won’t overlook details that can legitimately be billed to insurance. It also means the physician is no longer tied to a computer during an exam and can maintain more eye contact with you as the patient. In addition, talking through the exam for the benefit of the AI recording may help patients better understand what is going on.

The AP adds that your doctor should ask for your consent before using such a recording tool, and you may see new wording to that effect in forms you fill out at the doctor’s office. According to the AP, some health systems encourage disclosure while others do not. Automating these communications saves doctors time and lessens burnout, the AP says, “but it also shakes up the doctor-patient relationship, raising questions of trust, transparency, privacy, and the future of human connection.”

Possible AI Pitfalls for Colorado Patients

No technology is flawless, and that includes AI. Such tools, the AP explained, “can misinterpret input or even fabricate inaccurate responses, an effect called hallucination.” The new tools are expected to have internal guardrails to prevent inaccuracies from reaching patients or from ending up in a patient’s electronic health record.

The guardrails don’t always work, however. The AP found that a Colorado patient with a runny nose was alarmed to receive an AI-generated message saying her problem could be “a brain fluid leak.” It was not. A nurse had not proofread the auto-generated message carefully enough, and it went out to the patient with the inaccurate information. As one digital innovations leader put it: “You don’t want those fake things entering the clinical notes.”

What about you as a patient using an AI chatbot like ChatGPT for health information? There are pros and cons here as well. A chatbot can help you be a more informed patient and may help guide your discussions with your physician. But as with other AI output, a chatbot can generate false information about your health issue and can even fabricate research citations. And because medicine always involves weighing risks and benefits, no chatbot can deliver a guaranteed solution. Your health care decisions should take into account both what the science says and what your doctor says. Remember, too, that a chatbot typically assumes no liability for giving you inaccurate information.

Will Coloradans’ Private Information Be Safe When Using AI?

The AP noted that U.S. law requires health care systems to get assurances from entities they work with that protected health information will be safeguarded. The Department of Health and Human Services can conduct investigations and impose fines if this mandate is violated. While the doctors the AP interviewed expressed confidence about data security, the AP stated, “Information shared with the new tools is used to improve them, so that could add to the risk of a health care data breach.”

Specific Health Benefits AI Might Bring to Coloradans

The use of AI for medical and health issues is constantly evolving; it is already being used for such things as reading mammograms, diagnosing eye disease, and assessing heart problems. Because it can process often massive amounts of data, AI has in some cases been shown capable of making diagnoses more accurate than tried-and-true approaches such as a clinician’s reading of a medical scan.

But as a new and evolving field, the use of AI for health issues can be complex and nuanced. Dr. Michael Abramoff is a professor of electrical and computer engineering and founder of the AI Healthcare Coalition. Interviewed by Belvoir Media Group for one of its health-focused newsletters about the use of AI in health care, he made a number of key points:

  • Abramoff draws a distinction between “glamour AI” and “impact AI.” Glamour AI is merely something that gets everyone talking; only impact AI focuses on better clinical outcomes that actually help patients. He said the most impactful AI is “autonomous AI,” which makes an accurate diagnosis by itself or helps a physician make a more accurate diagnosis.
  • Abramoff reported that the Food and Drug Administration (FDA) has signed off on some 700 “Artificial Intelligence and Machine Learning-Enabled Medical Devices” in radiology, cardiology, ophthalmology, gastroenterology, urology, neurology, and other specialties.
  • For the 40 million Americans who have diabetes and are at risk for diabetic retinopathy (an eye condition that can cause vision loss), an FDA-cleared autonomous AI device can now diagnose diabetic retinal disease early without requiring a doctor; a qualified technician can operate it. Abramoff is a co-creator of such a device. (See more about this at https://www.digitaldiagnostics.com/products/eye-disease/lumineticscore/.)
  • According to Abramoff, AI software has been shown to have a high success rate in identifying skin cancer and precancerous lesions, so high that the FDA has given it “breakthrough designation status.” (See https://sklip.ai/.)
  • In a study published in Lancet Oncology, Swedish scientists found that AI software detected 20 percent more breast cancers than standard mammography. This was based on data from millions of mammograms. (Find out more at https://www.icadmed.com/breast-health/.)

So, yes, there are positive and promising developments at the nexus of AI and your health care; the above are just a few examples. The key is also to remain aware of AI’s limitations and susceptibility to error, to practice healthy skepticism, and to maintain solid, honest “old-fashioned” relationships with doctors and other health care providers you trust. AI may be able to add to those relationships, but it should not replace them.