Beware of Using Chatbots for Medical Information
As artificial intelligence (AI) chatbots continue to attract interest, many people are tempted to use them to seek health or medical advice. But reports are surfacing that doing so is potentially risky. Chatbots such as OpenAI's ChatGPT and Google's Bard are "trained" on troves of information pulled from the internet and have been shown to answer questions with a range of misconceptions and falsehoods. One specific concern is that these systems tend to amplify forms of medical racism: they perpetuate tropes that have persisted for generations, reinforcing long-held but false medical beliefs about differences between Black and White patients. For example, old and now-debunked beliefs once led medical providers to rate Black patients' pain lower than White patients' and to recommend less pain relief. Chatbots also continue to repeat long-standing myths about racial differences in kidney function, lung capacity, and skin thickness.

In a July 2023 letter to the Journal of the American Medical Association, researchers said future research should investigate the "potential biases and diagnostic blind spots" of chatbots. And Dr. John Halamka of the Mayo Clinic stressed the importance of testing commercial AI products to ensure they are fair, equitable, and safe.