When AI Repeats the Same Old Story: Why Women Should Be Cautious Asking ChatGPT for Health Advice
- Dr. Brenda Tapp


A few months ago, a patient told me she had uploaded her blood work into ChatGPT and asked it to recommend how to improve her health based on her results.
This was the first time I had heard of someone doing this. On one hand, it makes sense; it's resourceful! It's a free, quick way to have your results assessed and a starting place for learning how to improve your health based on your unique numbers. I love it when someone feels empowered and confident enough to take their health into their own hands. On the other hand, I do feel we need to be cautious with these AI tools. I often wonder how something so useful might come back and kick us in the butt later on, as we learn more about it. For me, that pendulum is starting to swing the other way, and I am now very careful about how I use it.
I feel that my lab results contain deeply personal information: insights about my body, my hormones, my risks, and my story. Once that information is shared online, you can't be certain where it goes or who can access it. Large language models like ChatGPT are trained on enormous datasets, and even though OpenAI states that your data isn't used to train newer models when "chat history" is turned off, the lines around data privacy remain blurry.
This is one naturopathic doctor's very friendly reminder that your health information should be held with the same confidentiality as a medical chart, and not treated like search history.
At our most recent team meeting, we spent a significant amount of time discussing various AI platforms and how to use them safely and effectively. One of my team members brought up some interesting information, and it got us thinking about how this pertains to health, particularly women's health. It is too important not to share. A massive thank you to Kristen for bringing it up. She does our social media (check out her business, Aligned Social), works at my second location, The Village Apothecary in Millbrook, and is also my neighbour; I am lucky to call her a friend.
Artificial intelligence is often sold as the ultimate neutral voice. It’s logical and data-driven. Yet when tested, the truth is less flattering. Two recent studies, one from the Technical University of Würzburg and one from the University of California, found that when identical job descriptions were submitted to ChatGPT with only the gender changed, the AI consistently recommended lower salaries for women than for men.
AI didn’t invent that gap; it learned it. These systems are trained on massive text datasets filled with decades of social, cultural, and institutional patterns. If the data reflects inequity, so will the output.
The parallel in medicine …
In medicine, women have long been under-represented in research, from early drug trials that excluded female participants to studies that failed to track how conditions present differently in women. Many standard reference ranges, treatment thresholds, and “typical” symptom pictures were built around the male body.
That historical imbalance echoes in digital systems today. If AI draws from medical literature written through a male-centric lens, it risks giving women health guidance that is incomplete, inaccurate, or simply tone-deaf. The result: a technology that sounds authoritative but subtly reinforces the same blind spots women have fought to dismantle.
I am sure you will agree with me when I say that women deserve better than default settings.
So yes, use tools like ChatGPT. Let them spark ideas, organize thoughts, or summarize studies. But never let them decide what your body needs … or what your work is worth. Technology is only as progressive as the data it's built on, and that means it still has catching up to do.