Many AI Chatbots May Give Risky Medical Advice, But SurvivorNet’s ‘My Health Questions’ Helps Patients Navigate Care Safely
- A new BMJ Open study found that nearly half of mainstream AI chatbot answers to medical questions were “problematic,” often due to misinformation or overly confident but incorrect explanations.
- SurvivorNet’s doctor‑supported platform, “My Health Questions,” was built specifically for cancer care and avoids these pitfalls by grounding every response in clinical guidelines, medically reviewed research, and personalization based on a user’s health profile.
- As AI expands across oncology, from radiology to treatment planning, experts emphasize transparency, equity, privacy, and human clinical judgment — principles that My Health Questions already embodies by pairing AI efficiency with rigorous medical oversight to deliver safe, accessible, and actionable guidance for people facing cancer.
- Dr. Beth Mittendorf, Chief of Multidisciplinary Oncology at Dana‑Farber Cancer Institute, says AI should “help us identify risk earlier” and “tailor prevention more intelligently,” augmenting – not replacing – clinical judgment.
Fortunately, not all AI tools built for patients, caregivers, and clinicians operate the same way.
Breast cancer survivor Christine Santasiero and her sister and caregiver, Lauren, describe My Health Questions as “the perfect second opinion” during Christine’s diagnosis.
The tool was built for exactly that gap: offering on‑demand explanations of treatments, clinical trials, side effects, insurance issues, and more.
Users can type or speak their questions and receive responses tailored to their personal health profile. If they’re unsure where to begin, the platform provides prompts to guide them.
How Did Generative AI Chatbots Perform?
Researchers evaluated several major generative AI systems — Google’s Gemini, High‑Flyer’s DeepSeek, Meta AI, OpenAI’s ChatGPT, and xAI’s Grok — to see how well they handled medical information, citations, and readability.
Each chatbot was asked 10 questions across cancer, vaccines, stem cells, nutrition, and athletic performance.
Key findings:
- 49.6% of responses were problematic, including 19.6% that were highly problematic and potentially harmful if taken literally.
- No chatbot clearly outperformed the others, though Grok produced a notably higher share of highly problematic answers.
- Answers about vaccines and cancer were generally stronger, while responses about stem cells, athletic performance, and nutrition were the weakest.
- Many answers were written at an advanced level, making them hard for the average patient to understand.
- Citation quality was subpar across the board, with some incomplete references.
The takeaway: general‑purpose chatbots are not yet reliable for medical guidance, especially in areas where misinformation is common or evidence is evolving.
Why SurvivorNet’s ‘My Health Questions’ Stands Apart
“My Health Questions” was built specifically for the realities of cancer care as experienced by patients and caregivers, not as a general‑purpose chatbot.
It handles both complex clinical issues and everyday logistical concerns with accuracy, clarity, and personalization.
Users can create a tailored health profile by entering details such as age, gender, and location, allowing the platform to refine responses over time. This reflects SurvivorNet’s mission: pairing cutting‑edge technology with physician‑driven expertise to make complex medical information accessible and actionable.
Crucially, the tool is doctor‑supported. Leading oncology experts review the information to ensure it is accurate, safe, and easy to understand.
The goal is not to replace clinicians, but to help patients arrive at appointments better prepared with informed questions and a clearer understanding of their care journey.
This combination of AI efficiency and medical oversight is already making a difference.
Real Patients, Real Impact
When longtime public health educator Dr. Maurice Franklin recognized the warning signs of prostate cancer, and a routine prostate-specific antigen (PSA) screening showed that his PSA levels were elevated, he turned to “My Health Questions” for clarity between appointments.
The tool guided him with the same safety‑first approach that clinicians emphasize, first asking: “Have you had the chance to talk to your healthcare provider about these symptoms yet?”
When Dr. Franklin shared that he had but still felt anxious, the tool acknowledged the fear that often accompanies new side effects, grounding him in evidence‑based reassurance.
Patients like Gabby Cooper, a Penn State graduate undergoing treatment for stage 2 Hodgkin lymphoma, use “My Health Questions” to prepare for appointments.
When complications like colitis raised concerns about her chemotherapy regimen, the tool helped her generate specific, informed questions to bring to her oncologist.
“Should we consider any alternative regimens if colitis remains a problem despite supportive care?”

Gabby said having questions like these “would be super helpful” as she navigates her next appointment.
As SurvivorNet continues expanding its ecosystem of patient‑focused resources, “My Health Questions” represents a major step forward that shows how AI, when built responsibly and guided by medical expertise, can empower people facing cancer with clarity, confidence, and hope.
Expert Resources for Cancer Patients
- ‘Cancer Is Part of Life But So Is Hope’: When a Diagnosis Shakes Your World, Here’s How to Take Back Control With Added Clarity
- 7 Simple Tips People With Cancer Can Do To Care For Their Mental Health and Manage Stress
- “Always Get a Second Opinion”: San Diego Resident Lynn Brooks’ Survivor Story
- Second Opinions on Your Cancer Diagnosis or Treatment: Do You Need One?
AI’s Expanding Role in Cancer Care
While many commercial AI chatbots still have a way to go in answering patient questions reliably, SurvivorNet’s “My Health Questions” continues to impress patients, caregivers, and practicing physicians with its accuracy and personalization, delivering digestible information that is most relevant to each user.
Meanwhile, more broadly, AI is already used in radiology, pathology, treatment planning, and patient support.
In 2024, ASCO released six guiding principles for AI in oncology—emphasizing transparency, equity, privacy, accountability, and the continued centrality of human clinical judgment.
- Transparency – AI tools should remain transparent throughout their entire lifecycle.
- Informed Stakeholders – Patients and clinicians must know when AI is used in care or decision‑making.
- Equity and Fairness – AI must be designed and used to minimize bias and ensure equitable access.
- Accountability – AI systems must meet legal, ethical, and regulatory standards, with developers responsible for their performance.
- Oversight and Privacy – Institutions should enforce policies that protect clinical autonomy and patient privacy when using AI.
- Human-Centered Application – AI does not eliminate the need for human interaction.
One of AI’s most promising advantages is its ability to reduce disparities in care, especially among ethnically diverse patients.
“Traditional models often perform poorly for patients from diverse backgrounds who may not know their full family history,” Dr. Basak Dogan, Director of Breast Imaging Research at UT Southwestern’s Simmons Cancer Center, tells SurvivorNet.
“AI provides an objective assessment based on the individual’s biology, which can democratize access to high‑risk screening.”
Dr. Beth Mittendorf, Chief of Multidisciplinary Oncology at Dana‑Farber Cancer Institute and Chief of Breast Surgery at Beth Israel Deaconess Medical Center, underscores the balance ahead: “AI should help us identify risk earlier, tailor prevention more intelligently, and use specialist resources more effectively. The goal is not to replace clinical judgment, but to augment it.”
Learn more about SurvivorNet's rigorous medical review process.
