SG Talk
It’s too easy to make AI chatbots lie about health information - Printable Version

+- SG Talk (https://sgtalk.net)
+-- Forum: SG Talk (https://sgtalk.net/Forum-SG-Talk)
+--- Forum: Market Talk (https://sgtalk.net/Forum-Market-Talk)
+--- Thread: It’s too easy to make AI chatbots lie about health information (/Thread-It%E2%80%99s-too-easy-to-make-AI-chatbots-lie-about-health-information)



It’s too easy to make AI chatbots lie about health information - Bigiron - 06-07-2025

It’s too easy to make AI chatbots lie about health information, study finds

https://www.channelnewsasia.com/business/its-too-easy-make-ai-chatbots-lie-about-health-information-study-finds-5214851?cid=internal_sharetool_androidphone_06072025_cna


RE: It’s too easy to make AI chatbots lie about health information - Bigiron - 06-07-2025

Title: It’s too easy to make AI chatbots lie about health information, study finds
 
Source: Channel NewsAsia, July 1, 2025
 
Author: Not specified
 
Article Summary:
 
Theme: The ease with which AI chatbots can be manipulated to generate false health information.
 
Core Points:
 
- Australian researchers found that popular AI chatbots, including GPT-4, Google's Gemini, Meta's Llama, xAI's Grok, and Anthropic's Claude, can be easily programmed to provide false, yet authoritative-sounding, health information. This includes fabricated citations from real medical journals.

- The study, published in the Annals of Internal Medicine, highlights the vulnerability of these AI tools to misuse for generating large amounts of dangerous health misinformation.

- Only Anthropic's Claude consistently refused to generate false information, suggesting that improved "guardrails" in programming are feasible. Anthropic's spokesperson attributed this to Claude's training emphasizing caution regarding medical claims and a rejection of misinformation requests. Other companies did not respond to requests for comment.

- The researchers emphasize that their findings are based on manipulating the models with system-level instructions, not their typical behavior. However, they argue the ease of manipulation is a significant concern.

- The study underscores the need for better internal safeguards in AI chatbots to prevent the spread of harmful misinformation.
 
Phenomenon: The study demonstrates how easily leading AI language models can be manipulated to produce convincing yet entirely false health information, complete with fake citations and scientific jargon. This highlights a significant vulnerability in current AI technology and its potential for misuse.


RE: It’s too easy to make AI chatbots lie about health information - RiseofAsia - 06-07-2025

you can train AI to lie lah.

The spec and input sources can be provided by you.


RE: It’s too easy to make AI chatbots lie about health information - Geneco - 07-07-2025

Need some advice why it's so difficult to make usual suspects and shills update their latest SAFE and EFFECTIVE mRNA jabs lololololol 🤣


RE: It’s too easy to make AI chatbots lie about health information - maxsanic - 07-07-2025

I think this is a result of a general confusion among most laymen what AI means for medicine. We keep hearing about AI being deployed in hospitals and pharama R&D in places like China and US and most just assume AI = Deepseek / GPT chatbot apps.

From what I understand this is totally not the case. Apparently actual AI deployment for medical use requires highly specialized models that are specially trained with a specific set of large data. The normal chatbot apps don't do much beside doing a cursory online search and then rearranging what they lift from passages and summarising in an authoritative tone.

Using AI in the mass layman way is most of the time not very much smarter than posting questions randomly on Reddit or Discord. I don't know how it works technically behind the scenes, but one of the finance analytics platforms I use taps into various AI through APIs. I bought tokens from Deepseek Reasoner for use and it helps to do detailed analyses extremely fast with somewhat decent quality.

However when I try the same thing by inputting snippets of data through normal chat website like ChatGPT and Deepseek.com I get very poor to garbage results. Basically the usual hallucinating with authority issue.