It’s too easy to make AI chatbots lie about health information
#5

I think this is a result of a general confusion among most laymen about what AI means for medicine. We keep hearing about AI being deployed in hospitals and pharma R&D in places like China and the US, and most people just assume AI = Deepseek / GPT chatbot apps.

From what I understand this is totally not the case. Actual AI deployment for medical use apparently requires highly specialized models trained on specific, large datasets. The normal chatbot apps don't do much besides running a cursory online search, rearranging the passages they lift, and summarising them in an authoritative tone.

Using AI the mass-market layman way is, most of the time, not much smarter than posting questions randomly on Reddit or Discord. I don't know how it works technically behind the scenes, but one of the finance analytics platforms I use taps into various AI models through APIs. I bought tokens for Deepseek Reasoner to use with it, and it does detailed analyses extremely fast with reasonably decent quality.
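For what it's worth, the API route such platforms use probably looks roughly like this. DeepSeek's API is OpenAI-compatible and `deepseek-reasoner` is its real model name, but the prompt, helper function, and system message here are just my illustration, not the platform's actual code:

```python
import json

# DeepSeek exposes an OpenAI-compatible endpoint: POST /chat/completions
# with a bearer token. This helper only builds the request payload;
# the actual HTTP call (commented out below) needs a real API key.
DEEPSEEK_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt: str, model: str = "deepseek-reasoner") -> dict:
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [
            # Hypothetical system prompt for a finance-analytics use case.
            {"role": "system", "content": "You are a financial data analyst."},
            {"role": "user", "content": prompt},
        ],
        "stream": False,
    }

payload = build_request("Summarise the attached quarterly revenue figures.")
print(json.dumps(payload, indent=2))

# With an API key, the call itself would look like (pip install requests):
# import requests
# resp = requests.post(
#     DEEPSEEK_URL,
#     headers={"Authorization": "Bearer <YOUR_API_KEY>"},
#     json=payload,
#     timeout=60,
# )
# print(resp.json()["choices"][0]["message"]["content"])
```

The point is that the platform controls the system prompt, the model choice, and the data it feeds in, which is a very different setup from pasting snippets into a public chat box.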

However, when I try the same thing by pasting snippets of data into the normal chat websites like ChatGPT and Deepseek.com, I get very poor to garbage results. Basically the usual hallucinating-with-authority issue.
Messages In This Thread
It’s too easy to make AI chatbots lie about health information - by Bigiron - 06-07-2025, 07:34 PM
RE: It’s too easy to make AI chatbots lie about health information - by Bigiron - 06-07-2025, 07:36 PM
RE: It’s too easy to make AI chatbots lie about health information - by RiseofAsia - 06-07-2025, 10:12 PM
RE: It’s too easy to make AI chatbots lie about health information - by Geneco - 07-07-2025, 01:55 AM
RE: It’s too easy to make AI chatbots lie about health information - by maxsanic - 07-07-2025, 11:39 AM