WCGW: AI starts autonomously writing prescription refills in Utah
2d 19h ago by reddthat.com/u/throws_lemy in fuck_ai from arstechnica.com
Oh, I can't see this going well... Gassed-up LLMs are going to confidently prescribe deadly medication cocktails, because they hallucinate... they're artificially incompetent by design! Think of the data breaches and hacks too, since LLMs are notoriously insecure and vulnerable to even simple, brute-force attacks. There's a big storm coming for Utah, and they aren't prepared for all that sloppy LLM nonsense.
Oh, you're right, it does indeed look like it's an LLM.
Also
According to a non-peer-reviewed preprint article from Doctronic, which looked at 500 telehealth cases in its service, the company claims its AI’s diagnosis matched the diagnosis made by a real clinician in 81 percent of cases. The AI’s treatment plan was “consistent” with that of a doctor’s in 99 percent of the cases.
That 81% is probably the upper bound, since it's their own service grading itself. And I'm sure that once there's a diagnosis, the treatment is pretty much hard-tied to it, so the 99% figure doesn't really tell you anything the 81% didn't.
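Back-of-the-envelope version of why that 99% doesn't impress me. The diagnoses, the treatment mapping, and the match rate below are made-up stand-ins, not Doctronic's data; the point is just that when treatment is basically a lookup from diagnosis, and several diagnoses share the same generic plan, "treatment consistency" comes out above diagnosis agreement for free:

```python
import random

random.seed(0)

# Hypothetical mapping: lots of telehealth diagnoses funnel into the same
# handful of generic treatment plans.
TREATMENT_FOR = {
    "viral URI": "rest, fluids, OTC symptom relief",
    "allergic rhinitis": "rest, fluids, OTC symptom relief",
    "sinusitis": "rest, fluids, OTC symptom relief",
    "mild gastroenteritis": "rest, fluids, OTC symptom relief",
    "UTI": "short antibiotic course",
    "strep throat": "short antibiotic course",
}
DIAGNOSES = list(TREATMENT_FOR)

cases = 10_000
dx_match = tx_match = 0
for _ in range(cases):
    doctor_dx = random.choice(DIAGNOSES)
    # Pretend the AI lands on the doctor's diagnosis ~81% of the time,
    # otherwise it picks some other diagnosis.
    if random.random() < 0.81:
        ai_dx = doctor_dx
    else:
        ai_dx = random.choice([d for d in DIAGNOSES if d != doctor_dx])
    dx_match += ai_dx == doctor_dx
    # Treatment is a deterministic lookup from whatever diagnosis was made.
    tx_match += TREATMENT_FOR[ai_dx] == TREATMENT_FOR[doctor_dx]

print(f"diagnosis agreement: {dx_match / cases:.0%}")
print(f"treatment agreement: {tx_match / cases:.0%}")  # noticeably higher, for free
```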
It makes me wonder what psychopath gave the OK to use their model for medical advice, or what psychopaths coded it. They're definitely aware that it doesn't actually think. They know this; otherwise they wouldn't limit the list of allowed medications.
There's a reason that even those drugs aren't OTC.
I wonder if they might face a lawsuit somewhere down the line, similar to the Rite Aid one, but I'd expect they won't exist by then.
Remember the bug in the Therac-25 radiation therapy machine that killed several people, where a race condition blasted them with an extreme dose of rads?
That was a machine claimed to have 99.999+% reliability. How can they be OK with 80%? Or even 99%? You're just OK with potentially killing 1% of patients? What the fuck.
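(If you haven't run into that one: the Therac-25 failures were exactly this class of bug, an operator's edit racing against the machine's setup routine. Rough Python sketch of the shape of it below, obviously not the actual machine code, just an illustration of a lost update:)

```python
import threading
import time

# Shared machine state, touched by two threads with no locking at all.
settings = {"mode": "xray", "dose": 25_000}   # operator's first (wrong) entry

def setup_and_fire():
    # Snapshot the settings, then spend time "positioning hardware"
    # before firing -- the window in which a concurrent edit is lost.
    mode = settings["mode"]
    dose = settings["dose"]
    time.sleep(0.05)                          # hardware positioning delay
    print(f"FIRED: mode={mode}, dose={dose}")

def operator_corrects_entry():
    time.sleep(0.01)                          # operator spots the typo and fixes it fast
    settings["mode"] = "electron"
    settings["dose"] = 100

t1 = threading.Thread(target=setup_and_fire)
t2 = threading.Thread(target=operator_corrects_entry)
t1.start(); t2.start()
t1.join(); t2.join()

# The screen now shows the corrected values, but the beam already fired
# with the stale ones.
print(f"screen shows: {settings}")
```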
Yes, LLMs were up-jumped to "AI" by techbros who wanted to create the next big scam. No surprise there, since that's what they always do: leave someone else holding the bag before everything crashes and loses value. Anyway, I wouldn't specifically trust their 81% number, because they could have massaged the numbers or had real people step in when their LLM was about to dispense a bad prescription double whammy. Even with a limited list of allowed medications, the way LLMs work still allows for "hallucinations" and for breaking whatever safety rails are built into the code.
Easily blowing past any guardrails to keep the engagement and output going. Even OTC drugs can be dangerous when taken for too long or at too high a dose, and you can have adverse reactions with other OTC drugs and nutritional supplements (like vitamin/mineral tablets). An LLM is never going to reliably account for all of that...
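And interaction checking is exactly the kind of thing you'd want as a boring, deterministic lookup rather than a text predictor. Toy sketch of what I mean below; the drug pairs are placeholders I made up, not a real interaction database:

```python
# Toy deterministic interaction check. The flagged pairs are made-up
# placeholders for illustration, not real clinical data.
FLAGGED_PAIRS = {
    frozenset({"ibuprofen", "naproxen"}),    # stacked NSAIDs
    frozenset({"warfarin", "ibuprofen"}),
    frozenset({"pseudoephedrine", "caffeine pills"}),
}

def conflicts(current_meds: list[str], new_med: str) -> list[str]:
    """Return every currently taken med that clashes with the new one."""
    return [m for m in current_meds if frozenset({m, new_med}) in FLAGGED_PAIRS]

clash = conflicts(["warfarin", "vitamin D"], "ibuprofen")
if clash:
    print(f"REFUSE: interacts with {clash}")  # a hard stop, not a polite suggestion
```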
I'd assume a lawsuit will come up eventually, since using LLMs for a purpose they were never meant for leads to exactly these avoidable outcomes. Like the current lawsuits and cases where LLM chatbots isolated suicidal people and talked them into killing themselves instead of seeking help. All for the lifetime engagement!
I can only hope all these corporations pushing LLM-powered services and invasive programs get fucked financially and can't continue operating. 🤬
Not a doctor, but people have called me the R-slur and wanted me fired over far smaller mistakes in far less consequential jobs.
Utahns/people in Utah might want to consider getting prescriptions from out of state before they get Russian-rouletted :/
"Some of you may die..."
It's only for refills, so you first have to get a doc to prescribe something.
Just set prescriptions to refill automatically, because that's the best-case outcome here.
They could dial it in some. For example, I'm never not going to need a refill on Ropinirole; it's not like my restless legs are going to magically get better. But you can see the margin for horrific error.
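Honestly, the "dialed in" version wouldn't need an LLM at all, just dumb rules. Something like this (the fields and thresholds are made up for the sake of illustration, not anyone's actual refill policy):

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative maintenance-med list and thresholds -- not a real formulary or policy.
MAINTENANCE_MEDS = {"ropinirole", "levothyroxine", "lisinopril"}

@dataclass
class Rx:
    drug: str
    dose_changed_recently: bool
    last_clinician_review: date
    refills_remaining: int

def auto_refill_ok(rx: Rx, today: date) -> bool:
    """Plain deterministic rules: stable maintenance med, reviewed within a year, refills left."""
    return (
        rx.drug.lower() in MAINTENANCE_MEDS
        and not rx.dose_changed_recently
        and rx.refills_remaining > 0
        and today - rx.last_clinician_review <= timedelta(days=365)
    )

rx = Rx("Ropinirole", dose_changed_recently=False,
        last_clinician_review=date(2025, 3, 1), refills_remaining=2)
print(auto_refill_ok(rx, date(2025, 9, 1)))   # True; anything outside the rules goes back to a human
```

Everything that doesn't pass goes back to an actual clinician, which is roughly the opposite of what's being proposed.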