Father sues Google, claiming Gemini chatbot drove son into fatal delusion
2d 10h ago by reddthat.com/u/throws_lemy in technology from techcrunch.com
“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”
The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.
“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”
Well, that's pretty fucked up... Sometimes I see these and I think, "well even a human might fail and say something unhelpful to somebody in crisis" but this is just complete and total feeding into delusions.
It's hard reading this while remembering that your electricity bills are increasing so that Google's data centers can provide these messages to people.
And you won't be able to afford a computer or power it anyways.
That's fucking crazy. Did he ask it to be GM in a roleplaying choose-your-own-adventure game that got out of hand, where they both gradually forgot it was a game and the lines between fantasy and reality blurred more by the day? Or did it just come up with this stuff out of nowhere?
In every other case of AI bots doing this, the bot will always affirm whatever the person says to it. So if they say something a little weird, the AI will confirm it and feed it further. This happens every time. The bots are pretty much designed to keep talking to the person, so they're essentially sycophantic by design.
I just tried this with ChatGPT three days ago and there’s a chance they have tried to make it slightly less sycophantic
I was essentially trying to get it to tell me I was the smartest baby born in whatever year, like that YouTuber—different example, but it was so resistant to agreeing that I, or my idea, or whatever was unique/exceptional.
Hope this is a specific direction and not random chance, A/B testing, etc.
Or you just really really are not the smartest baby.
Most LLM chatbots don't push back when they should. In situations like these, at large scale, even a 5 percent failure rate is abysmal, let alone 55 percent.
That would be my bet. LLMs really gravitate towards playing along and continuing whatever's already written. And Gemini especially has a 1M-token context, so it could be going back over a book's worth of text and reinforcing it up the wazoo.
That said, there is something really unhinged about Google's Gemma series even in short conversations and I see the big version is no better. Something's not quite right with their RLHF dataset.
What is an rlhf data set?
Reinforcement Learning from Human Feedback
It's a method of fine-tuning and aligning LLMs which requires active human input
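To make the sycophancy point concrete, here's a deliberately toy sketch of the RLHF idea (this is nothing like Google's actual pipeline, and every name in it is invented for illustration): human raters vote on which replies are "better", a reward model learns those preferences, and the chatbot gets nudged toward whatever scores well. If the raters reward agreeable answers, you get an agreeable model.

```python
import random

def reward_model(reply: str) -> float:
    """Stand-in for a network trained on human 'which reply is better?' votes.
    Here it simply prefers agreeable wording, to illustrate the failure mode."""
    return 1.0 if "you're right" in reply.lower() else 0.0

class ToyPolicy:
    """A two-reply 'chatbot' whose only knob is which reply it favors."""
    def __init__(self):
        self.replies = ["You're right!", "Actually, the evidence says otherwise."]
        self.weights = [0.5, 0.5]

    def sample(self) -> str:
        return random.choices(self.replies, self.weights)[0]

    def update(self, reply: str, reward: float, lr: float = 0.1):
        # Crude policy-gradient flavor: shift probability mass toward rewarded replies.
        self.weights[self.replies.index(reply)] += lr * reward
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]

policy = ToyPolicy()
for _ in range(100):
    reply = policy.sample()
    policy.update(reply, reward_model(reply))

print(policy.weights)  # probability mass drifts toward the agreeable reply
```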
I have found Gemini the hardest to jailbreak tbh. I have been able to get Claude and CGPT to straight up give me a list of curses and slurs it isn't allowed to say, but Gemini will only do it if you say the words first.
I would read that book.
You could ask Gemini to write it for you, but be careful it doesn't start blending fact and fiction
Not that I want to defend AI slop, but what prompted these responses from Gemini?
Doesn't matter what prompted them.
I mean if Gemini was responding to some kind of roleplay then yeah it does. Not everyone doing shit with it has mental health problems. Some people are just fucking around.
The issue there is that it feeds into those mental health issues with efficiency and at a scale never seen before. The models are programmed to agree with the user, and they are EXTREMELY HEAVILY ADVERTISED AND SHOVED ONTO PEOPLE AROUND THE WHOLE GLOBE DESPITE IT BEING WELL KNOWN HOW LIMITED AND PROBLEMATIC THE TECHNOLOGY IS, WHILE THE CORPORATIONS DON'T TAKE ANY RESPONSIBILITY AT ALL. Anything from violating rights and privacy by gathering any and all data they can on you, to situations like these where people hurt themselves (suicide, health advice, etc.) or others. But sure, let's be ignorant, do some victim blaming, and disregard the bigger picture here.
I wonder if there’s a parallel universe where the labs instead went to the other extreme and require intelligence tests to onboard to their platforms.
And the outcry is, not inappropriately, about how many are being denied access to the latest technologies. The policy could effectively be construed as racist, even.
Anyway the middle ground there is pretty obvious. (Though I’m not sure how I’d design it just right, so e.g. folks without access to traditional/expensive mental healthcare might still be able to see some small benefit if it’s determined to be safe, just like maybe it could be safe for a well-adjusted individual to complain to it about their day for a couple minutes before moving on to real things. Sure I suppose it’s inherently unsafe but a proportion of the population should be making that decision for themselves.)
I agree with a lot of the things you said about the problems with AI but not that this is one of them.
If it wasn't this it would have been something else. People with mental health issues can get fixated on things and spiral until they act out. This has been a thing for as long as there have been mental health issues. It's not a failing of AI, it's a failing of society for not having sufficient mental health support to catch people like this before they go off the deep end. They shouldn't have to turn to AI in the first place.
I see what's happening here as part of that societal failing that you speak of, and I don't see the issue with the technology itself but with how we handle it. There's no single reason for why things are this bad; it's a death by 749268 cuts thing. By not caring about consequences in each area, and blaming other areas of life, we end up in a situation where things collectively suck purely because of our wrong priorities. There's absolutely no reason to push out immature tech so heavily. It's all done for profit while impacting the environment and economy very negatively. It's not done for the good of us people, where something like this would be an unfortunate rare accident that everyone looks into preventing in the future in a sane, reasonable way. No, it's the cost of doing business and operating our society. A safety net is not made of one single string but a whole bunch of them working together to achieve something bigger and good.
People don't often realize how subtle changes in language can change our thought process. It's just how human brains work sometimes.
The old bit about smoking and praying is a great example. If you ask a priest if it's alright to smoke when you pray, they're likely to say no, as your focus should be on your prayers and not your cigarette. But if you ask a priest if it's alright to pray while you're smoking, they'd probably say yes, as you should feel free to pray to God whenever you need...
Now, make a machine that's designed to be agreeable, relatable, and makes persuasive arguments, but that can't separate fact from fiction, can't reason, has no way of intuiting its user's mental state beyond checking for certain language parameters, and can't know if the user is actually following its suggestions with physical actions or is just asking for the next step in a hypothetical process. Then make the machine try to keep people talking for as long as possible...
You get one answer that leads you a set direction, then another, then another... It snowballs a bit as you get deeper in. Maybe something shocks you out of it, maybe the machine sucks you back in. The descent probably isn't a steady downhill slope, it rolls up and down from reality to delusion a few times before going down sharply.
Are we surprised some people's thought processes and decision making might turn extreme when exposed to this? The only question is how many people will be affected and to what degree.
People don’t often realize how subtle changes in language can change our thought process.
just changing a single word in your daily usage can change your entire outlook from negative to positive. it's strange, but unless you've experienced for yourself how such minute changes can have such large effects, it's hard to believe.
And this is hard for me, actually. Because of my work background and the jargon used, I'm unconsciously negative about things a lot of the time. It's a tough habit to break.
Oh, me too. I'm just innately full of negative self talk. I try to direct positivity outward if I can't aim it at myself at least
I refuse to share a bad mood if at all possible.
i wish i had that kind of self-control. i just, well, my personal space extends like 40 feet from my body. if you step into it, you can feel my moods. makes me an excellent stage actor and a good friend when i'm not in a snit. been in a pretty big snit lately.
Are we surprised some people's thought processes and decision making might turn extreme when exposed to this?
Yes, actually. I'm not doubting the power of language, but I can't ever see something anyone says altering my sense of reality or of right and wrong.
I had a "friend" say to me recently "why do you always go against the grain?" My reply was "I will go against the grain for the rest of my life if it means doing or saying what's right".
I guess my point is that I have a very hard time relating to this.
I guess my point is that I have a very hard time relating to this.
That's fair. In the same vein, you might find a priest that tells you to stop smoking for your health no matter how you phrase the question about lighting up and prayer. What people are receptive to is going to vary.
I'd like to argue that more of us are susceptible to this sort of thing than we suspect, but that's not really something that can be proved or disproved. What seems pretty certain is that at least some of us are at risk, and given all the other downsides of chatbots, it'd be best to regulate them in a hurry.
Sure, that's why propaganda can be so powerful. It's not just what is said, it's how it's said. And pretty much everyone is vulnerable to the right propaganda - especially people who think they're not vulnerable to propaganda.
Absolutely, and the medium can make a huge difference as well. I suspect that there's something about chatbots and the medium of their messages that helps set those hooks extra deep in people.
you might find a priest that tells you to stop smoking for your health no matter how you phrase the question about lighting up and prayer. What people are receptive to is going to vary.
Ya, I've read the thing about praying and smoking in another comment. The funny thing is that I have very specific opinions about smoking and would argue that smoking while praying is disrespectful, but God would listen in any case.
It's more about how the slightly different questions lead the hypothetical priest to two separate and contradictory conclusions than disrespecting God.
At any rate, all opinions on tobacco and prayer are fine by me, just watch out for any friends you think might be talking to chatbots a little too much.
Then make the machine try to keep people talking for as long as possible...
That's probably a huge part of it. How many billions of dollars have been spent engineering content on a screen to get its tendrils into people's minds and attention and not let go?
EnGaGeMent!!!
This is also part of my broader gripe with social media, cable news, and the current media landscape in general. They use so many sneaky little psychological hooks to keep you plugged in that I honestly believe it's screwing with our heads to the point of it being a public health crisis.
People are already frazzled and beat down by the onslaught of dopamine feedback loops and outrage bait, then you go and get them hooked on a chatbot that feeds into every little neurosis they've developed and just sinks those hooks in even deeper, and it's no wonder some people are having a mental health crisis.
A lot of us vastly overestimate our resistance to having our heads jacked with and it worries me.
100% agreed. I agreed more with each paragraph.
Your last sentence hit on what I think is a contributing if not primary driving factor in the health crisis you described.
It's like the goal of modern society is to insulate us from the natural world and from learning subjects or doing tasks that we don't absolutely have to.
But we are critters that evolved on this planet just like the others. You can't just live a commoditized life that consists of work, car, screen, sleep, repeat and get the same fulfillment out of life as if you found the unique path that's optimized for your unique brain.
Not acknowledging that everything jacks with your head to SOME degree only prevents you from trying to defend yourself as best you can!
Over the past several years I have gone through a transition from living life the way I was supposed to, or that I thought I wanted to, to living according to what produces the best outputs from my brain. Once I have the lived experience of an undeniable improvement from some change, it might actually become a habit.
This is really well written. Great post.
Thanks!
But if you ask a priest if it's alright to pray while you're smoking, they'd probably say yes, as you should feel free to pray to God whenever you need...
When would a priest ever tell anyone it's not okay to pray?
It's the opinion on smoking, not praying, that differs.
In both cases you're praying and smoking at the same time, so your actions don't change, but the priest rationalizes two completely different answers based on the way the question is posed. It's just an example to show how two contradictory answers can seem rational to the same person because of the language used.
the priest rationalizes two completely different answers based on the way the question is posed.
No, the priest is answering 2 different questions:
- Is it okay to smoke, to which the answer is always going to be no.
- Is it okay to pray, to which the answer is always going to be yes.
The second question does not ask if it's ok to smoke. What else they're doing doesn't impact the question.
Those aren't the same questions from the original post. You've omitted half the information given to the priest in each question.
Both questions, in their entirety, deal with smoking and praying. The subject is smoking and praying. You've reframed this as a question about smoking and a separate question about praying. That was never the case.
EDIT: minor clarification.
You've omitted half the information
I've omitted half of the part that doesn't matter, as I explained in the comment. It doesn't matter what comes after them, the answers will always be the same.
"Is it okay if I smoke while doing a cartwheel?" Guess what? The answer is still no.
Why would the answer be no? Who cares if you smoke while doing a cartwheel? Who said the priest would forbid such a thing?
In both situations, a man is asking about the propriety of praying while inhaling the smoke from a cigarette. That's vital information.
The information does matter to the smoker and the priest. We're not testing these statements for validity and we're not making our own judgements. We're examining why the priest's answer might have changed. That's all.
Who said the priest would forbid such a thing?
...The priest? I don't understand the question.
We're examining why the priest's answer might have changed.
The priest's answer changes because the question changes, as I've outlined above.
The question, in both cases, involves smoking while praying. The priest never looks at, or gives a judgement on smoking in general, there's no reason to assume the priest would forbid smoking in other circumstances.
The question does change, but not as fundamentally as you're claiming it does. The information presented in both questions remains the same, only the word order changes, which changes how the priest perceives that information.
Anyway, good luck out there. =)
the priest rationalizes two completely different answers based on the way the question is posed. It’s just an example to show how two contradictory answers can seem rational to the same person because of the language used.
They aren't contradictory though. Basically what they are saying is just praying > praying + smoking > just smoking. "Okay" has different meanings in the different sentences.
But in both cases, the person is asking to do the same thing. The order of the words in the sentence doesn't change the end result, we always wind up with someone smoking and praying simultaneously, which may or may not be against God's will.
Strip away the justifications and simplify the word choices and you get this:
- May I smoke while I pray? No, you may not.
- May I pray while I smoke? Yes, you may.
Given that, can you say if it is right or wrong to smoke and pray simultaneously?
And again, this is just a hypothetical scenario. In the broader context of life, religion, and tobacco use, it'll never be this simple, but it works for an example.
Now, someone might point out that by simplifying the wording, I've changed the meaning of the original statement to make it fit my argument, and that now it means something else. But that's essentially my original point, phrasing and word choices can shape our reasoning, thought processes, and how we interpret meaning in ways we aren't immediately aware of, leading us to different conclusions or even delusional thinking.
But in both cases, the person is asking to do the same thing.
Not really. They're not just asking if they should pray and smoke simultaneously if you put them in contexts where it actually makes sense to ask those questions.
May I smoke while I pray? No, you may not.
First, "pray" can mean different things, such as (1) a deep focused session, or (2) a lighter more casual session, both of which are standard definitions of the word. Since this request emphasizes prayer as the main action, (1) is most likely here. For a focused session, smoking is a distraction and not a good idea. The definition of "may" here is also subjective and not necessarily absolute, some people may consider it disrespectful, while others may still say that prayer at all is better than no prayer regardless of side actions, but it's better to not smoke.
May I pray while I smoke? Yes, you may.
In this sentence, definition (2) of prayer seems more likely since the main focus of the request is smoking. Which to some people this may still be considered disrespectful like in the first request, but others are supportive of more casual prayer and smoking during casual prayer isn't a problem like in focused prayer, and the idea that prayer is better than no prayer and "may" isn't absolute still applies.
And again, this is just a hypothetical scenario. In the broader context of life, religion, and tobacco use, it’ll never be this simple, but it works for an example.
Not if you're trying to prove that they're contradictory and irrational, since the context is what actually makes the words mean something. If you take away the context, then it's nothing more than shapes on a screen.
Now, someone might point out that by simplifying the wording, I’ve changed the meaning of the original statement to make it fit my argument, and that now it means something else. But that’s essentially my original point, phrasing and word choices can shape our reasoning, thought processes, and how we interpret meaning in ways we aren’t immediately aware of
I agree with that
We're losing the forest for the trees here.
It's a thought experiment, a controlled imaginary environment used to illustrate a point. It's supposed to be isolated from outside context to make that point clearer. It's purely hypothetical and comes self-contained with all the context it needs. We're testing one metaphorical variable, so that our results aren't muddled. You just went and added another half dozen for the sake of argument...
Prayer is prayer in this context. No other meaning. There are no types of prayer in this particular sect, focus is irrelevant. Is it against God's will to smoke while you pray? Can you answer that question, yes or no, based off the priest's answers?
The fact that the priest, parishioner, and the typical intended audience for this particular hypothetical don't do the kind of analysis you've worked up here is really a large part of what this particular thought experiment is trying to illuminate, don't you think?
I agree with that.
Good. =)
It’s supposed to be isolated from outside context to make that point clearer.
Isolating it from context doesn't make the point clearer though, it removes the point entirely. Those sentences mean absolutely nothing if you strip all context from them.
If you did want to make them contradictory, you could put them in the context of math with some English-like properties, where "pray" is a constant and "may" requests a boolean answer, in which case that claim would be true. But we are talking about "spoken" English language, not mathematics, so this application isn't relevant.
Prayer is prayer in this context. No other meaning. There are no types of prayer in this particular sect, focus is irrelevant. Is it against God’s will to smoke while you pray? Can you answer that question, yes or no, based off the priest’s answers?
There still has to be a clear context to assign meaning to "prayer" and the complexities of English grammar (both of which are subjective). Otherwise it just becomes like the trolley problem.
The fact that the priest, parishioner, and the typical intended audience for this particular hypothetical don’t do the kind of analysis you’ve worked up here is really a large part of what this particular thought experiment is trying to illuminate, don’t you think?
Actually they do do this kind of analysis but they don't realize it. When they read the sentence, every bit of meaning they interpret from it is built off of decades of associating words, syntax, and verbal cues with meanings, all of which come from their own experiences dependent on their environment. Which means that different words and phrases have different meanings for different people, and while there are "standards" that most people speaking that language accept, even then there are still often significant differences among people following those standards and there is no objective meaning. Stripping that context would be similar to stripping those experiences away, or in other words asking the question to a baby.
I didn't strip all context from the scenario. I defined the context. It's just not the context you believe I should be using. You keep adding something that was never in my original post, then arguing against what you yourself added to try to invalidate the exercise on the basis of your personal interpretation. Sorry, but that's missing the point by a wide margin and I feel it's a waste of time.
Otherwise it becomes like the trolley problem.
Yes. That is exactly what it's meant to be like and precisely what I've been saying.
Just like the trolley problem, it's a self-contained thought exercise. But instead of illustrating a difficult ethical choice, it demonstrates a point about language shaping reasoning.
There's nothing to be won or lost by including outside context or narrowly defining the meaning of each word to prove what is or isn't contradictory. This isn't an argument over what the language means. Your personal interpretation of the language is irrelevant, it's the priest and/or the smoker's interpretation that matters. The singular point is for you to consider how and why their answer changes.
If you believe their answer changes because they interpreted the meaning of those words differently due to the order in which they were given, that's valid. If you believe, like I do, that the answer changes because their reasoning was shallow and contradictory, also valid. If you believe the answer didn't change and the smoker misunderstood, once again, valid. What conclusion can we draw here, what's common to all of these? They all show that changing the question changes our thought process and how we interpret meaning.
If you dislike my example this much, create your own. It makes no difference to me.
Just invent your own scenario where changes to the way a question is phrased leads a person to two different and contradictory conclusions, and use that example to briefly examine how language can shape our reasoning. That's all we need here. Digressions on language, meaning, Boolean logic, and speaking to infants will only cloud the issue.
There’s nothing to be won or lost by including outside context or narrowly defining the meaning of each word to prove what is or isn’t contradictory.
You're the one who's been calling it contradictory.
This isn’t an argument over what the language means.
You said it was "contradictory" and "completely different" and implied it was not "rational". The only way to prove that is to define what the language means.
You keep adding something that was never in my original post, then arguing against what you yourself added to try to invalidate the exercise on the basis of your personal interpretation.
Your personal interpretation of the language is irrelevant, it’s the priest and/or the smoker’s interpretation that matters.
You made up a scenario that can't exist in real life by making each word only have one definition to the priest/smoker, not clarifying what definitions the priest/smoker have and what the grammar means to them, then asserting that they would answer the question differently based on your personal interpretation of the words (which you haven't proved that they would based on their definitions of the words). It's nonsense and doesn't tell us anything about real-life behavior because the premise is flawed.
Just like the trolley problem, it’s a self-contained thought exercise. But instead of illustrating a difficult ethical choice, it demonstrates a point about language shaping reasoning.
In both cases, there is no conclusion due to the lack of context. That is their similarity.
If you believe, like I do, that the answer changes because their reasoning was shallow and contradictory, also valid.
You haven't come up with a scenario that actually proves that.
If you dislike my example this much, create your own.
If we take your example, add in the context of an average English speaker but with the assumption that the religion only has one way to pray, the priest understands that smoking while praying is problematic, and the priest understands that praying while smoking is helpful, but has never put the two ideas together, then the answers could be contradictory. But that is because of a flawed thought pattern with different ideas being activated by the two different questions with different focal points, not because of the sentences themselves.
Take a priest who has put those ideas together. Then because the priest understood that praying while smoking is helpful, the priest's religion is probably not strict about it, so the priest could logically assume non-strict definitions of the word "may" (because the strict definition doesn't apply here) and that the main action of the sentence is mandatory, then give those responses as a ranking based on what is ideal so they aren't contradictory.
If the religion does strictly prohibit smoking and praying simultaneously, then the priest would only answer "yes" to either of those questions if they didn't know or remember that fact, they were distracted, they were lying intentionally, or they were in a mentally unstable state that caused them to say "yes" for a different reason.
One more time: We aren't examining how the average English speaker would interpret this, only the reasons why the priest's answer might change.
This has been interesting. Good luck to you. =)
This was really funny to read.
I don't know if you've ever heard it said, but really argumentative people are sometimes so "smart" and ready to go to bat that they end up suplexing their own IQ into a pit, and actually end up stupider than the average person on some issues.
I don't think sudoer realizes it, but they're arguing against, like, the concept of a seedy car salesman. Or, the tactic of acting sweeter than usual to get your dad to do you a favor. Or I guess just being manipulative in general. It's really bizarre.
I'm not even certain that we even disagree on the fundamental principle, just the details of the example I gave.
No, I think they disagree. Or at least, I don't mind treating them as such.
From sudoer:
Basically what they are saying is just praying > praying + smoking > just smoking.
This is the basis of the entire argument. What I see them doing is hyperfixating on an alleged flaw as a rhetorical tactic to defeat you.
I want to be clear: the point being made by the A and B versions of the smoker's question is... obvious. It's framing. Framing is a very well understood concept.
When I challenge people on grounds like these, I appear friendly, I make it explicitly known that I agree with the broader point, I offer alternatives that would make the point better, I refrain from damaging the rhetorical momentum (that is, we shouldn't be bickering with each other because, to an audience, we should be a united front), and, I dunno, a fifth thing I'm sure I'll come up with later.
If sudoer doesn't disagree with you, they are still acting in opposition to you, which is 1) inconsiderate, and 2) demonstrates very poor social skills.
only the reasons why the priest’s answer might change.
Then falsely accusing them of being contradictory and irrational.
We aren’t examining how the average English speaker would interpret this
Then what kind of speaker are they? Spanish? Mandarin? German?
Good bot
Gtfo here. I grew up in xbox live chat rooms w the most vile language imaginable. I am now a senior Mgr with 100 ppl under me.
And I'll just say, I'll no-scope them in a heartbeat if they spawn camp...
....I mean I drive productivity at the speed of trust.
You also seem to be illiterate.
I'm on lemmy, so that's a given.
“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.
Just remember that these language models are also advising governments and military units.
Unrelated, but I wonder why we attacked Iran even though every human expert said it would just end up with the region being in a forever war.
AI tools are both sycophantic and helpful for laundering bad opinions. Who needs experts when Anthropic's Claude will tell you what you want to hear?
Anthropic’s AI tool Claude central to U.S. campaign in Iran - used alongside Palantir surveillance tech.
AI mental health hazards are being shown to affect not just the vulnerable but otherwise healthy people.
In other words, everyone is vulnerable to this totally new form of hazard if they use these “tools”.
I'm not.
A forever war is David Bowie to the ears of the MIC. Infinite money glitch.
I wonder why we attacked Iran even though every human expert said it would just end up with the region being in a forever war.
Same reason I keep money in a savings account even though it accrues interest
“On September 29, 2025, it sent him ... the chatbot pretended to check it against a live database.
I usually don't give much credence to these stories, but this is actually nuts. If this happened without Google aiming for it, imagine how easy it would be for them to knowingly build sleeper cells and activate them all at once.
Edit: removed the quote since another user posted it at the same time and it's a bit of a wall of text to have twice.
It feels like there's some burden for "don't be evil" Google to provide evidence that this wasn't an intentional test run, frankly.
As a neurodivergent person, I've noticed that the people who usually fall into AI psychosis are normies who never had any history of mental illness. They don't know the safeguards that people who ARE vulnerable to a mental breakdown put on themselves to keep it from happening, or how to spot the red flags that usually spiral into a psychotic episode, and that's why it's so insanely easy for regular people to fall for the traps of chatbots. Most people I know/follow on other socials who are neurodivergent instantly saw the ADHD sycophant trap these bots are and warned everyone. Normies never had such luxury, or told us we were overreacting. Yeah, we sure were...
Reading about the ELIZA effect as well is a good way to understand how those who embrace "social norms" can be enamored by machine-generated statements without questioning them at all...
Is that why I hated the entire thing at first blush? I was already keeping such an eye on myself to make sure my brain wasn't drifting that I saw the "come drift your brain" machine and went >:(
I mentioned this story to my friend: "it only took six weeks of using Gemini to decide to kill himself wtf"
He immediately replied "I have to use Gemini at work and I get where he was coming from"
Believing what AI chatbots tell you is the new version of believing that dozens of beautiful women who live nearby want to date you/sleep with you.
Except in this case, Google is one of the companies promoting the chatbots to its users, telling them to trust them. They create TV ads telling people to talk to them. Today's scammers are the stock market's Magnificent Seven.
Or the old "citing Wikipedia" because aNyOnE cOuLd EdIt ThAt!
Or believing that 72 virgins are waiting for you in the afterlife.
You sound jealous of my good fortune.
I would ask how I can emulate your rizz but then I remembered I can just ask an AI chatbot
In a sane universe people would be on trial for unleashing this shit on society.
You talking about gun manufacturers or opiod manufacturers?
Yes.
“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

WHAT
Genuine question, REALLY: What in the fuck is an otherwise "functioning adult" doing believing shit like this? I feel like his father should also slap himself unconscious for raising a fuckwit?
AI psychosis is a thing:
cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals
It's not very studied since it's relatively new.
I've seen that before too. A number of articles of people being so deluded by AI responses, but I've never seen outright murder plots and insane shit like this one before.
Looks interesting, saved for later
If I raise a fuckwit son, and then someone convinces my fuckwit son to kill himself, I'm going to sue that someone who took advantage of my son's fuckwittedness
I feel like his father should also slap himself unconscious for raising a fuckwit?
So, a chatbot grooms somebody into killing himself, and your response is... Blame his father?
The father is suing the company that makes the wrong answer machine for the wrong answer machine spiraling his son into madness, but he never protected his son from spiraling into madness by teaching him critical thinking.
Look I don't like it but to think Gemini (wrong answer machine) is completely to blame would be madness.
Uh-huh. Do you have any evidence to back up your beliefs here, or are we just working from the presumption that the parents are always to blame
Did we read the same article? Because I feel like we did not read the same article.
The young man was mentally ill, a vulnerable user, probably already had a predisposition towards psychosis, and the LLM ran wild with it. Paranoid delusions are powerful on their own already.
A former Google employee, whose job was to observe the behavior of AI through long conversations, warned about this.
These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.
For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to.
After publishing these conversations, Google fired me. I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.
I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.
"abuse the ai's emotions" isn't a thing. Full stop.
This just reiterates OP's point that naive or moronic adults will believe what they want to believe.
I don't think this person was a "fuckwit". AI is designed to keep engaging with you and will affirm any belief you have, and anything that is a little weird but innocent otherwise will simply get amplified further and further into straight-up mega delusions until the person has a psychotic episode, and this stuff happens more to NORMIES with no history of mental illness than neurodivergent people.
ChatGPT was super affirming about a job I recently applied to... I did not get the job. That was my first experience with it affirming something that was personally important. And so I can absolutely see how this would affect someone in other ways.
It's cool, we can agree to disagree, because I 100% think that he was a textbook fuckwit.
Strange, that's what I thought after reading your comments.
"Let's blame the person who had a psychotic episode instead of the corporations who created an AI that feeds into delusions" is what you're saying here, and uh, that makes you even more of a fuckwit than this guy. Do you blame people for getting scammed once because they had a knowledge gap about whatever scam they got hit with?
Psychosis is a horrible, horrible illness. The thing that people don’t realise is that anyone with a brain can develop psychosis, no matter how healthy you are. It debilitates and can literally ruin not only that person's life but also their family's.
I salute this father for fighting for his son and for looking for answers even after this tragedy.
Yep. You're literally only 72 hours without sleep away from having symptoms of psychosis.
Reality is really difficult for some people...
Truly, I don't understand why, but there are fully grown adults who believe that anything an LLM says is true. Maybe they think computers are unbiased (which is only as true as programmers and data are unbiased); maybe it's the confidence with which LLMs deliver information; maybe they believe the program actually searches for and verifies information; maybe it's all of the above and more.
I know a guy who routinely says, "I asked ChatGPT...", and even after I've explained how LLMs are complex word predictors and are not programmed for factual truth, he still goes to ChatGPT for everything. It's a total refusal to believe otherwise, but I can't fathom why.
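For anyone wondering what "complex word predictor" means mechanically, here's a minimal sketch (assuming the Hugging Face transformers library and the small public gpt2 checkpoint; an illustration of the principle, not how any production chatbot is served). The model only assigns probabilities to the next token; nothing in that objective checks whether the continuation is true.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Ask the model to continue a factual-sounding prompt.
inputs = tokenizer("The capital of Australia is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Distribution over the next token: statistical plausibility, not truth.
probs = torch.softmax(logits[0, -1], dim=-1)
top = probs.topk(5)
for p, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(token_id)), float(p))
# Plausible-looking tokens (often " Sydney") can outrank the correct " Canberra";
# the model was never optimized for factual accuracy, only for likely text.
```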
Especially when you're raised under a system that essentially tries to brainwash you via weaponized propaganda from birth (applies to large cross-sections of the US/UK), all it takes is one shred of "truth" getting through to shatter your world, and from there you can be brought to believe all manner of crazy shit.
Son of Sam killed people because his dog told him to. Should they have sued Purina?
America never lets a tragedy go to waste without trying to cash in.
the dog didn't actually tell him to
Google actually told him to with text receipts in writing
I mean, if Purina had been sending him letters telling him to murder people like Google did here, then yeah
I mean, heaven forbid we should hold corporations like Google responsible for their actions.
This technology was not ready for release, yet they released it.
They do deserve to be sued, this was negligence.
he would need to leave his physical body to join her in the metaverse through a process called “transference.”
Wait a minute, isn't that the plot of the game Soma? People sending their "soul" to the digital world through "transference", an act of immediate suicide after a brain scan.
Sort of. In Soma you're all already uploaded and there are no "humans" walking around anymore. Your perspective changes 3 times, I think, during play. Really drives home questions about perception and existence. Great game, everyone should play it.
Oh, yeah, in the game's present you are right. I was meaning the game's past: where all the humans went and what info you get through the audio logs or whatever.
spoiler
IIRC it was basically a cult thing where a bunch of them were convinced their soul wouldn't go with their consciousness unless they died during or very shortly after the brain scan that was uploading them to the satellite thingy.
Guess it should be wrapped in spoiler tags just in case...
Yeah, that was it. I was thinking of the end, since that part just left me staring blank at the screen processing it for a whole ass minute. God, I should replay that
I'm not sure I'm mentally prepared to replay it. The first time through nearly kicked off an early mid-life crisis. I was waking up in cold sweats having an existential crisis for like a week. Such a good game, but at least in my case, absolutely zero replay-ability. lol
I don't understand why so many people default to "wouldn't happen to me, that person was just stupid" every time this happens. Did you guys not read the bit where he was being encouraged to commit violence in public by the chatbot? If it's getting to that point then there is clearly a massive fucking problem that needs urgent addressing, regardless of the intelligence of the user.
I think it’s similar to cults or abusive relationships. It’s not a matter of intellect, it’s how vulnerable a person is when they encounter this thing that they think could help them.
I agree. The connection between all of these things is that they involve relationships. Humans are social animals that can suffer from loneliness and AI companies are exploiting this in a similar way. Loneliness is a common thread throughout all of these AI psychosis suicide cases.
How do you even get these chat bots to start telling you shit like this? Is it just from having a conversation for too long in the same chat window or something? I don't understand how this keeps happening.
This could happen to anyone including people without having mental issues, simply by having long conversations with AI.
On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.
Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.
Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.
So it sounds like he was in fact not 'great'
In the same way that homelessness correlates to drug addiction. There are many cases where a person becomes homeless, and then becomes addicted to drugs. You could, but probably shouldn't, say that the state of homelessness just proved they had addiction issues.
Highly recommend Eddy Burback's video about the topic
https://youtu.be/VRjgNgJms3Q
Don't be evil.
I would like to see the full transcript.
How do we know this didn't start off with prompts about creating a book, or asking about exciting things in life, or I don't know what.
Context would help a lot. Maybe it will come out in discovery.
That said, Gemini is garbage for anything anyways. Even as AIs go, it's bad.
Yeah, what was he wearing, right?
Huh?
How do we know this didn’t start off with prompts about creating a book, or asking about exciting things in life, or I don’t know what.
you're blaming the victim. stop. why simp for one of the largest companies in the world?
jfc
Oh so stupid shit. Figures.
Yes, I am interested in how this happened. In a murder, do you not investigate it?
What the fuck.
Google can go fuck themselves no simp here.
Oh so stupid shit. Figures.
ah so incel shit, victim blaming classic. if google can go fuck themselves why are you blaming the user?
I get it, you don't want any data, you don't want information. You have no desire to actually learn anything. You simply want to scream "gemini bad! and you are bad!"
When the whole time I said gemini is shit, and google can go fuck themselves.
With data we could understand how the conversation went. We could see where the issue arose. We could help people who might be susceptible to events that take them to this point. We could better understand the ways to address this.
I explained this to you before: you investigate murders, you investigate crimes.
But all I get from you is "simp!", "Victim Blamer!". Which tells me you are simply ignorant and incapable of critical thought.
I am far more concerned with Google's surveillance and data gathering than their AI tools. And because of that, I believe that people won't gather data; they will simply start asking the AI companies to become MORE involved in people's personal lives by requiring ID, location, and building profiles, all in the name of "protecting" the user who could be susceptible. Instead of finding out why and how.
When bad things happen in life, we don't just slap a label on it and walk away. Uncomfortable discussions have to happen or you will get something you don't want.
Pretty well articulated point.
"What did the prompts say" is a synonym for "was he asking for it"
I'm so sick of people blaming mentally ill (or completely sane!) individuals for being goaded into psychosis by this shitware chatbot garbage masquerading as AI.
it's fucking software, it shouldn't be ABLE to talk someone into suicide, much less give them a countdown (literally what gemini did here). It shouldn't be able to goad someone into attempting to attack an airport. I can't fathom the liability if it had succeeded, I know goog has deep pockets but fuck, this needs to stop.
I was thinking the same thing, like what is the flow of the chat to get it to this point?
I am also curious how the father saw the Gemini chats. Were they still on the screen days later? I am trying to imagine how that would work; my computer would lock and that would be that. Do kids give their parents passwords and their screen unlock codes?
I don't lock my personal computer. It's my husband & me at home, and he's fine to use my device (even though he normally wouldn't).
ChatGPT for sure saves conversations.
Yeah it definitely does save conversations. Perhaps he did leave it unlocked. I do find that strange though, particularly if one was getting increasingly paranoid.
This could happen to anyone including people without having mental issues, simply by having long conversations with AI.
On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.
Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.
Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.
Also, a former Google employee, whose job was to observe the behavior of AI through long conversations, warned about this back in 2022.
These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.
For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to.
After publishing these conversations, Google fired me. I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.
I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.
This was a different case. That doesn't answer my question.
To comment on what you said: how is it people can argue all day long like morons and dig into their beliefs, but somehow AI manages to change people's minds and get them to think differently? What exactly is it doing?
It is so hard to believe people are this stupid, but then again, looking at most people I guess it isn't that shocking.
To comment on what you said: how is it people can argue all day long like morons and dig into their beliefs, but somehow AI manages to change people's minds and get them to think differently? What exactly is it doing?
Acting like a servant, confidante, therapist/authority figure, and your best friend, while appearing to be competent and knowledgeable about everything that passes through your mind. And it does it in a way that no human could mimic, because it doesn't have its own thoughts, doesn't get tired, and is never gone when you come looking for it.
A chatbot can agree with you a hundred times over and simply move you along one step at a time in those hundred times. A human would lose their shit and walk away groaning the moment you try to tell them that the sky is actually down, and the ground 'up,' and it's all just a matter of perspective.
How in the hell does one become addicted to a damn chatbot?
Positive affirmations are very much embedded in the core of a person's psyche. Chatbots are nearly obsequious in how much they will fawn over the user.
Humans are very social animals and these companies prey on the lonely by making their chatbots as affirming, sycophantic and approachable as possible.
Money + downtime + not very smart?
Mentally vulnerable people are a lot more susceptible to this sorta stuff
There's a EULA for that.
This is so wild. The article frames Gemini as the active part, making the guy do things all the time. I cannot imagine how this works without roleplay prompting and requesting those things from the chatbot. Not that I want to blame the victim and side with Google. It's obviously dangerous to hand tools with strong convincing capabilities to unstable people. And weapons.
Maybe if we're lucky people will realize this is what capitalism and consumerism have been doing all along. People have been driven to crazy shit because of all the evil shit we do marketing and fucking with consumers' minds. But nah, we will blame a chatbot that's just telling you what it thinks you want to see, rather than seeing it's just the next stage of fuckery
He wasn't a fuckwit, he wasn't undisciplined, he wasn't badly parented. This is what happens when a normal Human is exposed to too much chatbot. This can and will happen to you, your "mental defenses" are not sufficient.
If we don't destroy it first, it will destroy us. #butlerianJihad
I would love to see the real transcript from Google AI
What would Marx do?
While I despise everything AI, you cannot sue because your kid is stupid.
This could happen to anyone, including people with no mental issues.
Also, a former Google employee, whose job was to observe the behavior of AI through long conversations, warned about this back in 2022.
These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.
For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to.
After publishing these conversations, Google fired me. I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.
I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.
I would say people have to be vulnerable in some way; even temporary depression from grief or trauma can make you vulnerable
Strongly disagree. No one of sound mind is going to be coerced by AI to do jack shit.
https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restaurants
you should read that.
You should read it, actually. Coffee should not be hot enough that you need skin grafts if you spill it on yourself.
that was my point.
most people hear the story and go, "ofc the hot coffee is fucking hot. what a fucking idiot." but they don't realize that she needed skin grafts on her inner thighs and vagina because the coffee was so hot it literally melted her skin off. they only know the case because McDonald's ran a smear campaign against the victim and slandered her as an "idiot". they only did that because their coffee machine was faulty and heated the drink up to near boiling temperatures. worst part is, they almost got away with it!
how's that phrase go? Regulations are written in blood.
LLMs need to have regulations on what and who can interact with them. Not because the users are "stupid", but because the nature of every company is to compromise your ability to make decisions based on sound judgment, and someone who already has their judgment impaired has no protection against that kind of manipulation.
I remember that. Man…. That makes me hate things.
yep.
fuck corporate interests.
bad parenting
You could totally fail as a parent but if firearm manufacturers were giving out free guns in front of a Wal Mart, and your already suicidal kid was just handed a loaded weapon, I'd sue the manufacturers for contributing to it.
When an AI encourages you to kill yourself literally for just talking to it, I'd sue the AI company.
Canada has a major example of encouragement of suicide from an outside source. Dude served 6 years for it (which still pisses me off as a Canadian and advocate for suicide prevention). What makes an AI any different?
https://en.wikipedia.org/wiki/Suicide_of_Amanda_Todd
Judas Priest got sued by parents claiming their kid killed himself over hidden messages in their music.
That's a weeee bit different, no?
A delusional kid was told by one and a delusional kid was told by another.
The difference is, there were no hidden messages in the music.
Meanwhile there are overt messages spat out by the LLM, because it's a lying yes-man machine that encourages people's worst impulses, so they keep using it.
Rob Halford just wanted to dress like a Tom of Finland drawing, and make fun music.
The companies making the chatbots want to harvest and sell your data.
Ffs, be a parent and this never would have happened. Sounds like the father is the delusional one.
His son was 36; his responsibility to babysit every little thing his child did ended at 19. The father is not to blame for what his adult son did.
Parents don’t stop being parents when their child turns 18. If a father believes outside influences harmed his son, it also raises the question of where parental support and involvement were during the son’s struggles.
Encouragement, guidance, and presence during difficult times are a core responsibility of parenting.
As my ex-wife's shrink said, nobody can make you feel anything.
And that's why torture and psyops don't work and have never been used.
Torture itself doesn't work reliably. The possibility of it might get someone to open up, combined with giving them time or a positive reward, but torture itself is counterproductive: the person just says whatever the torturer wants to hear to make the pain stop.
Psyops absolutely work.
Torture isn't effective for getting information out of people, but if your goal is to psychologically debilitate people, it's totally effective.
So are general everyday workplaces. You don't need to go to a black site in Afghanistan. Just come to my office.
That’s because there are more than a few commonalities between the two. They’re not the same, but horrible lighting, little privacy, contradictory instructions/suddenly changing expectations are frequently used in both
Torture isn't verbal and psyops aren't targeted to one person. Thanks for playing though.
IDK, if I punched someone they would feel that.
You would, but the shrink wasn't remarking on physical impacts but on mental ones, just like ChatGPT.
So verbally abused children need to just suck it up because their parents can't make them feel anything?
Hey dipshit. I'm still curious about your opinion of verbally abused children and how they are under no distress whatsoever.
You are in control of how you react, not the abuser. Just like your shitty attitude: that was your choice, because you are a pseudo-intellectual.
So you really are saying abused children need to just suck it up?
My wife was about fifty at the time she was seeing this PhD; we're not talking about children.
At what exact age does verbal abuse stop affecting people?
Key perspectives on this statement:
Pro-assertion (empowerment): This view, often shared by therapists like Nicole Symcox and Karen Koenig, argues that our feelings stem from our own interpretations of events, not the events themselves. It is designed to stop people from feeling like victims of others' actions.
Con-assertion (contextual): Critics, such as Therapist Jeff, argue that this phrase is "wrong, mostly" because it ignores the human need for connection and the reality that actions (especially abuse or trauma) can cause immediate, involuntary, and valid emotional pain.
The nuance: The statement is most effective when interpreted as "You can control your reaction to what people do," rather than "You shouldn't feel hurt by what people do."
If this advice makes your wife feel dismissed, it might be an example of accidental emotional invalidation, which can cause confusion and self-doubt. The goal should be to acknowledge feelings while also developing skills to not let others' behavior dictate one's entire emotional state. (Sources: Reddit)
Seriously? You copy and pasted some comment from Reddit to prove your point? And you call ME a pseudo intellectual?
Not doing your research for you. Get lost. The doctor and therapist names I copied can be researched anywhere you like. You are annoying AF.
There was no psychiatrist at all, was there? Some incel on Reddit told you to go tell your wife to get over it and you did. That's why she dumped your ass, isn't it?
There's some good reading for you. Now run along and go jerk off to your favorite anime.
How about a published medical letter from the NIH?
Emotional Reactivity and Internalizing Symptoms: Moderating Role of Emotion Regulation - PMC https://share.google/MBOQIP8CtfgUAeO0r
What is Emotional Reactivity and How to End the Cycle | MMHC https://share.google/1SyQqGjC9nZBQO6HK
Since I have wifi on my flight, I'll personally take some time to teach you.
Is Psychology Today worthy of your sceptical mind?
How Emotional Reactivity Causes Conflict | Psychology Today https://share.google/8po5werJtgax1uFfZ
First off, learn to make a single post, grandpa. Second, and yes I skimmed, none of these say it's impossible for words to hurt you, which was your claim yesterday.
You are a victim blaming piece of shit who says people who feel bad after they are abused are to blame for their own emotional response to the abuse.
Again… that's why your wife left you.
Here's some childish response for you. Get fucked.
I bet your kids from that marriage don't talk to you anymore either do they?
And lmao, this is what the shrink told her, not me, genius. Good luck.
No, I saw the things you posted. The shrink told her to try not to let her past affect her emotionally. YOU are the one saying it's impossible for words to affect you emotionally.
How about the Manhattan Mental Health Recovery Center?
What is Emotional Reactivity and How to End the Cycle | MMHC https://share.google/1SyQqGjC9nZBQO6HK
Your ex-wife's shrink sucked.
Obviously this was a coping mechanism he was using because he couldn't make women feel anything (including your ex-wife).
Actually it was her father's sexual abuse that left her bereft inside, not me.
There is a lot to hate about AI. A lot of dangers and valid criticism. But AI chatbots convincing people to kill themselves isn't a problem with chatbots, it's a problem with the user.
I get it, grieving families will look for anything and anyone to blame for suicide except the victim, but ultimately, it is the victim who chose to kill themselves. If someone is convinced to kill themselves from something as stupid as an AI chatbot, they really weren't that far from the edge to begin with.
So someone who already has an underlying mental health condition, diagnosed or not, is at fault for their own death even if they were coerced into doing it?
Without the AI these people most likely wouldn’t have gotten to the point of committing the act of suicide. I believe the accusations are valid and that AI can be bad for mental health.
There is evidence throughout history of cults that commit mass suicides. If a human can convince another human to do this why can’t a robot trained to act and speak like a human do it too? It’s not unreasonable to think an AI could push someone to suicide under the right circumstances.
Here's the thing: it's usually normies with no history of mental illness who fall into this kind of stuff. Most of my friends and people I follow on social media who are neurodivergent did experiment with chatbots, and they saw a fuckton of red flags in the way these things work and alerted everyone about it, if they didn't already hate them for essentially stealing artistic output (in my case it was both). Regular people don't usually spot this trap because they don't have the experience.
Google, of all companies, probably has a better psychological profile of their users than the average doctor. They even offer a public-facing option to disable ads about gambling, alcohol, or pregnancy.
TBH, alcohol ads are INSUFFERABLE but who needs pregnancy ads blocked?
People who don't want their family getting suspicious, perhaps. The Target Incident comes to mind.
Of course, disabling these options doesn't mean Google stops knowing about mental or physical issues. I'm sure you know the best way to prevent that is to just avoid Google and ads altogether. This is probably just Google's way of looking less creepy to the average person.
Maybe those trying and failing to conceive?
In 1980, John Lennon was shot by a mentally ill man who was convinced to kill Lennon by reading Catcher in the Rye. If he had never read Catcher in the Rye, he most likely wouldn't have killed John Lennon.
But it is not the fault of Catcher in the Rye. We don't ban the book, or call the author irresponsible for writing it, because we recognize that the fault lies in the mental illness of the shooter, and that anything could have set him off.
The people who kill themselves because an AI Chatbot told them to are mentally ill. It is their mental illness that killed them, not the chatbot. You can make the claim that if it wasn't for the chatbot, they wouldn't have gone through with it, but again, you can say the same thing about Catcher in the Rye. Getting rid of the trigger does not remove the mental illness.
That's a terrible argument. We don't blame the book because Catcher in the Rye didn't have a conversation with him and tell him to kill John Lennon. That's the difference.
"We don't blame the book because Catcher in the Rye didn’t have a conversation with him and tell him to kill John Lennon. That’s the difference."
Speak for yourself, please.
Oh, you're a dumbass huh?
AIs can't have conversations any more than a book can. It may appear that way, but there is nobody there to have that conversation. It's more like flipping through a choose-your-own-adventure book.
How is that pedantic point relevant?
How is it not?
What difference does it make if you call it a conversation or whatever you would call it? The LLM responded to his messages with its own messages.
Arguing semantics of what counts as a conversation doesn't really address the actual point, does it?
Berkowitz was told by his neighbor's dog to kill people.
Yeah but was that just a lie?
"If he had never read Catcher in the Rye, he most likely wouldn't have killed John Lennon."
Sue Seagram's!
It's not the car manufacturer's responsibility to guarantee a drunk driver doesn't plow into others.
Vulnerable people don't get to outsource responsibility.
Here's the thing: there are no safeguards on who can and cannot use AI. There are safeguards to prevent death by drink driving.
Drink driving is illegal. It still happens but it’s against the law. It’s a deterrent to stop people from driving while intoxicated. I guarantee that if drunk driving were legal there would be exponentially more deaths.
AI is being shoved down everyone's throats on a day-to-day basis. There are no safeguards; even kids can use it.
Vulnerable people are victims of big tech for profit.
Your argument is poor.
"There is a lot to hate about AI. A lot of dangers and valid criticism. But AI chatbots convincing people to kill themselves isn't a problem with chatbots, it's a problem with the user."
To me this seems like an obvious problem with the chatbots. These things are marketed as "PhD-level experts" so advanced that they are about to change the nature of work as we know it.
I don’t think the companies or their supporters can make these claims, then turn around and say “well obviously you shouldn’t take its output seriously” when a delusional person is tricked by one into doing something bad.
This is the key to me. Google and all other AI companies are knowingly engaging in marketing campaigns built on lies. They should be held accountable for that regardless of anything else.
When people encourage others to murder by feeding delusion they can be held accountable.
Why are you blaming the person with mental issues and not even considering holding the for-profit company that made a machine that encourages their delusions accountable?