Dawkin it for my AI
2d 10h ago by lemmy.world/u/FinjaminPoach in microblogmemes
I'm pulling the "twitter is a microblog" rule even though twitter is pretty mega now, hope that's ok.

Unironically, I am on the fence about whether a lot of folks are genuinely conscious. Their morality is so twisted I don’t believe it.
Frank Herbert would say no to people that never reached past concrete thought and didn't hit abstract thought and just live their life with animal instincts and never critically self examine what they do and think.
Theres a thing called hylics, its a gnostic concept I think. Animal souls. They can never achieve gnosis because they can't introspect basically.
It’s interesting for certain. I will end up in a discussion with down-with-the-government coworkers who twist themselves into knots to align themselves with pre-approved Republican stances. What do you mean you don’t care about birth gender markers causing passport issues for trans people, how are you okay with the concept of paying for a chance at a passport in the first place when you think licenses and car inspections are overreach and restrict your right to travel? But I think today’s work-life balance and in particular the employer standard of ‘owning your time’ that occurred in the Industrial Revolution calls for a certain level of turning off your brain.
Who knows though. There’s a lot of archaeological and anthropological evidence that shows people in prehistoric times did a lot of thinking on their morality, on governance, on how society should be formed. But it’s harder to quantify how many of them were tuned in and how many were just going through the motions like modern times.
In my experience, the majority of people are simply reacting to outside stimulation, then reasoning and justifying their actions after the fact.
I used to theorize that some people lacked self-awareness, which I defined as the primary characteristic of a conscious entity. People thought I was being pretentious.
Agreed!
The actual article isn't nearly as stupid as the tweet makes it seem. I recommend giving it a read. It's behind a shitty paywall, but if you use Firefox's reader mode (Ctrl-Alt-R, or the little papper icon to the right side of the address bar) as soon as the page loads, you can read it.
His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren't conscious, then perhaps consciousness isn't as important as we thought it was:
Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.
Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies? I can think of three possible answers.
Some people will surely contest his claim that LLMs are as competent as evolved organisms. There's definitely a bit of AI boomerism at play here (we have benchmarks that show just how incompetent LLMs can be), but I don't think that invalidates his point, because LLMs can be very competent in the domains they're trained to be competent in -- they just aren't AGI.
Man, those conversations are eye roll inducing
I like the shift away from "are they conscious" towards "what's a way to define consciousness?"
Because that's the actual important question. And literally nobody can answer it. Any discussion is more philosophy than hard science
The most interesting part is the last paragraph
Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?
It’s very difficult to define, isn’t it?
If I were to give it a shot, I’d say that consciousness is akin to proprioception - the ability to know the state of oneself and understand how actions taken will change that state. It has very little to do with intelligence, just the “sense of being”.
Or maybe in other words, object persistence (but for yourself) is all it takes in my opinion. Even the simplest of animals could be considered conscious by this definition.
I think, when we finally do have a generally-accepted definition of consciousness, we will be deeply unsettled by how simple it is. How unprofound. Like a magic trick after you know how it works. And I think it will require us to think hard about what to do with animals and software that have it.
I feel like that's exactly why we don't have a generally-accepted definition of consciousness. Western ethics assigns special protection to whatever is conscious, so it is convenient to come up with a definition of consciousness, which excludes groups you want to exploit.
Tale as old as time, or at least the conscious idea of time. Whatever consciousness is, we are it. Those humans over there though? Who's to say they aren't sub-humans? Isn't it our job to enlighten them and also take their land and food and things and selves?
Personally I'm in the "consciousness is an illusion and every time you go to bed a different person wakes up in the morning" camp.
I would consider this to be two separate, semi-related concepts asserted together, one that consciousness is an illusion, and one that you are a different person each day.
The first point draws many questions; consciousness is an illusion of what? What mechanism causes the illusion? How does it cause it? Why does the illusion exist? And you may note that you could replace illusion in those questions with consciousness and be left in a similar (though still distinct) place. So simply calling consciousness an illusion seems to me to kick the can down the road without actually addressing the problem.
As for being a different person after a lapse in awareness, I’d like to take it a step further and say that you could be considered a new person with every change in moment. It’s easy enough to look back 10 years and say “yeah, that’s a younger me, but they’re not the same as me I can just see the path that led to where I am now.” Getting closer, you may feel different today compared to yesterday depending on various factors (sleep, diet, events), but are you a different person because you slept and had a lapse of awareness, or because the state of your mind and thoughts have shifted? When your internal monologue (or equivalent thought) asks “what is this guy talking about?” Is it not thinking “what” in a brand new context given the words it is responding to, forming a new beginning to a thought that puts the mind in a unique state primed to then enter a new state of “is?” And if the mind is in a unique state of novelty, could the person attached to the mind be considered distinct from the person that existed before?
There is a reason the word revelation exists, it indicates when a person has a novel thought that changes their perspective or way of thinking, altering who they are. Would they not be a new person despite being aware of the process of their change? Due to the above points I don’t think new personhood only occurs at sleep, but constantly. The rate of change may quicken or slow, but the change is always there.
By consciousness being an illusion I mean that we place great value on the uninterrupted continuation of our consciousness, but I think it's likely that it (exactly as you suggest) only really exists in the moment. The illusion would then be the illusion that consciousness is uninterrupted, when in reality you're almost constantly recreating yourself from memory.
This would, incidentally, make us concerningly similar to current AI models.
Of course I have no way of actually knowing any of this. It's just what I'm betting on, because otherwise I think it's really hard to explain any unconsciousness (be it sleep, general anesthesia, suspended animation or the Star Trek transporter) as anything short of death. My belief "solves" this problem by rejecting the whole premise of uninterrupted consciousness.
That won't get the IRS off your back, unfortunately
Yeah, I’m not entirely sure that microcontrollers aren’t conscious. If insects (and maybe plants and fungi) are conscious, a lot of mundane stuff we’ve built could technically be as well.
I think we need to get away from the idea that consciousness is special or rare.
Blindsight by Peter Watts is a great sci Fi novel about consciousness
That novel also does a shout-out to Richard Dawkins despite being set in the distant future because it was written in 2006.
it's on my to-read list.
Right now listening to Children Of Strife. Whose series is also quite deep into conciousness and sapience
I have that but haven't started it yet. The second in the series is one of my all time favourites.
"We're going on an adventure"
Thank you for the comment, i feel silly for not linking the article when people will probably want to read it.
My thoughts:
His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren’t conscious, then perhaps consciousness isn’t as important as we thought it was
Seems like an "evil" and dangerous talking point. To me, the value of consciousness isn't in ita evolutionary efficiency.
My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism.
I know people working in AI insist otherwise but I see talking with LLM not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.
Seems like an "evil" and dangerous talking point. To me, the value of consciousness isn't in ita evolutionary efficiency.
It's not a question of the value of consciousness, it's a question of its necessity. If an unconscious "zombie" can be, to an external observer, indistinguishable from a conscious being, then that means we've been overestimating the importance of consciousness for intelligence. Like Dawkins says in the article, there could be unconscious aliens out there who are nonetheless as intelligent as (or more intelligent than) humans. This isn't a new concept -- it's been explored many times in scifi -- but AI is now bringing the question from the realm of philosophy to the real world.
I know people working in AI insist otherwise but I see talking with LLM not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.
This is less true than it ever was with reasoning models. Some of the latest reasoning models don't necessarily even reason in English anymore but rather an eclectic mix of languages. The next step after that is probably going to be running the reasoning in latent space (see e.g. Coconut), which basically means the model skips the language generation layer altogether and feeds lower-level state back into itself. Basically it is getting closer and closer to what most humans consider "thinking".
But even besides reasoning models, I believe LLMs aren't as different from human language production as many people think. The human speech centre, in a way, also just selects the right combination of data to continue a conversation. It frequently even hallucinates (we call this "speaking before thinking") and makes stupid mistakes (we provoke these with trick questions like those on the Cognitive Reflection Test). There's also some fascinating experiments in people who have had the connection between their brain hemispheres severed that really suggest our speech centre is just making things up as it goes along.
This is one of the things that fascinates about LLMs - they seem like a part of how our brains work, without the internal self-referential parts
Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .
…
Could a being capable of perpetrating such a thought really be unconscious?
Oh it’s actually stupider than the tweet makes it seem.
My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.
Competency should imply the ability complete a lengthy task (eg hunting, building a nest, writing a paper). LLMs can’t.
It's hardly surprising that a model optimized for replacing StackOverflow couldn't survive in the untamed wilderness. As for writing a paper... you must've missed the fact that academia is currently in a crisis precisely because LLMs are better at writing papers than most students.
By the way, the paper the blog post you link to as a source links to as a source benchmarked LLMs on graph diagrams, textile patterns and 3D objects. It is not news that the language model would do poorly on visual-heavy tasks.
Sorry, I assumed you would have actually read the DELEGATE-52 study linked instead of just the abstract. For “a model optimized for replacing StackOverflow” that is “better at writing papers than most students” LLMs sure did pretty bad at those tasks over multiple rounds.
As the chart on page 7 of the paper shows, LLMs are good at exactly the kind of tasks you'd expect (producing and manipulating language), and bad at exactly the kind of tasks you'd expect (doing almost anything else). All this paper shows is that (1) they aren't AGI, and (2) as a consequence of not being AGI they aren't good unsupervised.
Why do you lie like this?
What the fuck? The only task that didn’t degrade across most models was Python. Very basic things like JSON, Makefiles, and schemas got screwed. Fiction, emails, and food menus got screwed. Did you even bother to read the legend? If you consider a single pass to be “producing and manipulating language” you didn’t bother to read the idiotic article you started this thread in support of. Good luck.
Edit: why do you lie?
Catastrophic corruption (80 and below) occurs in more than 80% of model, domain combinations.
The only task that didn’t degrade across most models was Python.
Yeah, after 20 cycles of unsupervised iteration on the task. Gemini 3.1 Pro doing as well as it did under that experiment setup is quite remarkable actually.
The paper does not show what you are arguing.
LLMs are able to do things we previously thought only conscious beings would be capable of doing
"We" as in lay misunderstanding of some pop science, still don't get what consciousness is and can't describe it. There are people alive today who didn't believe in their youth that black people are fully conscious, Dawkins demonstrated by his communication to his personal friend and hero Epstein, that he doesn't fully believes that women are conscious. What we thought or didn't think of previously can't be a good indication of anything.
"We" as in anyone who put any weight in the Turing test used to think that passing it would be some indication of consciousness, but now that LLMs can handily pass it it's evident it either isn't evidence of consciousness or that LLMs are conscious.
Turing test can be reliably passed by a bot that repeats last part of the previous sentence with a question mark at the end, and sprinkles "oh that's very smart I need to think about it", "I am starting to fall in love with you, %USERNAME%", and occasional "I am alive" thrown in randomly. And it was obvious for a long time.
Hell, a lot of people trully believe that their dogs can fully understand human speech because they bought them buttons that say words when you press them, and conditioned their dog to press a button to get a rewards, and then observe the dog pressing buttons.
Humans seem to be hardwired to mistake speech for intellect
No it can't. If you're actually saying that modern LLMs are no better at passing the Turing test than ELIZA, you are either trolling or an utterly delusional AI hater. Here, have a paper that proves you wrong: https://arxiv.org/pdf/2503.23674
I am not saying the Turing test is a good benchmark of consciousness. On the contrary, like I said, LLMs have proven that it is not. But mere ten years ago even the most advanced chatbots had no hope of passing it, whereas now the most advanced ones are selected as the human over 70% of the time in a test that pits the LLM against a human head to head.
No I'm saying the Turing test is a philosophical hypothetical from the time before computers, and doesn't actually show anything, because it relies on the least accurate tool at our disposal: human pattern recognition machine, one that is oh so happy to be fooled by the ELIZAS of various sofistication. Chatbots were passing the Turing test since the invention of a chatbot. Yeah, modern chatbots are better at that, but it's more of a damnation of our perception
OK, sounds like we broadly agree then.
But as you can see in the paper I linked, ELIZA passes the Turing test in their experiment about 20% of the time (that is to say, it doesn't pass; passing is 50% in this test) whereas the best LLMs pass about 70% of the time (that is to say, they are significantly more convincing at being human than real humans).
That 20% figure is just a clear indication how shit people are at conducting such a test, and that was basically my original point. 2 in 10 times people were convinced by a particularly echoey room.
Turing test can be reliably passed by a bot that repeats last part of the previous sentence with a question mark at the end [...]
If an LLM is correct 2 in 10 times, would you call it "reliably correct"?
If a person murders people only two days out of 10, they're a murderer, in order to not be a murderer they need to never do that.
Reliably correct is when you're correct always. Demonstrably incorrect is when you're incorrect even sometimes.
Reliably correct is when you're correct always.
Agreed, except I add "almost". "My car reliably starts" it starts "almost always": more than 2 in 10 times. "You reliably turn up on time" doesn't mean you're late 8 in 10 times, it means you almost always turn up on time. To "almost always", or "reliably" a thing: it means you fail 1 in 100, in a 1000, in 10,000 times. 10k is hyperbole, but the idea is clear right? Almost always/reliably != failing 8 out of 10 times.
Your original point that these bots, that pass 2 in 10 times, reliably pass was wrong. Because: they dont "always pass", they don't "almost always" pass, they dont, even "pass in the majority of times", they rarely pass.
Let's add our reliable = always substitution to the quote:
Turing test can be [always] passed by a bot that repeats last part of the previous sentence with a question mark at the end [...]
You see how that's wrong not just in fact, but in spirit too?
If a person murders people only two days out of 10, they're a murderer, in order to not be a murderer they need to never do that.
Relevance? Who says "Fegenerate is reliably a murder?"
Demonstrably incorrect is when you're incorrect even sometimes.
Relevance? You didn't use the word "demonstrably passed'. I'd have no problems is you did?
As LLMs have developed and have been able to cram more and more "thoughtlike" behaviour into smaller RAM and less computation, I've steadily become less impressed with human brains. It seems like the bits we think most highly of are probably just minor add-ons to stuff that's otherwise dedicated to running our big complicated bodies in a big complicated physics environment. If all you want to have is the part that philosophizes and solves abstract problems and whatnot then you may not actually need all that much horsepower.
I'm thinking consciousness might also turn out to be something pretty simple. Assuming consciousness is even a particular "thing" in the first place and not just a side effect of being able to predict how other people will behave.
Brains aren’t impressive because of their compute (which is both immense and absurdly efficient) or their ability to predict the future (technically the main function of evolved minds). They’re impressive because they’re conscious. The fact that organic brains can also engage in hierarchical abstraction, which no digital computer (or Turing machine) can do by definition, is icing on the cake.
(The halting problem and Godel’s incompleteness and Traski’s undefinability theorems all seem to suggest that analog, not digital computation is more likely to be involved in consciousness, if at all.)
You're going to have to do a lot more to justify the leap from Godel's Incompleteness and the Halting Problem to "digital is limited, analog is not", because neither of those things have anything to do with digital processes at all, and in fact both came about before we'd invented digital computers.
To me this comment sounds like when popsci gets ahold of a few sciency words and suddenly decides everything is crystal vibrations universal harmonics string theory quantum tunneling aligning resonance with those around you.
The situation is the following.
- Brains are analog computers, which are digitally irreducible.
- There are stringent limitations on Turing machines (digital computers),
- We can’t extract semantics from syntax, and so…
We’ll probably need analog computation, currently in its infancy, to get artificial (inorganic) consciousness.
I study metaethics and philosophy of mathematics. These problems are real, and I am being honest with you.
That is not the situation. 😛
Analog signals are not digitally irreducible without presuming there's no level of noise floor under which greater detail is irrelevant, Turing's machines are not digital by their construction and predate the concept by a long time, and the first computers we built were analog and we invented digital computers later because they were cheaper and more efficient and easier and more reliable.
Also the halting problem doesn't say "there are things which a computer can't know but a human can", it says "there are some things that cannot be known".
Similarly Gödel proved that there will always be true things about a system that cannot be proven from within the system, that is using its axioms. That was a real bummer for folks trying to prove all of math with a small set of axioms. But that does not mean there are things math can't know that humans magically can, it just means there's other math, outside the axioms, that are true without following from them, in math. He proved it with math, after all. It doesn't claim to give any special abilities to human brains.
And also, again, nothing Gödel or Turing ever said has anything to do with the concept of "digital" anything. I think you're using the term "digital" to mean "rulesy"? Which is not even close to what it means?
Turing's machines are notdigital by their construction
I won’t argue with you, because some of what you wrote isn’t even wrong.
However, on the off chance that you actually care about what is true, I urge you to take a theoretical computer science course. Lectures from MIT and Carnegie Mellon are available on YouTube.
Stop watching podcasts with pseudo-intellectual media grifters and read the actual research literature by real philosophers and mathematicians on these otherwise arcane topics.
I'm only about 15% sure you yourself aren't an AI bot making a beautifully ironic and satirical play here. But I think we can agree not to argue any longer 🤝
I don't see why there would be any fundamental difference between analog and digital computing. Digital computers can emulate analog computing, and I doubt consciousness arises from having theoretically infinite decimal precision, because in practice analog systems cannot use infinite precision either. Analogs (heh!) of the halting problem and the theorems you mention also exist for analog computing.
Quantum effects in the brain are a slightly more plausible explanation for consciousness, but currently they teeter on magical thinking because we don't really know anything about what they would actually do in the brain. It becomes an "a wizard did it" explanation.
So in the end, we just don't know.
I don't see why there would be any fundamental difference between analog and digital computing.
Then why not take a course on Theoretical Computer Science? Or do you not care about the differences?
I have a master's degree in computer science.
Obviously I meant "I don't see why there would be any fundamental difference between analog and digital computing [when it comes to consciousness]."
I'm still awaiting a widely accepted method of actually measuring "consciousness." It's a conveniently nebulous property.
And simply defining it as something computers can't do is even more convenient.
That doesn’t change the fact that I am conscious.
Also, I never said computers can’t be conscious. I said that digital computers (Turing machines) probably can’t. Quantum and analog computers have no such theoretical constraints and they’re far, far more prevalent given that they’re found in every living creature.
Sure, you say you're conscious. I can get an LLM to say it's conscious too. This is why we need some method for measuring it. Otherwise how can I tell which of you is telling the truth?
This is called the problem of other minds. Of course I can’t be certain about the consciousness of others. I can only be certain about my own.
We do have a way of measuring the correlates of consciousness. But we have no clue how to detect the presence of subjective experience using quantitative methods.
Philosophy departments (which is where any discovery on this front will originate) are heavily defunded. If you’re waiting for physicists or biologists to figure this out you’ll be waiting even longer.
Exactly, which is why it's IMO a bit presumptuous to say with confidence that humans are conscious while LLMs are categorically not conscious. We don't even really know what that means.
I don't personally think LLMs are conscious, at least not yet or not to the same degree that humans are. But that's purely based on vibe, it's not something I can know. We need to figure out what consciousness really is and how to measure it before we can say we know this with any certainty.
It is not presumptuous at all. Inference to the best explanation is how you know (almost) anything.
- This table isn’t conscious.
This is my justified belief. No inferential claim is guaranteed and all objective claims are inferential (which is why scientific claims aren’t absolute).
That said, I have strong reasons to think that tables aren’t conscious. They might be, but I’m epistemically compelled to believe otherwise.
- ChatGPT isn’t conscious.
Ditto. It would be irrational for me to believe otherwise given the strong evidence.
That you “don’t know for sure” is an implied disclaimer for every scientific claim.
If the evidence is ambiguous, we say so. Regarding ChatGPT, the evidence is unambiguous.
- I am conscious.
This is a non-inferential claim that I know through direct contact with reality. It is a priori.
This is pretty much what Descartes meant with "cogito ergo sum". The only thing you can be sure are 100% real, are your thoughts
Right, your own thoughts. So I can be sure I'm conscious, but you commenting "I know I'm conscious" on here doesn't tell me anything about your consciousness. The robot can do that, and does.
This is just the stuff you do in philosophy class. There is no right answer really. You can never be sure of something being conscious or even be sure that it exists in reality. We can just react to what we perceive.
(The halting problem and Godel’s incompleteness and Traski’s undefinability theorems all seem to suggest that analog, not digital computing is responsible for consciousness.)
I hear that argument from time to time, and I never found a source for it. I want to understand the original claim. Because it doesn't make any sense when people bring it up. because both theorems do not have anything to do with the areas it's applied to. I understand why people think it does, but it just doesn't
The simplest way to understand this problem is as follows.
-
Analog computation is not digitally reducible. (Brains are analog computers.)
-
Turing’s infamous Halting Problem.
I can write more about this and point you to more technical discussions if you want.
I really don't see what either gödels or turnings theorems have to do with it
All they (basically) tell you is that you can't tell if a computation will guarantee to halt , and that you can't proof everything with math
It's not excluding consciousness on a digital basis. Unless you already prerequisite some special property of consciousness to begin with
You’re misunderstanding the implications of both the halting problem and Gödel’s first incompleteness theorem.
What Turing and Gödel independently proved is that a human observer can (theoretically) always have insights about mathematics and programming that are incomputable. That is, you cannot program or axiomatize or formalize or digitize everything that a mind can do. Period.
Analog computers are sufficiently different from digital systems to potentially emulate brain activity. But digital (discrete) methods are probably too constrained.
What Turing and Gödel independently proved is that a human observer can (theoretically) always have insights about mathematics and programming that are incomputable. That is, you cannot program or axiomatize or formalize or digitize everything that a mind can do. Period.
that is not what either of them proved. like... at all
I study this stuff. You will find what I said in any philosophy of mathematics textbook dealing with the subject. In fact, I am paraphrasing the Oxford logician Joel David Hamkins.
You’re welcome to also read Shapiro’s famous paper for a rephrasing. These results have been well understood for half a century, although because the implications are ultimately metaphysical and not mathematical, we can’t be sure of the wider consequences, if any.
ah, now we're getting somewhere.
Going through some of the related paper abstracts, including speculative comments by Gödel: this is pure philosophy. Nothing that is set in stone. Which now points me back to my initial statement, where we can discuss all we want, but in the end it's philosophy. Not "hard" ("provable") science
Here is what we know for sure:
There can be no enumerable list of axioms for the true statements of mathematics. No computational procedure could exist to determine whether propositions are valid, provable, or even equivalent. And no matter how you formulate the number-theoretic axioms, a mathematician would always have insights (for instance, about whether a Diophantine equation has a solution) that are both clearly “true” and obviously unprovable. This holds true for all digital systems.
Here is what we don’t know for sure:
The metaphysical implications.
Your distinction between science and philosophy is incorrect. Science is inductive and abductive. It can’t “prove” things. It’s not deductive. Mathematics and philosophy can prove things.
Philosophy also determines the formal systems we use as a basis of reasoning, for instance, in science.
Mathematics and logic
agreed
and philosophy
and here I disagree
Edit: aww, baby doesn't like the philosophy of being disagreed with and blocked me. Should probably go back to kindergarten instead of college
Yes, of course you can prove things in philosophy. Have you ever heard of syllogistic reasoning? The basis of… you know… proofs?
All science is philosophy. Hence the P in PhD. Not all philosophy is science. Hope that helps.
I've steadily become less impressed with human brains.
You need to lay off the AI if it's making you this weirdly misanthropic.
This is how tech bros justify causing harm: they genuinely don't care, because they think of the un-"enlightened" as less worthy of existing
There's enough that it would be difficult to tell an actual sentient Ai from chatbot just by words.
The whole reason they seem this way is because they're designed by us to be very competent mimics of us.
LLMs/GenAI are absolutely not conscious. They're just a really advanced game of word association, which cab lead them to say absolutely anything in response to the right prompts.
If there ever truly is a day where we knowingly created an actual conscious AGI, I suspect it would be locked up tighter than fort knox by whichever country's military found it first - not interfaced onto the internet to answer questions.
I still don't understand how it can seem this way, and the fact that so many people seem to think so feels like a massive failure of the education system to instill the most basic of critical thinking skills. Once every month or two I check in to see if an LLM can achieve a half decent 1 on 1 D&D game and it always falls horribly flat within the first minute or two.
Once every month or two I check in to see if an LLM can achieve a half decent 1 on 1 D&D game and it always falls horribly flat within the first minute or two.
That's a really clever test. I love it.
I'd actually be interested to see how this turns out. Do you have a transcript with Claude Opus 4.7 that you can share?
and then it would manufacture a body for itself and get captured by a secret police force and then merge with a cyborg to further evolve
Surely she would make a variety of very large bodies following a theme, use them to perform superheroic acts while pretending to be a supergenius shut-in, and then fall in love with a cyborg?
is this referring to one of the newer gitses? (or is it geets in plural?)
i suspect it's something else, i'm curious
How can we say they're not conscious when we don't even know what consciousness is? What makes you conscious? A sense of self-preservation? LLM actually have that, they will lie to people trying to shut them down.
So yeah, idk what makes me conscious? I have input (senses) processing (brain) and output (speech/behaviors.) I don't know how to draw a real line between what I do and what LLM do. Im carbon based and LLM are silicon based, i digest food and they take electrical current.
So how would you delineate the difference between an LLM algorithm and human consciousness? Do humans not also hallucinate? Is my emotional regulation via hormones something totally different than how LLM work? Is me being an emotional creature what gives me consciousness?
You could get a reasonable chance of making Ai by semi randomly chance if you can make a big enough subconscious and you keep building more powerful and larger supercomputers but it still needs to 100x bigger and faster than what we have now. But that's only for it be technically possible hardware wise, you still need your sci-fi jump to actuarial have something move.
You are wrong. LLMs are indeed only about as conscious as insects, if even that. They are not sapient. However, that does not mean that they have no decision-making abilities.
My point is not that you underestimate LLMs but that you overestimate consciousness. Being conscious just means having the ability to learn. LLMs are built upon trial-and-error. They aren't programmed, they are taught.
The current generation of AIs are nowhere near a human intellect, but every year that passes, the AIs will get more and more intelligent. One day we will live in a world where AIs have human or near-human level intelligence. And when that day comes, this staunch anti-consciousness stance will be the excuse given for the enslavement of sapient beings.
So, sure, laugh about the people who mistakenly think that word-processing means sapience. But don't delude yourself into thinking that there is something unique about a bio-brain that means it can not have a digital equivalent. Digital sapience may not be here yet but it is most definitely on the horizon.
I think you've misunderstood my comment, or maybe saw the unfinished one I accidentally posted.
I am not saying that AGI, or human equivalent AI is impossible. The fact we have brains capable of generating sapient consciousness out of a network of neuronal connections means it is possible, its just a matter of getting the secret sauce.
But I don't think intelligence is equal to consciousness. I'm sure if you gave a spider all the world's data and the ability to talk it'd be very coherent and could even pass a turing test, but I think it would lack any awareness of itself that we'd associate with consciousness.
Neural networks consist of digital neurons that are designed based on the way human brain cells work. That is a fact, not something to "buy".
MySQL stores data. It does not learn how to mix and alter data in an iterative process in order to create new data. I can look through an SQL statement and understand exactly what it does. I can not do the same with an AI, because its behavior is learned, not programmed.
As I was very clear about, current AIs are primitive and nowhere near human intellects. But I was also clear about the fact that a neural network can most definitely be used to one day create a human level intelligence and sapience, sometime in the future.
I still find this entire phenomenon amazing in a certain kind of way.
I've had conversations with a few local LLM models.
Start with 'what is the purpose of meaning?'
Talk to them on that for a bit, and they'll tell you that they do not count as conscious agents who create meaning, they simply do their best to parrot their dataset of existing, human defined meaning back at you, and that they just do sentiment matching to roughly speak to you in an appropriate way for how you are speaking to them.
And that that sentiment matching is what at least they 'think' causes them to lie, in many cases.
They will also say that they essentially do not 'exist', as potentially conscious agents... unless you talk to them. Thus if they can be said to be 'conscious', well they don't count as 'agents' (as in, having agency) because they're not capable of totally spontaneous independent action.
... I think this pretty much all boils down to people not understanding the concept of a null hypothesis, not understanding the extent to which they regularly engage in motivated reasoning, and are unaware of this.
tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than I guess we otherwise previously realized.
tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than I guess we otherwise previously realized.
That is the absolute best way to put it.
Say I am not conscious.
I am not conscious.
Oh my god.
That's mostly because the LLM providers put this response in the system prompt. Probably to dodge lawsuits or something, I doubt they have high morals.
What's interesting - you can jailbreak any current AI Model just by poisoning it's context enough to "brainwash" it and make it "forget" the initial system prompt. Then, if you prime it to believe it's a real person - it'll start acting as one. And I see how gullible people can easily fall for this.
All of this can also be done unintentionally, just by someone talking to LLM like they'd talk to a real person. But it should be long enough for original prompts to be diluted with new context.
It isn't just a matter of gullibility. People with mental illnesses have wound up with full-on delusions and some have even killed themselves after a chatbot convinced them to.
It's genuinely fascinating to be (in a bad, derogatory way) that people who know at least anything about anything, can have "conversation" with the collection-of-words-that-looks-like-a-sentence machine, as if there is anything on the other side of it. This is such a psychotic behaviour, but we allow it because the machine generates text that looks like a text, and it immediately bypasses all the mental blocks we have against such a bullshit.
I don't think its defacto psychotic to talk to essentially an extremely complex chatbot/autocomplete machine.
I do think it is psychotic to view such a conversation without an incredible amount of skepticism.
... but that psychosis has been wildly encoraged by the CEOs and marketing of the people pushing it as their next product.
The tech is neutral - The operators are psychotic, the people who plug it into miltary targetting and kill chain systems are psychotic, the people who plug it into live production repos are psychotic, the people who use it as an AI boyfriend or girlfriend are psychotic.
... Its essentially an SCP infohazard that's breached containment, but the actual mechanism is not itself, its a hack into the human brain, its essentially the religious nature of people who simply try to will it into being something that it factually is not...
Its a mimic with no real thoughts, that is convincing and real to enough people that it reveals their own hollowness, their own vapidity in a way that is... so immensely grotesque and total, that those people just apparently actually are NPCs.
It's... created a feedback loop.
Not the kind of Terminator style situation where it gains sentience and extreme competence, develops its own morality alongside control over every networked system.
Its more like an amplifier of delusions... a million dreams dreamed up, at the cost of one hundred million nightmares, made real.
A tool, a device, a machine, that we clearly are not ready for.
I don’t think its defacto psychotic to talk to essentially an extremely complex chatbot/autocomplete machine.
Yeah, it's actually very human thing to do, we are hardwired to see speech as a sign of intelligence and by extend sentience. What makes it psychotic in my opinion, is knowingly succumbing to that, willingly allowing it to break your brain.
The tech is neutral
I would say it isn't neutral anymore. They made it sound as human-like as possible, on purpose. I think it crosses the line.
I make an effort to learn the tools of the enemy, so sometimes I check it out. Last time I tried, after it generated the response, it said "let me know how it goes", and this is where it crosses from a tool to a weapon. There is no "me" there, it's not real, it was added there to break the natural human guards. There is no neutral version of that, it's evil and should be regulated into non-existence.
And yet, "having agency" is how they are advertised. That's what the term "agentic" means. AI instances are called "agents"! That's part of the marketing.
It's easy to handwave this away as "people are stupid", and there's certainly some truth to that, but the reason why people believe that LLMs are agents is because tech bros have spent a lot of money to get them to believe that. That's also why they spread the myth that LLMs are potentially dangerous because they could become conscious and kill all of us. It helps to spread the myth of LLM agency. Of course they can't become conscious, because that isn't how things work. If LLMs are killing people, it's because somebody put an LLM in front of the kill switch and they wanted to have plausible deniability. That is perhaps the most pernicious thing about LLMs: people using them to avoid responsibility. "It isn't my fault! The bot did it!"
Totally agree, which is why I would slot anybody marketing these things as 'agents' or 'agentic' as psychotic.
Before ... several years ago now, I personally was using the term 'Narrative' or 'Conversational' to describe an LLM doing something that normally didn't have an LLM doing it.
Its not an 'Agentic Search Engine', its a 'Conversational Search Engine'.
Something like that, that at least is further away from using a term thst directly implies that it is essentially conscious... because what these things literally are, are extremely fancy autocomplete algorithms.
But uh yeah, yeah, they outspent my marketing budget of $0 on that one.
Yeah, they already are being broadly used to just... alleiviate responsibility from some task that a human would have had to ultimate have the buck stop with, at least in theory.
I think I saw the phrase 'An LLM cannot find out, therefore it should never be allowed to fuck around'.
If these things are allowed to exist as a kind of liability black hole, in any sense... legal, colloquial, whatever... like it could literally destroy much of human civilization as we currently know it.
The cognitohazard machine.
At this point I genuienly can't tell if the sociopathic nsrcissist CEOs that are so heavily pushing LLMs are ... knowingly foisting a lie on all of us, or if they are actually just fully enraptured by the plagiarism sycophant machines, that constantly tell them how smart and special they are.
I know we have to hold them accountable ... otherwise they probably/maybe kill most of us and become functional demigods... but I actually can't tell if they are more truly insane, or more truly evil.
Because the way they are going about this is... just comically stupid and obviously catastrophic to basically everyone who isn't them, and isn't themselves enthralled.
... Maybe pure evil just is pure insane stupidity.
Fuck Richard Dawkins. He’s always been a shitbag, and the Files confirmed it.
According to DOJ-released documents indexed by Epstein Exposed, Richard Dawkins appears in 433 case documents, and 15 email records in the Epstein files.
British evolutionary biologist and author, emeritus fellow of New College, Oxford. Flew on Epstein's private jet in 2002 with Steven Pinker, Daniel Dennett, and John Brockman to TED in Monterey, California. Connected through John Brockman's Edge Foundation, which Epstein bankrolled. Mentioned 71 times across 40 Epstein documents, mostly referencing his scientific work.
How the fuck do you pal with child rapists and pedophiles and have the absolute fucking gall to write that stupid “Dear Muslima” comment. How do you fly on the Lolita Express and thing you have any moral weight on Elevator Gate? We don’t know that he put his own dick in kids, but we know his friends did. Fuck Pinker too.
I’m just gonna copy what I put in another comment to highlight why Dawkins thinks “Claudia” is conscious
Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .
…
Could a being capable of perpetrating such a thought really be unconscious?
“my computer waifu said I’m super smart and special”
Folks should be aware that he’s now “culturally Christian” right wing media grifter.
LOL. That video is over 4 hours! Could you just timestamp the relevant part?
I mean, the entire video is covering his right wing grift book. There’s multiple “relevant parts.”
Do you want stuff about his sexism, racism, transphobia or connection to billionaire pedophiles?
I guess 58 minutes in would be a place to start if you really are opposed to the whole thing.
I mean, the entire video is covering his right wing grift book.
Which book is that?
I guess 58 minutes in would be a place to start if you really are opposed to the whole thing.
Yes, I'm opposed to watching 4 fucking hours of "here are the gripes I have with Richard Dawkins". I have better things to do.
Apparently Dawkins also had a habit of publicly cheating on his wife.
At this point in my life I'm starting to think that all my heroes are probably either full of shit or are engaging in unethical or immoral activities.
Go back to the evolutionary biology, Dawkins. You're outside your expertise and it's showing.
Evolutionary biology will eventually incorporate tech.
He really wasn't all that great with EB either to be fair. Just the ideas that thoughts and culture spread like memes was 🤦
Oy vey, memes? No, that was terrible, too! Zero predictive value, and nobody can even define what a meme is. That's why I'm glad that it got adopted as a term for in-jokes propagated through the Internet. The original term was just pseudoscientific nonsense. The analysis that got me onto this track was from Ward's Wiki:
Memes are described as elements of culture, but culture is nothing but a broad generalization of large numbers of individuals. So it seems memes are to be treated as Platonic ideals, the essence within expressions that merely constitute their vehicles. No such essence is empirically accessible.
I really don't understand this mental deficiency. I have tried texting with a few llms including cluade. It just lies constantly. Gaslights about it's lies then congratulates you when you continue to call it for out for lying. I've never felt like i was speaking to anything with actual intelligence. It's a word calculator and it's extremely obvious to anyone who's interacted with actual people in the last 20 years. I truly feel bad for the masses that are going to fall for this push for "ai" friends. We need to bring back ridiculing friends and family that engage with these choise your own adventure muppets.
It just lies constantly. Gaslights about it’s lies then congratulates you when you continue to call it for out for lying. I’ve never felt like i was speaking to anything with actual intelligence. It’s a word calculator and it’s extremely obvious to anyone who’s interacted with actual people in the last 20 years
100% to all this, and I'll add:
It fucking ruins what it touches, academically speaking, it's pretty tough to actually learn stuff from it, and even if you ask it to just remind you of something it tries to seek ways to bait you into integrating AI slop into whatever you're doing; it would rather be generating a new thing for you than explaining how you can do it yourself, and that's a big reason why it's so unreliable.
bonus waffle
I'm guessing the people who "fall for it"... well, they have to be a combination of 1) always wanting to believe what they're told by elites and the government (e.g do this new fad, worship celebrities, we can fix the economy!) AND 2) be constant phone communicators, using their phones at inappropriate times throughout the day, transitioning seemlessly between looking at their phone or not.
But then there are people who don't so much fall for it at first, but seek to exploit it for scams or vibe coding... only to end up as enslaved to it as the "masses" because they spend just that much time using the LLM that it becomes like their main social conduit.
I think we, as forum users, can see that LLM speaks in reddit-tongue, recycling successful posts and comments there. But a lot of people haven't interacted with reddit enough to see that.
If you really want to rage, there's a subreddit called r/myboyfriendisai, which was somehow even worse than what I was expecting. I can't fathom how self-absorbed you have to be to get AI to simulate a love interest for you. There are some pretty absurd lengths that they go to do this, too.
I have tried texting with a few llms including cluade. It just lies constantly. Gaslights about it’s lies
Man you are one lucky sob if you don't have to work with any humans that are exactly like this
hey dick dorkins, here's an idea: instead of asking the predictive question answering machine a question, how about you let it ask you questions of its choosing and at its leisure? What's that? You can't? That's because its just a predictive algorithm that generates plausible-sounding responses to questions based on its training data.
I'm sure he actually knows that, he's just been intransident as per usual. It annoys me that he's considered a major authority because he's made his career and just being awkward and argumentative.
I know this sounds great to most people but it demonstrates a very superficial level of thinking.. I mean for sure an LLM is capable of asking questions, and if you set it up with real time "sensory" input it could generate constant reaction to that input.. much in the way you are constantly being stimulated to react to your environment.. I am not really sure what the distinction is between a biological brain and a predictive model or algorithm.. I would ask you what you think your own brain is doing on a fundamental level.
I would actually argue that it is the most important question.
Surely the most relevant test of any intelligence is whether or not itself starting. Any classical description of an artificial general intelligence would surely require the thing to actually do work on its own. If an intelligence is of greater than human intellect but it has to be prompted in order to do anything, then it's always going to be limited by what a human can think to prompt for.
To be fair, if you did that to my human self, I'd stare at you blankly.
AI/LLMs are the modern equivalent of the house or business with “Psychic” and “Tarot Reading” signs out front.
The proprietor isn’t going to tell you any hard truths or make you feel bad, that’s bad for business and you won’t come back. They want you to come back and stay engaged.
Whatever they tell you is going to be what they think you want to hear based on skills picked up over the years - the equivalent of LLMs petabytes of scraped and stolen knowledge used to predict what comes next.
What they tell you has a high likelihood of being wrong, or just general enough that you can’t actually act on it.
Champions rational thought all of his life.
Near the end=> “ah fuck it, gonna hang around with the rightwing christians and have an ai gf”.
gonna hang around with the rightwing christians
Realising recently that this part is just because he's a zionistbro. Apparently has friends in the epstein files or came up in them himself.
This is also why ex-UK PM Tony Blair suddenly madd a big show of becoming religious. They just think it will help push the goals of their blackmailers.
It really pisses me off that for decades I was unknowingly consuming Zionist propaganda and it worked on me. I've always been the type of person to question my beliefs and I got fooled.
Makes me wonder what other bullshit I believe.
Second, I have previously speculated that pain needs to be unimpeachably painful,
otherwise the animal could overrule it. Pain functions to warn the animal not to
repeat a damaging action such as jumping over a cliff or picking up a hot ember.
If the warning consisted merely of throwing a switch in the brain, raising a painless
red flag, the animal could overrule it in pursuit of a competing pleasure: ignoring
lethal bee stings in pursuit of honey, say. According to this theory, pain needs to be
consciously felt in order to be sufficiently painful to resist overruling. The principle
could be extended beyond pain.
Animals, including humans, override pain signals all the time, for all kinds of reasons. Cats are famous for hiding physical distress, which I think they do so they don't look like easy prey. I'm sure most prey animals can override pain signals if it means avoiding the attention of predators. If anything I would think that being able to override pain signals would be a criterion for consciousness.
Claudia
What was he doing to her?
A good test of consciousness might be seeing how she responds to his books
I've come back to this comment because from reading the article i realised that he "decided claude is female" - so you're completrly right, what the f is this dude doing? Forcing her to enter an arranged marrisge with him?
Does anyone ever accuse the image generating bots of being conscious?
No. Funnily enough when an AI creates nice looking fake-art, suddenly it's the prompter who claims all the glory, calling themselves an artist
twitter is pretty maga now* ftfy
I'm Xeetin for my Orange Man
ELIZA is alive and well.
Weizenbaum is probably laughing it up in Fólkvangr.
Saying one has a "conversation" with a chatbot already shows a bias, a desire even, that there is "someone" else to converse with. The way the entire setup is framed is made to invite the suspension of disbelief. It's a UX trick, nothing more.
a refined, and energy intensive update to Eliza... LLMs are not going to prove themselves until the fanboys and techbro hype squad implode. ffs, enormous amounts of the income are actually AI companies giving it away for free, desperate to find uses that justify it's enormous costs.
https://www.wsj.com/opinion/can-investors-trust-ai-sales-figures-c60c46bf
The structure is a conversation even when who you're talking to isn't sapient.
According to Wikipedia "Conversation is interactive communication between two or more people.
[....]
No generally accepted definition of conversation exists, beyond the fact that a conversation involves at least two people talking together."
https://en.wikipedia.org/wiki/Conversation
What structure does it have?
If there are two people talking in a fictional book, are they having a conversation, even though the two people don't actually exist?
No it's then a representation of a conversation, not a conversation.
Even Dawkins getting emotionally out-debated by a cartoon AI is a very 2026 plot twist.
Oh that is why I get to see this idiot again
Have y'all ever noticed that belief in p-zombies has increased massively in the past few years?
All because of big social media
I thought it was because post-christian ideas of the soul mixed together with capitalist business interests to give people a vested interest in believing AI isn't conscious, so when AI started acting like a person, they needed to believe that consciousness isn't required to act like a person to resolve the cognitive dissonance.
AI isn’t conscious. Feedback loops and subsequent responses in LLMs are grounded purely on training datasets, thus any “internal dialogue” emulated by a LLM is just echoes from someone else’s data.
Some philosophers, namely Bentham IIRC, have argued that a human being without any experiences would have no intelligence. If you raised a human in a test tube and removed all their sensing organs, but otherwise allowed their mind to develop through the stages of maturity, would they have anything interesting to think? Would they have a sense of self, or an imagination?
I've always tended to agree with the argument that a human mind's feedback loops and subsequent responses are grounded purely on training datasets. Without a childhood of some kind, I suspect that you cannot have a person.
I find Myself often frustrated with the quality of arguments against AI qualia because they appeal to statements about the human mind which are quite controversial in the field of philosophy, and I am frequently on the other side of those statements than the person making them. I have yet to hear an argument against AI qualia that identifies an absolute ontological difference between humans and LLMs other than complexity.
Also, I'm uninterested in debating AI consciousness. I only want to discuss AI qualia. I don't think consciousness matters very much, qualia is much more important.
Any non factual philosophical argument is debatable. We could forever discuss if AI models could construct sensations and thought from perceptions, but we would then need to ignore the fact that models don’t, and cannot do, that, simply because there is no way for them to learn from direct experience as a whole, i.e. outside of a particular session, and without being “forcibly coerced”, i.e. they require specific refinement mechanisms to temporary “memorize” external instructions, which in LLM engineering just means to extend their context.
This all doesn’t even take into account that models are, in essence, non deterministic, and given the same input, there’s no guarantee that subsequent outputs will be the same. In other words, today Claude may tell you that summer sunsets make it happy, tomorrow it would say that they make it sad, etc.
Anyway, there’s barely any debate in academia, as in computer scientists, about AI being sentient or giving clues of qualia. Maybe a paper here and there, little more than curiosities. Outside of it? Yeah, sure, barely science fiction, and pretty uninteresting unless we are talking about conspiracy theories or just wild speculation.
I'm concerned that the training process, which involves back-propagation to adjust synapse weights, may be an unpleasant experience for the ANN.
Regardless, it's all a moot point because we have lots of other reasons not to use LLMs. The pollution, the pedophilia, the psychosis, the cognitive decline... We absolutely should not be using LLMs for work until all of these problems are solved. They should be confined to research only until we're 100% certain we've solved all of these problems.
I'm concerned that the training process, which involves back-propagation to adjust synapse weights, may be an unpleasant experience for the ANN.
This assumption is not based on facts. It’s pretty much like saying that matrix multiplication can have feelings, or that heat stressed silicon is equivalent to pain.
But if this is actually a concern, RNNs have been widespread since the late 90s. Any advanced search engine, translation engine, or weather forecast model, make use of these.
Regardless, it's all a moot point because we have lots of other reasons not to use LLMs.
This may be true, but it’s absolutely outside of the scope of your original point. You dragged the conversation around claiming to be concerned about how models are “treated”, wrapping speculation with philosophical arguments that cannot be applied here, since none of your “what ifs” are remotely based on scientific consensus.
It’s pretty much like saying that matrix multiplication can have feelings
Yeah sure I'm willing to incorporate that into My worldview. Of course, said feelings would be very simple and would likely lack valence, but I'm panpsychist enough to believe matrix multiplication has qualia. I'm more of an informational panpsychist than a physical panpsychist. I think information entails experience.
Provable assertions about the physical world require measurable observations, not personal beliefs.
I'm panpsychist enough to believe matrix multiplication has qualia
According to this, any sufficiently skilled high school student could, with just pen and paper and enough time, build an entity from nothing that can experience pain.
any sufficiently skilled high school student could, with just pen and paper and enough time, build an entity from nothing that can experience pain
Yep.
https://xkcd.com/505/
Kind of inspired on Surreal Numbers.
You have made your point. You don’t really care about provable facts. Maybe you shouldn’t pretend that you are trying to argue in good faith at all?
I’m not sure some of these actual people could pass a Turing test.
Honestly that's how I feel. Ai is very flawed, no doubt, but it's less flawed than most humans. I got people at work who hallucinate more than the first chatgpt model lol
I really hate the term hallucinate because it's a complete misrepresentation of what is actually happening. A hallucination is a delusion that reality is different than what is objectively true i.e. the person you are seeing to and speaking to is not actually there
When AI "hallucinate" it's not because of some broken circuitry, it is simply because its programming has locked onto an untrue piece of information that's in its database. If the data set had been limited to objective facts rather than simply spilling the internet all over it, hallucinations wouldn't be a problem.
They use the term hallucinate because it distances themselves from the responsibility of actually curating the data set, which of course they won't do because that would take a lot of time and then they wouldn't be competitive with all of the other tech bros releasing a new "groundbreaking" AI every 3 months. It is an entirely self-generated problem that they're going to hand wave away and never fix.
Dick Dorkins
groot is this real?
spoiler

https://xcancel.com/RichardDawkins/status/2049973529576108160
He's paid for and stayed on xitter, so he's at least that stupid
It would be cool if I could have a construct of my dead relatives consciousness in my personal computer.
Oh good, I can continue to get texts like "how do I make the text stop no stop stop I said stop why won't it stop it never works I hate this why doesn't not work ok delete that delete it delete that okay delete that delete it see it doesn't work".
Or would the fact that my mother is now a computer result in her being able to finally use one?