Anxiety around AI is growing rapidly in the US, research shows
2d 19h ago by lemmy.radio/u/sanitation in technology from uk.finance.yahoo.com
I'm so tired of every job posting frothing at the mouth over AI. "We're ai native", "we want employees who are excited about ai tools", "agentic workflows"
Just fuck off.
Even if all of this stuff was a real productivity increase, who is keeping that extra production? Not the workers!
They always want you to be excited about things that don't benefit you in any way.
We're ai native
I always interpret it as "ai naive"

That's what people see
Yeah, there seems to be a campaign to deflect blame from Trump onto AI. I can't imagine it working, though. What average voter will connect rising prices to some new gizmo on the phone?
Because they’re blatantly using it to try and enslave us?
Like, not even metaphorically.
People find AI to be irritating because of its flaws and failure to deliver. They are also angry about big tech suggesting that AI will force real humans out of human spaces. The arts, media, research, science, the work force etc.
The "anxiety" is mostly fear of exactly what's being promised, to the detriment of the people expected to fund it. Anyone who's got eyes and ears knows that the venture capital well will run dry eventually.
There is no return on investment for the vast majority of regular everyday humans living in this world at this time. Not where AI is concerned. It isn't hard to follow what is being marketed to its conclusion. Tech Oligarchs have been saying the quiet part out loud since the beginning.
AI will replace workers. AI will replace people who make art and music, and write things. AI will replace.
They even tell us they know it's a flawed replacement that they can't make better. And they pretty much tell us that they haven't found a way to monetize it so it's sustainable, which basically means that one way or another they will be looking for people to pay more for it.
People have started thinking about what that means and naturally they don't like it. Tech Bros are selling this dream of replacing us but we don't have any money to pay more for a product that doesn't produce anything worthwhile for the cost. Especially not if you're replacing them and there is no safety net.
There's often a tacit acknowledgment of the poor quality of AI output, but they do not care; the strategy is to flood the zone with so much garbage as to make quality irrelevant. It's a grift-conomy mindset: the focus is on "velocity" and "productivity" to the detriment of all else.
we're living in a gish gallop society - politics, AI, it's all overloading the polity with so many outrageous events they can't react to the last one, much less the outrage 4 days ago... and unfortunately it's working.
I don't know any solutions - damn near anything you do will be labelled insurrection and treason, jfc, they're suing SPLC for supporting white supremacist orgs for paying... informants.
ultra fucking stupid, but sadly effective, because most of America wants to stay out of politics and not confront the difficult shit ahead.
AI will replace workers. AI will replace people who make art and music, and write things.
This part made me think how I've commented recently that AI does the thing it was designed to do, but that the thing it was designed to do is generate something you could believe somebody wrote on the internet.
That doesn't mean the answer is correct, of course. It's often confidently wrong, just like real people online!
But when it comes to artistic expression, there is no clear right or wrong. Music, art, and the written word are some of the most human things we have, but you are absolutely right that they will be replaced. If a marketing director can pay Google a few dollars to generate a hundred concept drawings so they can do "I'll know it when I see it" design, that's a human artist job they won't budget for.
Yeah, the entertainment industry is the one at legitimate risk from what we currently call AI.
AI will replace workers.
While at the same time producing poorer quality work and consuming far more resources.
It's not AI that's the problem. AI is an amazingly powerful tool (I'm an AI researcher).
The problem is that it's in the hands of psychotic technofascist greedy subhumans that want to destroy basically all of society so their stock can go up 0.001%. If we can cut out the source of the cancer, the body can begin to heal itself.
Right! If you don't count the mass surveillance boost, the autonomous killing machines they're trying to make, the environmental impact, the pillaging of our individual experiences, and the destruction of all our shared spaces online, AI is a pretty cool tool.
Narrator: actually, no it was not.
e.g. it still spreads misinformation.
Making no mistakes is a much higher standard than that which we hold to ourselves. Why are people moving the goalposts of intelligence or usefulness behind perfection?
Bc when I use a calculator, I actually DO expect literal perfection. And when I use Google search, I expect it to be "useful". And when I find information on Wikipedia, I expect it to be somewhat authoritative, even if incomplete. And if I use automated driving features, I expect them not to completely take over the wheel and crash me into a brick wall... or into a little child in a crosswalk right in front of me.
People who drive drunk lose their driving privileges. Employees who screw up that often get fired. Doctors who dispense incorrect medical advice lose their ability to practice medicine, plus get exposed to lawsuits. Counselors who tell their patients to kill themselves... Anyway, people DO experience the consequences of their actions, like ALL THE FUCKING TIME.
Whereas in contrast, AI is said to be "going to be" great, not to be great now. Fine, finish it and then we'll talk. In the meantime, stop shoving it in my face.
If AI is like a human, it's at best a 2-year-old and at worst more like 6 months old. It should not be "in charge", e.g. of dispensing medical advice. And since it takes so much time to check its results for errors, it is literally slower and more painful to use it than to not use it (sometimes; often, in fact).
You have a point somewhere buried in your mind, as revealed by the insightful first sentence, but your phrasing in the second sentence reads like sea-lioning and is not helping. Nobody is asking for "behind perfection" as that is literally mathematically impossible, and that is not what "moving the goalposts" means. It should not be enough to sound intelligent - we need to actually be such (same for AI as well).
And you have calculators.
And Google search has been spotty since the beginning.
And Wikipedia article quality ... varies.
Like people, if you give AI a sufficiently complex problem, it won't get it 100% right on the first pass. But, if you give it enough detail to distinguish an acceptable solution from an unacceptable one, it might get 80% of what you're looking for on the first pass, boost that to 96% on the 2nd pass, 99% on the 3rd pass, and eventually what's left is simple enough that it finally does get it 100% right.
Anybody who accepts the first thing AI tells them with today's tech, is using it wrong.
Your "if" there is doing an awful lot of the heavy lifting. Fwiw, I'm not talking about special-purpose, custom-built LLMs - a large part of the problem is the lack of precision in the language used to describe the concepts under discussion.
An example: https://lemmy.world/post/46390157

Another example: https://discuss.tchncs.de/post/59584533

Both of these would be better called "cheating" than "AI". But seeing as AI makes cheating easier - and, more to the point, so many companies (such as Oracle) are literally pushing their remaining programmers to write programs exclusively with AI rather than by themselves - the very definition of "cheating" will need to be reexamined.
In the examples, also take note of how poor the quality of the LLM output is - e.g. regardless of whether the source is Grok or Claude or whatever, those therapy examples are not helpful in the slightest. Your counterargument might be that these are the "cheap" (aka free) AIs, but preemptively I will say in response: they still count as "AI", especially in the context of the OP.
As far as "cheating" goes, ever since I got out of the game of paying a bunch of academics to judge and label me, I have been actively encouraged to "cheat" by the people who pay me money... that's real life.
If you're using a Ginsu knife to knead dough, you might not have optimal results. Claude has been pretty good at code since about 4-6 months ago. Grok? Last time I asked Grok for anything, it was the fastest LLM on the market, and the most nonsensical - useless trash.
(I did not downvote you btw)
Okay but Grok is still surely part of the "Anxiety around AI is growing rapidly in the US, research shows" phenomenon, as Grok is one of the various AIs that people are aware of, and anxious about.
Your words read to me like you have kept yourself aware of the positive benefits of using AI - which many people on Lemmy, including to some degree myself, have done far less of.
But there are some negatives as well...
There's plenty of negatives to any new tech, anything can be carelessly or ignorantly mis-applied.
The computer has been coming for our jobs since it was created. Bob Cratchit no longer works for Ebenezer Scrooge; he's been replaced with software.
People over-trusting software has been problematic since software became accessible to be over-trusted. A favorite (horrible) example from not-so-long ago, but pre-ChatGPT release I believe: https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/
For the past year+ it has been popular sport to ask AI a question and poke fun at how wrong the answer is. I, too, get plenty of wrong answers from it - and anyone who trusts what it, or a Google search, or some post by some random troll with an axe to grind on some social media site, or even your high school whatever teacher, without verifying the results... gets what they deserve, in my opinion.
What changed for me within the last 12-16 months is: at least around questions in software development, the answers started being correct more than half the time. That was a critical watershed, because in essence that means that if you give your AI the tool to test its own work, it can work on hard problems that have easy methods to test for correctness (starting with compiler errors), and basically chip away at them - fixing problems until it has an answer that is correct enough to pass all the tests you have specified for it. Before that, an AI agent left to work on problems without guidance would more often get stuck in loops, or run off the rails altogether and never reach a viable solution.
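The "give it a tool to test its own work and let it chip away" loop described above can be sketched as plain control flow. This is a toy model only: `propose_fix` stands in for an LLM call and `run_checks` for a compiler/test suite; both are stubs I made up so the loop is runnable, not any real API.

```python
def run_checks(code):
    """Stub for the compiler / test suite: return error messages, empty = pass."""
    errors = []
    if "def add" not in code:
        errors.append("missing function `add`")
    if "return a + b" not in code:
        errors.append("`add` does not return a + b")
    return errors

def propose_fix(code, errors):
    """Stub for the model: patch one reported error per round."""
    if "missing function `add`" in errors:
        return code + "def add(a, b):\n"
    if "`add` does not return a + b" in errors:
        return code + "    return a + b\n"
    return code

def repair_loop(code, max_rounds=5):
    """Keep feeding checker output back to the model until it passes or we give up."""
    for rounds in range(max_rounds):
        errors = run_checks(code)
        if not errors:
            return code, rounds
        code = propose_fix(code, errors)
    return code, max_rounds
```

The key property is the one the comment identifies: once the model's per-round fix rate is better than random, an executable checker turns "often wrong" into "eventually right", whereas without the checker the loop has nothing to converge on.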
In the past 6 months or so, tools like Claude have gotten much better - incorporating into their normal response algorithms a lot of the kinds of things I (and many others) had to "tell them" manually 12 months ago to get good results, anticipating and fixing problems in their work before presenting it as a solution for your consideration.
The language they present solutions in has been traditionally too over-confident, that's a huge fault which I attribute to being trained on blog posts by know-it-all blowhard people who similarly present their ideas as gospel truth rather than their potentially flawed best efforts.
Clue for the clueless: even the best human experts in their fields are still only providing potentially flawed best effort answers. Once you leave self-defined fields like mathematics, all we have are our best guesses about how things really work.
One thing that your comments touch on here is just how little of the "Anxiety around AI" actually has to do with AI.
When e.g. Oracle lays off 30k workers, how little of that truly has to do with AI? vs. instead market instability etc. What complicates the issue is that most often, the corporation will claim that the layoffs are to better streamline the company in a future where AI will need fewer workers, so to prepare for that now... they'll just go ahead and get rid of them immediately.
So this isn't even people using AI inappropriately; this is people blaming AI for what they wanted to do anyway, for reasons of profit.
Then again, events such as those presage what is to come: when AI truly can do it all, how will humans be able to earn a paycheck? Spoiler alert: not all of us will. And especially in the meantime there will be a period of transition and upheaval.
This is what I felt your comments lacked acknowledgement of: not the downside to using the tools but the wider conversation that uses the keyword "AI" but has really barely anything to do with it, as opposed to political and social and economic forces.
I felt your comments lacked acknowledgement of: not the downside to using the tools but the wider conversation that uses the keyword “AI” but has really barely anything to do with it
Yeah, I get tunnel vision like that, when people say "AI is a problem" my focus is on the AI, not the people's underlying pre-existing problems that haven't gone away since AI "came out / got big".
The word itself keeps changing its meaning - it used to mean ML techniques, then looking forward to gen-AI, now it supposedly means "capitalism distilled"? See e.g. https://www.structural-integrity.eu/is-there-a-need-for-ai-after-capitalism/ for an excellent example of the kind of anxiety surrounding AI that we are talking about.
I agree with you that ML itself is not a problem, nor even is LLM technology. Although like nuclear power, as we advance towards true AI the more powerful the tool the greater danger its misuse portends, as you said. And also as you said, as it got big the discussion moved towards the latter topic, without bothering to be precise in what was being discussed, instead calling everything by the (clickbait?) buzzword "AI".
The "danger line" I perceive is when we give anything "agency". It can be a float-level-switch on a lake controlling the water release gates on a dam, such a simple thing, but if it has a malfunction (and nobody notices in time) the dam might get over-topped with water, or the whole lake might be emptied - potentially flooding downstream communities, or simply wasting valuable water needed to get through the next dry season... all that from a simple little (binary) bit of "artificial intelligence" - but when it's granted "agency" to operate the flood gates without competent oversight, it becomes dangerous.
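The float-switch example can be made concrete with a toy sketch of what "competent oversight" means in code: the actuator obeys the simple sensor only when an independent measurement agrees. All names and thresholds here are illustrative, not any real control API.

```python
def gate_command(float_switch_high, lake_level_m,
                 min_level_m=2.0, max_level_m=10.0):
    """Decide whether to open the dam's release gates.

    float_switch_high: the single binary bit of "artificial intelligence".
    lake_level_m: an independent level measurement acting as oversight.
    """
    # Oversight overrides: never drain a nearly empty lake, and always
    # release when overtopping threatens, no matter what a (possibly
    # stuck) float switch claims.
    if lake_level_m <= min_level_m:
        return "CLOSE"
    if lake_level_m >= max_level_m:
        return "OPEN"
    # Otherwise the simple sensor is allowed its limited agency.
    return "OPEN" if float_switch_high else "CLOSE"
```

The point of the sketch is the shape, not the numbers: agency granted to the dumb component is bounded by a check it cannot override, which is exactly what the malfunction scenario above lacks.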
On May 6, 2010, a large collection of automated trading algorithms, acting with agency too fast for anyone to manage, caused a dramatic flash crash of the stock market.
Lately, we've got ELIZA (https://en.wikipedia.org/wiki/ELIZA) gone wild in advanced chat-bots. People who allow themselves to be sucked into the fantasy that the chatbot "is real" like a person they can trust are giving those chat-bots agency in their lives - and with a baseline of 132 suicides per DAY in the US alone, of course there will be some people whose decision to take their own life was influenced, both for and against, by their interaction with chat-bots.
I give the LLMs (limited) agency in the creation of software. I like to think I employ a risk-based approach, giving more agency and less oversight in simple applications with limited to near-zero risk while providing stricter oversight and review for LLM generated code which has more important functions / greater risk of harm should it malfunction... Of course, these are judgement calls, and with millions of people using LLMs to generate code, even if they all follow a similar risk-based approach to how much unrestricted agency the LLM is given, there will be those who make bad judgement calls...
Then there's the YOLOs - pushing the boundaries as hard and fast as they can in some sort of quest to be the first to achieve something great. As Ollivander said to Harry Potter: "He who must not be named did great things, terrible to be sure, but also great."
I love the nuanced approach here - neither pessimistic nor optimistic but rather realistic. Then again, I would strongly question the utility here, or even the definition, of "great" - except you were just using it in an explanatory sense, so I get what you mean. But for a corporation to achieve "success" at the expense of an enormous number of workers let go... is that really "great", truly?
Beauty lies in the eye of the beholder and I see such ugliness, even while I also see potential for truly great good as well. It is definitely not the "fault" of the tool, but rather the wielder, although either way I see why people have anxiety, when they consider the ways that the tools are currently and actively being used against their interests.
Technology up to the dawn of the AI slop era was indeed expected to be perfect. When it wasn't, we fixed it so it would be.
Why should AI be exempt from this? Techbros have convinced you that it should be so that their favourite lines go up.
There's literally nothing more to it. A hammer is useless if it only drives 50% of the nails you hit with it. Why the fuck should we expect anything less than triple or quad 9 accuracy from AI if it's so god damned "intelligent"?
B-b-be-be-because shut up you, that's why!
Won't someone think of the poor shareholders?
(/s)
All of that is because the incentives are coming from those with the most power/money who are the most psychotic cancer cells in the history of the world. You're only aware of such a tiny sliver of it because that's the most problematic and gets the most news. Those are all huge problems that need to be solved, but the cause isn't AI. AI is just an accelerant for a sick hypercapitalist society that is doomed to collapse. AI itself has been used for millions of great things that improve all of life on earth, but in the hands of these psychopaths it's just being used for the ultimate triumph of Capital over Labor, at the expense of literally everything else on earth.
AI is just an accelerant for a sick hypercapitalist society that is doomed to collapse.
I had, like, a bunch of paragraphs lined up because I thought you didn't understand this. But as it turns out, you seem to be perfectly okay with the world being raped to death.
I hope your academic field is entertaining, at least.
...I work in earth science...
I know. I am perfectly capable of reading more than one comment.
zd9, you are aware that AI is making things worse, you say so yourself, and yet you feel the insatiable need to stand here bitching that no one understands your unique, special use case. For what?
I. Do. Not. Give. A. Fuck. that academics are using machine learning to solve problems. That is their business. <- Is that what you wanted? There you go.
So do you feel this hatred towards Monte Carlo sampling methods, or Gaussian Mixture Models, or Finite Element Method solvers? It's all just math and it is being applied towards both how to grow crops better and how to make bombs. Seems pretty naive.
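For readers who haven't met these methods, a minimal Monte Carlo example shows how ordinary the underlying math is: estimate pi by sampling random points in the unit square and counting how many land inside the quarter circle. This is a generic textbook illustration, not code from anyone in this thread.

```python
import random

def estimate_pi(n_samples=100_000, seed=42):
    """Monte Carlo estimate of pi from uniform samples in the unit square."""
    rng = random.Random(seed)  # seeded for a reproducible estimate
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle
            inside += 1
    # Area of quarter circle / area of square = pi/4.
    return 4.0 * inside / n_samples
```

With 100,000 samples the estimate lands within a couple hundredths of pi; the same sample-and-count idea scales up to crop models and bomb physics alike, which is the point being made.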
You know what all those methods have in common? FUCKING evaluation of smooth continuous functions based on a limited number of samples.
REAL MEN WRITE REAL PROOFS. They don't use God damned computational methods which completely IGNORE non-converging regions.
I used opus to generate this lean-verifiable proof that you in particular are full of shit!
import Mathlib
open Real

noncomputable def f (x : ℝ) : ℝ := sin (π * x) * exp (-x^2)

lemma f_smooth : ContDiff ℝ ⊤ f :=
  (contDiff_sin.comp (contDiff_const.mul contDiff_id)).mul
    (contDiff_exp.comp (contDiff_id.pow 2).neg)

lemma f_zero_on_ints : ∀ n : ℤ, f n = 0 := by
  intro n
  show sin (π * (n : ℝ)) * exp (-((n : ℝ))^2) = 0
  rw [mul_comm π (n : ℝ), sin_int_mul_pi, zero_mul]

lemma f_ne_zero : f ≠ 0 := fun h => by
  have h₁ : f (1/2) = 0 := congrFun h (1/2)
  have h₂ : f (1/2) = exp (-(1/2)^2) := by
    show sin (π * (1/2)) * exp (-(1/2)^2) = exp (-(1/2)^2)
    rw [show π * (1/2) = π/2 from by ring, sin_pi_div_two, one_mul]
  exact (exp_pos _).ne' (h₂ ▸ h₁)

theorem sampling_is_a_lie :
    ∃ f : ℝ → ℝ,
      ContDiff ℝ ⊤ f ∧
      (∀ n : ℤ, f n = 0) ∧
      f ≠ 0 :=
  ⟨f, f_smooth, f_zero_on_ints, f_ne_zero⟩
Yes, of course. Monte Carlo killed my father.
You know what the problem is? You think that you're too smart to be caught with a meth addiction. See, your neighbor got fucked up, lost a bunch of his teeth, but you, you know about microdosing.
Your other neighbor fell off a construction site that was missing its guard rails, but that wouldn't happen to you; you have excellent balance.
The movie Jurassic Park is literally about people like you.
Do you have a reason to restrict Gaussian mixture models you'd like to give me, or are we just pissing in the same bush?
lol ok, please keep sharing how you don't understand anything about ML or even just... math/science in general, it's actually entertaining
Understand what? That you have a robot girlfriend you don't want to give up? That you would burn the world down for Her.
You know, human love is just a biochemical response to external stimuli, I'm sure there's a drug that can replace it.
ok buddy, best of luck to you I guess
All those things being true is enough for me to hate AI.
Edit: As my dad says, One aw shit wipes away a million attaboys.
Do you hate the concept of iron alloy? Because it was used for hundreds of years in swords and weapons to kill millions of people. See how silly that sounds?
Iron alloy doesn't convince people they shouldn't have their noose visible in case someone might see it and intervene. You're not going to change my mind. Once the bubble is popped and all our lives get worse and 3 people control all the technology it's not going to matter that it saves people time, or it creates efficiency.
You're not um... you're not even reading, but ok. Keep living in your echochamber I guess.
Just because you don't like my points doesn't mean I'm arguing in bad faith, and I find it a little insulting that you're trying to dodge instead of responding to my point by insinuating I am.
No I'm saying you're not even trying to understand, you're just saying you don't like it no matter what. To that I said, ok keep living in your echochamber. I'm not saying that's bad faith, it's just not trying to reach truth.
And what is the truth? You don't get to define away all the bad parts of the technology and just point out the good parts. My life is materially worse because of how this technology is developing and being implemented. Some extremely vague wins aren't enough to convince me to change my mind. I have heard your argument, I have measured it and found it wanting.
ok
Electricity -> electrocutions
Gasoline -> fire bombs
Axes -> axe murders
we really need to get back to throwing rocks at each other, it's much less environmentally impactful and puts us on a much more level playing field, only the rich control all these techno-marvels.
If you have anything else to add besides hyperbole now is the time. Otherwise I think we're done here.
I was excited about the idea of purpose-built systems trained on specific datasets to help find complex patterns to diagnose diseases or suggest potential molecules for specific purposes.
Then the LLM shit started and everyone started fantasizing about intelligent "AI" just because it was able to reproduce patterns of language that seem relevant to a given input. Some of those funding it kept chasing that dream and are convinced that, if they just throw more compute at the problem, they can evolve the renaissance AGI that can do anything. Then they can fire every worker and be bazillionaires with robot slaves and never have to work another day of their lives... and fuck everyone and everything else.
It's amazing what we can ruin when we let greed and selfishness drive our society.
At 1 million I could already stop working and live a decent life :/. I really don't get why, past 1 billion, they continue to search for more.
Maybe it's because I've only ever had at most a comfortable income but I truly don't understand the mentality of needing so much money.
I don't get paid as much as my peers but I make enough to be comfortable. I am my own department and, aside from emergencies and other high priority situations, I manage myself and choose what to work on when. I have a decent work life balance. Because I make enough to be comfortable (in large part because my landlord promised not to raise our rent - early in the COVID lockdown - if we were "good tenants" and has managed to keep true to her word) I don't feel the need for more. That balance is worth not making the 20% more a year I might get somewhere else because I can't guarantee I won't have a shitty boss that doesn't let me have that work/life balance.
It's a sickness
They actually have a disorder or disease. However, in this case their disorder is destroying the rest of the world. There's a fast-approaching point at which the world organism will self-heal to prevent its own death.
everyone started fantasizing about intelligent “AI” just because it was able to reproduce patterns of language that seem relevant to a given input.
They've been fantasizing about that ever since "computers" started growing in accessibility - in the 1960s....
The current crop is just the first time such things have been delivered with something resembling "average" human responses.
They've been fantasizing about that ever since "computers" started growing in accessibility - in the 1960s....
Fantasizing wasn't the best choice of words - I often understate what I mean to communicate in an attempt at humor. I should have said "everyone became so obsessed with intelligent “AI” that they're willing to dump a significant portion of the world's resources just because... "
The current crop is just the first time such things have been delivered with something resembling "average" human responses.
That's more or less what I meant by "patterns of language that seem relevant to a given input". I was attempting to understate this in order to exaggerate the villainous eagerness and stupidity of greedy, rich fucks.
becoming so obsessed with intelligent “AI” that they’re willing to dump a significant portion of the world’s resources just because…
The late 90s .com bubble was very eye opening for me.
Top 1% (and up) wealthy people I have known often think in terms of "getting to the next level" - and no matter where you are, there's always a next level. Even the wealthiest people in the world aren't the most powerful in various circles, the most popular, the most well liked, the most beautiful... there's always that next unattainable step to vie for.
When there's a chaotic upheaval, like .com, or AI, that's an opportunity to reposition - and as many of these people are older, YOLO - they'll put significant capital at risk to try, especially those wealthy enough to make significant plays with less than 10%, or less than 1%, of their current net worth...
During .com, starting programmers' salaries doubled within less than a year - pretty much directly as a result of this opening of the powerful people's wealth hoards, putting them in competition with each other to hire everyone they could who could help them try to capitalize "on that .com stuff."
We were "seeking investment" before .com hit, after it hit investment was seeking us: satellite calls with guys from their yachts in the South Pacific... wild times.
Once you hit the 0.01% most wealthy, it's beyond "villainous eagerness and stupidity of greedy, rich fucks." It's more of a free-for-all among those players for how they might get to the next level, or be passed by others who climb while they stagnate. 0.001% of 8 billion, or even of 350 million, is still a LOT of people.
The LLM craze is a natural maturation point of the AI field though, and now it's expanded into foundational models (FM) which you would still probably just call LLMs because most people don't know the differences. FMs are getting close to that point of a magical universal computer that you can tell it to do anything about anything and it just works. There are specific FM applications like FMs for earth science or remote sensing (which I work in), but the big money coming from this technofascist elite is pushing for FMs for everything along with Agentic AI, which is the ultimate state to replace pesky human workers overall. They seek the ultimate triumph of Capital over Labor.
There are competing incentives driving the industry, but by far the strongest one is coming from who has the most money, and those who have the most money are the worst possible people that should have no say in how anything works. Scary times we're in.
The LLM craze is a natural maturation point of the AI field
I don't see why that is. Using ML to generate models that accurately perform specific tasks is orders of magnitude away from attempting to feed the entirety of human text into ML and expecting superhuman intelligence to emerge.
now it's expanded into foundational models (FM) which you would still probably just call LLMs because most people don't know the differences.
While ML and "AI" is not my field, I'm fairly certain that what I was attempting to describe in layman's terms in my literal first sentence were these foundational models you are referring to.
FMs are getting close to that point of a magical universal computer that you can tell it to do anything about anything and it just works.
I have no direct experience outside of LLMs, and I don't really take issue with what I understand FMs to be, so long as they keep their scope narrow and focus on accurately completing specific tasks to assist humans. As soon as we hand off control and trust it blindly, without extensive trials ensuring its reliability and failsafes in place to ensure inaccuracies are caught, I start raising concerns.
My only experience is with LLMs - a few, minor attempts to "test the waters" of the major, publicly available LLM models. I've been frustrated with my search results and glanced at the AI results. Work gave us Gemini licenses, and I used it in similarly desperate situations for coding help and help with Google products, foolishly thinking that if any LLM designed to help with such tasks would be passably useful, it would be the LLM of the company that owns the products I seek help with. Unless something has changed drastically in the last month or so, every interaction has been a roll of the dice, to such an extent that my occasional "testing the waters" caused me to jump out and avoid it as much as possible. I simply can't trust it not to hallucinate and gaslight me.
What I see as the problem is moving way, way, way too quickly in trusting language models to do anything even remotely important. Human communication is extremely nuanced, complicated, fluid, and imperfect. Humans misunderstand each other during communication even when we have the context of in-person visual/audible cues and interpersonal history.
What the pro-AI people always tend to argue back at comments like yours is that:
- you used the wrong AI - it should be <insert preferred model here> - probably Claude at this point in time, for programming? i.e. the implication being that you are some old man who yells at clouds and does not know what they only learned themselves <6 months ago, as if that knowledge entirely invalidates your own lived experiences even in the last ~4 weeks.
- you used the wrong parameters / queries. When applied to the equivalent of Google searches this seems a false claim to me, because those used to be fairly brainless, whereas sometime soon Gemini is going to start charging $$$ in return for being able to find anything remotely helpful on the internet. But for now they would like it pretty please if you would help them train their model, before they turn around and sell it to you and others (isn't it glorious how you are allowed to share in the work part, without proportionate access to the reward at the end?).
Tbf you probably did use the wrong queries for the programming questions. It seems to me like someone who actually lets a "self-driving car" drive by... itself? You pay money for something marketed one way, but the reality after purchase is quite different, and if you e.g. run over little children, then it's not the fault of those who sold you a "self-driving car" but rather (legally speaking) you, who should not have allowed the car to drive by itself - how dare you not know better! (Despite being told precisely that, with a nod and a wink.)
The AI hype is real, and false. Despite that, LLMs are quite a capable tool if you ignore the hype and use them under much more constrained circumstances than the hype would lead us to believe (though the hype surrounding AI, rather than LLM technology itself, is the literal point of the OP).
I stumbled upon this randomly and enjoyed the read: https://www.structural-integrity.eu/is-there-a-need-for-ai-after-capitalism/.
The lack of regulation of AI is absolutely a serious problem; there are so many problems it isn't even funny.
Problems with people using it for health advice.
Problems with teens using it instead of friends.
Problems with AI giving absurdly incorrect advice to people in general, but also to professionals like managers and CEOs.
Problems with the data centers that host these AI systems requiring enormous amounts of power. So much that researchers have shown these data centers are drying up vast areas around them.
The techno-fascists are in all sorts of business, that's not special for AI. The problem is with AI the techno-fascists aren't regulated in any way.
Neither in how their data centers impact the environment and the electric grid, nor in how AI has actual bad effects on their customers, because there is no regulation on the use or supply of AI services.
100% agree with every point you made. Everything you're saying is specific to this iteration of LLMs though. That's just one tiny piece (well large in terms of public perception and capital acquisition but small in terms of the research space).
amazingly powerful tool
Is it? I keep hearing people keep parroting this but what big advancements have we made cause of AI?
As a developer, I keep hearing this but all I see is low quality software that is all smoke and mirrors. Pumping out low quality code at a high pace is worse than pumping out less but higher quality code.
Dude, ChatGPT just solved an Erdős problem a few days ago and Mythos is exploiting decade old undiscovered 0-days in OSes and capable of pivoting 0-day Firefox bugs into full blown root access.
Yeah, I get that the viral "how many 'r's are in strawberry" stuff is funny, but the idea that historical issues with transformers are preventing them from accelerating peak capabilities way beyond what most experts thought possible just a few years ago is borderline delusional.
The field is moving so fast at this point that if you are basing any sense of limitations on even ~2mo old sampling, your conclusions are likely out of date.
They aren't a silver bullet for everything (yet) but how capable they are at the things transformers are starting to be specialized into is well past the avg practitioner.
I've been writing software for well over a decade and the modern agents do a better job than I would around 90% of the time. Yes, I'll occasionally need to bring up issues with their work, but I'd say at this point around 50% of the times I think they made a mistake I was actually the one who was wrong.
This is only within around the last 3-4 months that it's been like this.
Dude, it's not even worth it. These crazy people want to live in their own realities. No matter how much you explain something, they'll continue to believe what they want to feel morally superior, even if it's completely naive.
Oh did it solve it? You didn’t really provide any sources so I had to look it up myself.
And in the example from 2 days ago, it just applied an existing formula in a different context.
Which is helpful for sure, but I wouldn’t say it solves it.
'Just'? It's been an open problem for decades that mathematicians have tried to solve over that time.
And now it is solved.
Because ChatGPT applied something no humans ever thought to do.
And Terence Tao and the other mathematicians that have reviewed it say it's solved. But I guess someone should let them know that grandwolf319 doesn't consider it solved?
I didn’t say it’s not solved, I said chatGPT didn’t solve it but gave a hint.
Literally name any single industry with anything, and AI has vastly pushed it forward. It's way too big to type here. Just off the top of my head: climate, pharmaceutical, other biomedical stuff (neuroscience, genetics, medical advances in every possible body system), energy (that alone has THOUSANDS of huge advances), science in general (astrophysics, geophysics, chemistry, agriculture, I mean every single scientific field). I'm listing every field I can think of, because it's that pervasive.
The most visible advances, which are just in business/productivity for the sake of making money, are I'd argue the least important. They're most important for a capitalist society that values profit over all else, but that's a recipe for collapse, which is where we're quickly headed.
You think AI has made improvements to our climate???
Can’t believe I read this on lemmy
lol please, go research something before you make any claims on it. No I'm not talking about datacenters fucking over the water supply or using fossil fuels, that's bad obviously. Literally right now go google "AI used in climate science". Just go do it. You'll learn.
Are we talking about machine learning which has been around for a decade or generative AI? People usually mean the latter. Machine learning isn’t what caused the AI craze.
I honestly am curious in how an LLM could improve the climate in anyway.
And imo leaving the datacenters out is kind of a bad faith argument, it’s the only reason why it’s everywhere. It wouldn’t be a problem if it was basically a new computation tool used by niche professions.
I know I'm being pedantic, but machine learning has been around for many decades
go google what I said
That is not how socializing on the internet works. You make the claim, you back it up or be discredited for inconsideration
lol
You're getting a lot of downvotes - I think it might be helpful if you explain you are using a different sort of AI rather than LLMs or gen AI.
People on this site are crazy, I understand. They see "AI" and instantly assume it's all Palantir self-targeting murder drones. No amount of explaining will change crazy people's minds, and they want to live in their own reality because it makes them feel morally superior.
I use all kinds of models, to include diffusion models (generative), vision transformers, LSTMs, CNNs, and all kinds of classical ML methods. It really doesn't matter if I say what the models are or not.
You are not helping your cause by emo-venting here. Go back up and re-read the OP title - I'll wait.
So long as people have anxiety over AI issues, including ethics and water usage, then the people asking questions have a firm foundation for their statements. Why not (gently) invite them in, to know what you do? Curiosity is an amazingly adaptive trait in humanity, and they might be genuinely ready to listen to a well-intentioned answer. But you are turning them away not so much with the content as the tone of your responses, essentially proving them right that pro-AI advocates froth at the mouth at how AI will overtake humans rather than use logical argumentation practices. But why put forth Musk's words here, on the Threadiverse?
If you can keep your head while the rest of the world loses theirs...
First, read all the responses. My initial tone is fine, but like 10 different posters were foaming at the mouth saying I personally am killing people because I work in the general space. There's no reasoning with people like that.
🙄👍
I want to agree with you, but AI is just another psychopath in a world where we don't need any more psychopaths.
Indeed.
To cut off their data and revenue streams, stick to Open Source, locally run, models/chatbots.
Almost all research sharing is done through open source. Of course there are specific agreements between two companies if they wish to collaborate on private products, but the vast majority is just sharing a code base on github, writing a paper, and letting others review and try it out.
It’s amazing how open source has benefitted the individual. The monopolization of compute is still a barrier we’ll have to crash through
The problem is that it’s in the hands of psychotic technofascist greedy subhumans
gee maybe people like you shouldn't have put those tools into the shitbag's hands?
I remember a decade ago there were multiple movements to rein in AI before it became uncontrollable, and any chance of that is long fuckin gone. We're gonna barrel forward heedless of the danger, because fuck you, that guy wants profits and doesn't care about humanity.
and people like you made the tools and gave it to 'em.
That seems terribly extreme. It's not like it's a bomb that's obviously for blowing people up. Someone made something with some cool applications, then some guys with many times more money and resources than anyone should be allowed to have took the idea and ran with it toward a bunch of psychotic ends.
The problem isn't that people can use good things for bad purposes, nor is it the people that make or improve those things. The root cause is that western society is currently structured in a way that ends up rewarding certain types of madness, and the reward structure is set up such that individuals can get a vast undue amount of influence and power. Under these conditions, it is natural that even a tiny number of such individuals can overtake the system like a single cancer cell can eventually kill someone. All of these alarming things going on for over 60 years are symptoms of that societal illness. Please don't blame scientists for sciencing.
bingo
I fucking work on climate models you jabroni. You have no idea about the industry or really anything other than what your most echochambered influencers tell you to think.
Doubtful. And you thought that AI would stay in modeling? You made them something dangerous, and you thought it wouldn't be weaponized?
you fucking moron. you either made yourself their bitch, or were used as their bitch unknowingly. science is ashamed of idiots like you who enable the worst.
that's why you see lots of chemical and biowarfare?
or continued CFC use huh?
you simpering dolt.
lol thanks for the chuckle. Go outside for a bit.
enjoy your vibe coding clanker

It's cute that you think you're somehow different
aww thanks!
- is an AI researcher
- immediately uses Nazi lingo after introducing themselves
you can't be more obvious than this about the ideology of AI💀
lol get offline a bit, not everything is Nazi everything. You're saying "subhuman" is Nazi coded?
It's always been that way, it's just that until now the general public could say "well at least they pay me."
So ironically this rise in anxiety is itself being driven by self-interest. People were fine with those people being in charge as long as they got a comfortable lifestyle out of it. A pattern seen throughout history.
It's truly amazing that when an expert in the field says something, they still cover their eyes and ears and say you're wrong, they're always right.
If someone did this with any field, they'd be called willfully ignorant. But because you work in Current Thing, you're now against them, for being honest with the reality of your job.
Bet these are the same people who think they're the rational ones and everyone else is a fool or paid actors.
If AI worked, we would have had self driving cars by now.
I can’t think of anything good that we have today cause of AI that we didn’t have 5 years ago.
Global warming. It is definitely better rn.
quarterly profits have been great or something
Will no one think of the shareholders? 🫤🙄 I am very much against this push of AI onto everything without proper informed consent. I'm mainly thinking about how bad it is that AI is scanning people's private photos, like Google and Meta do in the name of looking for child abuse. It's an easy sell if you say anything is done to "save kids", but it's just mass surveillance, creating more harm than good.
If AI worked, we would have had self driving cars by now.
We don't have self-driving cars because no corporation is insane enough to take on the liability for driving a fleet of cars on our highways - it's a bloodbath out there (when you look at it from the large-scale view), and anyone operating 10,000+ vehicles out there is going to be involved in multiple fatal accidents per year.
When it's UPS operating a fleet of trucks, the liability for the 30-ish people killed per year in collisions with their trucks is handled driver-to-driver. When "the robot" is out there up against the world, who's the jury going to side with?
Yep, juries will pick the person every time. You only need ONE crash that hits the headlines... busload of kids, famous person, etc., and your brand is annihilated.
If they have a similar rate of accidents as regular people, wouldn’t it be easier to mitigate risk through insurance since they are at scale?
You can go as far as to say that self driving manufacturers could insure their cars themselves since they have thousands of vehicles.
If what you're saying is true, then insurance wouldn't be profitable today.
Insurance companies have resorted to denying everything and forcing their customers to sue them for their money. I'd say that's a pretty good sign it isn't actually profitable today.
Isn't profitable? Insurance companies are definitely making profits because of their tactics of doing that to their customers.
Insurance is a numbers game: actuarial tables, predictable risk, predictable liability, and they do pay out occasionally, they even pay out ridiculously over-valued claims occasionally, as part of a numbers game that keeps their overall costs as low as possible.
I rode in one last month, down the highway.
Even the most pessimistic reports of human involvement still puts them in the 'mostly self-driving' camp, and I'd rather have one with a fallback than one without.
Should I disbelieve my lying eyes?
mostly self-driving
Yeah I wouldn’t call that self driving.
Here is a genuine question for you, how did the cost compare to an uber ride? Was it a fraction of it?
Technological leaps have always provided huge reductions of cost, I do wonder how expensive robo taxis would be compared to regular ones
I don't think we'll ever stop moving the goal posts. You can still meet people who don't use computers and have never seen the use in them.
Moving the goal post? Self driving has the word self in it, if anything I’m insisting on keeping the goal post.
There are 70 drivers for 3000 vehicles. Which goal is good enough for you? We'll make a note, I'll tell you when we passed it, and you can tell me why it's not real. I'm willing to wait.
I would have imagined self driving means 0 drivers
It would also include all driving conditions
For the record, I’m not saying that’s not impressive, I’m just going by the definition.
I honestly thought we would have automated truck drivers by now, which imo is when shit really hits the fan.
The number of self-driving cars and trucks has been roughly doubling year over year, there are around 5000 right now.
FWIW, I don't think we will ever see safe in all driving conditions, there are plenty of driving conditions where it is fundamentally unsafe for cars and no man nor machine should be driving in them, so in your particular case, you get to wait for self driving cars for the rest of your life.
I think in 5 years people will be complaining about a lack of available open-source and self-hosted self-driving cars, but safe in all weather? Probably not.
lane keep isn't exactly self driving
Fun fact: Data centers in the US are completely vulnerable to UAS attack vectors.
What is UAS? Drones? Yeah, but I don't think Iran can reach them; otherwise, yeah.
I'm not talking about Iran.
Keep talking, all kinds of people read open forums - and no matter how careful you think you are about protecting your identity, you're unlikely to escape identification if you have been identified as a "person of interest in a high value investigation."
Canada?
Anxiety or animosity?
It's happening in college students, too. We're seeing students asking for paper'n'pencil work again.
Both
Even from source documents fed to notebooklm, it has been confidently giving me wrong advice back to back. These non deterministic tools can be useful but can also be dangerous for our work.
it has been confidently giving me wrong advice back to back
You have been accepting its results as "confident" when you really should be verifying them independently.
Many things in this universe are NP hard - no way to solve without slogging through every possibility, but relatively easy to check once you have the answer.
People aren't right 100% of the time. LLMs trained on peoples' writings (often rando people on the internet) are also not right 100% of the time. You should verify anything you get from either source - it's much easier to verify than to do the basic research yourself.
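To make the "easy to check, hard to solve" point concrete, here's a toy sketch in Python using subset-sum as a stand-in NP-hard problem (the function names and example numbers are mine, just for illustration): finding a subset takes an exponential search over all 2^n subsets, but checking a proposed answer is a quick linear pass. Verifying an LLM's output is the cheap half of that asymmetry.

```python
from itertools import combinations

def solve_subset_sum(nums, target):
    """Brute-force 'solving': try every subset -- exponential (2^n) work."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None  # no subset sums to target

def verify_subset_sum(nums, target, candidate):
    """'Checking' a proposed answer: one linear pass over the inputs."""
    if candidate is None:
        return False
    counts = {}
    for x in nums:
        counts[x] = counts.get(x, 0) + 1
    for x in candidate:          # every element must actually come from nums
        if counts.get(x, 0) == 0:
            return False
        counts[x] -= 1
    return sum(candidate) == target

nums = [3, 34, 4, 12, 5, 2]
answer = solve_subset_sum(nums, 9)          # slow exhaustive search
print(answer, verify_subset_sum(nums, 9, answer))  # fast check: [4, 5] True
```

The same shape applies to LLM output: treat the model like the slow search and keep the cheap verifier (tests, a compiler, a citation lookup) in your own hands.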
non deterministic tools can be useful but can also be dangerous for our work.
The most useful thing I have found for non deterministic tools to do? Have them create deterministic tools for me.
For who is it growing? Is it the same people that use AI to "discuss" personal problems because the AI is always nice to them? (yes this is really a thing, especially with young people).
Or people who use AI to be "creative"?
Or the people that use AI to seek health advice?
There are many good reasons to worry about AI, but my guess is that most the people that worry, do it for the wrong reasons.
Apart from the bad advice, the annoying AI customer service, the possibility of taking jobs, and the potential danger to humanity because leaders trust the AI, there may be a much closer and more imminent danger.
The movie "Good Luck, Have Fun, Don't Die" seemed a bit stupid when I first saw it, but goddam the movie has a point, that's how it's actually turning out for some people. They choose to live with an AI generated fantasy, created specifically to make them feel good!! A fantasy where they are always right, and are amazing artists, and where the AI is a better "friend" than actual friends.
I predict that AI will be worse than any cult in taking away family and friends.
https://www.imdb.com/title/tt1341338/
Yeah, in the people that need it to be useful for SOMETHING!
It's not AI