AI promised to free up workers’ time. UC Berkeley researchers found the opposite.
17h 27m ago by sh.itjust.works/u/Valnao in science from newsroom.haas.berkeley.edu
Dealing with AI is like dealing with a highly motivated new employee that doesn't understand the subject matter.
A well-read, egotistical, occasionally bullshitting intern.
My SO was complaining that their boss (LLM that sucker) put all the meeting notes into an LLM and then asked it to make a presentation from them. Then my SO had to redo 90% of it because it was trash. So yay, it saved 10% of the time. Oh, but wait, it took time to read all of that and run it through the AI, sooo no, it didn't.
Why redo it? Clearly the boss wanted a presentation on garbage.
For real, now the boss will be like, "that AI isn't half bad after all."
Tbh note taking is something LLMs are good at
Transcriptions, mostly decent.
Notes and summaries? Not if you care about accuracy.
And transcriptions usually aren't really even AI; speech-to-text has been around a while.
Speech to text is AI and always has been.
It wasn't always the current LLM slop bots that co-opted the name, sure.
Yep, that’s a fact. Hidden Markov Models, LSTMs, and LLMs are all ML models, and ML is a branch of AI.
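For anyone curious what "AI before LLMs" looked like here: classic HMM speech recognizers decode audio by finding the most likely sequence of hidden states (phoneme-like units) given per-frame acoustic observations, usually with the Viterbi algorithm. A toy sketch, where all the states, symbols, and probabilities are invented for illustration:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (best probability, best hidden-state path) for an
    observation sequence under a discrete HMM."""
    # Initialize with the first observation.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    # For each later frame, keep only the best path into each state.
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return V[-1][best], path[best]

# Two phoneme-like states and three acoustic "symbols" (all made up).
states = ("sil", "speech")
start_p = {"sil": 0.8, "speech": 0.2}
trans_p = {"sil": {"sil": 0.7, "speech": 0.3},
           "speech": {"sil": 0.2, "speech": 0.8}}
emit_p = {"sil": {"quiet": 0.9, "hum": 0.08, "voice": 0.02},
          "speech": {"quiet": 0.1, "hum": 0.3, "voice": 0.6}}

prob, best_path = viterbi(("quiet", "voice", "voice"), states,
                          start_p, trans_p, emit_p)
print(best_path)  # most likely hidden-state sequence
```

Real recognizers are far bigger (tens of thousands of states, log-space arithmetic, language models on top), but this is the machine-learning core that predates any transformer.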
You're 100% right, and I should know that too. “Not LLM-based” is indeed what I was intending to say.
It gets hard to remember the (correct) broader definition when slop is being shoved into your brain through every possible orifice. Even for those of us who vehemently disagree, it still subconsciously molds the frameworks and language we use. It's insidious, really.
See this article by a fellow lemming which I highly recommend.
Transcriptions, mostly decent.
Yup, quite good
Notes and summaries?
It works mostly as compression of meaning, which is something LLMs are trained for in the first place.
And that's all: two separate steps that are each good and reliable to an extent. One of the best applications of AI so far.
There was someone at work who was using read.ai for technical discussions, and the few summaries I read sounded like they were written by someone who didn't understand the topic and couldn't tell which details were important. We would summarize the decisions and next steps ourselves, and each AI summary had at least one really important thing changed or left out.
A transcription getting words wrong but still phonetically right is still more helpful than a misleading summary.
Not if you care about accuracy.
Now I understand the last part. Agreed, something deeply specialized could be in danger here. I had corpo-speak in mind when writing the previous messages.
I wish I remembered where I came across this, but a while back I heard a hypothesis similar to this article's idea: basically that humans only have a certain number of high-cognition tasks in them in a given day, no matter how much time we have. Those low-effort rote tasks that take up a lot of the day are less taxing than more complex tasks.
The idea was that even if AI reduces these kinds of tasks, that doesn't actually mean we can fit more complex tasks in their place without burning out.
This research seems to support that idea.
It's worth noting that this is an ethnography, so the researchers seem to be essentially taking for granted that the increase in productivity is real and are merely asking what the human effects of using AI are. The vastly increased busyness that the researchers describe follows other descriptions of using AI that I have heard and sounds horrifying. I personally need time to breathe and think, and it sounds like the goal of AI is to take that away.
In Capitalism there is no free time
Time is money
I never saw it as less work, just different work. Like any new tech, AI does the busy work so we can put more on our plate.
It's an interesting read
Big if true^*^.
^*^Is true.