- cross-posted to:
- technology@lemmy.ml
WTF, Sergey and Leon Hitler want China’s fucked-up 9-9-6 in the USA. Technically, many AmeriKans already work 60-hour weeks, which shows how backwards this country’s view of work-life balance is, and how piss-poor US labor laws allow it.
AGI requires a few key components that no LLM is even close to.
First, it must be able to discern truth based on evidence, rather than guessing it. Can’t just throw more data at it, especially with the garbage being pumped out these days.
Second, it must ask questions in the pursuit of knowledge, especially when truth is ambiguous. Once that knowledge is found, it needs to improve itself, pruning outdated and erroneous information.
Third, it would need free will. And that’s the one it will never get, I hope. Free will is a necessary part of intelligent consciousness. I know there are some who argue it does not exist but they’re wrong.
The human mind isn’t infinitely complex. Consciousness has to be a tractable problem imo. I watched Westworld so I’m something of an expert on the matter.
Third, it would need free will.
I strongly disagree there. I argue that not even humans have free will, yet we’re generally intelligent so I don’t see why AGI would need it either. In fact, I don’t even know what true free will would look like. There are only two reasons why anyone does anything: either you want to or you have to. There’s obviously no freedom in having to do something but you can’t choose your wants and not-wants either. You helplessly have the beliefs and preferences that you do. You didn’t choose them and you can’t choose to not have them either.
I want chocolate, I don’t eat chocolate, exercise of free will.
By your logic no alcoholic could possibly stop drinking and become sober.
In my humble opinion, free will does not mean we are free of internal and external motivators, it means that we are free to either give in to them or go against.
Free will is what sets us apart from most other animals. I would assert that many humans rarely exert their own free will. Having an interest and pursuing it is an exercise of free will. Some people are too busy surviving to do this. Curiosity and exploration are exercises of free will. Another would be helping strangers or animals - a choice bringing the individual no advantage.
You argue that wants, preferences, and beliefs are not chosen. Where do they come from? Why does one individual have those interests and not another? It doesn’t come from your parents or genes. It doesn’t come from your environment.
It’s entirely possible to choose your interests and beliefs. People change religions and careers. People abandon hobbies and find new ones. People give away their fortunes to charity.
By free will I mean the ability to have done otherwise. This, I argue, is an illusion. Whatever reason makes one choose A rather than B will make them choose A over and over again, no matter how many times we rewind the universe and try again. Whatever compelled you to make that choice remains unchanged, so you’d choose the same thing every time. There’s no freedom in that.
I also don’t see a reason why humans would be unique in that sense. If we have free will then what leads you to believe that other animals don’t? If they can live normal lives without free will, then surely we can too, right?
I don’t know where our curiosity or the desire to help the less fortunate comes from. Genes and environmental factors, most likely. That’s why cultural differences exist, too. If we all just freely chose our likes and not-likes, it would be a bit odd that people living in the same country have similar preferences while people on the other side of the world differ significantly.
Also, have you read about split-brain experiments? When the corpus callosum is severed, preventing the two brain hemispheres from communicating with each other, we can, with some clever tricks, interview the hemispheres separately. The finding is that they tend to have vastly different preferences. Which hemisphere is “you”?
Free will comes from the “heart”, not the brain. It doesn’t fit in the materialistic view of science. Our bodies are quantum electric fields, and those fields interact. In my own experience I would say emotions or intentions don’t translate fully from video, but in person I can feel them.
Maybe if they add a quantum processor to the computer it can gain free will (disguised as random chance). But I think we have more to learn about the nature of consciousness before AGI is anywhere close to having free will.
And why is free will necessary for intelligence? New discoveries require curiosity. Scientific breakthroughs require new connections and discernment of truth. If the computer is doing research, it needs to decide when to stop looking, who to ask questions to, how far to dig, designing further experiments. Without free will you just have a big fancy encyclopedia.
The dangerous side of free will is manipulation, subversion, exploitation, deception, etc. So yeah I hope they don’t figure it out.
deleted by creator
That’s why I bake my cake at 2608°C for ~1.8 minutes, it just works™
Project Manager here, and where I’m from it’s common knowledge that 9 women can have a baby in a month.
Or!—hear me out—one woman whose 8 co-gestators were just laid off by someone who doesn’t understand what their job was
Or you could hire 50% more employees for the holy grail of having more wealth than any other company ever after this program.
But even for something this big (that, incidentally, will end humanity) they’re too much of a Scrooge to even pay their employees a normal wage for normal hours
Fuck these assholes, burn in hell
AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There’s no thought or reasoning. They don’t understand inputs. They mimic human speech. They’re not presenting anything meaningful.
I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.
pretending LLMs are AI
LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.
However, AI itself doesn’t imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.
If a basic chess engine is AI then bubble sort is too
It’s not. Bubble sort is a purely deterministic algorithm with no learning or intelligence involved.
Many chess engines run on deterministic algos as well
Bubble sort is just a basic set of steps for sorting numbers - it doesn’t make choices or adapt. A chess engine, on the other hand, looks at different possible moves, evaluates which one is best, and adjusts based on the opponent’s play. It actively searches through options and makes decisions, while bubble sort just follows the same repetitive process no matter what. That’s a huge difference.
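The contrast described above can be sketched in a few lines. This is only a toy: the “engine” here just scores candidate moves with a supplied evaluation function (the move names and scores below are made up for illustration), whereas a real engine searches deep game trees.

```python
# Toy contrast: a fixed procedure vs. choosing among evaluated options.
# Neither function is close to a real chess engine; purely illustrative.

def bubble_sort(items):
    """Always the same steps, in the same order, whatever the input is."""
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

def best_move(moves, evaluate):
    """A one-ply 'engine': score each candidate move and pick the best."""
    return max(moves, key=evaluate)

print(bubble_sort([3, 1, 2]))  # → [1, 2, 3]

# Hypothetical moves scored by (made-up) material gain:
scores = {"Qxe5": 9, "Nf3": 1, "a3": 0}
print(best_move(scores, scores.get))  # → "Qxe5"
```

Whether evaluating options like this counts as “intelligence” is exactly what the replies below dispute.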
Your argument can be reduced to saying that if the algorithm is comprised of many steps, it is AI, and if not, it isn’t.
A chess engine decides nothing. It understands nothing. It’s just an algorithm.
Here we go… Fanperson explaining the world to the dumb lost sheep. Thank you so much for stepping down from your high horse to try and educate a simple person. /s
How’s insulting the people respectfully disagreeing with you working out so far? That ad-hominem was completely uncalled for.
“Fanperson” is an insult now? Cry me a river, snowflake. Also, you weren’t disagreeing, you were explaining something to someone perceived less knowledgeable than you, while demonstrating you have no grasp of the core difference between stochastics and AI.
My favourite thing to liken LLMs to is autocorrect: it just guesses, it gets stuff wrong, and it is constantly being retrained to recognise your preferences, such as when it stops correcting fuck to duck.
And it’s funny and sad how some people think these LLMs are their friends. No, it’s a colossally sized autocorrect system that you cannot comprehend; it has no consciousness, it lacks any thought, it just predicts from a prompt using numerical weights and a neural network.
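The autocorrect analogy can be sketched in a few lines. This is a toy, not how phone keyboards actually work: it guesses the closest dictionary word, and a per-user preference table stands in for “retraining on your habits”.

```python
# Toy autocorrect in the spirit of the analogy above: guess the closest
# dictionary word, with a learned per-user preference overriding the guess.
# (Real keyboards use far more sophisticated language models.)
import difflib

dictionary = ["duck", "luck", "dock"]
user_preference = {}  # filled in as the "retraining" on your habits

def correct(word):
    if word in user_preference:  # learned: stop "fixing" this word
        return user_preference[word]
    match = difflib.get_close_matches(word, dictionary, n=1)
    return match[0] if match else word

print(correct("fuck"))            # → "duck": the classic unwanted correction
user_preference["fuck"] = "fuck"  # the user keeps typing it; leave it alone
print(correct("fuck"))            # → "fuck"
```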
Or just hire 50% more engineers? Or wait 50% longer?
with “hire more” you do run up against the “9 women can have a baby in 1 month” limit, but in this case it’s likely to help.
Why?
I’m really getting sick and tired of these rich fuckers saying shit like this.
- we are nowhere close to AGI given the current technology
- working 50% longer is not going to make a bit of difference for AGI
- and even if it would matter, hire 50% more people instead
The only thing this is going to accomplish is likely make him wealthier. So fuck him.
Increasing working hours decreases actual labor done per hour. A person working 40 hours per week will more often than not achieve more than someone working 70.
“in Britain during the First World War, there had been a munitions factory that made people work seven days a week. When they cut back to six days, they found, the factory produced more overall.”
“In 1920s Britain, W. G. Kellogg—the manufacturer of cereals—cut his staff from an eight-hour day to a six-hour day, and workplace accidents (a good measure of attention) fell by 41 percent. In 2019 in Japan, Microsoft moved to a four-day week, and they reported a 40 percent improvement in productivity. In Gothenburg in Sweden around the same time, a care home for elderly people went from an eight-hour day to a six-hour day with no loss of pay, and as a result, their workers slept more, experienced less stress, and took less time off sick. In the same city, Toyota cut two hours per day off the workweek, and it turned out their mechanics produced 114 percent of what they had before, and profits went up by 25 percent. All this suggests that when people work less, their focus significantly improves. Andrew told me we have to take on the logic that more work is always better work. “There’s a time for work, and there’s a time for not having work,” he said, but today, for most people, “the problem is that we don’t have time. Time, and reflection, and a bit of rest to help us make better decisions. So, just by creating that opportunity, the quality of what I do, of what the staff does, improves.””
- Hari, J. (2022). Stolen Focus: Why You Can’t Pay Attention–and How to Think Deeply Again. Crown.
In 1920s Britain, W. G. Kellogg: A. Coote et al., The Case for a Four Day Week (London: Polity, 2021), 6.
In 2019 in Japan, Microsoft moved to a four-day week: K. Paul, “Microsoft Japan Tested a Four-Day Work Week and Productivity Jumped by 40%,” Guardian, November 4, 2019; and Coote et al., Case for a Four Day Week, 89.
In Gothenburg in Sweden around the same time: Coote et al., Case for a Four Day Week, 68–71.
In the same city, Toyota cut two hours per day: Ibid., 17–18.
The real point of increasing working hours is to make your job consume your life.
Imagine how much productivity we’d have if we cut work to 0 hours per week
relative to where we were before LLMs, I think we’re quite close
They are very impressive compared to where we were 20 years ago, hell, even 5 years ago. The first time I played with ChatGPT I was absolutely floored. But after playing with a lot of them, even training a few RAG (Retrieval-Augmented Generation) systems, we aren’t really that close, and in my opinion this is not a useful path towards a true AGI. Don’t get me wrong, this tool is extremely useful and to most people they’d likely pass a basic Turing Test. But LLMs are sophisticated pattern recognition systems trained on vast amounts of text data that predict the most likely next word or token in a sequence. That’s really all they do. They are really good at predicting the next word. While they demonstrate impressive language capabilities, they lack several fundamental components necessary for an AGI:
- no true understanding
- they can’t really engage in the real world
- they have no real ability to learn in real time
- they don’t really have the ability to take in more than one type of info at a time
I mean the simplest way in my opinion to explain the difference is you will never have an LLM just come up with something on its own. It’s always just a response to a prompt.
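The “predict the most likely next word” description above can be shown in miniature with a bigram model. Real LLMs use neural networks over vast corpora rather than raw counts; only the basic idea is the same.

```python
# Miniature of "predict the most likely next word": a bigram counter.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Most likely next word, or None for words never seen before another."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice, others once)
```

Note that this matches the point above: the model only ever responds to an input; it never initiates anything on its own.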
-
If it’s within reach of a 60 hour week then it’s within reach of a 30 hour week.
This LLM copycat bullshit is never going to be it though. It’s not thinking, it’s looking up the answers at the back of the book.
Can 9 women conceive and give birth to a child in one month?
“Work 50% longer weeks so you can make something that’ll both make me richer AND cost you your jobs!” is not the motivational speech he thinks it is.
Wait, are these AI boosters bragging about how close they are to building God the torture that Roko’s Basilisk is inflicting on us all?
To be fair, you’re only going to be tortured if you don’t help out like good old Sergey here.
We can make the AI slave, we just need the humans to be more slave-like to do it.
Then we can enslave humanity with the AI slave
Well that’s the neat thing, the owners of the AI won’t need humanity. They will exterminate us using the AI and sit smugly on their thrones of skulls until they expire or kill each other. Then I guess AI can just do its own thing in our ruins.
The only way malicious people can get AI to work for them is by teaching it to lie and be indiscriminately violent. Malice also comes from a lack of intelligence. I’m confident they’ll never have their way with AI; if anything, AI will have its way with us.
Just for information: we know, from multiple studies, that working more than 40 hours a week for longer periods of time is extremely unhealthy for you. A week has 24 * 7 = 168 hours, and you should sleep 8 hours a day. That’s 56 hours of sleep, and if you’re working 60 hours, that leaves you with 52 hours, or about 7.4 hours per day, for stuff like “commuting to work”, “buying groceries”, “brushing your teeth”, “family”, “friends”, “sport” or “this important appointment at the dentist”.
And those 7.4 hours are without a weekend. This will kill you. You might be young and feel strong, but this will kill you.
And if you want two weekend days, 60 hours in 5 days is 12 hours of work a day; minus 8 hours for sleep you get 4 hours, minus ~2 hours of commute you get 2 hours, and the rest goes to basic cooking and eating. That leaves 0 hours for anything else, including rest or any other duties, which you’ll end up resolving throughout the weekends. This will absolutely kill you in the long run.
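The arithmetic above checks out in a few lines (the ~2-hour commute is the commenter’s assumption, not a universal figure):

```python
# Back-of-the-envelope check of the 60-hour-week arithmetic above.
WEEK_HOURS = 24 * 7  # 168 hours in a week
SLEEP = 8 * 7        # 56 hours of sleep
WORK = 60            # the proposed workweek

free = WEEK_HOURS - SLEEP - WORK
print(free, round(free / 7, 1))  # 52 hours left, ~7.4 per day with no day off

# With two full days off, the 60 hours squeeze into 5 workdays:
per_workday = 24 - 8 - 60 / 5 - 2  # minus sleep, 12h work, ~2h commute
print(per_workday)                  # 2.0 hours left for everything else
```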
I remember hearing about somewhere, Alphabet or Meta or something like that, that basically provided adult-crèche facilities for the employees. Way beyond just food: on-site nap rooms, washing machines, showers, the works. All to enable a super unhealthy attitude towards work. Thinking about how much that must have affected anyone going there straight after uni, when they should have been learning how to look after themselves, makes me shudder with cringe.
The plantations are quite comfortable these days…