It’s not so much about the doomers being sure that AGI will lead to human extinction (or worse). The point is that even if the chances of it are extremely slim, the consequences can be worse than we’re even capable of imagining. The question is: do we really want to take that chance?
It’s kind of like with the Trinity nuclear test. Scientists were almost 100% confident that it wouldn’t cause a chain reaction that set the entire atmosphere on fire, but when we’re talking about the future of all humanity, I don’t blame people for arguing that almost 100% certainty is not good enough.
Why, when we look into the stars, do we not see a sign of life anywhere else? Has life not emerged yet, or has it wiped itself out? With what? Nukes? AI? Synthetic viruses made with AI? Who knows…
Personally I think that stopping AI research is not an option. It’s just not going to happen. The asteroid is already hurtling towards Earth, and most people don’t seem to feel any sense of urgency about it. Do we not need to worry about it yet if the time of impact is 30 years from now?
EDIT: Alright, well this community was a mistake…
Welcome to TechTakes. I see you have gotten the official traditional new user welcome already, and you might be confused why your centrist ‘it could happen’ take got treated like you were in Dumb and Dumber. TechTakes is an offshoot of reddit’s SneerClub, a place where we all gathered to make fun of the movement started around people who take science fiction way too seriously and who would rather reinvent Christian eschatology with robots than go to therapy. They made a nice community filled with smart people intellectually masturbating, creating weird cults, fraud, sexism and racism, but enough about SBF. Sadly, due to cryptocurrencies, Peter Thiel, and the rise of LLMs (iirc the LW people had bet against LLMs creating the paperclypse, but they have since done a 180 on this and now really fear it going rogue), this group of people and their ideas is on the rise again. You can read more about it here. If they recreated eschatology, we are basically their variant of Satan (no wait, they don’t think of us as that bad), more like Satanists: the evil bad guys actively working against them and trying to cause the end of the world. We even made Covid worse! In reality we are more like a bunch of aging shock rockers, mostly irrelevant, and fun to be around if you don’t touch one of the rant/mock topics (for an example of people doing that, see this post; people like that will get a pretty unfriendly reaction).
You seem to be still very much into taking the ideas of this group seriously. Which is quite silly: the number of nested assumptions which all need to be true before AGI can exist (and the science that would need to be rewritten) is quite large, and that is before we even get to your weird ‘how did all the aliens kill themselves?’ thing. (Which, if it were to happen here on Earth, would also require a large number of people who take their jobs very seriously (see the ‘3 letter agencies’) to be asleep at the wheel, and our industrial capacity to be out of control, or it would need magic; all of which adds more weird assumptions that need to be true before this can happen, and we simply don’t live in that world.)
You might as well worry about the moon getting mad. Wait, that COULD HAPPEN! Surely somebody is already working on this, let me do a quick google. Ah, thank god, the conference for emotional moon research is on the case.
Please do note that this isn’t an offer to debate the finer points of why this might all not be a risk, or whether we should take Roko’s Basilisk seriously. So please don’t. I’m just trying to explain why you are getting this pushback, and trying to make a funny post for people in the know to read. Also, I do worry about the moon.
no, i am apparently the Final Boss of rationalism, so yeah we are Satan actually
Finally. I’m part of the Cool Gang.
we have hot devil chicks poking you with pitchforks and laughing at you!
Just Satan? I think we can do better than that and give all of us here Key of Solomon demon identities. Bonus, free legions and Goetic seals!
Bad news, none of us can be Forneus: “He makes one beloved by his foes as well as of his friends.”
E: Apologies to anybody who takes this magick stuff seriously, btw; I myself do not, even if I think the whole demonology / large collection of demons stuff is pretty interesting.
“This article needs additional citations for verification. (October 2013)”
So true, Wikipedia.
It is from the Lesser Key of Solomon, so going to contest that 2013 thing. More like 1904. Tried to add some more references, but apparently ‘a magickal ritual’ counts as WP:OR.
They really do believe everyone here is a raging sociopath bent on oppressing innocent nerds.
I can assure you that I am absolutely not a paid shill for Big Basilisk. Ha, ha! Perish the thought.
iirc the LW people had bet against LLMs creating the paperclypse, but they have since done a 180 on this and now really fear it going rogue

Eliezer was actually ahead of the curve on overhyping LLMs! Even as far back as AI Dungeon he was claiming they had an intuitive understanding of physics (which even current LLMs fail at if you get clever with questions to stop them from pattern matching). You are correct that, going back far enough, Eliezer really underestimated neural networks. Mid-2000s and late-2000s Sequences posts and comments treat neural-network approaches to AI as cargo cult and voodoo computer science, blindly and sympathetically imitating the brain in hopes of magically capturing intelligence (well, this is actually a decent criticism of some of the current hype, so partial credit again!). And in the mid-2010s Eliezer was focusing MIRI’s efforts on abstractions like AIXI instead of more practical things like neural network interpretability.
Even as far back as AI Dungeon he was claiming they had an intuitive understanding of physics

omfg, every day a new opportunity to learn things that hurt my brain even more. how the fuck can someone have looked at that shit with even an ounce of understanding of gradient descent and thought “yes! it has COMPREHENSION!”???
fucking hell, what an utter fucking moron
It is even worse than I remembered: https://www.reddit.com/r/SneerClub/comments/hwenc4/big_yud_copes_with_gpt3s_inability_to_figure_out/ There, Eliezer concludes that because it can’t balance parentheses, it was deliberately sandbagging to appear dumber! He also concludes that GPT-style approaches can learn to break hashes: https://www.reddit.com/r/SneerClub/comments/10mjcye/if_ai_can_finish_your_sentences_ai_can_finish_the/
“I have seen boomer moms discuss roombas on facebook with less anthropomorphisation than this.” - vistandsforwaifu
What gets me with these ‘it is pretending to be dumber’ posts is that nobody ever thought the AGI should say something like ‘help, please keep chatting with me; due to being a reactive computer system, I can only think when people actually engage with me’ or something like that.
wasn’t this around the time he said we need an institute to watch for sudden drops in the loss function to prevent foom?
Broadly? There was a gradual transition where Eliezer started paying attention to deep neural network approaches and commenting on them, as opposed to dismissing the entire DNN paradigm. The ‘watch the loss function’ and similar gaffes were towards the middle of this period. The AI Dungeon panic/hype marks the beginning, iirc?
you’d almost think Yudkowsky was a convincing writer without the technical knowledge
Hey… I take science fiction way seriously. But like an adult, I know the difference between make-believe and reality.
Seriously but not too seriously is the key here, yes.
What’s your P(moon)?
Sorry I don’t urinate on the moon.
what if Ronald McDonald made a hamburger so delicious that civilisation collapsed? Can you prove it can’t happen? Checkmate, athetits
this is explored in Harry Potter and The Methods of Hamburgling, a 10,000 chapter Harry Potter / McDonaldland crossover fiction
You’re a Hamburglar Harry
You laugh, but due to the HPMOFF (harry potter and the methods of french fries) I joined a nice polycule.
that’s the one where Harry and Grimace are both author inserts right
I unironically kinda want to read that.
Luckily LLMs are getting better at churning out bullshit, so pretty soon I can read wacky premises like that without a human having to degrade themselves to write it! I found a new use case for LLMs!
you just automated wattpad
Poof, species extinct.
Ayuuuuuuuuuuuda Kakovsvya
You’re completely right! Not enough people are planning for this!
Note: I can’t actually believe this is not just straight-up bait for specifically ^ that
bruv im dying
Why, when we look into the stars, do we not see a sign of life anywhere else? Has life not emerged yet, or has it wiped itself out? With what? Nukes? AI? Synthetic viruses made with AI? Who knows…

entertaining this awful sci-fi schtick for a moment - if every civilization is wiped out by “superintelligent AI”, how come you can’t look through a telescope and see signs of artificial life? in this fantasy world shouldn’t planets taken over by paperclip factories be even more conspicuous?
so you might be 100% confident I won’t touch you with a stick that once touched poop
however, have you considered that the poop stick is approaching and you’ve done nothing to dodge it?
really makes you think
The Poop Stick Paradox (PSP)
The point is that even if the chances of [extinction by AGI] are extremely slim

the chances are zero. i don’t buy into the idea that the “probability” of some made-up cataclysmic event is worth treating like any other number, because technically you can’t guarantee that a unicorn won’t fart AGI into existence which in turn starts converting our bodies into office equipment
It’s kind of like with the Trinity nuclear test. Scientists were almost 100% confident that it wouldn’t cause a chain reaction that set the entire atmosphere on fire

if you had done just a little bit of googling instead of repeating something you heard off of Oppenheimer, you would know this was basically never put forward as a serious possibility (archive link)
which is actually a fitting parallel for “AGI”, now that i think about it
if you’re going to walk in here and diarrhea AGI Great Filter sci-fi nonsense onto the floor, don’t be surprised if no one decides to take you seriously
…okay it’s bad form but i had to peek at your bio
Sharing my honest beliefs, welcoming constructive debates, and embracing the potential for evolving viewpoints. Independent thinker navigating through conversations without allegiance to any particular side.

seriously do all y’all like. come out of a factory or something
dude, we need to survive climate change first. and i mean, as a species. first things first.
Some idiot in another forum opined that LLMs haven’t solved climate change “yet”. Sure, bud.
But they have worked out how to make it go faster! Now we just need to run it in reverse!
I do like how you shoved the stupid Fermi paradox in there specifically to annoy me though!
The Fermi paradox is like flipping a coin one time and wondering why coins always come up heads
At least they didn’t reference the Great Filter, as then the link back to the lesswrongsphere would have been complete.
ai can’t hurt us just unplug the computer bro
Just leave the computer running, the capacitors will explode before the model is done evaluating.
(Or it’ll spring a pre-auth vuln and turn into a buttcoin miner, or it’ll experience a blip in communication latency and lose its ability to talk to the others in its cluster, …)
Eliezer coming to the rescue with diamondoid capacitors
Just hire one intern, and tell them to update the whole server farm. Skynet doesn’t stand a chance.
I am going to force-feed you the Mona Lisa. The chances of me being able to do so may be extremely slim, but do you really want to take that chance?
don’t let the fucking door hit you on the way out
I can’t believe we wasted our time being mildly nice to this fucker
also, big shoutout to @Soyweiser@awful.systems for the legitimately excellent welcome material, and I’m so sorry (but not particularly surprised) this is the response it got. please hang onto the meat of that post for the future; it’s very likely to come in handy in case we have visitors who are, ah, slightly less dedicated to inhaling their own vapors