sometimes a dragon

he/they, queer, furry, ζ, vegan

  • 0 Posts
  • 49 Comments
Joined 1 year ago
Cake day: September 7th, 2024

  • Claim 1: Every regular LLM user is undergoing “AI psychosis”. Every single one of them, no exceptions.

    I wouldn’t go as far as using the “AI psychosis” term here; I think the difference is more than quantitative. One is influence, maybe even manipulation, but the other is a serious mental health condition.

    I think that regular interaction with a chatbot will influence a person, just like regular interaction with an actual person does. I don’t believe that’s a weakness of human psychology; rather, it’s what allows us to build understanding between people. But LLMs are not people, so whatever this does to the brain long term, I’m sure it’s not good. Time for me to be a total dork and cite an anime quote on human interaction: “I create them as they create me” – except that with LLMs, it only goes in one direction… the other direction is controlled by the makers of the chatbots. And they have a bunch of dials to adjust the output style at any time, which is an unsettling prospect.

    while atrophying empathy

    This possibility is, to me, actually the scariest part of your post.




  • The AI craze might end up killing graphics card makers:

    Zotac SK’s message: “(this) current situation threatens the very existence of (add-in-board partners) AIBs and distributors.”

    The current situation is serious enough to threaten the continued existence of graphics card manufacturers and distributors. They announced that memory supply will not be sufficient and that GPU supply will also be reduced.

    Curiously, Zotac Korea has included lowly GeForce RTX 5060 SKUs in its short list of upcoming “staggering” price increases.

    (Source)

    I wonder if the AI companies realize how many people will be really pissed off at them when so many tech-related things become expensive or even unavailable, with everyone knowing it’s only because of useless AI data centers.



  • It really sucks how many coders embrace it. At my work, code LLMs are about to be introduced, and I’m anxious to learn how many of my colleagues will happily use them, what consequences dealing with the results will have for me, and generally how it will feel to work in an environment where these tools are embraced. I was hoping the corporate bureaucracy would be slow enough that the AI bubble would collapse before use of the tools was allowed, but unfortunately management put a lot of pressure behind it and it all went faster than expected :(