• FaceDeer@fedia.io · 27 days ago

    Odd, no matter how many people keep insisting it’s a scam and it doesn’t work, it nevertheless keeps on working when I use it.

    Maybe they’re not using it right.

    • AmbitiousProcess (they/them)@piefed.social · 27 days ago

      “doesn’t work” doesn’t mean the AI literally does not produce any output or do anything, it means it has so many flaws it’s just a fundamentally bad technology to be using.

      And don’t worry, I’ve got sources.

      LLMs still routinely hallucinate, and even implementations used by AI safety researchers can't help but automatically wipe email inboxes without permission. They atrophy your brain the longer you use them, cause both general and emotional dependency, and deskill you at your job. They produce content rated less favorably both by humans and by the AI models searching for trustworthy sources. And to top it all off: scaling laws are already failing to improve AI models enough to fix these problems, companies aren't seeing returns, the economy gained essentially nothing from AI investment, usage, and growth, and public perception among the people most affected by AI is only getting worse while the people financially incentivized to keep building it say it's going to get better, all while datacenters accelerate global warming and LLMs keep killing people.

      I don’t know about you, but I’d rather not support a technology that makes you fundamentally worse at most cognitive tasks, damages the planet, and burns money that could otherwise go to something more valuable, all while randomly killing mentally vulnerable people.

      • ikt@aussie.zone · 27 days ago

        “doesn’t work” doesn’t mean the AI literally does not produce any output or do anything, it means it has so many flaws it’s just a fundamentally bad technology to be using.

        That’s odd, I use it daily and it works fine.

        • AmbitiousProcess (they/them)@piefed.social · 27 days ago

          The doctors who used it daily said it worked fine, and it did. Then those doctors became 20% less capable at identifying tumors in their patients.

          The Meta AI security researcher literally said, and I quote: “It’s been working well with my non-important email very well so far and gained my trust on email tasks” when asked why she’d give it access to her primary email, where it subsequently started trashing her whole inbox.

          All of the participants in the cognitive debt paper’s research had the AI actually produce the results they were looking for, but they all became less capable mentally as a result.

          And when a woman in South Korea killed two men using advice given to her by ChatGPT, it worked fine for her, didn’t it?

          That’s not to say your use of AI makes you a murderer. Far from it. But we have well-documented evidence of LLMs simply making people dumber. You are not an exception to that, unless your brain biologically operates entirely differently from everyone else’s.

          When you use neurons less, the connections between them become weaker, and fewer new connections are made. When you offload work to something else, like an LLM, you stop training your brain to get better, and you let parts of it slowly die.

          Using AI is like using a hydraulic robot to bench press for you. You’re going to move the weights, but your muscle mass ain’t growing.

          The more you outsource the very function of thinking to a chatbot, the more reliant your brain becomes on that chatbot to think as well as it used to. And when that chatbot regularly hallucinates faulty answers and logic, ignores best practices, implements solutions inefficiently, and gets things wrong, your brain is not improving as a result.

          This doesn’t mean you should never use AI. I use it to automatically clean up the transcriptions of my voice notes sometimes, and all that does is save me time from correcting the output of the text I just spoke. It’s genuinely helpful, and doesn’t meaningfully deskill me in any way. But if I used it to try to do everything for me, not only would it make a ton of mistakes, but I’d be even less capable of fixing them.

          • Victor@lemmy.world · 26 days ago

            I use it to automatically clean up the transcriptions of my voice notes sometimes, and all that does is save me time from correcting the output of the text I just spoke. It’s genuinely helpful, and doesn’t meaningfully deskill me in any way.

            But still, it does deskill you at that task, lest we forget. So if that was a meaningful task at which you wanted to stay adept, you would lose that meaningful skill. AI consistently deskills us at everything we ask it to do instead of doing it ourselves. Anything we are not doing, we are getting worse at doing.

    • GreenKnight23@lemmy.world · 27 days ago

      I have a theory that supporters of genAI or LLMs are lonely, angry NEETs who just want a sense of control in their radically tumultuous lives.

      Care to weigh in on my theory? When did AI start helping out with this moment in your life?

        • FaceDeer@fedia.io · 27 days ago

          “Why are they pushing AI? Nobody wants this!” Meanwhile chatgpt.com is the fifth-most-visited website in the world.

          But I suppose people can just wrap themselves in a social media bubble where anyone who says something positive about AI gets downvoted through the floor, and then their view of the world gets curated to look a bit more like how they want it to be.

          • Australis13@fedia.io · 26 days ago

            There’s a big difference between a website you can choose to engage with and LLMs jammed into your device’s operating system or programming IDE, making you jump through hoops just to disable them. Or into your email, where you’re told your emails will be used for training, and if you don’t want that, you have to turn off all the smart features, including the ones that aren’t LLM-based.

            There are certain use cases I’d be open to, but at least give me a choice when it’s deployed: whether it’s on or off, and what it has access to, and make it easy to change those settings.

            • FaceDeer@fedia.io · 26 days ago

              Right. That people choose to engage with that website shows that people engage with AI without being forced to. It shows that the demand for AI is organic and real. Lots of people want to use AI.

              • lath@piefed.social · 26 days ago

                Of course they do. People want comfort and AI as it is marketed is the ultimate comfort. Doesn’t change the harm it does at all, but lots of people are eager to dismiss the harm as long as their comfort is assured.