• WhatAmLemmy@lemmy.world · 4 months ago

    The AI we know is missing the I. It does not understand anything. All it does is find patterns in 1’s and 0’s. It has no concept of anything but the 1’s and 0’s in its input data. It has no concept of correlation vs. causation, which is why it constantly hallucinates (confidently presents patterns that aren’t actually there).

    Turns out finding patterns in 1’s and 0’s can do some really cool shit, but it’s not intelligence.
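
    To make the “patterns, not understanding” point concrete, here’s a toy sketch in Python: a bigram “language model” that only counts which word tends to follow which. Real LLMs are transformers trained at enormous scale, not lookup tables, so treat this purely as an illustration of pattern-matching with no notion of meaning; the tiny corpus and function names are made up for the example.

    ```python
    import random
    from collections import defaultdict, Counter

    # Toy corpus; the "model" will only ever know which word follows which.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1              # count co-occurrences, nothing more

    def generate(start, length=8):
        word, out = start, [start]
        for _ in range(length):
            choices = follows.get(word)
            if not choices:
                break
            words, counts = zip(*choices.items())
            # pick the next word in proportion to how often it followed `word`
            word = random.choices(words, weights=counts, k=1)[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))                   # e.g. "the cat sat on the rug . the dog"
    ```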

    • Monstrosity@lemm.ee · 4 months ago

      This is not necessarily true. While it’s using pattern recognition on a surface level, we’re not entirely sure how AI comes up with its output.

      But beyond that, a lot of talk has centered on a threshold where AI begins training other AI and can improve through iterations. Once that happens, people believe AI will not only improve extremely rapidly, but we will understand even less of what is happening when AI black boxes train other AI black boxes.
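
      For what it’s worth, a modest version of “model trains model” already exists as self-training / knowledge distillation. Below is a toy sketch with scikit-learn stand-ins; it has nothing to do with frontier LLMs, and the dataset and model choices are assumptions made purely for illustration. A “teacher” labels unlabeled data and a “student” learns only from those machine-made labels.

      ```python
      from sklearn.datasets import make_moons
      from sklearn.neural_network import MLPClassifier

      # A small human-labeled set, plus a larger pool with no labels at all.
      X_labeled, y_labeled = make_moons(n_samples=50, noise=0.2, random_state=0)
      X_unlabeled, _ = make_moons(n_samples=500, noise=0.2, random_state=1)

      teacher = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      teacher.fit(X_labeled, y_labeled)             # teacher learns from the human labels

      pseudo_labels = teacher.predict(X_unlabeled)  # teacher labels the big pool itself

      student = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      student.fit(X_unlabeled, pseudo_labels)       # student never sees a human label

      X_test, y_test = make_moons(n_samples=200, noise=0.2, random_state=2)
      print("teacher accuracy:", teacher.score(X_test, y_test))
      print("student accuracy:", student.score(X_test, y_test))
      ```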

      • Coldcell@sh.itjust.works · 3 months ago

        I can’t quite wrap my head around this. These systems were coded, written by humans, to call functions, assign weights, parse data. How do we not know what they’re doing?

        • The_Decryptor@aussie.zone · 3 months ago

          Yeah, there’s a mysticism that’s sprung up around LLMs, as if they’re some magic black box rather than a well-understood construct, to the point where you can buy books on Amazon about how to write one from scratch (there’s a rough sketch of the core piece below).

          It’s not like ChatGPT or Claude appeared from nowhere; the people who built them give talks about them all the time.
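
          For anyone curious what “from scratch” actually looks like, here’s a rough NumPy sketch of the core building block those books walk through: scaled dot-product self-attention. The random weights and toy sizes are placeholders I made up; a real model learns them at enormous scale, but the mechanism itself is this short.

          ```python
          import numpy as np

          def self_attention(x, Wq, Wk, Wv):
              """x: (seq_len, d_model) token embeddings -> attended representations."""
              Q, K, V = x @ Wq, x @ Wk, x @ Wv         # project tokens to queries/keys/values
              scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each token attends to the others
              weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
              weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
              return weights @ V                       # weighted mix of value vectors

          rng = np.random.default_rng(0)
          d = 8                                        # toy embedding size
          x = rng.normal(size=(4, d))                  # 4 made-up "tokens"
          Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
          print(self_attention(x, Wq, Wk, Wv).shape)   # (4, 8)
          ```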

          • Monstrosity@lemm.ee · 3 months ago

            What a load of horseshit lol

            EDIT: Sorry, I’ll expand. When AI researchers give talks about how AI works, they say things like, “on a fundamental level, we don’t actually know what’s going on.”

            Also, even if there are books available about how to write an AI from scratch(?) somehow, what actually happens deep within the neural networks is still a “magic black box”. They’ll crack it open eventually, but not yet.

            The idea people have that AI is simple, stupid, and a passing fad is naive.

            • The_Decryptor@aussie.zone · 3 months ago

              If these AI researchers really have no idea how these things work, then how can they possibly improve the models or techniques?

              And when they claim that, after upgrades, these LLMs can now “reason” about problems, how did they actually go and add that if it’s a black box?

        • Kyrgizion@lemmy.world · 3 months ago

          Same way anesthesiology works. We don’t know. We know how to sedate people, but we have no idea why it works. AI is much the same. That doesn’t mean it’s sentient yet, but to call it merely a text predictor is also selling it short. It’s a black box under the hood.
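
          Maybe a concrete toy helps with the “how can human-written code be a black box” question above: the training code below is a few fully transparent lines, but the thing it produces, a blob of learned numbers, is the part you can’t read a reason out of. This is a made-up example learning XOR in plain NumPy (the sizes, learning rate, and seed are arbitrary), not how production models are built.

          ```python
          import numpy as np

          rng = np.random.default_rng(1)
          X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
          y = np.array([[0], [1], [1], [0]], dtype=float)       # XOR targets

          W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)         # parameters start as noise
          W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
          sigmoid = lambda z: 1 / (1 + np.exp(-z))

          for _ in range(5000):                                 # plain gradient descent
              h = np.tanh(X @ W1 + b1)
              p = sigmoid(h @ W2 + b2)
              dp = p - y                                        # output-layer gradient (cross-entropy + sigmoid)
              dW2, db2 = h.T @ dp, dp.sum(0)
              dh = (dp @ W2.T) * (1 - h**2)
              dW1, db1 = X.T @ dh, dh.sum(0)
              for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
                  param -= 0.1 * grad                           # every step here is fully understood

          print(np.round(p, 2).ravel())   # should end up close to [0, 1, 1, 0]
          print(W1)                       # ...but these learned numbers don't "say" why it works
          ```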

          • Coldcell@sh.itjust.works · 3 months ago

            Writing code to process data is absolutely not the same way anesthesiology works 😂 Comparing state-specific, logic-bound systems to the messy biological processes of a nervous system is what gets us this misattribution of “AI” in the first place. Currently it is just glorified auto-correct working off statistical data about human language; I’m still not sure how a written program can have, as a core part of it, a voodoo spooky black box that does things we don’t understand.

    • Pornacount128@lemmynsfw.com · 3 months ago

      Humans are just neurons; we don’t “understand” either, until so many of them stack on top of each other that we have a sort of consciousness. Then it seems like we CAN understand, but do we? Or are we just a bunch of meat computers? Also, LLMs handle language, or correlations between words; don’t humans just do that too (with maybe body language added)? We’re all just communicating. If LLMs can communicate, isn’t that conceptually enough to do anything? If LLMs can program and talk to other LLMs, what can’t they do?