• self@awful.systems · 24 points · 4 months ago

    FAR AI tries to imply that the Go bots are still not merely superhuman, but far superhuman: “this result demonstrates that even far superhuman AI systems can fail catastrophically in surprising ways.” Uh huh. [FAR AI]

    fuck it’s so disappointing that everything in this space has to be communicated through the cracked lens of critihype — that even the utterly normal failings of a misengineered system must be misrepresented, in true techfash style, as further proof that the system is powerful and is only one more breakthrough away from perfection

    • Codex@lemmy.world · 17 points · 4 months ago

      Anyone working deep in this kind of tech is basically doing a dream job. Despite heavy stress, you get to push the boundaries of your field and work with exciting new tech, and you get paid so well too! … Until the money stops, which it will at any moment when the investors get scared. So, without really even trying to, you become dishonest: you start to stretch the truth a little, get a little too optimistic in your estimates. “No no please, just one more year of funding, 6 months, and we’ll have world-changing results”, and then you just need to get an impressive demo together and try to keep kicking that can down the road.

  • TheAlbatross@lemmy.blahaj.zone · 13 points · 4 months ago

    Wish they hadn’t figured out that the 3-3 invasion works well enough, though. Shit is annoying and happens like every game now

  • kbal@fedia.io · 13 points · 4 months ago

    The usual AI pumpers have suggested the bots are superhuman!

    They are technically correct. The AI is superhuman when it comes to playing Go, by most measures. I don’t know about “far” superhuman - the usual type of machine wins against professional human players every single time if the humans are trying to play well, but it’s not as if it’s off in another dimension playing moves we could never possibly comprehend. Many strong Go players probably still disagree with me there, but understanding what it’s doing has turned out to be less hopeless than some assumed at first. Its moves can usually be analysed and understood with enough effort, and where they can’t, the difference, measured in points won or lost in the game, is often small. Its main advantage is being inhumanly precise, never making the kind of small errors in judgement that humans always do. Over the course of a lengthy game of Go that gradually adds up to an impressively large margin of victory.

    KataGo is not an artificial general intelligence. It is a Go-playing intelligence. And this class of flaw that’s been found in it is due to the particular algorithm it uses (essentially the same one as AlphaGo). It lacks basic human common sense, having found no need or ability to develop that in its training. Where humans playing the game can easily count how much space a group has and act accordingly, the program has only its strict Monte Carlo-based way of viewing the game and no access to such basic general-purpose tools of reasoning. It can only consider one move at a time, and this lets it down in carefully constructed situations that do not normally occur in human play, since humans wouldn’t fall for something so stupid.
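
    (To be concrete about what “easily count” means: that bookkeeping is trivial to write down outside the network. Here is a rough flood-fill liberty counter in Go-the-language, purely as an illustration; the board representation and the names are invented for this sketch and have nothing to do with KataGo’s actual code.)

    ```go
    // A rough sketch only: count the liberties (adjacent empty points) of the
    // group of stones containing (x, y), i.e. the bookkeeping a human does at a glance.
    // Board values are assumed to be 0 = empty, 1 = black, 2 = white.
    package main

    import "fmt"

    const size = 19

    type point struct{ x, y int }

    func countLiberties(board [size][size]int, x, y int) int {
        colour := board[x][y]
        if colour == 0 {
            return 0 // no stone here, so no group to count
        }
        seen := map[point]bool{}      // stones of the group already visited
        liberties := map[point]bool{} // distinct empty points touching the group
        stack := []point{{x, y}}
        for len(stack) > 0 {
            p := stack[len(stack)-1]
            stack = stack[:len(stack)-1]
            if seen[p] {
                continue
            }
            seen[p] = true
            for _, n := range []point{{p.x + 1, p.y}, {p.x - 1, p.y}, {p.x, p.y + 1}, {p.x, p.y - 1}} {
                if n.x < 0 || n.x >= size || n.y < 0 || n.y >= size {
                    continue // off the edge of the board
                }
                switch board[n.x][n.y] {
                case 0:
                    liberties[n] = true // empty neighbour: a liberty
                case colour:
                    stack = append(stack, n) // same colour: part of the group
                }
            }
        }
        return len(liberties)
    }

    func main() {
        var board [size][size]int
        board[3][3], board[3][4] = 1, 1          // a two-stone black group in open space
        fmt.Println(countLiberties(board, 3, 3)) // prints 6
    }
    ```

    (None of which is hard; the point is that pure self-play training never forced the network to represent it reliably.)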

    Its failings are much narrower than those of the LLM chatbots that everyone loves so much, but not so different in character. The machines are super-humanly good at the things they’re good at. That’s not too surprising; so is a forklift. But when their algorithms fail them, in situations that to naive humans look very similar to the ones they handle so well, they are suddenly not good at all. When it works, it’s super-human in many ways. When it goes wrong, it’s often wrong in ways that seem obviously stupid.

    I suspect that this problem the machines have with playing Go would be an excellent example for the researchers to work with, since it’s relatively easy to understand approximately why the machines are going wrong and what sort of thing would be required to fix it; and yet it’s very difficult to actually solve the problem in a general way, through the purely independent training that was the great achievement of AlphaGo Zero, rather than giving up and hard-coding a fix for this one thing specifically. With the much more numerous and difficult failure modes they have to work with, the LLM people lately seem busy hacking together crude and imperfect fixes for one thing at a time. Maybe if some of them have time to take a break from that, they could learn something from the game of Go.

    • David Gerard@awful.systemsOPM · 12 points · 4 months ago

      technically superhuman, the best kind of superhuman!

      we could worry about a super-forklift leveraging its technically superhuman abilities to go FOOM, except that’s “Killdozer” and I don’t know if Eliezer’s read that one.

    • o7___o7@awful.systems · 9 points · 4 months ago

      The AI is superhuman when it comes to playing Go, by most measures.

      Except beating humans, apparently.

      • kbal@fedia.io · 6 points · 4 months ago

        Yeah, aside from everything else it’s very satisfying to see the humans win this one.

      • ryven@lemmy.dbzer0.com · 2 up / 5 down · 4 months ago

        It had a winning record for like 8 years in a row before humans found a strategy that beats it, which seems pretty good.

      • Deebster@programming.dev · 2 up / 7 down · edited · 4 months ago

        Humans can’t beat AI at Go, aside from these exploits that we needed AI to tell us about first.

        Lee Sedol managed to win one game against AlphaGo in 2016 (and AlphaGo Zero was beating AlphaGo 100-0 a year later). That was basically the last time humans got on the scoreboard.

        • froztbyte@awful.systems · 10 points · 4 months ago

          did you know that humanity has been staring at numbers and doing math for millennia, and yet we still pay mathematicians? fucking outrageous, right? and yet these wry fuckers still end up finding whole new things! things in areas we’ve known about for centuries! the nerve of them! didn’t they know we have computers to look into this now?!

          • Deebster@programming.dev · 2 up / 5 down · 4 months ago

            You’re arguing against a point I’m not making.

            I play Go, and have since I learnt about the game when it was discussed in my Computer Science degree course (then computers were considered 50+ years away from beating humans).

            Overall, AlphaGo has been a good thing for human players: it validated a lot of what we thought was right, but also showed that some tactics we’d thought not worth playing do work out. Having a free, superhuman advisor has made improving much easier.

            The negatives include that there’s less individual style amongst those that play like AIs, and also that it’s easier to cheat at the game.

            As in chess, humans have been outclassed by computers in Go for years now, but that doesn’t stop us playing and enjoying it.

            • froztbyte@awful.systems · 7 points · 4 months ago

              this is not debate club, and that sound you didn’t hear on account of your noise-cancelling headphones was you missing your stop

        • BigMuffin69@awful.systems · 1 point · edited · 4 months ago

          Humans can’t beat AI at Go, aside from these exploits

          kek, reminds me of when I was a wee one and I’d 0 to death chain grab someone in smash bros. The lads would cry and gnash their teeth about how I was only winning b.c. of exploits. My response? Just don’t get grabbed. I’d advise “superhuman” Go systems to do the same. Don’t want to get cheesed out of a W? Then don’t use a strat that’s easily countered by monkey brains. And as far as designing an adversarial system to find these ‘exploits’, who the hell cares? There’s no magic barrier between internalized and externalized cognition.

          Just get good bruv.

    • V0ldek@awful.systems · 5 points · 4 months ago

      The machines are super-humanly good at the things they’re good at. That’s not too surprising; so is a forklift.

      Amazing quote, I’m gonna steal it.

      Where is my Big Forklift lobby. What’s your P(doom) from forklifts lifting us so high we escape the atmosphere and all die. Should there be a ban on forklift development?

  • antifuchs@awful.systems · 11 points · 4 months ago

    I don’t see any of them play an if err != nil { return err } so they can’t be all that smart now, can they

    • froztbyte@awful.systems · 6 points · 4 months ago

      the world would’ve been a better place if pike never got goog to greenlight his third attempt at neophp

      • antifuchs@awful.systems · 8 points · 4 months ago

        Bit of a wash, tbh: I like that the “but simplicity!” computer touchers now write their trash code in a memory-safe language; it sure reduces the amount of extremely preventable issues by introducing some other extremely preventable issues.

        • froztbyte@awful.systems · 3 points · 4 months ago

          circa '06 my then-boss wanted me to “implement sawzall for mailserver logs”

          (I was extremely green at the time and I’ve since wondered whether that was one of my first gartner quadrant moments)

          • froztbyte@awful.systems · 3 points · edited · 4 months ago

            I should also note that this place was xen on gentoo on refurb p3s. it involved adventures with gluster and nfsv3 and almost-all-the-docs-were-still-russian nginx imap/pop proxies and… fun times

            (I did learn a fair deal there. foremost among what I learned was how willing someone would be to pay me dogshit money if they could get away with it)

  • Evil_Shrubbery@lemm.ee · 9 points · edited · 4 months ago

    Is AI really not capable of “miscounting” the score, or at least furiously flipping the game board or table??

    • swlabr@awful.systems · 7 points · edited · 4 months ago

      GoBots, truly the Hydrox of transforming robots. I only know this reference from reading a certain webcomic based around the sale of the Oreo in this analogy. I want those hours back, dammit.

      • self@awful.systems · 4 points · 4 months ago

        I’ve never watched the show or owned any of the toys (we have gobots at home and they’re the knockoffs you can’t figure out how to transform, and when you finally do a piece breaks off) but the theme song from the commercial’s been stuck in my head for 23 hours