Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

  • BigMuffin69@awful.systems · 18 points · 4 months ago

    https://www.nature.com/articles/d41586-024-02218-7

    Might be slightly off topic, but interesting result using adversarial strategies against RL trained Go machines.

    Quote: If humans are able to use the adversarial bots’ tactics to beat expert Go AI systems, does it still make sense to call those systems superhuman? “It’s a great question I definitely wrestled with,” Gleave says. “We’ve started saying ‘typically superhuman’.” David Wu, a computer scientist in New York City who first developed KataGo, says strong Go AIs are “superhuman on average” but not “superhuman in the worst cases”.

    Methinks the AI bros jumped the gun declaring victory on this one.

    • YourNetworkIsHaunted@awful.systems · 12 points · 4 months ago

      See, in StarCraft we would just say that the meta is evolving in order to accommodate this new strategy. Maybe Go needs to take a page from newer games in how these things are discussed.

    • sc_griffith@awful.systems · 10 points · 4 months ago

      this is simple. we just need to train a new model for every move. that way the adversarial bot won’t know what weaknesses to exploit

      • BigMuffin69@awful.systems · 9 points · 4 months ago

        In chess, the tablebase for optimal moves with only 7 pieces takes ~20 terabytes to store. And in that DB there are bizarre checkmates that take 100+ moves even with perfect play, ignoring the 50-move rule. I wonder if the reason these adversarial strats exist is that whatever the policy network/value network learns is way, way smaller than the minimum size of the “true” position eval function for Go. Thus you’ll just invariably get these counterplay attacks as compression artifacts.

        Sources cited: my ass cheeks
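A toy numerical sketch of that compression-artifact intuition (my own construction, purely illustrative, not from the paper): approximate a large table of “true” position values with a drastically smaller model, and the worst-case error an adversary can hunt for dwarfs the average error.

```python
import random

random.seed(0)

# "True" eval: one independent value per position -- a stand-in for a huge
# tablebase that no compressed model can reproduce exactly.
N = 10_000
true_eval = [random.uniform(-1.0, 1.0) for _ in range(N)]

# A maximally compressed "model": predict the global mean everywhere.
mean = sum(true_eval) / N
def approx(pos):
    return mean

# On average the model looks tolerable, but an adversary who searches for
# the single worst position finds a much bigger error -- the "superhuman on
# average, not in the worst cases" gap from the quote above.
errors = [abs(true_eval[p] - approx(p)) for p in range(N)]
avg_err = sum(errors) / N
worst_err = max(errors)
print(f"average error {avg_err:.3f}, worst-case error {worst_err:.3f}")
assert worst_err > avg_err
```

The mean-predictor is an extreme compression, but the same average-versus-worst-case gap is what the adversarial Go results point at.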

        • sc_griffith@awful.systems · 8 points · 4 months ago

          i don’t think that can be quite right, as illustrated by an extreme example: consider a game where the first move has player 1 choose “win” or “hypergo.” if player 1 chooses win, they win. if player 1 chooses hypergo, a game of Go begins on a 1,000,000,000 x 1,000,000,000 board, and whoever wins that subgame wins. for player 1, the ‘true’ position eval function must be in some sense incredibly complicated, because it includes hypergo nonsense. but player 1’s strategy can be compressed to “choose win” without opening up any counterattacks
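The argument above can be mechanized in a few lines (hypothetical names, just to make it concrete): the full game’s eval function is at least as complex as the hypergo subgame, yet player 1’s optimal strategy compresses to a constant and admits no counterplay.

```python
def hypergo_winner(position):
    # Stand-in for the intractable subgame: Go on a 10**9 x 10**9 board.
    # Astronomically complex -- but never reached under optimal play.
    raise NotImplementedError("astronomically complex subgame")

def true_eval(first_move):
    # The full evaluation function inherits hypergo's complexity...
    if first_move == "win":
        return "player1"
    return hypergo_winner(first_move)

def player1_policy(position):
    # ...but the optimal policy compresses to a constant, and no adversary
    # can exploit it: the opponent never influences the outcome.
    return "win"

assert true_eval(player1_policy("start")) == "player1"
```

So a small learned policy need not inherit the full complexity of the true eval function; exploitability depends on where the compression loses information, not on compression per se.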

          • sc_griffith@awful.systems · 7 points · 4 months ago

            more generally I suspect that as soon as you try to compare some notion of a ‘true’ position eval function to eval functions you can actually generate, you’re going to have a very difficult time making correct and clear predictions. the reason I say this is that analyzing such a ‘true’ function is essentially the domain of combinatorial game theory (not the same as “game theory”), and there are few if any bridges people have managed to build between cgt and practical Go-playing engines. so it’s probably pretty hard to do

            (I know there’s a theory of ‘temperature’ of combinatorial games that I think was developed for purposes of analyzing Go, but I don’t think it has any known relationship to reinforcement learning based Go engines)