Facewatch, a major UK biometric security company, is in hot water after its facial recognition system wrongly identified a 19-year-old girl as a shoplifter.

  • Suavevillain@lemmy.world · 11 points · 4 months ago

    People who blindly support slapping this kind of tech and AI into everything always learn the hard way when a case like this happens.

    • squid_slime@lemm.ee · 9 points · 4 months ago

      Sadly, there won’t be any learning; the security company will improve the tech and continue as usual.

      This shit is here to stay :/

      • MajorHavoc@programming.dev · 8 up, 1 down · 4 months ago

        Agreed on all points, but “improve the tech” probably belongs in quotes. If there are no real consequences, they may just accept some empty promises and continue as before.

        • intensely_human@lemm.ee · 3 points · 4 months ago

          Just listened to a podcast with a couple of guys talking about the AI thing going on. One thing they said was really interesting to me. I’ll paraphrase my understanding of what they said:

          • In 2020, people realized that the same model, same architecture, but with more parameters (i.e., a larger version of the model) behaved more intelligently and had a larger set of skills than the same model with fewer parameters.
          • This means you can trade money for IQ: you spend more money, get more computing power, and your model will be better than the other guy’s.
          • A better model means you can do more things, replace more tasks, and hence make more money.
          • Essentially this makes the current AI market a very straightforward money-in-determines-money-out situation.
          • In other words, the realization that the same AI model, only bigger, was significantly better created a pathway for reliably investing huge amounts of money into building bigger and bigger models.

          So basically AI was meandering around trying to find the right road, and in 2020 it found a road that goes a long way in a straight line, enabling the industry to just floor the accelerator.

          The direct relationship this model creates between more neurons/weights/parameters on the one hand, and more intelligence on the other, creates an almost arbitrage-easy way to absorb tons of money into profitable structures.
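The “same model, only bigger, is predictably better” observation described above is the neural scaling-law result: for a fixed architecture, test loss falls as a smooth power law in parameter count. A minimal Python sketch of that relationship, using approximate fitted constants in the spirit of the 2020 scaling-law papers (the exact values here are assumptions for illustration, not authoritative fits):

```python
# Illustrative power-law scaling of test loss with parameter count N:
#     loss(N) = (N_c / N) ** alpha_N
# ALPHA_N and N_C below are assumed constants, roughly in the range
# reported for language models circa 2020 -- treat them as placeholders.

ALPHA_N = 0.076   # assumed power-law exponent for parameter count
N_C = 8.8e13      # assumed "critical" parameter count from the fit

def loss(n_params: float) -> float:
    """Predicted test loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

# The "money in -> IQ out" point: each 10x increase in parameters
# (i.e., compute spend) buys a smooth, predictable drop in loss.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {loss(n):.3f}")
```

Because the curve is smooth and monotonic, investors could forecast the return on a bigger training run before paying for it, which is exactly the “straight road” the comment describes.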

          • MajorHavoc@programming.dev · 1 point · 4 months ago

            That makes a lot of sense, and I agree that’s where this is going.

            One issue, and not a particularly new one, is that virtually everyone is overselling how far that straight line improvement actually takes us.

            As someone with LLM expertise, my gut assessment is that almost everyone is wildly overselling the usefulness of the next generation of improvement.

            The current error and hallucination rates will probably get dramatically better. And breakthroughs will let a decent AI emerge on certain tasks that no AI can handle today.

            But those coming improvements aren’t going to make current AI suck that much less at the tasks it’s currently not well suited to, in the majority of cases.

            Source: decades of experience finding clever ways to make previously impossible automation work well enough, and a solid amount of direct LLM experience.

            Edit: And that’s a fascinating summary, and a great write up of an important and enlightening aspect of all this. Thanks for sharing it!