Facewatch, a major biometric security company in the UK, is in hot water after its facial recognition system wrongly flagged a 19-year-old woman as a shoplifter.

  • Telodzrum@lemmy.world · 4 months ago

    If it works anything like Apple’s Face ID, twins don’t actually map all that similarly. In the general population, the probability of two people’s underlying facial structures producing a matching mapping is approximately 1:1,000,000. It is slightly higher for identical twins, and higher again for prepubescent identical twins.

    • MonkderDritte@feddit.de · 4 months ago

      Meaning about 8,000 potential false positives per person globally: roughly 330 in the US, 84 in Germany, and 9 in Switzerland.

      Might be enough for Iceland.
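
      Back-of-the-envelope, those counts are just population times the 1:1,000,000 match rate. A minimal sketch of the arithmetic (the population figures are rough assumptions):

      ```python
      # Expected face-match look-alikes per person, assuming a
      # 1-in-1,000,000 chance that two random people's faces match.
      MATCH_RATE = 1 / 1_000_000

      # Approximate populations (assumptions, not exact figures).
      populations = {
          "World": 8_000_000_000,
          "US": 330_000_000,
          "Germany": 84_000_000,
          "Switzerland": 9_000_000,
          "Iceland": 380_000,
      }

      for place, pop in populations.items():
          expected = pop * MATCH_RATE  # expected look-alikes per person
          print(f"{place}: ~{expected:,.1f} potential false positives")
      ```

      Iceland’s ~380,000 people work out to about 0.4 expected look-alikes per person, hence the joke.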

      • starchylemming@lemmy.world · 4 months ago

        No, people in Iceland are so genetically homogeneous they’d probably all match, thanks to everyone being so related.

      • ramjambamalam@lemmy.ca · 4 months ago

        I can already imagine the Tom Clancy thriller where some Joe Nobody gets roped into helping crack a terrorist’s locked phone because his face looks just like the terrorist’s.

      • Telodzrum@lemmy.world · 4 months ago

        Yeah, which is a really good number and allows for near-complete elimination of false matches along this vector.

        • 4am@lemm.ee · 4 months ago

          I promise bro it’ll only starve like 400 people please bro I need this

          • Telodzrum@lemmy.world · 4 months ago

            No, you misunderstood. That is a reduction in commonality by a literal factor of one million. Any secondary verification point is sufficient to reduce the false-positive rate to effectively zero.
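
            If the two checks really are independent, the false-positive rates multiply, which is what drives the combined rate toward zero. A minimal sketch of that math, with an assumed illustrative rate for the secondary check:

            ```python
            # Combined false-positive rate for two independent checks:
            # P(both wrong) = P(face match wrong) * P(secondary check wrong).
            face_fpr = 1 / 1_000_000    # the 1:1,000,000 facial match rate
            secondary_fpr = 1 / 10_000  # assumed figure for e.g. an ID check

            combined_fpr = face_fpr * secondary_fpr
            print(f"Combined rate: 1 in {1 / combined_fpr:,.0f}")
            # -> 1 in 10,000,000,000, but only if the checks are independent
            ```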

            • AwesomeLowlander@lemmy.dbzer0.com · 4 months ago

              > secondary verification point

              Like, running a card-sized piece of plastic across a reader?

              It’d be nice if they were implementing this to combat credit card fraud or something similar, but that’s not how this is being deployed.

            • BassTurd@lemmy.world · 4 months ago

              Which means the face recognition was never necessary. It’s a way for companies to build a database that will eventually get exploited. 100% guarantee.

    • Cethin@lemmy.zip · 4 months ago

      Yeah, people with totally different facial structures get identified as the same person all the time with the “AI” facial recognition, especially if you’re darker-skinned. Luckily (or unluckily) I’m white as can be.

      I’m assuming Apple’s software is a purpose-built algorithm that detects facial features and compares them, rather than the black-box AI where you feed in data and it returns a result. That’s the smart way to do it, but it takes more effort.
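
      For what it’s worth, systems of either kind typically reduce a face to a feature vector (an “embedding”) and compare vectors. A minimal sketch of that comparison step, with toy embeddings and an illustrative threshold (not Apple’s actual method):

      ```python
      import numpy as np

      def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
          """Similarity of two face embeddings, in [-1, 1]."""
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      THRESHOLD = 0.8  # illustrative; real systems tune this on benchmarks

      # Toy 128-dimensional embeddings standing in for extracted features.
      rng = np.random.default_rng(0)
      enrolled = rng.normal(size=128)
      probe = enrolled + rng.normal(scale=0.1, size=128)  # same face, new photo

      if cosine_similarity(enrolled, probe) >= THRESHOLD:
          print("Match")
      else:
          print("No match")
      ```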

      • CeeBee@lemmy.world · 4 months ago

        > people with totally different facial structures get identified as the same person all the time with the “AI” facial recognition

        All the time, eh? Gonna need a citation on that. And I’m not talking about just one news article that pops up every six months. And nothing that links back to UCLA’s misleading 2018 “report”.

        > I’m assuming Apple’s software is a purpose-built algorithm that detects facial features and compares them, rather than the black-box AI where you feed in data and it returns a result.

        You assume a lot here. People have this conception that all FR systems are trained black-box models. This is true for some systems, but not all.

        The system I worked with, which ranked near the top of the NIST FRVT reports, did not use a trained AI algorithm for matching.

        • Cethin@lemmy.zip · 4 months ago

          I’m not doing a bunch of research to prove the point. I’ve been hearing about them being wrong fairly frequently, especially on darker-skinned people, for a long time now. It doesn’t matter how often it happens. It sounds like you’ve already made up your mind.

          I’m assuming that of Apple because it’s been around for a few years longer than the current AI craze has been going on. We’ve been doing facial recognition for decades now, with purpose-built algorithms. It’s not much of a leap to assume that’s what they’re using.

          • CeeBee@lemmy.world · 4 months ago

            > I’ve been hearing about them being wrong fairly frequently, especially on darker-skinned people, for a long time now.

            I can guarantee you haven’t. I’ve worked in the FR industry for a decade and I’m up to speed on all the news. There’s a story about a false arrest from FR at most once every 5 or 6 months.

            You don’t see any reports from the millions upon millions of correct detections that happen every single day. You just see the one-off failure cases that the cops completely mishandled.

            > I’m assuming that of Apple because it’s been around for a few years longer than the current AI craze has been going on.

            No it hasn’t. FR systems have been around a lot longer than Apple devices doing FR. The current AI craze is mostly centered around LLMs; object detection and FR systems have been evolving for more than two decades.

            > We’ve been doing facial recognition for decades now, with purpose-built algorithms. It’s not much of a leap to assume that’s what they’re using.

            Then why would you assume companies doing FR longer than the recent “AI craze” would be doing it with “black boxes”?

            > I’m not doing a bunch of research to prove the point.

            At least you proved my point.

            • Cethin@lemmy.zip · 4 months ago

              > You don’t see any reports from the millions upon millions of correct detections that happen every single day. You just see the one-off failure cases that the cops completely mishandled.

              Obviously. I don’t have much of an issue with it when it’s working properly (although I still absolutely have an issue with it). But being wrong fairly frequently is a pretty big issue, and every 5 or 6 months is frequent given that it isn’t widely deployed yet (and that’s a low number, since it only counts the newsworthy failures). Scale that up by several orders of magnitude if it’s widely adopted, and the errors will be constant.
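
              That scaling concern is the classic base-rate problem: a tiny error rate times a huge number of scans is still a lot of errors. A minimal sketch with assumed illustrative figures:

              ```python
              # Base-rate arithmetic: false flags per day at scale.
              # Both figures below are assumptions for illustration.
              scans_per_day = 10_000_000    # a widely deployed system
              false_positive_rate = 0.0001  # 99.99% accurate on non-matches

              false_flags = scans_per_day * false_positive_rate
              print(f"~{false_flags:,.0f} innocent people flagged per day")
              # -> ~1,000 per day, even at 99.99% accuracy
              ```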

              > No it hasn’t. FR systems have been around a lot longer than Apple devices doing FR. The current AI craze is mostly centered around LLMs; object detection and FR systems have been evolving for more than two decades… Then why would you assume companies doing FR longer than the recent “AI craze” would be doing it with “black boxes”?

              You’re repeating what I said. Apple’s FR tech is a few years older than the machine-learning tech that we have now. FR in general is several decades old, and it’s not ML-based. It’s not a black box. You can actually know what it’s doing. I specifically said they weren’t doing it with black boxes. I said the AI models are. Please read again before you reply.

              > At least you proved my point.

              You assumed I said something that is actually the opposite of what I said, and that’s the reason I’m not putting in the effort. You’ve made up your mind. I’m not going to change it, so I’m not putting in the effort it would take to gather the data just to throw it into the wind. It sounds like you’re already aware of some of it but somehow think it’s not bad.

    • 4am@lemm.ee · 4 months ago

      And yet this woman was mistaken for a 19-year-old 🤔

      • Telodzrum@lemmy.world · 4 months ago

        Shitty implementation doesn’t mean shitty concept; you’d think a site full of tech nerds would understand such a basic distinction.

        • Hawk@lemmy.dbzer0.com · 4 months ago

          Pretty much everyone here agrees that it’s a shitty concept. Doesn’t solve anything and it’s a privacy nightmare.

    • chiisana@lemmy.chiisana.net · 4 months ago

      I think, from a purely technical point of view, you’re not going to get Face ID-level accuracy out of theft-prevention systems, primarily because Face ID uses an IR array scanning within arm’s reach of the user, whereas theft-prevention footage is usually captured from much further away. The distance makes it much harder to get the fidelity of data required for an accurate reading.
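
      The distance effect is easy to put numbers on: the count of pixels that land on a face shrinks with distance. A rough sketch using the pinhole-camera relation (all camera parameters are assumed illustrative figures):

      ```python
      import math

      # Approximate pixels across a face for a simple pinhole-camera model.
      SENSOR_PX_WIDTH = 1920     # 1080p surveillance camera (assumed)
      HORIZONTAL_FOV_DEG = 90.0  # wide-angle lens (assumed)
      FACE_WIDTH_M = 0.15        # typical face width

      def pixels_on_face(distance_m: float) -> float:
          # Width of the scene visible at this distance.
          scene_width_m = 2 * distance_m * math.tan(math.radians(HORIZONTAL_FOV_DEG / 2))
          return SENSOR_PX_WIDTH * FACE_WIDTH_M / scene_width_m

      for d in (0.3, 3.0, 10.0):  # arm's reach vs. across-the-store distances
          print(f"{d:>4.1f} m: ~{pixels_on_face(d):.0f} px across the face")
      ```

      At arm’s reach that is hundreds of pixels across the face; across a store it drops to a few dozen.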

      • sugar_in_your_tea@sh.itjust.works · 4 months ago

        Yup, it turns out if you have millions of pixels to work with, you have a better shot at correctly identifying someone than if you have dozens.

      • CeeBee@lemmy.world · 4 months ago

        > I think, from a purely technical point of view, you’re not going to get Face ID-level accuracy out of theft-prevention systems, primarily because Face ID uses an IR array scanning within arm’s reach of the user, whereas theft-prevention footage is usually captured from much further away. The distance makes it much harder to get the fidelity of data required for an accurate reading.

        This is true. The distance definitely makes a difference, but there are systems out there that get incredibly high accuracy even with surveillance footage.