• Eggyhead@kbin.run · 4 months ago

    The safeguards, for anyone like me who didn’t know about them until now.

    Basically, the guidelines include:

    1. Ensuring AI systems are safe before public release.
    2. Building AI systems to address issues like bias and discrimination.
    3. Using AI to enhance security and protect privacy.
    4. Sharing best practices across the industry.
    5. Increasing transparency and providing clarity about AI’s capabilities and limitations.
    6. Reporting on the risks and impacts of AI.
        • Angry_Autist (he/him)@lemmy.world · 4 months ago
          I think they’re making a joke about how AI-generated code is ridiculously insecure and shouldn’t be used by anyone.
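
          For anyone wondering what “ridiculously insecure” looks like in practice, the textbook case is SQL built by string concatenation. A purely illustrative sketch (mine, not anything a specific model actually produced):

          ```python
          # Illustrative only: the injection-prone pattern code assistants are often
          # criticized for suggesting, next to the parameterized form that avoids it.
          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
          conn.execute("INSERT INTO users VALUES ('alice', 0)")

          user_input = "nobody' OR '1'='1"

          # Unsafe: interpolating input into the SQL lets the input rewrite the query.
          unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
          print(conn.execute(unsafe).fetchall())  # the OR clause matches every row

          # Safe: a parameterized query treats the input strictly as data.
          safe = "SELECT * FROM users WHERE name = ?"
          print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
          ```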

          That said, AIs with the ability to pen test will be a hell of a lot better at finding obscure exploits than any human, so the joke is kind of damaging.

          I mean it holds a kernel of truth, but only in one specific use case.

          And I can tell you from personal experience if enough people bandwagon the joke, it will kill any interest in developing actually useful AI penetration testing products.

          Just like how you chucklefucks broke NFTs.

          They could have been THE SOLUTION to protect content creators from platform abuse, but because everyone focused on ONE use case (links to pictures) and joked about it, all the actually useful NFT development to secure creators’ rights and force cross-platform compatibility has been completely abandoned. And a shitton of you will downvote me for even mentioning it.

          • asudox@lemmy.world · 3 months ago
            I’m sorry, but NFTs are fundamentally flawed because of their reliance on blockchains, and they provide no real value to anyone.