• FalseMyrmidon@kbin.run (+66/-6) · 4 months ago

    Who’s ignoring hallucinations? They get brought up in basically every conversation about LLMs.

    • 14th_cylon@lemm.ee (+83/-3) · 4 months ago

      People who suggest, let’s say, firing the employees of a crisis intervention hotline and replacing them with LLMs…

      • SkyezOpen@lemmy.world (+26/-2) · 4 months ago

        “Have you considered doing a flip as you leap off the building? That way your death is super memorable and cool, even if your life wasn’t.”

        -Crisis hotline LLM, probably.

      • Voroxpete@sh.itjust.works (+17) · 4 months ago

        Less horrifying conceptually, but in Canada a major airline tried to replace their support services with a chatbot. The chatbot then invented discounts that didn’t actually exist, and the courts ruled that the airline had to honour them. The chatbot was, for all intents and purposes, no more or less official a source of data than any other information they put out, such as their website and other documentation.

        • 14th_cylon@lemm.ee (+1) · 4 months ago

          i approve of that. it is funny and there is no harm to anyone other than the shareholders, so… 😆

      • L_Acacia@lemmy.one (+3) · 4 months ago

        They know the tech isn’t good enough; they just don’t care and want to maximise profit.

    • Neato@ttrpg.network (+7/-2) · 4 months ago

      Hallucination really needs to be a disqualifying factor for generative AI. Even using it for my hobbies is useless when I can’t trust it knows dick about fuck. Every time I test out a new version, it gets things so blatantly wrong and contradictory that I give up; it’s not worth the effort. It’s no surprise everywhere I’ve worked has outright banned its use for official work.

      • DdCno1@kbin.social (+3) · 4 months ago

        I agree. The only acceptable application, in my opinion, is entertainment: using it purely as a toy.

        The problem, of course, is that everyone and their mother is pouring billions into what should clearly only be used as a toy, expecting it to perform miracles it currently cannot and might never be able to pull off.

    • Teodomo@lemmy.world (+2/-1) · edited · 4 months ago

      Maybe on Lemmy and in some pockets of social media. Elsewhere it definitely doesn’t.

      EDIT: Also, I often talk with non-tech people IRL about AI, just to see how they feel about it. So far, absolutely no one has known what hallucinations are.