• athatet@lemmy.zip · 1 month ago

        Do I really have to ask AI and then still go elsewhere to actually verify the information, because LLMs are just full of shit?

      • Tanis Nikana@lemmy.world · 1 month ago

        You never have to ask AI. You can always go look it up on Wikipedia, use an encyclopedia, or talk to someone who fights kangaroos professionally.

        In fact, I’d go so far as to say, never use AI for anything ever.

      • TotallynotJessica@lemmy.blahaj.zoneOPM · 1 month ago

        Sorry, I don’t trust language machines to ever know anything. It’s not their fault, you just can’t expect them to do what they aren’t capable of. If you really wanna think AI is useful, recognize the areas where it objectively isn’t useful.

        • kofe@lemmy.world · 1 month ago

          I really don't get this argument. You can ask it for live source links to verify whatever info it generates, depending on the topic.

          • TotallynotJessica@lemmy.blahaj.zoneOPM · 1 month ago

            In which case it doesn't need to give a summary in the first place. Too often people won't click those sources over the AI blurb. It's designed to exploit laziness by making being misinformed easy. Seeing people give AI blurbs without a source as evidence is annoyingly common.

            • kofe@lemmy.world · 1 month ago

              Too often people don't read past the headline, or have trained their algorithm to consistently feed them dis/misinformation. I don't see this as criticism of the tool, but rather of how it has been developed and is used. This applies in so many areas that I think the more effective approach is teaching people how to think more critically, and criticizing the companies for not doing their due diligence in promoting that. Otherwise it comes across like being upset that people use social media, and I think we are far too long past Pandora's box being opened to spend time focusing on that aspect. If you have solutions other than telling people not to use it, I'm all ears.

              • Arthur Besse@lemmy.ml · 1 month ago (edited)

                https://stopcitingai.com/

                😬

                That website was made by someone suffering from some cognitive dissonance. They correctly observe that LLMs “can produce convincing-sounding information, but that information may not be accurate or reliable” and then somehow immediately afterwards conclude that “summarize this for me” is the type of thing which LLMs “might” be “good at”.

            • lad@programming.dev · 1 month ago

              Unfortunately, I find that tracking down those sources by traditional search gets harder over time. Maybe the internet has more garbage now, maybe the search engines are more garbage, but a couple of times I failed to find a source on my own and used an LLM to find one (it can also fail, of course).

            • YoureHotCupCake@lemmy.world · 1 month ago

              I think you're right that the problem is often just lazy people not wanting to understand the tool or use it in a way that benefits them. But there are some good use cases.

              When I'm coding, I ask it questions, and my session instructions are to only provide relevant links to the source documentation and tutorials that could help with my problem, never code or advice. I'd say 7 out of 10 times it gets me to the correct spot in the docs and surfaces some useful tutorials on the subject. Not perfect, but I'm not blindly trusting its advice; I'm just using it as a slightly faster search engine that gets me to the information I'm after without having to dig through the docs or jump from site to site.
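              A setup like the one described above can be sketched as a reusable "docs links only" system instruction. The wording of the instruction and the helper function below are illustrative assumptions, not the commenter's actual configuration; the message format is the one used by chat-style LLM APIs such as OpenAI's.

```python
# Sketch of a "docs links only" LLM session (hypothetical instruction text).
# The assistant is told to return documentation/tutorial links, never code
# or direct advice, so you still read and verify the sources yourself.

DOCS_ONLY_INSTRUCTIONS = (
    "Only provide links to relevant official source documentation and "
    "tutorials that could help with my problem. Never provide code or advice."
)

def build_messages(question: str) -> list[dict]:
    """Wrap a coding question in the docs-only system instruction."""
    return [
        {"role": "system", "content": DOCS_ONLY_INSTRUCTIONS},
        {"role": "user", "content": question},
    ]

messages = build_messages("Where are the docs for Python's asyncio.TaskGroup?")
```

              With the official `openai` client, `messages` would be passed to `client.chat.completions.create(...)`; whichever client you use, the links it returns still need to be opened and checked, since the model can hallucinate URLs.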

      • Doll_Tow_Jet-ski@fedia.io · 1 month ago

        Legend has it there are these things called "search engines". People say they are doors to actual human knowledge.

          • morto@piefed.social · 1 month ago

            We can still restrict the search to dates before 2023, in case we're not looking for more recent information, and be fully AI-free.
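            As a concrete sketch: Google supports `before:`/`after:` date operators in the query string (other engines vary, so check yours). The query below is a made-up example, only the URL construction is shown.

```python
from urllib.parse import urlencode

# Hypothetical example: restrict results to pages dated before 2023
# using Google's `before:` search operator. Support differs by engine.
query = "kangaroo boxing safety before:2023-01-01"
url = "https://www.google.com/search?" + urlencode({"q": query})
```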

            • zikzak025@lemmy.world · 1 month ago

              The problem for me is the AI curating and interpreting the results for you with every search.

              If you’re using a mainstream “AI powered” search engine these days, it will come to a conclusion on its own and then show you only results that corroborate it, rather than the other way around.

              • TotallynotJessica@lemmy.blahaj.zoneOPM · 1 month ago

                That was happening before AI; it's just that machine learning has been harnessed to make it worse rather than better. Imagine if they used machine learning to combat SEO, or to recognize that the results aren't giving you what you want after the first page. Instead it's just a disinformation machine that maximizes how easy it is to control the public.

        • we are all@crazypeople.online · 1 month ago

          wait, what’s a fursuiter? do I have to ask yet another series of if statements to attain contextual awareness for these discussions or, like, what.