AI could provide some minor deshitification of the internet by answering the obvious questions implied by clickbaity titles. In other words, comb the linked page and pop up, in the simplest terms, the answer the title baits you with.

For instance, a browser plugin could pop up a balloon showing “It’s Portland, Oregon” when you hover your mouse over “One US city likes its food carts more than any other”. Or “Tumbling Dice” when you hover over “The Stones’ song that Mick Jagger hates to sing”. Or even “Haggle over the price and options” on the classic clickbait “Car dealers don’t want you to know this one trick!”. All without you having to sift through pages of crap filler text (likely AI generated) and embedded ads to satisfy whatever trivial curiosity you might be baited by.
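The hover-balloon idea above boils down to: grab the headline and the linked article’s text, then ask a summarizer for the one-line answer the headline is withholding. A minimal sketch of that core step is below; everything here is illustrative, not a real service — `build_debait_prompt` and `debait` are hypothetical names, and the summarizer is injected so any LLM backend (or a cache, to avoid sending every reader to the clickbait site) could be plugged in.

```python
def build_debait_prompt(title: str, article_text: str, max_chars: int = 4000) -> str:
    """Build a prompt asking for the shortest possible answer to the title's tease."""
    # Truncate the filler-heavy article body to keep inference cheap.
    body = article_text[:max_chars]
    return (
        "The following headline teases an answer without giving it:\n"
        f"Headline: {title}\n\n"
        "Article text (may contain filler):\n"
        f"{body}\n\n"
        "In at most one short sentence, state the answer the headline is withholding."
    )

def debait(title: str, article_text: str, summarize) -> str:
    """Return the spoiler for a clickbait title, using an injected summarizer."""
    return summarize(build_debait_prompt(title, article_text)).strip()

# Example with a stubbed summarizer; a real deployment would call an LLM here.
fake_llm = lambda prompt: "It's Portland, Oregon."
spoiler = debait(
    "One US city likes its food carts more than any other",
    "After many paragraphs of padding... the city in question is Portland, Oregon.",
    fake_llm,
)
print(spoiler)
```

A browser extension would wire this to a hover event on links and render the returned spoiler in a tooltip; caching answers server-side would also address the traffic concern raised in the replies.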

I wouldn’t even mind too much if the service collected and sold the fact that I did (or didn’t) get curious about the related topics. It would still be fewer ads in the face overall. So maybe monetizing like that could motivate someone to develop a service?

Or would that just make the net worse?

  • ChanchoManco@lemm.ee · 1 month ago

    That’s a great idea! I’d love it if there were a way to do it without giving traffic to the clickbait site.

    • Burninator05@lemmy.world · 1 month ago

      If anything, it might be even better than not giving them traffic in the first place, because their ad click-through rate will be zero.

  • AA5B@lemmy.world · 1 month ago

    Seems like it would just feed the arms race of enshittification. It’ll help for a while until new, smarter weapons arise against it, and we’re back to the same place while burning more electrons. I don’t see how it sustainably improves things.

    Especially since AI search summaries are still so bad. All too often the AI result is wrong or misguided or hallucinating. Maybe I just have to get better at phrasing things, which used to be critical when search was still search, but isn’t the intent that you shouldn’t have to?

    I do use AI all the time and do think it’s useful, but only when keeping in mind its limitations. It can work well as a helpful step toward a lot of things, but rarely as a final useful answer/result.

  • peto (he/him)@lemm.ee · 1 month ago

    A lot of the issue with this is that we are talking about a really energy-intensive way of solving this non-problem.

    A better way is to train humans to stop falling for the bait. That’s rather hard too, though I’m pretty sure you can already get browser plugins that identify clickbait headlines and just hide them.

    If we can get the cost to read and summarize an article down (and get an AI that understands things like facts and source quality), then there are a bunch of things it could do for us. Interpreting contracts and TOS bollocks comes to mind, but LLMs as we have them today can’t do that. They might end up part of the toolchain, but they are presently insufficient.