Evidence for the DDoS attack that big-tech LLM scrapers actually are.

  • pcouy@lemmy.pierre-couy.fr · 6 points · 12 hours ago

    I used to get a lot of scrapers hitting my Lemmy instance, most of them spread across a bunch of IP ranges, some of them masquerading as regular browsers via their user agent.

    What’s been working for me is using a custom nginx log format with a custom fail2ban filter that lets me easily block new bots once I identify some kind of signature.

    For instance, one of these scrapers almost always sends requests that are around 250 bytes long, using the user agent of a legitimate browser that always sends requests that are 300 bytes or larger. I can then add a fail2ban jail that triggers on seeing this specific user agent with the wrong request size.
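    For illustration, here is a minimal Python sketch of that kind of check (the log format, the user-agent string, and the 300-byte cutoff below are assumptions for the example, not the actual filter from the post):

    ```python
    #!/usr/bin/env python3
    """Sketch: flag requests whose user agent claims to be a real browser but
    whose request size is below what that browser normally sends."""
    import re

    # Assumed custom nginx log line: '<client_ip> <request_length> "<user_agent>"'
    LINE_RE = re.compile(r'^(?P<ip>\S+) (?P<req_len>\d+) "(?P<ua>[^"]*)"$')

    # Hypothetical UA that legitimate browsers only send with requests >= 300 bytes.
    SUSPECT_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ..."
    MIN_LEGIT_LEN = 300

    def suspicious(line):
        """Return the client IP if the line matches the bot signature, else None."""
        m = LINE_RE.match(line.strip())
        if m and m["ua"] == SUSPECT_UA and int(m["req_len"]) < MIN_LEGIT_LEN:
            return m["ip"]  # a "browser" UA with an implausibly small request
        return None

    if __name__ == "__main__":
        with open("/var/log/nginx/custom.log") as fh:  # hypothetical path
            for line in fh:
                ip = suspicious(line)
                if ip:
                    print(ip)  # candidates for a fail2ban jail to ban
    ```

    In practice the matching would live in a fail2ban filter regex rather than a standalone script; this only shows the signature logic.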

    On top of this, I wrote a simple script that monitors my fail2ban logs and writes out CIDR ranges that appear too often (the threshold is proportional to 1.5^(32-subnet_mask)). This file is then parsed by fail2ban to block whole ranges. There are some specific details I omitted regarding bantime and findtime that ensure a small malicious range can’t trick me into blocking a larger one. This has worked flawlessly to block “hostile” ranges with apparently zero false positives for nearly a year.
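    The range-aggregation part could look roughly like the following sketch (the input file, the fixed /24 and /16 candidate prefixes, and the base constant are assumptions; the bantime/findtime safeguards mentioned above are deliberately not reproduced):

    ```python
    #!/usr/bin/env python3
    """Sketch: aggregate individually banned IPv4 addresses into CIDR ranges,
    using a threshold proportional to 1.5^(32 - prefix_length) as described above."""
    import ipaddress
    from collections import Counter

    BASE = 2  # assumed proportionality constant

    def threshold(prefix_len):
        # Wider ranges need exponentially more individual bans before the
        # whole range is blocked, so a small range can't poison a big one.
        return BASE * 1.5 ** (32 - prefix_len)

    def ranges_to_block(banned_ips, prefix_lens=(24, 16)):
        blocks = []
        for plen in prefix_lens:
            counts = Counter(
                ipaddress.ip_network(f"{ip}/{plen}", strict=False)
                for ip in banned_ips
            )
            blocks += [net for net, n in counts.items() if n >= threshold(plen)]
        return blocks

    if __name__ == "__main__":
        with open("banned_ips.txt") as fh:  # hypothetical dump of fail2ban bans
            ips = [line.strip() for line in fh if line.strip()]
        for net in ranges_to_block(ips):
            print(net)  # feed this list to a jail that bans whole CIDR ranges
    ```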

    • froztbyte@awful.systems · 5 points · 11 hours ago

      the threshold is proportional to 1.5^(32-subnet_mask)

      what are you basing that prefix length decision off? whois/NIC allocation data?

      is the decision loop running locally to any given f2b instance, or do you aggregate for processing then distribute blocklist?

      either way, seems like an interesting approach for catching the type of shit that likes to snowshoe from random cloud providers while lying about its agent signature

  • Monument@lemmy.sdf.org · 12 points · 18 hours ago (edited)

    I’m caught on the other side of the whack-a-mole game. The tools I use at work to check the health of my site - specifically that links on my site aren’t broken - now report an extremely high false-positive rate, as other sites serve up a whole slew of error messages to the bot that just wants to make sure the link points to a working page.

      • Monument@lemmy.sdf.org · 13 points · 15 hours ago (edited)

        Sure!

        One of the things I do is monitor my organization’s website to ensure that it’s functional for our visitors.
        We have a few hundred web pages, so we use a service to monitor and track how we’re doing. The service is called SiteImprove. They track a number of metrics, such as SEO, accessibility, and of course, broken links. (I couldn’t tell you if the service is ‘good’ - I don’t have a basis for comparison.) So, SiteImprove uses robots to crawl our website, and analyze it for the above stuff. When their robots find a link on our site, they try to follow it. If the destination reports back an error, the error gets logged and put into a report that I review.

        Basically, in the last 6ish months, we went from having less than 5 false positives a month to having over a hundred every month.
        Before, a lot of those false positives were ‘server took too long to respond’ without a corresponding error code - which happens. Sometimes a server goes down, then comes back up by the time I’m looking at the reports. However, now a lot of these reports are coming back with HTTP status codes, such as 400: Bad Request, 403: Forbidden, 502: Bad Gateway, or 503: Service Unavailable. I even got a 418 a few months ago, which tickled me pink. It’s my favorite HTTP status (and probably the most appropriate one to troll bots with). Which is to say that instead of a server being down or whatever, a server saw the request and decided to respond in one of the above ways.

        And while I can visit the URL in a browser, the service will repeatedly get these errors when they send their bots to double check the link destinations, so I’m reasonably confident it’s something with the bots getting blocked more aggressively than they were in the past.

        Edit: Approximately 10 minutes after I posted this comment, our CDN blocked the bot, too. Now it’s reporting all internal links as broken, too. So… every link on every page. I guess I’m taking it easy today!

  • db0@lemmy.dbzer0.com · 11 points · 19 hours ago

    Yeah, I had to deal with the same issue on Lemmy. I even wrote the official guide on how to discover the agents and configure the Lemmy nginx setup accordingly. Personally, I send them an error code that just makes the page seem unresponsive, so they think the site is down instead of realizing they got caught.
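    As a rough illustration of that "looks down, not blocked" approach (this is not the actual Lemmy nginx guide, and the user-agent list here is made up), a tiny WSGI app could do something like:

    ```python
    """Sketch: answer identified scraper user agents with a bare 503 so the
    site simply looks like it is down rather than deliberately blocking them."""
    from wsgiref.simple_server import make_server

    # Assumed UA substrings; real detection would use whatever signatures you found.
    SCRAPER_UA_HINTS = ("GPTBot", "CCBot", "Bytespider")

    def app(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(hint in ua for hint in SCRAPER_UA_HINTS):
            # No body, no Retry-After: indistinguishable from a plain outage.
            start_response("503 Service Unavailable", [("Content-Length", "0")])
            return [b""]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello\n"]

    if __name__ == "__main__":
        make_server("127.0.0.1", 8080, app).serve_forever()
    ```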

  • Jayjader@jlai.lu · 19 points · 22 hours ago

    How feasible is it to configure my server to essentially perform a reverse slow-loris attack on these LLM bots?

    If they won’t play nice, then we need to reflect their behavior back onto themselves.

    Or perhaps serve a 404, 304 or some other legitimate-looking static response that minimizes load on my server whilst giving them the least amount of data to train on.
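    To help judge the feasibility, here is a rough asyncio sketch of such a tarpit (purely illustrative; the drip delay and port are arbitrary, and keeping many connections open has a cost on your side too):

    ```python
    """Sketch: a 'reverse slow loris' that accepts a request and then drips a
    meaningless response one byte at a time to keep the bot waiting."""
    import asyncio

    DRIP_DELAY = 10  # seconds between bytes (assumed)

    async def tarpit(reader, writer):
        await reader.read(4096)  # consume the request, ignore its contents
        writer.write(b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n")
        try:
            while True:
                writer.write(b" ")  # one useless byte at a time
                await writer.drain()
                await asyncio.sleep(DRIP_DELAY)
        except (ConnectionResetError, BrokenPipeError):
            pass  # the bot finally gave up
        finally:
            writer.close()

    async def main():
        server = await asyncio.start_server(tarpit, "127.0.0.1", 8081)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())
    ```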

    • raoul@lemmy.sdf.org · 17 points · 19 hours ago (edited)

      The only simple possible ways are:

      • robots.txt
      • rate limiting by IP
      • blocking by user agent

      From the article, they try to bypass all of them:

      They also don’t give a single flying fuck about robots.txt …

      If you try to rate-limit them, they’ll just switch to other IPs all the time. If you try to block them by User Agent string, they’ll just switch to a non-bot UA string (no, really). This is literally a DDoS on the entire internet.

      It then becomes a game of whack-a-mole with big tech 😓

      What’s more infuriating for me is that it’s done by the big names, not some random startup. Edit: Now that I think about it, this doesn’t prove it is done by Google or Amazon: it could be someone spoofing random popular user agents.

      • jherazob@fedia.io · 5 points · 20 hours ago

        I do believe there are blocklists for their IPs out there; that should mitigate things a little

    • raoul@lemmy.sdf.org · 15 points · 19 hours ago (edited)

      One way to game these kinds of bots is to add a hidden link to a randomly generated page, which itself contains a link to another random page, and so on. The bots will still consume resources but will be stuck parsing random garbage indefinitely.

      I know there is a website that is doing that, but I forget its name.

      Edit: This is not the one I had in mind, but I find https://www.fleiner.com/bots/ describes a good honeypot.
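      For reference, the random-page maze described above can be sketched in a few lines (this is not the fleiner.com implementation; everything here is illustrative):

      ```python
      """Sketch: every page is generated on the fly from random words and links
      to another randomly named page, so a crawler that follows it never runs out."""
      import random
      import string
      from http.server import BaseHTTPRequestHandler, HTTPServer

      def token(n=8):
          return "".join(random.choices(string.ascii_lowercase, k=n))

      class MazeHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              words = " ".join(token(random.randint(3, 10)) for _ in range(200))
              body = (f"<html><body><p>{words}</p>"
                      f'<a href="/{token()}.html">{token()}</a></body></html>').encode()
              self.send_response(200)
              self.send_header("Content-Type", "text/html")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          # Link to this server only from a hidden anchor, so regular visitors
          # never see it but crawlers that follow every link get stuck.
          HTTPServer(("127.0.0.1", 8082), MazeHandler).serve_forever()
      ```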