pod.geraspora.de
Excerpt from a message I just posted in a #diaspora team internal forum category. The context here is that I've recently been getting pinged about slowness/load spikes on the diaspora* project web infrastructure (Discourse, Wiki, the project website, ...), and looking at the traffic logs makes me impressively angry.
In the last 60 days, the diaspora* web assets received 11.3 million requests. That works out to roughly 2.19 req/s - which honestly isn't that much. I mean, it's more than your average personal blog, but nothing that my infrastructure shouldn't be able to handle.
However, here's what's grinding my fucking gears. Looking at the top user agent statistics, here are the leaders:
2.78 million requests - or 24.6% of all traffic - is coming from Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2; +https://openai.com/gptbot).
1.69 million requests - 14.9% - Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonb...
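(Aside: if you want to do the same kind of breakdown on your own logs, the numbers above are just a user-agent tally plus a bit of arithmetic. Here's a rough Python sketch, assuming a combined-format access log at a hypothetical path - the real figures obviously came from diaspora*'s own log tooling, not this script.)

```python
# Rough sketch: tally user agents in a combined-format access log and
# work out the overall req/s over a 60-day window, like the post above.
import re
from collections import Counter

LOG_PATH = "access.log"          # hypothetical path
WINDOW_SECONDS = 60 * 24 * 3600  # 60 days

# Combined log format ends with: ... "referer" "user-agent"
UA_RE = re.compile(r'"[^"]*" "(?P<ua>[^"]*)"\s*$')

total = 0
agents = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        total += 1
        m = UA_RE.search(line)
        if m:
            agents[m.group("ua")] += 1

if total == 0:
    raise SystemExit("empty log")

print(f"{total} requests ~= {total / WINDOW_SECONDS:.2f} req/s over 60 days")
for ua, count in agents.most_common(5):
    print(f"{count:>10}  {100 * count / total:5.1f}%  {ua}")
```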
Evidence for the DDoS attack that bigtech LLM scrapers actually are.
I’m not sure I understand your comment, mind elaborating on the details?
Sure!
One of the things I do is monitor my organization’s website to ensure that it’s functional for our visitors.
We have a few hundred web pages, so we use a service to monitor and track how we’re doing. The service is called SiteImprove. They track a number of metrics, such as SEO, accessibility, and of course, broken links. (I couldn’t tell you if the service is ‘good’ - I don’t have a basis for comparison.) So, SiteImprove uses robots to crawl our website, and analyze it for the above stuff. When their robots find a link on our site, they try to follow it. If the destination reports back an error, the error gets logged and put into a report that I review.
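To make that concrete, the core loop such a crawler runs is roughly the following - a simplified sketch, not SiteImprove's actual code, with a placeholder URL and user agent: fetch a page, collect the anchors, request each target, and record the status code (or lack of response) for the report.

```python
# Minimal link-checker sketch: fetch a page, follow every link on it,
# and log the HTTP status each destination reports back.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def check_links(page_url, user_agent="example-link-checker/0.1"):
    """Fetch page_url, then report the HTTP status of every link on it."""
    req = Request(page_url, headers={"User-Agent": user_agent})
    html = urlopen(req, timeout=10).read().decode("utf-8", "replace")

    collector = LinkCollector()
    collector.feed(html)

    report = {}
    for href in collector.links:
        target = urljoin(page_url, href)
        try:
            resp = urlopen(Request(target, headers={"User-Agent": user_agent}),
                           timeout=10)
            report[target] = resp.status
        except HTTPError as err:        # 400, 403, 418, 502, 503, ...
            report[target] = err.code
        except URLError as err:         # DNS failure, timeout, refused, ...
            report[target] = f"no response ({err.reason})"
    return report

if __name__ == "__main__":
    for url, status in check_links("https://example.com/").items():
        print(status, url)
```

A real service does this across hundreds of pages on a schedule and diffs the results, but the shape of the check is the same.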
Basically, in the last 6ish months, we went from fewer than 5 false positives a month to over a hundred every month.
Before, a lot of those false positives were ‘server took too long to respond’ without a corresponding error code - which happens. Sometimes a server goes down, then comes back up by the time I’m looking at the reports. However, now, a lot of these reports are coming back with HTTP status codes, such as 400: Bad Request, 403: Forbidden, 502: Bad Gateway, or 503: Service Unavailable. I even got a 418 a few months ago, which tickled me pink. It’s my favorite HTTP status code (and probably the most appropriate one to troll bots with). Which is to say that instead of a server being down or whatever, a server saw the request and decided to respond in one of the above ways.
And while I can visit the URL in a browser just fine, the service repeatedly gets these errors when it sends its bots to double-check the link destinations, so I’m reasonably confident it’s the bots getting blocked more aggressively than they were in the past.
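One way to sanity-check that theory is to request the same URL with a browser-like User-Agent and a bot-like one and compare the responses. A hedged sketch - the URL and the bot user agent below are placeholders; you’d substitute the monitoring service’s real string:

```python
# Request the same URL twice with different User-Agent headers and compare
# the status codes. The user agents below are illustrative examples only.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

URL = "https://example.com/some-page"   # placeholder URL

USER_AGENTS = {
    "browser-like": "Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0",
    "bot-like": "example-monitoring-bot/1.0",  # stand-in; use the service's real UA string
}

for label, ua in USER_AGENTS.items():
    try:
        resp = urlopen(Request(URL, headers={"User-Agent": ua}), timeout=10)
        print(f"{label:12} -> {resp.status}")
    except HTTPError as err:
        # A 403/418/503 here, but not for the browser-like UA, points at
        # user-agent based blocking rather than a dead server.
        print(f"{label:12} -> {err.code}")
```

That said, a lot of blocking (CDNs especially) keys on IP ranges and behaviour rather than just the User-Agent header, so a matching pair of 200s here doesn’t fully rule it out.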
Edit: Approximately 10 minutes after I posted this comment, our CDN blocked the bot as well. Now it’s reporting all internal links as broken, too. So… every link on every page. I guess I’m taking it easy today!