Anubis saved our websites from a DDoS attack
mrweasel 6 days
Sadly it's hard to tell whether this is an actual DDoS attack or scrapers descending on the site. It all looks very similar.

The search engines always seemed happy to announce that they are in fact GoogleBot/BingBot/Yahoo/whatever, and frequently provided you with their expected IP ranges. The modern companies, mostly AI companies, seem more interested in flying under the radar and have less respect for the internet infrastructure as a whole. So we're now at a point where I can't tell if it's an ill-willed DDoS attack or just shitty AI startup number 7 reloading training data.


CaptainFever 6 days
> To me, Anubis is not only a blocker for AI scrapers. Anubis is a DDoS protection.

Anubis is DDoS protection, just with updated marketing. These tools have existed forever, such as CloudFlare Challenges, or https://github.com/RuiSiang/PoW-Shield. Or HashCash.
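
For context, all of these tools share the same Hashcash-style mechanism: the server hands out a random challenge, and the client must find a nonce whose hash has a set number of leading zero bits before it gets a pass. The sketch below is a generic illustration of that idea, not the actual implementation of Anubis, PoW-Shield, or Cloudflare's challenges.

```python
import hashlib
import secrets

DIFFICULTY_BITS = 18  # ~260k hash attempts on average; illustrative only

def issue_challenge() -> str:
    # Server side: a random value the client must incorporate into its work.
    return secrets.token_hex(16)

def check(challenge: str, nonce: int) -> bool:
    # The hash of challenge:nonce must start with DIFFICULTY_BITS zero bits.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

def solve(challenge: str) -> int:
    # Client side: brute-force nonces until one passes the check.
    nonce = 0
    while not check(challenge, nonce):
        nonce += 1
    return nonce

if __name__ == "__main__":
    c = issue_challenge()
    n = solve(c)        # expensive for the client
    assert check(c, n)  # cheap for the server to verify
    print(f"challenge={c} nonce={n}")
```

The asymmetry is the whole point: verifying a solution costs one hash, while producing it costs on the order of 2^DIFFICULTY_BITS hashes, which is what throttles clients making requests at an abusive rate.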

I keep saying that Anubis really has nothing much to do with AI (e.g. some people might mistakenly think that it magically "blocks AI scrapers"; it only slows down abusive-rate visitors). It really only deals with DoS and DDoS.

I don't understand why people are using Anubis instead of all the other tools that already exist. Is it just marketing? Saying the right thing at the right time?


chrisnight 7 days
> Solving the challenge–which is valid for one week once passed–

One thing that I've noticed recently, with the Arch Wiki adding Anubis, is that this one-week period doesn't magically fix user annoyances with Anubis. I use Temporary Containers for every tab, which means I constantly get Anubis regenerating tokens, since the cookie gets deleted as soon as the tab is closed.

Perhaps this is my own problem, but given the state of tracking on the internet, I do not feel it is an extremely out-of-the-ordinary circumstance to avoid saving cookies.
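
The week-long validity only exists as long as the browser keeps the pass cookie around. A rough sketch of how such a time-limited pass might be checked, assuming a simple HMAC-signed expiry rather than Anubis's actual token format, shows why per-tab containers defeat it: no cookie means no proof of prior work, so the challenge runs again.

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret"  # hypothetical signing key
VALID_FOR = 7 * 24 * 3600       # "valid for one week once passed"

def issue_pass(now=None):
    # Handed to the browser as a cookie after the challenge is solved.
    expires = int((now or time.time()) + VALID_FOR)
    sig = hmac.new(SECRET, str(expires).encode(), hashlib.sha256).hexdigest()
    return f"{expires}.{sig}"

def pass_is_valid(token, now=None):
    # Cheap check on every request; if the cookie is gone, the client
    # falls back to solving the proof-of-work challenge again.
    try:
        expires_str, sig = token.split(".", 1)
        expected = hmac.new(SECRET, expires_str.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected) and (now or time.time()) < int(expires_str)
    except ValueError:
        return False
```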


butz 6 days
As usual, there is a negative side to such protection: I was trying to download some raw files from a git repository and got a bunch of HTML instead of data. After a quick look it turned out to be the Anubis HTML page. Another issue was broken links to issue tickets on the main page, where Anubis was asking a wrapper script to solve some hashes. Lesson here: after deploying Anubis, please carefully check the impact. There might be some unexpected issues.
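
A cheap safeguard for scripts that fetch raw files from a site behind such a challenge is to check the response's status and Content-Type before treating the body as data. The sketch below uses the requests library and a placeholder URL; it is an assumed workflow, not something described in the comment above.

```python
import requests

RAW_URL = "https://git.example.org/repo/raw/branch/main/data.csv"  # placeholder URL

resp = requests.get(RAW_URL, timeout=30)
content_type = resp.headers.get("Content-Type", "")

# A challenge page comes back as HTML (often with a non-200 status),
# not as the raw file the script expected.
if resp.status_code != 200 or "text/html" in content_type:
    raise RuntimeError(
        f"Expected a raw file, got {resp.status_code} {content_type!r}; "
        "the request may have hit an interstitial challenge page."
    )

with open("data.csv", "wb") as f:
    f.write(resp.content)
```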

vachina 6 days
It’s not Anubis that saved your website; literally any sort of captcha, or some dumb modal with a button to click through to the real content, would have worked.

These crawlers are designed to work on 99% of hosts; if you tweak your site ever so slightly out of spec, these bots won't know what to do.
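
As a sketch of that "dumb modal" idea: an interstitial that only sets a cookie when the visitor clicks through, with the real content served only to requests carrying that cookie. The Flask app, route names, and cookie name below are illustrative choices, not anything described in the thread.

```python
from flask import Flask, make_response, redirect, request

app = Flask(__name__)

INTERSTITIAL = """
<p>Click to continue to the site.</p>
<form action="/enter" method="post"><button>Continue</button></form>
"""

@app.get("/")
def index():
    # Visitors without the pass cookie get a trivial click-through page.
    if request.cookies.get("clicked") != "yes":
        return INTERSTITIAL
    return "<h1>The real content</h1>"

@app.post("/enter")
def enter():
    # Anything that doesn't submit the form and keep cookies never gets past this.
    resp = make_response(redirect("/"))
    resp.set_cookie("clicked", "yes", max_age=7 * 24 * 3600)
    return resp
```

A crawler written for the common case (fetch the URL, parse the HTML) sees only the interstitial, while a visitor who clicks once and keeps cookies sails through.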