The answer is, almost unfailingly, "this paper applies perfectly to our technique because we are just rehashing the same ideas on new modalities". If you believe it's unethical for GenAI models to train on people's music, isn't it also unethical to trick those people into posting their music online with a fake "defense" that won't actually protect them?
The Flashbulb - Parkways: https://youtu.be/C6pzg7I61FI
There are new models showing up regularly. Civitai recognizes 33 image models at this point, and audio will see a similar proliferation. A successful attack on one model isn't guaranteed to transfer to another, let alone to models that haven't been invented yet. There's also a multitude of possible pre-processing methods, and combinations of them, that can be applied to any piece of media (a sketch of one such chain is below).
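To make the pre-processing point concrete, here is a minimal sketch of the kind of chain someone could run over audio before training: a resampling round-trip, low-level noise, and requantization. The function name, file names, and every parameter here are illustrative assumptions, not a known counter to any specific defense.

```python
# A minimal sketch of a pre-processing chain an attacker might apply to
# strip an adversarial perturbation from audio. Steps and parameters are
# illustrative assumptions, not a tested counter to any specific defense.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

def launder(in_path: str, out_path: str, seed: int = 0) -> None:
    rate, x = wavfile.read(in_path)
    x = x.astype(np.float64)

    # 1. Downsample to 22.05 kHz and back up: discards high-frequency
    #    detail, where near-imperceptible perturbations often live.
    x = resample_poly(x, 22050, rate, axis=0)
    x = resample_poly(x, rate, 22050, axis=0)

    # 2. Add low-level noise to mask whatever survived step 1.
    rng = np.random.default_rng(seed)
    x = x + rng.normal(0.0, 1e-3 * np.max(np.abs(x)), size=x.shape)

    # 3. Normalize and requantize to 16-bit PCM.
    peak = float(np.max(np.abs(x))) or 1.0
    x = np.clip(x / peak, -1.0, 1.0)
    wavfile.write(out_path, rate, (x * 32767).astype(np.int16))

# Hypothetical file names, for illustration only.
launder("protected.wav", "laundered.wav")
```

Each of these three steps has dozens of variants (different target rates, lossy codecs instead of noise, EQ, time-stretching), so a defense would have to survive not just one such chain but the whole combinatorial space of them.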
There's also the difficulty of attacking a system that's not well documented. Not every model out there is open source and available for deep analysis.
And it's hard to attack something that doesn't yet exist, which means countermeasures will come up only after a model has already been successfully created. This is, I'm sure, of some academic interest, but the practical benefits seem approximately nil.
Since information is trivially stored, anyone running into these defenses could simply download the file today and sit on it for a year or two, doing nothing at all, just waiting for a new model to show up.
It will be really interesting, as this knowledge percolates into more and more fields, to see what domain experts do with it. I see ML as more of a bag of tricks that can be applied to many fields.