
Benn Jordan's AI poison pill and the weird world of adversarial noise

Imnimo 4 days ago
Any new "defense" that claims to use adversarial perturbations to undermine GenAI training should have to explain why this paper does not apply to their technique: https://arxiv.org/pdf/2406.12027

The answer is, almost unfailingly, "this paper applies perfectly to our technique because we are just rehashing the same ideas on new modalities". If you believe it's unethical for GenAI models to train on people's music, isn't it also unethical to trick those people into posting their music online with a fake "defense" that won't actually protect them?


janalsncm 4 days ago
I like Benn Jordan because he clearly has a functional understanding of machine learning, even though that's not his primary background. He comes from music production, so his focus is more practical and results-oriented.

It will be really interesting, as this knowledge percolates into more and more fields, to see what domain experts do with it. I see ML as more of a bag of tricks that can be applied to many fields.


rcarmo 4 days ago
Benn is one of my fave subscriptions on YouTube--both for the (now more occasional) music gear stuff and for the in-depth music industry education. The fact that he has been hacking away at IP and AI stuff for ages is just icing on the cake.

propter_hoc 4 days ago
Benn has been one of my favorite electronic composers for almost 20 years. Probably my favorite track of his:

The Flashbulb - Parkways: https://youtu.be/C6pzg7I61FI


dale_glass 4 days ago
All this stuff is snake oil, either already, or eventually.

There are new models showing up regularly. Civitai recognizes 33 image models at this point, and audio will see similar proliferation. A successful attack on one model isn't guaranteed to transfer to another, let alone to models not yet invented. There's also a multitude of possible pre-processing methods, and combinations of them, that can be applied to any piece of media.
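To make the preprocessing point concrete, here is a toy sketch (my own illustration, not any specific defense or model): a small high-frequency "perturbation" added to a tone is almost entirely wiped out by something as mundane as a short moving-average (low-pass) filter, while the underlying signal survives.

```python
import numpy as np

sr = 16000                                    # sample rate, Hz
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440 * t)           # a 440 Hz tone stands in for "the music"
perturb = 0.01 * np.sin(2 * np.pi * 7000 * t) # small high-frequency adversarial-style noise
poisoned = clean + perturb

# naive preprocessing: a 9-tap moving-average filter (crude low-pass)
kernel = np.ones(9) / 9
filt = lambda x: np.convolve(x, kernel, mode="same")

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

# fraction of the perturbation that survives preprocessing
survives = rms(filt(poisoned) - filt(clean)) / rms(perturb)
# fraction of the actual music that survives
music_kept = rms(filt(clean)) / rms(clean)
```

Here `survives` comes out at a few percent while `music_kept` stays near 1: a trivial, attack-agnostic transform gutted the perturbation without being tuned against it.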

There's also the difficulty of attacking a system that's not well documented. Not every model out there is open source and available for deep analysis.

And it's hard to attack something that doesn't yet exist, which means countermeasures can only appear after a model has already been successfully trained. This is, I'm sure, of some academic interest, but the practical benefit seems approximately nil.

Since information is trivially stored, anyone running into trouble could just download the file today and sit on it for a year or two, doing nothing at all, just waiting for a new model to show up.