jph00 15 days ago
It sounds like you do agree with affuture.org though. The proposed draft does not hold models used for non-academic reasons to a higher standard, and "models that can't be commercialized" are covered by it. It will be far harder for academics to work on large models under this draft.

jph00 15 days ago
What's with the ad-hominem? I can't see where you're getting that from at all. The folks involved in this lobby group are listed here:

https://www.affuture.org/about/


jph00 15 days ago
You're missing the point. Liability here would also fall on the open source developer who created a general purpose model, which someone else then went on to fine-tune and prompt to do something harmful.

jph00 15 days ago
Pretty much all models, including today's models, already fall foul of the "Hazardous capability" clause. These models can be used to craft persuasive emails or blog posts, analyse code for security problems, and so forth. Whether such a thing is done as part of a process that leads to lots of damage depends on the context, not on the model.

So in practice, only the FLOPs criterion matters. That means only giant companies with well-funded legal departments, or large states, can build these models, increasing centralization and control, and making full model access a scarce resource worth fighting over.


jph00 15 days ago
I've written a submission to the authors of this bill, and made it publicly available here:

https://www.answer.ai/posts/2024-04-29-sb1047.html

The EFF have also prepared a submission:

https://www.context.fund/policy/2024-03-26SB1047EFFSIA.pdf

A key issue with the bill is that it criminalises creating a model that someone else uses to cause harm. But of course, it's impossible to control what someone else does with your model -- regardless of how you train it, it can be fine-tuned, prompted, etc. by users for their own purposes. Even then, you can't really know why a model is doing something -- for instance, AI security researchers Arvind Narayanan and Sayash Kapoor point out:

> Consider the concern that LLMs can help hackers generate and send phishing emails to a large number of potential victims. It’s true — in our own small-scale tests, we’ve found that LLMs can generate persuasive phishing emails tailored to a particular individual based on publicly available information about them. But here’s the problem: phishing emails are just regular emails! There is nothing intrinsically malicious about them. A phishing email might tell the recipient that there is an urgent deadline for a project they are working on, and that they need to click on a link or open an attachment to complete some action. What is malicious is the content of the webpage or the attachment. But the model that’s being asked to generate the phishing email is not given access to the content that is potentially malicious. So the only way to make a model refuse to generate phishing emails is to make it refuse to generate emails.

Nearly a year ago I warned that bills of this kind could hurt, rather than help, safety, and could actually tear down the foundations of the Enlightenment:

https://www.fast.ai/posts/2023-11-07-dislightenment.html