Call-to-Action on SB 1047 – Frontier Artificial Intelligence Models Act
Animats 16 days ago
I just sent in some comments.

It's too late to stop "deep fakes". That technology is already in Photoshop and even built into some cameras. Also, regulate that and Hollywood special effects shops may have to move out of state.

As for LLMs making it easier for people to build destructive devices, Google can provide info about that. Or just read some "prepper" books and magazines. That ship sailed long ago.

Real threats are mostly about how much decision power companies delegate to AIs. Systems terminating accounts with no appeal are already a serious problem. An EU-type requirement for appeals, a requirement for warning notices, and the right to take such disputes to court would help there. It's not the technology.


jph00 16 days ago
I've written a submission to the authors of this bill, and made it publicly available here:

https://www.answer.ai/posts/2024-04-29-sb1047.html

The EFF have also prepared a submission:

https://www.context.fund/policy/2024-03-26SB1047EFFSIA.pdf

A key issue with the bill is that it criminalises creating a model that someone else uses to cause harm. But of course, it's impossible to control what someone else does with your model -- regardless of how you train it, it can be fine-tuned, prompted, etc. by users for their own purposes (see the sketch after the quote below). Even then, you can't really know why a model is doing something -- for instance, AI security researchers Arvind Narayanan and Sayash Kapoor point out:

> Consider the concern that LLMs can help hackers generate and send phishing emails to a large number of potential victims. It’s true — in our own small-scale tests, we’ve found that LLMs can generate persuasive phishing emails tailored to a particular individual based on publicly available information about them. But here’s the problem: phishing emails are just regular emails! There is nothing intrinsically malicious about them. A phishing email might tell the recipient that there is an urgent deadline for a project they are working on, and that they need to click on a link or open an attachment to complete some action. What is malicious is the content of the webpage or the attachment. But the model that’s being asked to generate the phishing email is not given access to the content that is potentially malicious. So the only way to make a model refuse to generate phishing emails is to make it refuse to generate emails.
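
To make the fine-tuning point concrete, here is a minimal sketch of how little code a downstream user needs to retrain a released open-weights model with the Hugging Face transformers library. It's illustrative only: "my_corpus.txt" is a hypothetical file, and gpt2 stands in for any open model.

    # Minimal, illustrative sketch: "my_corpus.txt" is hypothetical and gpt2
    # stands in for any released open-weights model. The point is that the
    # original developer has no visibility into, or control over, this step.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Whatever text the downstream user chooses, for whatever purpose.
    data = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
    data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                         max_length=512),
                    remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()  # the resulting weights can diverge arbitrarily from the original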

Nearly a year ago I warned that bills of this kind could hurt rather than help safety, and could actually tear down the foundations of the Enlightenment:

https://www.fast.ai/posts/2023-11-07-dislightenment.html


pcthrowaway 16 days ago
This bill sounds unbelievably stupid. If passed, it will just result in a migration of AI projects out of California, save a few which are already tied to the EA movement.

I'm not under the impression that the EA movement is better suited to steward AI development than other groups, but even assuming it were, an initiative like this has no chance of working unless every country agrees to it and follows it.


interroboink 16 days ago
I feel like the legal definition of "AI Model" is pretty slippery.

From this document, they define:

    “Artificial intelligence model” means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.

That's pretty dang broad. Doesn't it cover basically all software? I'm not a lawyer, and I realize it's ultimately up to judges to interpret, but it seems almost limitless. Seems like it could cover a kitchen hand mixer too, as far as I can tell.
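
To make that concrete, here's a purely illustrative sketch (the thermostat is my own hypothetical, not an example from the bill) of a trivial program that, read literally, seems to satisfy every clause of that definition:

    # Purely illustrative: a trivial "machine-based system" that, read
    # literally, appears to satisfy every clause of the bill's definition.
    def thermostat(reading_celsius: float) -> str:
        # Explicit objective: hold the room near 20 C. It "infers, from
        # the input it receives, how to generate outputs"...
        if reading_celsius < 20.0:
            return "FURNACE_ON"   # ...that "influence physical ... environments"
        return "FURNACE_OFF"

    # ..."and that may operate with varying levels of autonomy":
    # run it in a loop with no human involved.
    for reading in [18.5, 19.9, 21.2]:
        print(reading, "->", thermostat(reading))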

_heimdall 16 days ago
Anyone have a link to a less biased explanation of the bill? I can't take this one too seriously when it baselessly claims people will be charged with thought crimes.