AI 2027
Vegenoid 5 days
I think we've actually had capable AIs for long enough now to see that this kind of exponential advance to AGI in 2 years is extremely unlikely. The AI we have today isn't radically different from the AI we had in 2023. They are much better at the things they are good at, and some of the new capabilities are significant, but they are still fundamentally next-token predictors. They still fail at larger-scope, longer-term tasks in mostly the same ways, and they are still much worse than humans at learning from small amounts of data. Despite their ability to write decent code, we haven't seen signs of the runaway singularity that some thought was likely.
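To make the "next-token predictor" point concrete, here is a rough sketch of the generation loop (the `model` function is a hypothetical stand-in for any LLM, not a real API):

    # Minimal sketch of autoregressive decoding: every output token is just
    # the model's prediction of the next token given everything so far.
    def generate(model, prompt_tokens, max_new_tokens=50):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            probs = model(tokens)  # P(next token | all previous tokens)
            next_token = max(range(len(probs)), key=probs.__getitem__)  # greedy pick
            tokens.append(next_token)
        return tokens

However much better the models get at running this loop, the loop itself hasn't changed since 2023.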

I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is enormous propensity for AI speculation to run rampant.


stego-tech 5 days
It’s good science fiction, I’ll give it that. I think getting lost in the weeds over technicalities ignores the crux of the narrative: even if this doesn’t lead to AGI, at the very least it’s likely the final “warning shot” we’ll get before it’s suddenly and irreversibly here.

The problems it raises - alignment, geopolitics, lack of societal safeguards - are all real, and happening now (just replace “AGI” with “corporations”, and voila, you have a story about the climate crisis and regulatory capture). We should be solving these problems before AGI or job-replacing AI becomes commonplace, lest we run the very real risk of societal collapse or species extinction.

The point of these stories is to incite alarm, because they’re trying to provoke proactive responses while time is on our side, instead of trusting self-interested individuals in times of great crisis.


visarga 5 days
The story is entertaining, but it rests on a big fallacy - progress is not a function of compute or model size alone. That kind of mistake is almost magical thinking. What matters most is the training set.

During the GPT-3 era there was plenty of organic text to scale into, and compute seemed to be the bottleneck. But we quickly exhausted it, and now we are trying other ideas - synthetic reasoning chains, or just plain synthetic text, for example. But you can't do that fully in silico.

Creating new and valuable text requires exploration and validation. LLMs can ideate very well, so we are covered on that side. But validation can only be automated in math and code, not in other fields.
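To illustrate why code is one of the few domains where validation can be automated: a generated program can be executed against tests, which yields an objective pass/fail signal with no human in the loop. A toy sketch (the `solve` entry point and test format are made up for illustration):

    # Toy validator: run model-generated code and check it against known cases.
    def validate_candidate(source_code, test_cases):
        namespace = {}
        try:
            exec(source_code, namespace)   # execute the generated definition
            f = namespace["solve"]         # assumed entry point, illustrative
            return all(f(x) == expected for x, expected in test_cases)
        except Exception:
            return False                   # any crash counts as failed validation

    candidate = "def solve(x):\n    return x * 2"
    print(validate_candidate(candidate, [(1, 2), (3, 6)]))  # True

A claim about the physical world has no such oracle - checking it takes a real experiment.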

Real-world validation thus becomes the bottleneck for progress. The world jealously guards its secrets, and we need to spend exponentially more effort to pry them away, because the low-hanging fruit was picked long ago.

If I am right, this has implications for the speed of progress: the exponential friction of validation opposes the exponential scaling of compute. The story also says an AI could be created in secret, which runs against the validation principle - we validate faster together, and nobody can secretly out-validate humanity. It's like blockchain: we depend on everyone else.
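One way to read that trade-off is as a toy model where effective progress is the ratio of compute growth to validation friction; the growth rates below are invented purely for illustration:

    # Toy model with made-up constants: compute scales as c**t, the effort to
    # validate each new result scales as v**t; progress accelerates only while c > v.
    def effective_progress(t, c=2.0, v=1.8):
        return (c ** t) / (v ** t)

    for year in range(6):
        print(year, round(effective_progress(year), 2))
    # Grows far slower than compute alone; if v >= c, it flattens entirely.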


ivraatiems 6 days
Though I think it is probably mostly science fiction, this is one of the more chillingly thorough descriptions of potential AGI takeoff scenarios that I've seen. I think part of the problem is that the world you get if you go with the "Slowdown"/somewhat-more-aligned ending is still pretty rough for humans: what's the point of our existence if we have no way to meaningfully contribute to our own world?

I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be. I hope we end up in a world where humans' value increases, instead of decreasing. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.

Do I think this will actually happen in two years, let alone five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is LLM technology, mostly. But five years ago, I thought the idea of LLMs themselves working as well as they do at speaking conversational English was essentially fiction - so really, anything is possible, or at least worth considering.

"May you live in interesting times" is a curse for a reason.


KaiserPro 6 days
> AI has started to take jobs, but has also created new ones.

Yeah nah, there's a key thing missing here: the number of jobs created needs to be greater than the number destroyed, they need to pay better, and they need to arrive in time.

History says that when this actually happens, an entire generation is yeeted onto the streets (see powered looms, the Jacquard machine, steam-powered machine tools). All of that cheap labour needed to power the new towns and cities was created by the automation of agriculture and artisan jobs.

Dark satanic mills were fed the descendants of once reasonably prosperous craftspeople.

AI as presented here will kneecap the wages of a good proportion of the decent-paying jobs we have now. This will cause huge economic disparities, and probably revolution. There is a reason why the royalty of Europe all disappeared when they did...

So no, the stock market will not be growing because of AI, it will be in spite of it.

Plus China knows that unless it can occupy most of its population with some sort of work, it is finished. AI and decent robot automation are an existential threat to the CCP, as much as to whatever remains of the "West".