Free offerings are loss-leaders and will likely be enshittified within the next few years, but businesses relying on the models will instead be using a cloud GPU host or an API, whose pricing (judging by the models whose details we know) appears to cover inference plus a profit margin.
It's plausible that releases of better models will slow down due to training/R&D expenses if investment decreases, but if you're paying for GPT-4 today, I think there will always be at least some equivalent model available from this point on.
> [...] diminishing returns [...]. The entire generative AI movement lives and dies by the idea that more compute power and more training data makes these things better, and if that's no longer the case [...] what's the point?
Diminishing returns means going from 100 GPUs to 101 GPUs does not give the same improvement as going from 1 GPU to 2 GPUs - not that it gives no improvement. The scaling laws predict this, and I believe it's likely true of any approach to intelligence rather than a limitation specific to deep learning. Computer graphics also has diminishing returns from compute, for instance.
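To put rough numbers on it, here's a toy power-law curve in Python (the constants are invented for illustration, not fitted from any published scaling law):

    # Toy power-law scaling curve: loss(C) = L_inf + a * C**(-alpha).
    # The constants are made up for illustration, not fitted values.
    L_INF, A, ALPHA = 1.7, 2.0, 0.35

    def loss(gpus):
        return L_INF + A * gpus ** -ALPHA

    for before, after in [(1, 2), (100, 101)]:
        drop = loss(before) - loss(after)
        print(f"{before} -> {after} GPUs: loss drops by {drop:.4f}")

    # Output:
    #   1 -> 2 GPUs: loss drops by 0.4308
    #   100 -> 101 GPUs: loss drops by 0.0014
    # Both drops are positive: returns diminish, they don't vanish.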
> The constant refrain I hear from VCs and AI fantasists is that "chips will bring down the cost of inference," yet I don't see any proof of that happening
Surely it already has? Try running some recent LLM on 2010-2012 hardware from around when modern AI was taking off.
> It cost $100 million to train GPT-4o [...]
Beyoncé's home cost $200 million, and advertising for the Monopoly Go mobile game cost $500 million. It's undoubtedly a lot of money, but it's also relatively high-impact, and I wouldn't necessarily bet against it eventually going a couple of orders of magnitude higher.
I really don't understand why people think hallucination is some insurmountable wall. People hallucinate in much the same way LLMs do: talk to a baby that has just acquired language and it spews all sorts of nonsense, but also some sensible things.

So far we've made a baby, granted one with a huge vocabulary and shallow knowledge of many things. But now we need to raise it, the same way we raise humans: with better training, by exposing it to new knowledge while reducing the input of garbage, by making it hear words that carry more and more complex concepts, and by telling it when the outputs it generates are erroneous.

It's a monumental task, but we are not raising an only child. We need to figure out how to make the children cooperate and learn from each other.
Synthetic data will be a huge part of this process. Gold and shovels ... AI might be the search for gold, but generating a variety of synthetic data from deep reasoning will be the shovel factory.
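As a toy sketch of what that shovel factory could look like (everything here is simulated so it runs standalone: the stand-in "model", the verifier, and the filename are all made up, not from any real pipeline):

    # Toy "shovel factory": generate candidate reasoning traces,
    # verify them against ground truth, and keep only the good ones.
    # In a real pipeline the trace would come from an actual LLM call.
    import json
    import random

    random.seed(0)

    def make_example():
        a, b = random.randint(2, 99), random.randint(2, 99)
        truth = a * b
        # Stand-in "model": mostly correct, occasionally hallucinates.
        answer = truth + random.choice([0, 0, 0, 1])
        trace = f"Compute {a} * {b} step by step ... = {answer}"
        return {"task": f"What is {a} * {b}?", "trace": trace}, answer == truth

    kept = []
    while len(kept) < 100:
        example, verified = make_example()
        if verified:  # the filter is what keeps garbage out of training data
            kept.append(example)

    with open("synthetic.jsonl", "w") as f:  # made-up output path
        for ex in kept:
            f.write(json.dumps(ex) + "\n")

The point of the design is the verify-before-keep loop: generation is cheap, so you can afford to throw away everything that fails the check.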
It's weird that people suddenly expect a four-year-old to accurately reproduce an encyclopedia just because it can talk now.