Liquid: Language models are scalable and unified multi-modal generators
Centigonal 5 days ago
I love the website for this paper! Each section asks a question, and immediately answers it with a figure and a few sentences of discussion. It's less tech-demo heavy than a lot of other paper websites (those are cool, too, in their own way), and instead focuses on characterizing multimodal model behavior in a nice, clean, disciplined way.

gwern 5 days ago
> For the first time, Liquid uncovers a scaling law that performance drop unavoidably brought by the unified training of visual and language tasks diminishes as the model size increases... No prior work has explored whether LLMs retain the power-law scaling laws observed in language tasks when extended to visual generation tasks. We prove this alignment and further show that vision can be effectively learned by LLMs as a form of language.

Does this really show much that https://arxiv.org/abs/2301.03728#facebook (uncited) and other earlier work did not?
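
(To make the quoted claim concrete: "power-law scaling" here means fitting loss against parameter count N with the usual parametric form L(N) = a·N^(-b) + c, as in Kaplan-style scaling-law analyses. A minimal sketch of such a fit; the numbers below are illustrative placeholders, not measurements from the paper:)

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (parameter count, eval loss) pairs -- illustrative
# placeholders, not numbers from the Liquid paper.
n_params = np.array([0.5, 1.0, 7.0, 32.0])     # model size in billions
losses   = np.array([2.10, 1.95, 1.70, 1.58])  # loss on the visual task

def power_law(n, a, b, c):
    """L(N) = a * N^(-b) + c, the standard parametric form for scaling laws."""
    return a * n ** (-b) + c

(a, b, c), _ = curve_fit(power_law, n_params, losses, p0=(0.5, 0.5, 1.5))
print(f"fit: L(N) = {a:.2f} * N^(-{b:.2f}) + {c:.2f}")

# The paper's claim amounts to: fit this curve separately for text-only
# and unified multimodal training, and the gap between the two fitted
# curves shrinks as N grows.
```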


swyx 5 days ago
hmm this is a tough name - conflicts with Liquid AI https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

Nijikokun 5 days ago
It performs well at composition, but SD and SDXL seem to excel in capability and quality when intermixed with pipelines and workflows, and this doesn't do much to address that comparison. Whenever I see things like this I think about the overall workflow: cool, you do good composition, but if you don't fit within the workflow or ecosystem that surrounds those tools, I have low expectations around adoption.
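
(For context on the ecosystem point: SDXL drops into existing toolchains in a few lines via the diffusers library. A minimal sketch; the prompt and sampler settings are arbitrary illustrations:)

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Standard SDXL text-to-image pipeline from the diffusers library.
# This is the kind of drop-in workflow integration the comment refers to;
# a unified LLM-based generator has no equivalent slot in these toolchains yet.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a watercolor fox in a snowy forest",  # illustrative prompt
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("fox.png")
```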

marviel 4 days ago
The synesthesia these models must experience has gotta be intense.