
AniSora: Open-source anime video generation model
kachapopopow 8 hours ago
Some of these are very obviously trained on webtoons and manga, probably Pixiv as well. This is clear from the CG buildings and other misc artifacts, so the model was obviously trained on copyrighted material.

Art is something that cannot be generated like synthetic text, so it will have to be powered by human artists nearly forever, or else you will continue to end up with artifacting. It makes me wonder if artists will just be downgraded to an "AI training" position. That could be for the best, though: people could draw what they like and have that input feed into a model for training, which doesn't sound too bad.

While I'm very pro-AI in terms of trademarks and copyright, it still makes me wonder what will happen to all the people who provided us with entertainment, and whether the quality will continue to increase or whether we're going to start losing challenging styles because "it's too hard for AI" and everything will start "feeling" the same.

It doesn't feel the same as people being replaced with computers and machines; this feels like the end of a road.


internet2000 13 hours ago
We’re so close to finally being able to generate our own Haruhi season 3… what a time to be alive.

isaacimagine 14 hours ago
I tested this out with a promotional illustration from Neon Genesis Evangelion. The model works quite well, but there are some temporal artifacts w.r.t. the animation of the hair as the head turns:

https://goto.isaac.sh/neon-anisora

Prompt: The giant head turns to face the two people sitting.
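
For reference, a test like this is just image + prompt in, short clip out. Here's a rough sketch of the shape of it, using the generic diffusers I2VGen-XL pipeline as a stand-in, since I haven't dug into whatever interface AniSora actually ships (the model ID, step count, guidance scale, and fps below are all assumptions, not AniSora's API):

    import torch
    from diffusers import I2VGenXLPipeline
    from diffusers.utils import export_to_video, load_image

    # Generic image-to-video sketch; a stand-in, not AniSora's actual API.
    pipe = I2VGenXLPipeline.from_pretrained(
        "ali-vilab/i2vgen-xl", torch_dtype=torch.float16
    ).to("cuda")

    # Hypothetical local copy of the promotional illustration.
    image = load_image("eva_promo.png")

    frames = pipe(
        prompt="The giant head turns to face the two people sitting.",
        image=image,
        num_inference_steps=50,  # assumed, not tuned
        guidance_scale=9.0,      # assumed
    ).frames[0]

    export_to_video(frames, "anisora_test.mp4", fps=16)  # fps is a guess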

Oh, there is a docs page with more examples:

https://pwz4yo5eenw.feishu.cn/docx/XN9YdiOwCoqJuexLdCpcakSln...


vunderba 10 hours ago
From the paper:

> a variable-length training approach is adopted, with training durations ranging from 2 to 8 seconds. This strategy enables our model to generate 720p video clips with flexible lengths between 2 and 8 seconds.

I'd like to see it benched against FramePack, which in my experience also handles 2D animation pretty well and doesn't suffer from the usual duration limitations of other models.

https://lllyasviel.github.io/frame_pack_gitpage
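
If I'm reading the paper right, the variable-length strategy amounts to sampling a duration bucket per training example and cutting a clip to match. A minimal sketch of that idea (my own reconstruction, not the paper's code; the 16 fps decode rate and integer-second buckets are assumptions):

    import random

    FPS = 16  # assumed decode rate
    DURATION_BUCKETS = [2, 3, 4, 5, 6, 7, 8]  # seconds, per the 2-8 s range

    def sample_clip(video_frames):
        # video_frames: sequence of decoded frames at FPS frames per second.
        max_secs = len(video_frames) // FPS
        valid = [d for d in DURATION_BUCKETS if d <= max_secs]
        if not valid:
            return None  # source video is shorter than the smallest bucket
        duration = random.choice(valid)
        n_frames = duration * FPS
        start = random.randint(0, len(video_frames) - n_frames)
        return video_frames[start:start + n_frames]

Batching clips from the same bucket together keeps tensor shapes uniform without padding, which would also explain why the released model can generate anywhere in the same 2 to 8 second range.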


smusamashah 12 hours ago
There are so many glitches even in the very first example: the sleeve of the shirt glitches, and moving hair disappears and reappears out of nowhere. The rest is just a moving arm and clouds.