
danielhanchen | 3 days ago
Show HN: Finetune Llama 3.2 Vision in a Colab
Hey HN! Just added vision finetuning support in Unsloth! It supports VLMs like Llama 3.2 11B and 90B, Pixtral 12B, Qwen2 VL 2B, 7B, and 72B, and all Llava variants. Unsloth finetunes them up to 2x faster with up to 70% less VRAM and no accuracy degradation!
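
For context, a minimal sketch of what vision finetuning with Unsloth looks like. The model name and keyword arguments follow Unsloth's public notebooks from around this time and may differ in current releases, so treat this as illustrative rather than authoritative:

    # Sketch of Unsloth vision finetuning; kwargs follow the public
    # notebooks at the time and may have changed - check current docs.
    from unsloth import FastVisionModel

    model, tokenizer = FastVisionModel.from_pretrained(
        "unsloth/Llama-3.2-11B-Vision-Instruct",  # one of the supported VLMs
        load_in_4bit=True,  # 4-bit quantization to cut VRAM
    )

    # Attach LoRA adapters to both the vision and language towers
    model = FastVisionModel.get_peft_model(
        model,
        finetune_vision_layers=True,
        finetune_language_layers=True,
        r=16,
        lora_alpha=32,  # alpha = 2*rank, per the tip discussed below
    )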

danielhanchen | 16 days ago
:) Oh yes, from the paper it looks like if one uses alpha = 2*rank, LoRA sometimes does even better than full finetuning
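
As a concrete illustration (not from the thread itself), here is what the alpha = 2*rank rule looks like in a Hugging Face PEFT config; the target_modules list is an assumed choice for a typical Llama-style model:

    from peft import LoraConfig

    rank = 16
    config = LoraConfig(
        r=rank,
        lora_alpha=2 * rank,  # the alpha = 2*rank rule of thumb
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
        lora_dropout=0.0,
    )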

danielhanchen | 16 days ago
Sorry about that!

danielhanchen | 16 days ago
Ok, fair points - I also know many packages still depend on Python 3.10, so some users don't upgrade yet

danielhanchen | 16 days ago
TLDR: 1. Use alpha = 2*rank (see the sketch after this list)

2. Don't use too small ranks (rank=1 to 8)

3. Sensational title. A better title: "LoRA works if done right"

4. Didn't test SVD init
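
To make points 1 and 2 concrete, here is a small NumPy sketch of the LoRA update W' = W + (alpha/rank) * B @ A. With alpha = 2*rank the scaling factor stays at 2 no matter which rank you pick, while the rank itself caps how expressive the update can be:

    import numpy as np

    d_out, d_in = 64, 64
    rank, alpha = 16, 32              # rule 1: alpha = 2*rank
    A = np.random.randn(rank, d_in) * 0.01
    B = np.zeros((d_out, rank))       # standard LoRA init: B starts at zero

    scaling = alpha / rank            # = 2.0 for any rank under this rule
    delta_W = scaling * (B @ A)       # update has rank <= `rank`, so very
                                      # small ranks (rule 2) limit capacity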