LightlyTrain: Better Vision Models, Faster – No Labels Needed
joelio182 18 hours ago
Really cool to see more tooling making self-supervised learning usable on real-world datasets. Domain shift is a recurring pain, especially when labels are limited—so being able to pretrain directly on unlabeled data is a big deal. Also great to see it open-sourced under AGPL. Have you tried LightlyTrain on any more niche domains, like satellite or industrial inspection data? Would be interesting to see how it performs outside the usual benchmarks. Nice work!

liopeer 18 hours ago
Computer Vision pretraining for the masses!

leonax97 18 hours ago
Finally a production-ready framework for pretraining!

isusmelj 18 hours ago
Hi HN, I’m Igor, co-founder of Lightly AI (https://www.lightly.ai/).

We just released LightlyTrain, a new open-source Python package (AGPL-3.0, free for research and educational purposes) for self-supervised pretraining of computer vision models: https://github.com/lightly-ai/lightly-train

Standard vision models pretrained on generic datasets like ImageNet or COCO often underperform on specific domains (e.g., medical, agriculture, autonomous driving). Fine-tuning helps, but performance is limited, and getting enough labeled data is expensive and slow.

LightlyTrain uses self-supervised learning (SSL) to pretrain models directly on your own unlabeled images or videos. This adapts the model to your specific visual domain before fine-tuning, leading to significantly better performance with less labeled data.
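
To make that concrete, the pretraining step is a single call. Here's a minimal sketch following the quick-start in the README (the data path and backbone choice are placeholders; see the docs linked below for the full set of options):

    # Self-supervised pretraining on a folder of unlabeled images.
    import lightly_train

    lightly_train.train(
        out="out/my_experiment",       # output directory for checkpoints and logs
        data="my_unlabeled_images",    # folder of raw, unlabeled images
        model="torchvision/resnet50",  # backbone to pretrain
    )

No annotations are involved at this stage; your labeled data only enters later, during fine-tuning.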

Key Features:

- No Labels Needed: Pretrain using your existing unlabeled image data.

- Better Performance: Consistently outperforms training from scratch and ImageNet-pretrained weights, especially in low-data regimes and domain-specific tasks (benchmarks in README/blog). We see gains across detection, classification, and segmentation.

- Domain Adaptation: Tailor models to your specific industry (manufacturing, healthcare, retail, etc.).

- Supports Popular Models: Works out of the box with YOLO (v5–v12), RT-DETR, ResNet, ViTs, etc., integrating with frameworks like Ultralytics, TIMM, and Torchvision.

- Easy to Use & Scalable: Simple pip install, minimal code to start, scales to millions of images, and runs fully on-premise (single/multi-GPU); a short fine-tuning sketch follows below.

We built this because, while SSL research is mature, making it easily accessible and effective for industry computer vision teams was hard. LightlyTrain aims to bridge that gap.
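
To expand on the "minimal code to start" point: once pretraining finishes, fine-tuning is your usual supervised setup, just initialized from the exported weights instead of ImageNet ones. A sketch continuing the run above, assuming the export is a plain PyTorch state dict at the path shown (both the path and the format are assumptions; the docs have the exact details):

    import torch
    from torchvision import models

    # Load the backbone weights exported by the pretraining run above.
    # NOTE: the export path and state-dict format are assumed here.
    model = models.resnet50()
    state_dict = torch.load(
        "out/my_experiment/exported_models/exported_last.pt",
        map_location="cpu",
    )
    model.load_state_dict(state_dict)

    # From here, fine-tune on your (small) labeled dataset with a standard
    # supervised training loop, exactly as you would from ImageNet weights.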

We’ve benchmarked it on COCO, BDD100K (driving), DeepLesion (medical), and DeepWeeds (agriculture), showing strong improvements over baselines (details in the repo/blog post linked below). For example, on COCO with only 10% labels, LightlyTrain pretraining boosted YOLOv8-s mAP by +14% over ImageNet weights and +34% over no pretraining.
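
For anyone who wants to reproduce something close to that YOLOv8-s experiment, the shape of it is roughly the following; the model identifier, paths, and dataset YAML are illustrative, and the exact benchmark scripts live in the repo:

    import lightly_train
    from ultralytics import YOLO

    # 1) Pretrain a YOLOv8-s backbone on unlabeled COCO images (labels ignored).
    lightly_train.train(
        out="out/yolo_pretrain",
        data="coco/images/train2017",
        model="ultralytics/yolov8s.yaml",  # illustrative model identifier
    )

    # 2) Fine-tune with Ultralytics on the 10% labeled subset.
    model = YOLO("out/yolo_pretrain/exported_models/exported_last.pt")  # assumed export path
    model.train(data="coco_10pct.yaml", epochs=100)  # hypothetical dataset YAML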

- GitHub Repo: https://github.com/lightly-ai/lightly-train

- Docs: https://docs.lightly.ai/train

- Detailed Blog Post/Benchmarks: https://www.lightly.ai/blog/introducing-lightly-train

- Quick Demo Video: https://youtu.be/5Lmry1k_cA8

We’re here to answer any questions! Happy to discuss the tech, benchmarks, or use cases. Commercial licenses are also available for businesses needing different terms.