We just released LightlyTrain, a new open-source Python package (AGPL-3.0, free for research and educational purposes) for self-supervised pretraining of computer vision models: https://github.com/lightly-ai/lightly-train
Standard vision models pretrained on generic datasets like ImageNet or COCO often underperform on specific domains (e.g., medical, agriculture, autonomous driving). Fine-tuning helps, but performance is limited, and getting enough labeled data is expensive and slow.
LightlyTrain uses self-supervised learning (SSL) to pretrain models directly on your own unlabeled images or videos. This adapts the model to your specific visual domain before fine-tuning, leading to significantly better performance with less labeled data.
Key Features:
- No Labels Needed: Pretrain using your existing unlabeled image data.
- Better Performance: Consistently outperforms training from scratch and ImageNet-pretrained weights, especially in low-data regimes and domain-specific tasks (benchmarks in README/blog). We see gains across detection, classification, and segmentation.
- Domain Adaptation: Tailor models to your specific industry (manufacturing, healthcare, retail, etc.).
- Supports Popular Models: Works out-of-the-box with YOLO (v5-v12), RT-DETR, ResNet, ViTs, etc., integrating with frameworks like Ultralytics, TIMM, Torchvision.
- Easy to Use & Scalable: Simple pip install, minimal code to get started, scales to millions of images, and runs fully on-premise (single/multi-GPU).

We built this because, while SSL research is mature, making it easily accessible and effective for industry computer vision teams remained hard. LightlyTrain aims to bridge that gap.
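To give a sense of the "minimal code" claim, here is a sketch of what a pretraining run might look like. The exact parameter names (`out`, `data`, `model`) and the model identifier string are assumptions based on the README; check the docs at https://docs.lightly.ai/train for the authoritative API.

```python
# Hypothetical minimal pretraining sketch; argument names and the
# model identifier are assumed, not guaranteed to match the released API.
import lightly_train

if __name__ == "__main__":
    lightly_train.train(
        out="out/my_pretrain_run",         # output directory for checkpoints and logs
        data="path/to/unlabeled_images/",  # folder of unlabeled images (no labels needed)
        model="ultralytics/yolov8s",       # model to pretrain before fine-tuning
    )
```

The exported checkpoint would then serve as the starting weights for your usual fine-tuning workflow (e.g., in Ultralytics), replacing the generic ImageNet/COCO initialization.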
We’ve benchmarked it on COCO, BDD100K (driving), DeepLesion (medical), and DeepWeeds (agriculture), showing strong improvements over baselines (details in the repo/blog post linked below). For example, on COCO with only 10% labels, LightlyTrain pretraining boosted YOLOv8-s mAP by +14% over ImageNet weights and +34% over no pretraining.
- GitHub Repo: https://github.com/lightly-ai/lightly-train
- Docs: https://docs.lightly.ai/train
- Detailed Blog Post/Benchmarks: https://www.lightly.ai/blog/introducing-lightly-train
- Quick Demo Video: https://youtu.be/5Lmry1k_cA8
We’re here to answer any questions! Happy to discuss the tech, benchmarks, or use cases. Commercial licenses are also available for businesses needing different terms.