- Our best model (potion-base-8M) has only 8M parameters, which is ~30 MB on disk
- Inference is ~500x faster on CPU than the original model it was distilled from (bge-base)
- New models can be distilled in 30 seconds on a CPU without requiring a dataset - just a vocabulary (see the sketch after this list)
- Numpy-only inference: the package can be installed with minimal dependencies for lightweight deployments
- The library is integrated into SentenceTransformers, making it easy to use with other popular libraries (usage sketch below)
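
To give a quick feel for the API, here is a rough sketch of distillation and inference based on the repo's README; exact parameter names (e.g. `pca_dims`) may differ, so check the repo for the current signatures:

```python
from model2vec import StaticModel
from model2vec.distill import distill

# Distill a static model from a Sentence Transformer.
# No dataset is needed: only the tokenizer's vocabulary is embedded.
m2v_model = distill(model_name="BAAI/bge-base-en-v1.5", pca_dims=256)
m2v_model.save_pretrained("m2v-bge-base")

# Or load one of the pre-distilled models from the Hub and encode text.
model = StaticModel.from_pretrained("minishlab/potion-base-8M")
embeddings = model.encode(["It's dangerous to go alone!", "Take this."])
print(embeddings.shape)
```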
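
And a sketch of the SentenceTransformers side, assuming a recent sentence-transformers release that ships the `StaticEmbedding` module (the loader name is an assumption here, not something from this post):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.models import StaticEmbedding

# Wrap a Model2Vec model as a SentenceTransformer module.
static = StaticEmbedding.from_model2vec("minishlab/potion-base-8M")
model = SentenceTransformer(modules=[static])

# Use it like any other SentenceTransformer, e.g. for semantic search.
query_emb = model.encode(["how do static embeddings work?"])
doc_embs = model.encode([
    "Static embeddings are precomputed per token and simply averaged.",
    "Transformer encoders compute token embeddings contextually.",
])
print(model.similarity(query_emb, doc_embs))
```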
We built this because we think static embeddings can provide a hardware-friendly alternative to many of the larger embedding models out there, while still being performant enough to power use cases such as RAG or semantic search. We are curious to hear your feedback and whether there are any use cases you can think of that we have not explored yet!
Link to the code and results: https://github.com/MinishLab/model2vec