
ColBERT-Zero: To Pre-train Or Not To Pre-train ColBERT models

RESEARCH PAPER · Published on February 18, 2026

Research by Antoine Chaffin, Luca Arnaboldi, Amélie Chatelain and 1 other

Source: arXiv · 5 min read · advanced

Summary

Current state-of-the-art multi-vector models are obtained through a small Knowledge Distillation (KD) training step on top of strong single-vector models, leveraging the large-scale pre-training of those models. In this paper, we study the pre-training of multi-vector models and show that large-scale multi-vector pre-training yields much stronger multi-vector models. Notably, a fully ColBERT-pre-trained model, ColBERT-Zero, trained only on public data, outperforms GTE-ModernColBERT as well as its base model, GTE-ModernBERT, which leverages closed and much stronger data, setting a new state of the art for models of this size. We also find that, although performing only a small KD step is not enough to approach the results of full pre-training, adding a supervised step beforehand brings performance much closer while skipping the most costly unsupervised phase. Finally, we find that aligning the fine-tuning and pre-training setups is crucial when repurposing existing models. To enable exploration of our results, we release various checkpoints as well as the code used to train them.
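For readers unfamiliar with multi-vector retrieval, the sketch below illustrates the ColBERT-style late-interaction (MaxSim) scoring that models like ColBERT-Zero rely on: every query token embedding is matched against its most similar document token embedding, and the maxima are summed. This is a minimal PyTorch illustration, not the paper's code; the tensor shapes and the 128-dimensional embedding size are assumptions for the example.

```python
import torch


def maxsim_score(query_embeddings: torch.Tensor, doc_embeddings: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late-interaction (MaxSim) relevance score.

    query_embeddings: (num_query_tokens, dim), L2-normalized
    doc_embeddings:   (num_doc_tokens, dim),   L2-normalized
    Returns a scalar: for each query token, take the maximum cosine
    similarity over all document tokens, then sum over query tokens.
    """
    # (num_query_tokens, num_doc_tokens) token-level similarity matrix
    sim = query_embeddings @ doc_embeddings.T
    # best-matching document token per query token, summed
    return sim.max(dim=1).values.sum()


# Toy example with random, normalized token embeddings (shapes are illustrative)
torch.manual_seed(0)
q = torch.nn.functional.normalize(torch.randn(8, 128), dim=-1)    # 8 query tokens
d = torch.nn.functional.normalize(torch.randn(150, 128), dim=-1)  # 150 document tokens
print(maxsim_score(q, d))
```

Because the score is a sum over per-token maxima rather than a single dot product, multi-vector models preserve token-level interactions, which is what the pre-training studied in the paper aims to strengthen.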

#cs-cl