Anserini

Anserini is a toolkit for reproducible information retrieval research. By building on Lucene, we aim to bridge the gap between academic information retrieval research and the practice of building real-world search applications. Among other goals, our effort aims to be the opposite of this.* Anserini grew out of a reproducibility study of various open-source retrieval engines in 2016 (Lin et al., ECIR 2016). See Yang et al. (SIGIR 2017) and Yang et al. (JDIQ 2018) for overviews.

❗ Anserini was upgraded from JDK 11 to JDK 21 at commit 272565 (2024/04/03), which corresponds to the release of v0.35.0.

πŸ’₯ Try It!

Anserini is packaged in a self-contained fatjar, which also provides the simplest way to get started. Assuming you've already got Java installed, fetch the fatjar:

wget https://repo1.maven.org/maven2/io/anserini/anserini/0.35.0/anserini-0.35.0-fatjar.jar
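
Anserini requires Java 21 (see the note above); to sanity-check your runtime first, assuming java is on your PATH:

java -version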

The following command will generate a SPLADE++ ED run with the dev queries (encoded using ONNX) on the MS MARCO passage corpus:

java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchCollection \
  -index msmarco-v1-passage-splade-pp-ed \
  -topics msmarco-v1-passage-dev \
  -encoder SpladePlusPlusEnsembleDistil \
  -output run.msmarco-v1-passage-dev.splade-pp-ed-onnx.txt \
  -impact -pretokenized

To evaluate:

wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.splade-pp-ed-onnx.txt
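
If everything has worked, trec_eval should report a reciprocal rank at cutoff 10 of roughly 0.3828, matching the SPLADE++ ED (ONNX) dev figure in the table below; the output is a single line along these lines (a sketch):

recip_rank              all     0.3828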

See below for instructions on using Anserini to reproduce runs from MS MARCO passage and BEIR, all directly from the fatjar!

Regressions directly from the fatjar: MS MARCO passage

Currently, Anserini provides support for the following models:

  • BM25
  • SPLADE++ EnsembleDistil: pre-encoded queries and ONNX query encoding
  • cosDPR-distil: pre-encoded queries and ONNX query encoding
  • BGE-base-en-v1.5: pre-encoded queries and ONNX query encoding

The following snippet will generate the complete set of results for MS MARCO passage:

# BM25
TOPICS=(msmarco-v1-passage-dev dl19-passage dl20-passage); for t in "${TOPICS[@]}"
do
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchCollection -index msmarco-v1-passage -topics ${t} -output run.${t}.bm25.txt -threads 16 -bm25
done

# SPLADE++ ED
TOPICS=(msmarco-v1-passage-dev dl19-passage dl20-passage); for t in "${TOPICS[@]}"
do
    # Using pre-encoded queries
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchCollection -index msmarco-v1-passage-splade-pp-ed -topics ${t}-splade-pp-ed -output run.${t}.splade-pp-ed-pre.txt -threads 16 -impact -pretokenized
    # Using ONNX
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchCollection -index msmarco-v1-passage-splade-pp-ed -topics ${t} -encoder SpladePlusPlusEnsembleDistil -output run.${t}.splade-pp-ed-onnx.txt -threads 16 -impact -pretokenized
done

# cosDPR-distil
TOPICS=(msmarco-v1-passage-dev dl19-passage dl20-passage); for t in "${TOPICS[@]}"
do
    # Using pre-encoded queries, full index
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-cos-dpr-distil -topics ${t}-cos-dpr-distil -output run.${t}.cos-dpr-distil-full-pre.txt -threads 16 -efSearch 1000
    # Using pre-encoded queries, quantized index
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-cos-dpr-distil-quantized -topics ${t}-cos-dpr-distil -output run.${t}.cos-dpr-distil-quantized-pre.txt -threads 16 -efSearch 1000
    # Using ONNX, full index
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-cos-dpr-distil -topics ${t} -encoder CosDprDistil -output run.${t}.cos-dpr-distil-full-onnx.txt -threads 16 -efSearch 1000
    # Using ONNX, quantized index
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-cos-dpr-distil-quantized -topics ${t} -encoder CosDprDistil -output run.${t}.cos-dpr-distil-quantized-onnx.txt -threads 16 -efSearch 1000
done

# BGE-base-en-v1.5
TOPICS=(msmarco-v1-passage-dev dl19-passage dl20-passage); for t in "${TOPICS[@]}"
do
    # Using pre-encoded queries, full index
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-bge-base-en-v1.5 -topics ${t}-bge-base-en-v1.5 -output run.${t}.bge-base-en-v1.5-full-pre.txt -threads 16 -efSearch 1000
    # Using pre-encoded queries, quantized index
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-bge-base-en-v1.5-quantized -topics ${t}-bge-base-en-v1.5 -output run.${t}.bge-base-en-v1.5-quantized-pre.txt -threads 16 -efSearch 1000
    # Using ONNX, full index
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-bge-base-en-v1.5 -topics ${t} -encoder BgeBaseEn15 -output run.${t}.bge-base-en-v1.5-full-onnx.txt -threads 16 -efSearch 1000
    # Using ONNX, quantized index
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-bge-base-en-v1.5-quantized -topics ${t} -encoder BgeBaseEn15 -output run.${t}.bge-base-en-v1.5-quantized-onnx.txt -threads 16 -efSearch 1000
done

Here are the expected scores (dev using MRR@10, DL19 and DL20 using nDCG@10):

                                                 dev     DL19    DL20
BM25                                             0.1840  0.5058  0.4796
SPLADE++ ED (pre-encoded)                        0.3830  0.7317  0.7198
SPLADE++ ED (ONNX)                               0.3828  0.7308  0.7197
cosDPR-distil: full HNSW (pre-encoded)           0.3887  0.7250  0.7025
cosDPR-distil: quantized HNSW (pre-encoded)      0.3897  0.7240  0.7004
cosDPR-distil: full HNSW (ONNX)                  0.3887  0.7250  0.7025
cosDPR-distil: quantized HNSW (ONNX)             0.3899  0.7247  0.6996
BGE-base-en-v1.5: full HNSW (pre-encoded)        0.3574  0.7065  0.6780
BGE-base-en-v1.5: quantized HNSW (pre-encoded)   0.3572  0.7016  0.6738
BGE-base-en-v1.5: full HNSW (ONNX)               0.3575  0.7016  0.6768
BGE-base-en-v1.5: quantized HNSW (ONNX)          0.3575  0.7017  0.6767

And here's the snippet of code to perform the evaluation (which will yield the results above):

wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.dl19-passage.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.dl20-passage.txt

java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.bm25.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt                    run.dl19-passage.bm25.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt                    run.dl20-passage.bm25.txt

java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.splade-pp-ed-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt                    run.dl19-passage.splade-pp-ed-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt                    run.dl20-passage.splade-pp-ed-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.splade-pp-ed-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt                    run.dl19-passage.splade-pp-ed-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt                    run.dl20-passage.splade-pp-ed-onnx.txt

java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.cos-dpr-distil-full-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt                    run.dl19-passage.cos-dpr-distil-full-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt                    run.dl20-passage.cos-dpr-distil-full-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.cos-dpr-distil-quantized-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt                    run.dl19-passage.cos-dpr-distil-quantized-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt                    run.dl20-passage.cos-dpr-distil-quantized-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.cos-dpr-distil-full-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt                    run.dl19-passage.cos-dpr-distil-full-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt                    run.dl20-passage.cos-dpr-distil-full-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.cos-dpr-distil-quantized-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt                    run.dl19-passage.cos-dpr-distil-quantized-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt                    run.dl20-passage.cos-dpr-distil-quantized-onnx.txt

java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.bge-base-en-v1.5-full-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt                    run.dl19-passage.bge-base-en-v1.5-full-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt                    run.dl20-passage.bge-base-en-v1.5-full-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.bge-base-en-v1.5-quantized-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt                    run.dl19-passage.bge-base-en-v1.5-quantized-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt                    run.dl20-passage.bge-base-en-v1.5-quantized-pre.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.bge-base-en-v1.5-full-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt                    run.dl19-passage.bge-base-en-v1.5-full-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt                    run.dl20-passage.bge-base-en-v1.5-full-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.bge-base-en-v1.5-quantized-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt                    run.dl19-passage.bge-base-en-v1.5-quantized-onnx.txt
java -cp anserini-0.35.0-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt                    run.dl20-passage.bge-base-en-v1.5-quantized-onnx.txt

Regressions directly from the fatjar: BEIR

Currently, Anserini provides support for the following models:

  • Flat = BM25, "flat" bag-of-words baseline
  • MF = BM25, "multifield" bag-of-words baseline
  • S = SPLADE++ EnsembleDistil:
    • Pre-encoded queries (Sp)
    • ONNX query encoding (So)
  • D = BGE-base-en-v1.5:
    • Pre-encoded queries (Dp)
    • ONNX query encoding (Do)

The following snippet will generate the complete set of results for BEIR:

CORPORA=(trec-covid bioasq nfcorpus nq hotpotqa fiqa signal1m trec-news robust04 arguana webis-touche2020 cqadupstack-android cqadupstack-english cqadupstack-gaming cqadupstack-gis cqadupstack-mathematica cqadupstack-physics cqadupstack-programmers cqadupstack-stats cqadupstack-tex cqadupstack-unix cqadupstack-webmasters cqadupstack-wordpress quora dbpedia-entity scidocs fever climate-fever scifact); for c in "${CORPORA[@]}"
do
    # "flat" indexes
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchCollection -index beir-v1.0.0-${c}.flat -topics beir-${c} -output run.beir.${c}.flat.txt -bm25 -removeQuery
    # "multifield" indexes
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchCollection -index beir-v1.0.0-${c}.multifield -topics beir-${c} -output run.beir.${c}.multifield.txt -bm25 -removeQuery -fields contents=1.0 title=1.0
    # SPLADE++ ED, pre-encoded queries
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchCollection -index beir-v1.0.0-${c}.splade-pp-ed -topics beir-${c}.splade-pp-ed -output run.beir.${c}.splade-pp-ed-pre.txt -impact -pretokenized -removeQuery
    # SPLADE++ ED, ONNX
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchCollection -index beir-v1.0.0-${c}.splade-pp-ed -topics beir-${c} -encoder SpladePlusPlusEnsembleDistil -output run.beir.${c}.splade-pp-ed-onnx.txt -impact -pretokenized -removeQuery
    # BGE-base-en-v1.5, pre-encoded queries
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index beir-v1.0.0-${c}.bge-base-en-v1.5 -topics beir-${c}.bge-base-en-v1.5 -output run.beir.${c}.bge-pre.txt -threads 16 -efSearch 1000 -removeQuery
    # BGE-base-en-v1.5, ONNX
    java -cp anserini-0.35.0-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index beir-v1.0.0-${c}.bge-base-en-v1.5 -topics beir-${c} -encoder BgeBaseEn15 -output run.beir.${c}.bge-onnx.txt -threads 16 -efSearch 1000 -removeQuery
done

Here are the expected nDCG@10 scores:

Corpus                     Flat    MF      Sp      So      Dp      Do
trec-covid                 0.5947  0.6559  0.7274  0.7270  0.7834  0.7835
bioasq                     0.5225  0.4646  0.4980  0.4980  0.4042  0.4042
nfcorpus                   0.3218  0.3254  0.3470  0.3473  0.3735  0.3738
nq                         0.3055  0.3285  0.5378  0.5372  0.5413  0.5415
hotpotqa                   0.6330  0.6027  0.6868  0.6868  0.7242  0.7241
fiqa                       0.2361  0.2361  0.3475  0.3473  0.4065  0.4065
signal1m                   0.3304  0.3304  0.3008  0.3006  0.2869  0.2869
trec-news                  0.3952  0.3977  0.4152  0.4169  0.4411  0.4410
robust04                   0.4070  0.4070  0.4679  0.4651  0.4467  0.4437
arguana                    0.3970  0.4142  0.5203  0.5218  0.6361  0.6228
webis-touche2020           0.4422  0.3673  0.2468  0.2464  0.2570  0.2571
cqadupstack-android        0.3801  0.3709  0.3904  0.3898  0.5075  0.5076
cqadupstack-english        0.3453  0.3321  0.4079  0.4078  0.4855  0.4855
cqadupstack-gaming         0.4822  0.4418  0.4957  0.4959  0.5965  0.5967
cqadupstack-gis            0.2901  0.2904  0.3150  0.3148  0.4129  0.4133
cqadupstack-mathematica    0.2015  0.2046  0.2377  0.2379  0.3163  0.3163
cqadupstack-physics        0.3214  0.3248  0.3599  0.3597  0.4722  0.4724
cqadupstack-programmers    0.2802  0.2963  0.3401  0.3399  0.4242  0.4238
cqadupstack-stats          0.2711  0.2790  0.2990  0.2980  0.3731  0.3728
cqadupstack-tex            0.2244  0.2086  0.2530  0.2529  0.3115  0.3115
cqadupstack-unix           0.2749  0.2788  0.3167  0.3170  0.4219  0.4220
cqadupstack-webmasters     0.3059  0.3008  0.3167  0.3166  0.4065  0.4072
cqadupstack-wordpress      0.2483  0.2562  0.2733  0.2718  0.3547  0.3547
quora                      0.7886  0.7886  0.8343  0.8344  0.8890  0.8876
dbpedia-entity             0.3180  0.3128  0.4366  0.4374  0.4077  0.4076
scidocs                    0.1490  0.1581  0.1591  0.1588  0.2170  0.2172
fever                      0.6513  0.7530  0.7882  0.7879  0.8620  0.8620
climate-fever              0.1651  0.2129  0.2297  0.2298  0.3119  0.3117
scifact                    0.6789  0.6647  0.7041  0.7036  0.7408  0.7408

And here's the snippet of code to perform the evaluation (which will yield the results above):

CORPORA=(trec-covid bioasq nfcorpus nq hotpotqa fiqa signal1m trec-news robust04 arguana webis-touche2020 cqadupstack-android cqadupstack-english cqadupstack-gaming cqadupstack-gis cqadupstack-mathematica cqadupstack-physics cqadupstack-programmers cqadupstack-stats cqadupstack-tex cqadupstack-unix cqadupstack-webmasters cqadupstack-wordpress quora dbpedia-entity scidocs fever climate-fever scifact); for c in "${CORPORA[@]}"
do
    wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.beir-v1.0.0-${c}.test.txt
    echo $c
    java -cp anserini-0.35.0-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.flat.txt
    java -cp anserini-0.35.0-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.multifield.txt
    java -cp anserini-0.35.0-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.splade-pp-ed-pre.txt
    java -cp anserini-0.35.0-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.splade-pp-ed-onnx.txt
    java -cp anserini-0.35.0-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.bge-pre.txt
    java -cp anserini-0.35.0-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.bge-onnx.txt
done

🎬 Installation

Most Anserini features are exposed in the Pyserini Python interface. If you're more comfortable with Python, start there; that said, Anserini is an important building block of Pyserini, so it remains worthwhile to learn about Anserini itself.

You'll need Java 21 and Maven 3.9+ to build Anserini. Clone the repo with the --recurse-submodules option so that the eval/ submodule gets cloned as well (alternatively, run git submodule update --init after cloning).
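
For example, assuming you want the repo in the current directory:

git clone --recurse-submodules https://github.com/castorini/anserini.git
cd anserini

Then, build using Maven: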

mvn clean package appassembler:assemble

The tools/ directory, which contains evaluation tools and other scripts, is the separate anserini-tools repository, integrated as a Git submodule (so that it can be shared across related projects). Build it as follows (you may see warnings, which are safe to ignore):

cd tools/eval && tar xvfz trec_eval.9.0.4.tar.gz && cd trec_eval.9.0.4 && make && cd ../../..
cd tools/eval/ndeval && make && cd ../../..

With that, you should be ready to go. The onboarding path for Anserini starts here!

Windows tips

If you are using Windows, use WSL2 to build Anserini; refer to the WSL2 installation documentation if you haven't set it up already.

Note that on Windows without WSL2, tests may fail due to encoding issues; see #1466. A simple workaround is to skip tests by adding -Dmaven.test.skip=true to the mvn command above, as shown below. See #1121 for additional discussion of debugging Windows build errors.
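
That is, the build command with tests skipped:

mvn clean package appassembler:assemble -Dmaven.test.skip=true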

βš—οΈ End-to-End Regression Experiments

Anserini is designed to support end-to-end experiments on various standard IR test collections out of the box. Each of these end-to-end regressions starts from the raw corpus, builds the necessary index, performs retrieval runs, and generates evaluation results. See individual pages for details.
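
Each regression is driven by the run_regression.py script (the same script used for the BEIR regressions below). As a sketch, for an MS MARCO V1 passage regression (the regression name here is illustrative; see the individual pages for exact names):

python src/main/python/run_regression.py --index --verify --search --regression msmarco-v1-passage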

MS MARCO V1 Passage Regressions

                                                       dev  DL19  DL20
Unsupervised Sparse
Lucene BoW baselines                                   +    +     +
Quantized BM25                                         ✓    ✓     ✓
WordPiece baselines (pre-tokenized)                    +    +     +
WordPiece baselines (Huggingface tokenizer)            +    +     +
WordPiece + Lucene BoW baselines                       +    +     +
doc2query                                              +
doc2query-T5                                           +    +     +
Learned Sparse (uniCOIL family)
uniCOIL noexp                                          ✓    ✓     ✓
uniCOIL with doc2query-T5                              ✓    ✓     ✓
uniCOIL with TILDE                                     ✓
Learned Sparse (other)
DeepImpact                                             ✓
SPLADEv2                                               ✓
SPLADE++ CoCondenser-EnsembleDistil (cached queries)   ✓    ✓     ✓
SPLADE++ CoCondenser-EnsembleDistil (ONNX)             ✓    ✓     ✓
SPLADE++ CoCondenser-SelfDistil (cached queries)       ✓    ✓     ✓
SPLADE++ CoCondenser-SelfDistil (ONNX)                 ✓    ✓     ✓
Learned Dense (HNSW)
cosDPR-distil w/ HNSW fp32 (cached queries)            ✓    ✓     ✓
cosDPR-distil w/ HNSW fp32 (ONNX)                      ✓    ✓     ✓
cosDPR-distil w/ HNSW int8 (cached queries)            ✓    ✓     ✓
cosDPR-distil w/ HNSW int8 (ONNX)                      ✓    ✓     ✓
BGE-base-en-v1.5 w/ HNSW fp32 (cached queries)         ✓    ✓     ✓
BGE-base-en-v1.5 w/ HNSW fp32 (ONNX)                   ✓    ✓     ✓
BGE-base-en-v1.5 w/ HNSW int8 (cached queries)         ✓    ✓     ✓
BGE-base-en-v1.5 w/ HNSW int8 (ONNX)                   ✓    ✓     ✓
OpenAI Ada2 w/ HNSW fp32 (cached queries)              ✓    ✓     ✓
OpenAI Ada2 w/ HNSW int8 (cached queries)              ✓    ✓     ✓
Cohere English v3.0 w/ HNSW fp32 (cached queries)      ✓    ✓     ✓
Cohere English v3.0 w/ HNSW int8 (cached queries)      ✓    ✓     ✓
Learned Dense (Inverted; experimental)
cosDPR-distil w/ "fake words" (cached queries)         ✓    ✓     ✓
cosDPR-distil w/ "LexLSH" (cached queries)             ✓    ✓     ✓

Available Corpora for Download

Corpora                               Size    Checksum
Quantized BM25                        1.2 GB  0a623e2c97ac6b7e814bf1323a97b435
uniCOIL (noexp)                       2.7 GB  f17ddd8c7c00ff121c3c3b147d2e17d8
uniCOIL (d2q-T5)                      3.4 GB  78eef752c78c8691f7d61600ceed306f
uniCOIL (TILDE)                       3.9 GB  12a9c289d94e32fd63a7d39c9677d75c
DeepImpact                            3.6 GB  73843885b503af3c8b3ee62e5f5a9900
SPLADEv2                              9.9 GB  b5d126f5d9a8e1b3ef3f5cb0ba651725
SPLADE++ CoCondenser-EnsembleDistil   4.2 GB  e489133bdc54ee1e7c62a32aa582bc77
SPLADE++ CoCondenser-SelfDistil       4.8 GB  cb7e264222f2bf2221dd2c9d28190be1
cosDPR-distil                         57 GB   e20ffbc8b5e7f760af31298aefeaebbd
BGE-base-en-v1.5                      59 GB   353d2c9e72e858897ad479cca4ea0db1
OpenAI-ada2                           109 GB  a4d843d522ff3a3af7edbee789a63402
Cohere embed-english-v3.0             38 GB   06a6e38a0522850c6aa504db7b2617f5

MS MARCO V1 Document Regressions

                                              dev  DL19  DL20
Unsupervised Lexical, Complete Doc*
Lucene BoW baselines                          +    +     +
WordPiece baselines (pre-tokenized)           +    +     +
WordPiece baselines (Huggingface tokenizer)   +    +     +
WordPiece + Lucene BoW baselines              +    +     +
doc2query-T5                                  +    +     +
Unsupervised Lexical, Segmented Doc*
Lucene BoW baselines                          +    +     +
WordPiece baselines (pre-tokenized)           +    +     +
WordPiece + Lucene BoW baselines              +    +     +
doc2query-T5                                  +    +     +
Learned Sparse Lexical
uniCOIL noexp                                 ✓    ✓     ✓
uniCOIL with doc2query-T5                     ✓    ✓     ✓

Available Corpora for Download

Corpora                             Size   Checksum
MS MARCO V1 doc: uniCOIL (noexp)    11 GB  11b226e1cacd9c8ae0a660fd14cdd710
MS MARCO V1 doc: uniCOIL (d2q-T5)   19 GB  6a00e2c0c375cb1e52c83ae5ac377ebb

MS MARCO V2 Passage Regressions

                                                       dev  DL21  DL22  DL23
Unsupervised Lexical, Original Corpus
baselines                                              +    +     +     +
doc2query-T5                                           +    +     +     +
Unsupervised Lexical, Augmented Corpus
baselines                                              +    +     +     +
doc2query-T5                                           +    +     +     +
Learned Sparse Lexical
uniCOIL noexp zero-shot                                ✓    ✓     ✓     ✓
uniCOIL with doc2query-T5 zero-shot                    ✓    ✓     ✓     ✓
SPLADE++ CoCondenser-EnsembleDistil (cached queries)   ✓    ✓     ✓     ✓
SPLADE++ CoCondenser-EnsembleDistil (ONNX)             ✓    ✓     ✓     ✓
SPLADE++ CoCondenser-SelfDistil (cached queries)       ✓    ✓     ✓     ✓
SPLADE++ CoCondenser-SelfDistil (ONNX)                 ✓    ✓     ✓     ✓

Available Corpora for Download

Corpora                               Size   Checksum
uniCOIL (noexp)                       24 GB  d9cc1ed3049746e68a2c91bf90e5212d
uniCOIL (d2q-T5)                      41 GB  1949a00bfd5e1f1a230a04bbc1f01539
SPLADE++ CoCondenser-EnsembleDistil   66 GB  2cdb2adc259b8fa6caf666b20ebdc0e8
SPLADE++ CoCondenser-SelfDistil       76 GB  061930dd615c7c807323ea7fc7957877

MS MARCO V2 Document Regressions

                                      dev  DL21  DL22  DL23
Unsupervised Lexical, Complete Doc
baselines                             +    +     +     +
doc2query-T5                          +    +     +     +
Unsupervised Lexical, Segmented Doc
baselines                             +    +     +     +
doc2query-T5                          +    +     +     +
Learned Sparse Lexical
uniCOIL noexp zero-shot               ✓    ✓     ✓     ✓
uniCOIL with doc2query-T5 zero-shot   ✓    ✓     ✓     ✓

Available Corpora for Download

Corpora                             Size   Checksum
MS MARCO V2 doc: uniCOIL (noexp)    55 GB  97ba262c497164de1054f357caea0c63
MS MARCO V2 doc: uniCOIL (d2q-T5)   72 GB  c5639748c2cbad0152e10b0ebde3b804

BEIR (v1.0.0) Regressions

Key:

  • F1 = "flat" baseline (Lucene analyzer)
  • F2 = "flat" baseline (pre-tokenized with bert-base-uncased tokenizer)
  • MF = "multifield" baseline (Lucene analyzer)
  • U1 = uniCOIL (noexp)
  • S1 = SPLADE++ CoCondenser-EnsembleDistil: pre-encoded queries (βœ“), ONNX (O)
  • D1 = BGE-base-en-v1.5
    • D1o: original HNSW indexes: pre-encoded queries (βœ“), ONNX (O)
    • D1q: quantized HNSW indexes: pre-encoded queries (βœ“), ONNX (O)

See instructions below the table for how to reproduce results for a model on all BEIR corpora "in one go".

Corpus                     F1   F2   MF   U1   S1     D1o    D1q
TREC-COVID                 ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
BioASQ                     ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
NFCorpus                   ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
NQ                         ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
HotpotQA                   ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
FiQA-2018                  ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
Signal-1M(RT)              ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
TREC-NEWS                  ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
Robust04                   ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
ArguAna                    ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
Touche2020                 ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-Android        ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-English        ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-Gaming         ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-Gis            ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-Mathematica    ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-Physics        ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-Programmers    ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-Stats          ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-Tex            ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-Unix           ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-Webmasters     ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
CQADupStack-Wordpress      ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
Quora                      ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
DBPedia                    ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
SCIDOCS                    ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
FEVER                      ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
Climate-FEVER              ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O
SciFact                    ✓    ✓    ✓    ✓    ✓ O    ✓ O    ✓ O

To reproduce the SPLADE++ CoCondenser-EnsembleDistil results, start by downloading the collection:

wget https://rgw.cs.uwaterloo.ca/pyserini/data/beir-v1.0.0-splade-pp-ed.tar -P collections/
tar xvf collections/beir-v1.0.0-splade-pp-ed.tar -C collections/

The tarball is 42 GB and has MD5 checksum 9c7de5b444a788c9e74c340bf833173b. Once you've unpacked the data, the following commands will loop over all BEIR corpora and run the regressions:

MODEL="splade-pp-ed"; CORPORA=(trec-covid bioasq nfcorpus nq hotpotqa fiqa signal1m trec-news robust04 arguana webis-touche2020 cqadupstack-android cqadupstack-english cqadupstack-gaming cqadupstack-gis cqadupstack-mathematica cqadupstack-physics cqadupstack-programmers cqadupstack-stats cqadupstack-tex cqadupstack-unix cqadupstack-webmasters cqadupstack-wordpress quora dbpedia-entity scidocs fever climate-fever scifact); for c in "${CORPORA[@]}"
do
    echo "Running $c..."
    python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-${c}-${MODEL} > logs/log.beir-v1.0.0-${c}-${MODEL} 2>&1
done

You can verify the results by examining the log files in logs/.
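
For example, to spot-check a single run (the file name follows the pattern used in the loop above):

tail logs/log.beir-v1.0.0-trec-covid-splade-pp-ed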

For the other models, modify the above commands as follows:

Key  Corpus             Checksum                          MODEL
F1   corpus             faefd5281b662c72ce03d22021e4ff6b  flat
F2   corpus-wp          3cf8f3dcdcadd49362965dd4466e6ff2  flat-wp
MF   corpus             faefd5281b662c72ce03d22021e4ff6b  multifield
U1   unicoil-noexp      4fd04d2af816a6637fc12922cccc8a83  unicoil-noexp
S1   splade-pp-ed       9c7de5b444a788c9e74c340bf833173b  splade-pp-ed
D1   bge-base-en-v1.5   e4e8324ba3da3b46e715297407a24f00  bge-base-en-v1.5-hnsw

The "Corpus" above should be substituted into the full file name beir-v1.0.0-${corpus}.tar, e.g., beir-v1.0.0-bge-base-en-v1.5.tar.

Cross-lingual and Multi-lingual Regressions

Other Regressions

πŸ“ƒ Additional Documentation

The experiments described below are not associated with rigorous end-to-end regression testing and thus provide a lower standard of reproducibility. For the most part, manual copying and pasting of commands into a shell is required to reproduce our results.

MS MARCO V1

MS MARCO V2

TREC-COVID and CORD-19

Other Experiments and Features

πŸ™‹ How Can I Contribute?

If you've found Anserini to be helpful, we have a simple request: contribute back. In the course of reproducing baseline results on standard test collections, please let us know if you're successful by sending us a pull request with a simple note, like what appears at the bottom of the page for Disks 4 & 5. Reproducibility is important to us, and we'd like to know about successes as well as failures. Since the regression documentation is auto-generated, pull requests should be sent against the raw templates; the regression documentation can then be regenerated using the bin/build.sh script. In turn, you'll be recognized as a contributor.

Beyond that, there are always open issues we would appreciate help on!

πŸ“œοΈ Release History

πŸ“œοΈ Historical Notes

  • Anserini was upgraded to Lucene 9.3 at commit 272565 (8/2/2022): this upgrade created backward compatibility issues, see #1952. Anserini automatically detects Lucene 8 indexes and disables consistent tie-breaking to avoid runtime errors; however, Lucene 9 code running on Lucene 8 indexes may give slightly different results than Lucene 8 code running on Lucene 8 indexes. Lucene 8 code will not run on Lucene 9 indexes. Pyserini has been upgraded in the same way, and the same caveats apply.
  • Anserini was upgraded from Java 8 to Java 11 at commit 17b702d (7/11/2019). Maven 3.3+ is also required.
  • Anserini was upgraded to Lucene 8.0 as of commit 75e36f9 (6/12/2019); prior to that, the toolkit used Lucene 7.6. Based on preliminary experiments, query evaluation latency is much improved in Lucene 8. As a result of this upgrade, results of all regressions changed slightly. To reproduce old results from Lucene 7.6, use v0.5.1.

✨ References

  • Lin et al. Toward Reproducible Baselines: The Open-Source IR Reproducibility Challenge. ECIR 2016.
  • Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Enabling the Use of Lucene for Information Retrieval Research. SIGIR 2017.
  • Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Reproducible Ranking Baselines Using Lucene. Journal of Data and Information Quality (JDIQ), 2018.

πŸ™ Acknowledgments

This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Previous support came from the U.S. National Science Foundation under IIS-1423002 and CNS-1405688. Any opinions, findings, and conclusions or recommendations expressed do not necessarily reflect the views of the sponsors.