ANN-Benchmarks is a benchmarking environment for approximate nearest neighbor search algorithms. This website contains the current benchmarking results. Please visit http://github.com/maumueller/ann-benchmarks/ for an overview of the evaluated data sets and algorithms. Make a pull request on Github to add your own code or improvements to the benchmarking system.

Results are split by distance measure and dataset. At the bottom, you can find an overview of an algorithm's performance on all datasets. Each dataset is annotated
with *(k = ...)*, the number of nearest neighbors an algorithm was supposed to return. The plot shown depicts *Recall* (the fraction
of true nearest neighbors found, on average over all queries) against *Queries per second*. Clicking on a plot reveals detailed interactive plots, including
approximate recall, index size, and build time.
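To make the plotted metric concrete, here is a minimal sketch of how per-query recall and its average over all queries can be computed. The function names and data layout are illustrative assumptions, not the actual ANN-Benchmarks implementation.

```python
def recall(true_neighbors, found_neighbors):
    # Fraction of the true k nearest neighbors that the algorithm returned.
    # Both arguments are sequences of point identifiers for one query.
    return len(set(true_neighbors) & set(found_neighbors)) / len(true_neighbors)

def mean_recall(all_true, all_found):
    # Average recall over all queries; this is the value plotted on the
    # x-axis against queries per second.
    return sum(recall(t, f) for t, f in zip(all_true, all_found)) / len(all_true)

# Two queries with k = 3: the first run finds 2 of 3 true neighbors,
# the second finds all 3, so the mean recall is (2/3 + 1) / 2 = 5/6.
print(mean_recall([[1, 2, 3], [4, 5, 6]], [[1, 2, 9], [4, 5, 6]]))
```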

Please find the raw experimental data here (13 GB). The query set is also available as queries-sisap.tar (7.5 GB). The algorithms used the following parameter choices in the experiments: k = 10 and k = 100.

ANN-Benchmarks has been developed by Martin Aumueller (maau@itu.dk), Erik Bernhardsson (mail@erikbern.com), and Alec Faithfull (alef@itu.dk). Please use Github to submit your implementation or improvements.

- Speciality
- A dedicated query set is provided together with the data set


- rpforest
- bruteforce
- bf
- datasketch
- mih
- annoy-hamming
- BallTree(nmslib)
- falconn
- flann
- hnsw(nmslib)
- dolphinn
- ball
- nearpy
- bruteforce-blas
- kgraph
- bruteforce0(nmslib)
- SW-graph(nmslib)
- annoy
- MP-lsh(lshkit)
- kd
- faiss-lsh
- DolphinnPy