

# tantivy-py

Python bindings for Tantivy, the full-text search engine library written in Rust.

## Installation

The bindings can be installed from PyPI using pip:

```bash
pip install tantivy
```

If no binary wheel is available for your operating system, the bindings will be built from source; this means that Rust needs to be installed for the build to succeed.

Note that the bindings use PyO3, which only supports Python 3.

## Development

To compile the Python module:

```bash
# create virtual env
python -m venv .venv
source .venv/bin/activate

# install maturin, the build tool for PyO3
pip install maturin

# compile and install python module in venv
maturin develop
```

A development environment can be set up either in a virtual environment using nox, or with local packages using the provided Makefile.

For the nox setup, install nox and build the bindings using:

```bash
python3 -m pip install nox
nox
```

For the Makefile-based setup, run:

```bash
make
```

Run the tests using:

```bash
make test
```

## Usage

The Python bindings have a similar API to Tantivy. To create an index, a schema must first be built. After that, documents can be added to the index and a reader can be created to search it.

### Building an index and populating it

```python
import tantivy

# Declaring our schema.
schema_builder = tantivy.SchemaBuilder()
schema_builder.add_text_field("title", stored=True)
schema_builder.add_text_field("body", stored=True)
schema_builder.add_integer_field("doc_id", stored=True, indexed=True)
schema = schema_builder.build()

# Creating our index (in memory)
index = tantivy.Index(schema)
```

To have a persistent index, use the `path` parameter to store the index on disk, e.g.:

```python
import os

index_path = os.path.abspath("index")
os.makedirs(index_path)
index = tantivy.Index(schema, path=index_path)
```

By default, tantivy offers the following tokenizers, which can be used in tantivy-py:

- `default`: the tokenizer that will be used if you do not assign a specific tokenizer to your text field. It chops your text on punctuation and whitespace, removes tokens longer than 40 characters, and lowercases your text.
- `raw`: does not actually tokenize your text; it is kept entirely unprocessed. This can be useful for indexing UUIDs or URLs, for instance.
- `en_stem`: in addition to what `default` does, the `en_stem` tokenizer also applies stemming to your tokens. Stemming consists of trimming words to remove their inflection. This tokenizer is slower than the default one, but is recommended to improve recall.

To use one of these tokenizers, provide it as the `tokenizer_name` parameter to `add_text_field`, e.g.:

```python
schema_builder.add_text_field("body", stored=True, tokenizer_name="en_stem")
```

Adding one document:

```python
writer = index.writer()
writer.add_document(tantivy.Document(
    doc_id=1,
    title=["The Old Man and the Sea"],
    body=["""He was an old man who fished alone in a skiff in the Gulf Stream and he had gone eighty-four days now without taking a fish."""],
))
# ... and committing
writer.commit()
```

### Building and Executing Queries

First, you need to get a searcher for the index:

```python
# Reload the index to ensure it points to the last commit.
index.reload()
searcher = index.searcher()
```

Then you need a valid query object, obtained by parsing your query against the index:

```python
query = index.parse_query("fish days", ["title", "body"])
(best_score, best_doc_address) = searcher.search(query, 3).hits[0]
best_doc = searcher.doc(best_doc_address)
assert best_doc["title"] == ["The Old Man and the Sea"]
print(best_doc)
```

### Valid Query Formats

tantivy-py supports the query language used in tantivy. Some basic query formats:

- AND and OR conjunctions:

```python
query = index.parse_query('(Old AND Man) OR Stream', ["title", "body"])
(best_score, best_doc_address) = searcher.search(query, 3).hits[0]
best_doc = searcher.doc(best_doc_address)
print(best_doc)
```

- `+` (includes) and `-` (excludes) operators:

```python
query = index.parse_query('+Old +Man chef -fished', ["title", "body"])
hits = searcher.search(query, 3).hits
print(len(hits))
```

Note: in a query like the one above, a word with no `+`/`-` prefix acts like an OR.

- Phrase search:

```python
query = index.parse_query('"eighty-four days"', ["title", "body"])
(best_score, best_doc_address) = searcher.search(query, 3).hits[0]
best_doc = searcher.doc(best_doc_address)
print(best_doc)
```

- Integer search:

```python
query = index.parse_query("1", ["doc_id"])
(best_score, best_doc_address) = searcher.search(query, 3).hits[0]
best_doc = searcher.doc(best_doc_address)
print(best_doc)
```

Note: for integer search, the integer field must be indexed.

For more query formats and options, see the Tantivy Query Parser Docs.