
Cherche

Neural search


Cherche enables the development of neural search pipelines that combine retrievers with pre-trained language models acting as retrievers or rankers. Its primary advantage is the ability to build end-to-end pipelines, and its support for batch computation makes it well suited to offline semantic search.

Live demo of an NLP search engine powered by Cherche


Installation 🤖

To install Cherche for use with a simple retriever on CPU, such as TfIdf, Flash, Lunr, or Fuzz, use the following command:

pip install cherche

To install Cherche for use with any semantic retriever or ranker on CPU, use the following command:

pip install "cherche[cpu]"

Finally, if you plan to use any semantic retriever or ranker on GPU, use the following command:

pip install "cherche[gpu]"

These options install Cherche with the dependencies appropriate for your use case.

Documentation

Documentation is available here. It covers retrievers, rankers, and pipelines, with examples.

QuickStart 📑

Documents

Cherche helps find the right document within a list of objects. Here is an example corpus:

from cherche import data

documents = data.load_towns()

documents[:3]
[{'id': 0,
  'title': 'Paris',
  'url': 'https://en.wikipedia.org/wiki/Paris',
  'article': 'Paris is the capital and most populous city of France.'},
 {'id': 1,
  'title': 'Paris',
  'url': 'https://en.wikipedia.org/wiki/Paris',
  'article': "Since the 17th century, Paris has been one of Europe's major centres of science, and arts."},
 {'id': 2,
  'title': 'Paris',
  'url': 'https://en.wikipedia.org/wiki/Paris',
  'article': 'The City of Paris is the centre and seat of government of the region and province of Île-de-France.'
  }]

Retriever + ranker

Here is an example of a neural search pipeline composed of a TF-IDF that quickly retrieves documents, followed by a ranking model. The ranking model sorts the documents produced by the retriever based on the semantic similarity between the query and the documents. We can call the pipeline using a list of queries and get relevant documents for each query.

from cherche import data, retrieve, rank
from sentence_transformers import SentenceTransformer

# List of dicts
documents = data.load_towns()

# Retrieve on fields title and article
retriever = retrieve.TfIdf(key="id", on=["title", "article"], documents=documents, k=30)

# Rank on fields title and article
ranker = rank.Encoder(
    key = "id",
    on = ["title", "article"],
    encoder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2").encode,
    k = 3,
)

# Pipeline creation
search = retriever + ranker

search.add(documents=documents)

# Search documents for 3 queries.
search(["Bordeaux", "Paris", "Toulouse"])
[[{'id': 57, 'similarity': 0.69513524},
  {'id': 63, 'similarity': 0.6214994},
  {'id': 65, 'similarity': 0.61809087}],
 [{'id': 16, 'similarity': 0.59158516},
  {'id': 0, 'similarity': 0.58217555},
  {'id': 1, 'similarity': 0.57944715}],
 [{'id': 26, 'similarity': 0.6925601},
  {'id': 37, 'similarity': 0.63977146},
  {'id': 28, 'similarity': 0.62772334}]]

We can map the returned identifiers back to the documents' contents by adding the documents to the pipeline:

search += documents
search(["Bordeaux", "Paris", "Toulouse"])
[[{'id': 57,
   'title': 'Bordeaux',
   'url': 'https://en.wikipedia.org/wiki/Bordeaux',
   'similarity': 0.69513524},
  {'id': 63,
   'title': 'Bordeaux',
   'similarity': 0.6214994},
  {'id': 65,
   'title': 'Bordeaux',
   'url': 'https://en.wikipedia.org/wiki/Bordeaux',
   'similarity': 0.61809087}],
 [{'id': 16,
   'title': 'Paris',
   'url': 'https://en.wikipedia.org/wiki/Paris',
   'article': 'Paris received 12.',
   'similarity': 0.59158516},
  {'id': 0,
   'title': 'Paris',
   'url': 'https://en.wikipedia.org/wiki/Paris',
   'similarity': 0.58217555},
  {'id': 1,
   'title': 'Paris',
   'url': 'https://en.wikipedia.org/wiki/Paris',
   'similarity': 0.57944715}],
 [{'id': 26,
   'title': 'Toulouse',
   'url': 'https://en.wikipedia.org/wiki/Toulouse',
   'similarity': 0.6925601},
  {'id': 37,
   'title': 'Toulouse',
   'url': 'https://en.wikipedia.org/wiki/Toulouse',
   'similarity': 0.63977146},
  {'id': 28,
   'title': 'Toulouse',
   'url': 'https://en.wikipedia.org/wiki/Toulouse',
   'similarity': 0.62772334}]]

Retrieve

Cherche provides retrievers that filter input documents based on a query.

  • retrieve.TfIdf
  • retrieve.Lunr
  • retrieve.Flash
  • retrieve.Encoder
  • retrieve.DPR
  • retrieve.Fuzz
  • retrieve.Embedding
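
Conceptually, a lexical retriever such as retrieve.TfIdf scores documents by term overlap weighted by inverse document frequency. The following standalone sketch (plain Python, not the Cherche API — the function names and the toy corpus are invented for illustration) shows the idea:

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """Build TF-IDF vectors for a small corpus (illustrative, not Cherche's implementation)."""
    tokenized = [text.lower().split() for text in texts]
    n = len(tokenized)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for doc in tokenized for term in set(doc))
    idf = {term: math.log(n / count) + 1 for term, count in df.items()}
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({term: freq * idf[term] for term, freq in tf.items()})
    return vectors, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(weight * v.get(term, 0.0) for term, weight in u.items())
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def retrieve_topk(query, texts, k=2):
    """Return the indices of the k documents most similar to the query."""
    vectors, idf = tfidf_vectors(texts)
    q_tf = Counter(query.lower().split())
    q_vec = {term: freq * idf.get(term, 0.0) for term, freq in q_tf.items()}
    scores = [(i, cosine(q_vec, v)) for i, v in enumerate(vectors)]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

corpus = [
    "Paris is the capital of France",
    "Bordeaux is known for its wine",
    "Toulouse is in the south of France",
]
print(retrieve_topk("wine from Bordeaux", corpus, k=1))  # top hit: the Bordeaux document
```

A real retriever also handles tokenization, field selection (the `on` parameter), and efficient sparse matrix computation, but the scoring principle is the same.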

Rank

Cherche provides rankers that re-order the documents returned by retrievers.

Cherche rankers are compatible with the SentenceTransformers models available on the Hugging Face hub.

  • rank.Encoder
  • rank.DPR
  • rank.CrossEncoder
  • rank.Embedding
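
The retrieve-then-rank pattern can be sketched independently of any model: a ranker scores each candidate the retriever produced against the query and keeps the top k. In this toy illustration (plain Python, not the Cherche API), `overlap` is a deliberately simple placeholder for the embedding-based similarity a real ranker would use:

```python
def rerank(query, candidates, similarity, k=3):
    """Score each candidate with `similarity` and keep the k best (toy illustration)."""
    scored = [
        {"id": c["id"], "similarity": similarity(query, c["text"])}
        for c in candidates
    ]
    return sorted(scored, key=lambda d: d["similarity"], reverse=True)[:k]

def overlap(query, text):
    """Placeholder similarity: token overlap ratio (a real ranker uses embeddings)."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q | t) if q | t else 0.0

candidates = [
    {"id": 0, "text": "Paris is the capital of France"},
    {"id": 1, "text": "Bordeaux is famous for wine"},
]
print(rerank("capital of France", candidates, overlap, k=1))  # keeps the Paris document
```

This is why the retriever's k is typically larger than the ranker's k: the retriever casts a wide, cheap net, and the ranker spends its more expensive similarity computation only on those candidates.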

Question answering

Cherche provides modules dedicated to question answering. These modules are compatible with Hugging Face's pre-trained models and fully integrated into neural search pipelines.
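
As a sketch of how a question-answering stage plugs into a pipeline (the checkpoint name and exact `qa.QA` signature are assumptions based on the documentation; verify against the current API):

```python
from transformers import pipeline

from cherche import data, qa, retrieve

documents = data.load_towns()

retriever = retrieve.TfIdf(key="id", on=["title", "article"], documents=documents, k=30)

# deepset/roberta-base-squad2 is an example checkpoint, not a requirement.
question_answering = qa.QA(
    model=pipeline("question-answering", model="deepset/roberta-base-squad2"),
    on="article",
)

# The QA stage reads the "article" field of the documents the retriever returns.
search = retriever + documents + question_answering
search(["What is the capital of France?"])
```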

Contributors 🤝

Cherche was created for/by Renault and is now available to all. We welcome all contributions.

Acknowledgements 👏

TfIdf retriever is a wrapper around scikit-learn's TfidfVectorizer. Lunr retriever is a wrapper around Lunr.py. Flash retriever is a wrapper around FlashText. DPR, Encoder and CrossEncoder rankers are wrappers dedicated to the use of the pre-trained models of SentenceTransformers in a neural search pipeline.

Citations

If you use Cherche to produce results for your scientific publication, please cite our SIGIR paper:

@inproceedings{Sourty2022sigir,
    author = {Raphael Sourty and Jose G. Moreno and Lynda Tamine and Francois-Paul Servant},
    title = {CHERCHE: A new tool to rapidly implement pipelines in information retrieval},
    booktitle = {Proceedings of SIGIR 2022},
    year = {2022}
}

Dev Team 💾

The Cherche dev team is made up of Raphaël Sourty, François-Paul Servant, Nicolas Bizzozzero, Jose G Moreno. 🥳


