
Comments (6)

smr97 commented on May 9, 2024

Sounds good! Thanks for the prompt responses on this. I find this work very impressive and will spread the word.

from tokenizers.

n1t0 commented on May 9, 2024

We are actually using rayon to encode multiple sequences in parallel, see https://github.com/huggingface/tokenizers/blob/master/tokenizers/src/tokenizer/mod.rs#L357.

Otherwise, we don't make any assumptions about the type of content we are going to process. Depending on your choice of PreTokenizer and Model, there isn't necessarily a concept of word, so parallelization at that level isn't really possible, at least not trivially.
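The sequence-level parallelism described above can be sketched in Python with the standard library. This is only an illustration of the pattern, not the library's implementation: `encode` here is a hypothetical stand-in for a real per-sequence tokenizer, and `ThreadPoolExecutor` plays the role of rayon's `par_iter`.

```python
from concurrent.futures import ThreadPoolExecutor

def encode(sequence):
    # Hypothetical stand-in for a real per-sequence encode step;
    # whitespace splitting just illustrates the data-parallel pattern.
    return sequence.split()

def encode_batch(sequences, max_workers=4):
    # Each sequence is encoded independently, so the batch can be
    # mapped across workers -- the same idea as rayon's par_iter.
    # pool.map preserves the input order of the batch.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(encode, sequences))

batch = ["hello world", "parallel tokenization"]
print(encode_batch(batch))  # [['hello', 'world'], ['parallel', 'tokenization']]
```

The key property is that no state is shared between sequences, which is what makes batch-level parallelism trivial even when word-level parallelism is not.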


smr97 commented on May 9, 2024

Thanks for the response! I am new to NLP, so I did not know that this concept is not generic. Also, I had missed that call to par_iter(), so thanks for pointing it out.

On an unrelated note, could you please point me to a time/throughput comparison with the Python WordPiece tokenizer used for the BERT model? It would help me push the case for switching to this tokenizer at my workplace.


n1t0 commented on May 9, 2024

Hey @smr97, you can have a look at this file: https://github.com/huggingface/tokenizers/blob/master/bindings/python/examples/example.py
It provides a comparison between the Python tokenizer available in transformers and the one provided by this library. You can process a large enough file, such as one of the wiki-text-raw files, to see the difference.
If you need to download a vocabulary for BERT, all the links are available here: https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_bert.py#L32


smr97 commented on May 9, 2024

Thanks for the useful example, that is exactly what I wanted. I ran this with the wiki-text-raw-train file (541MB) and saw a pretty amazing speedup (MacBook 2017)!

Just FYI though, there seems to be a bug in the script: it throws the error shown in the screenshot. I checked the Rust source, and the decode_batch function does have two positional arguments (so the error is uncalled for). I have not worked with Python bindings to Rust, so I am not sure where the bug really is.

[Screenshot: error traceback, 2020-01-14]


mfuntowicz commented on May 9, 2024

@smr97 It should be fixed on master now :)

