Comments (2)
This should be possible by making the vocab field a thin wrapper around HashMap<String, u32> and then implementing the Serialize trait on the wrapper, much like this: https://docs.serde.rs/src/serde/ser/impls.rs.html#353-366, except we'd pass serializer.collect_map (https://docs.serde.rs/src/serde/ser/impls.rs.html#364) an iterator that yields the entries in order of token index.
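A minimal sketch of that idea, assuming a hypothetical OrderedVocab newtype (illustrative only, not the actual tokenizers code):

use std::collections::HashMap;
use serde::{Serialize, Serializer};

// Hypothetical newtype wrapper around the vocab map.
struct OrderedVocab(HashMap<String, u32>);

impl Serialize for OrderedVocab {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        // Sort the entries by token index so the serialized map
        // (e.g. vocab.json) comes out ordered by value, not by hash order.
        let mut entries: Vec<(&String, &u32)> = self.0.iter().collect();
        entries.sort_by_key(|&(_, id)| *id);
        serializer.collect_map(entries)
    }
}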
Btw, as a workaround, this is how I'm currently doing it as a post-processing step:
const fs = require("fs");
const o = JSON.parse(fs.readFileSync("./vocab.json", "utf8"));
const entries = Object.entries(o);
entries.sort((a, b) => a[1] - b[1]);
const out = JSON.stringify(
  Object.fromEntries(entries),
  entries.map(x => x[0]),
  "\t"
);
// ^^ the second arg (a replacer array) makes the output keys follow the sorted order by value, not the default object key order.
fs.writeFileSync("sorted-vocab.json", out);
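Worth noting (my reading of why the replacer array is needed): plain JS objects enumerate integer-like keys in ascending numeric order ahead of all other keys, so for tokens like "0" or "42" the key order of Object.fromEntries(entries) alone wouldn't match the sorted entries; the replacer array pins the order explicitly.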
Related Issues (20)
- Discrepancy Between GitHub Release and NPM Package Version & Missing Dependencies
- Deepseeker model completely loses performance after using tokenizer.add_tokens(special_tokens)
- Unsound use of unsafe in `src/utils/parallelism.rs`
- LLamaTokenizer with `use_fast=True` / and `use_fast=False` causing memory leak when used with multiprocessing / `dataset.map(num_proc)`
- StripAccents doesn't work
- Issue in installing rudalle on google colab, !pip install rudalle
- Extended vocab tokenizer merging text into a single string without spaces while decoding
- offline installation
- Failing to build bindings with 0.19.1
- Python Binding: Tokenizer.from_file() cannot parse JSON file of tokens
- Treatment of hyphenated words
- Cross-compilation fails for custom target
- Breaking changes in v0.19.1 for tiktoken/llama3
- BPE Trainer doesn't respect the `vocab_size` parameter when dataset size is increased
- UnigramTrainer: byte_fallback is false.
- Tokens Removed from Trained Custom BPE Tokenizer
- Llama3 tokenizer with Incorrect offset_mapping
- Loading `tokenizer.model` with Rust API
- Why the tokenizer is slower than tiktoken?
- Why are 'unknown' tokens randomly added to my tokenized input?