
fast-knn-nmt's People

Contributors

littlesulley, yuxianmeng


fast-knn-nmt's Issues

Which fairseq files are used?

The repo contains fairseq files both in fast-knn-nmt/fast_knn_nmt/custom_fairseq/ and in fast-knn-nmt/thirdparty/fairseq/, and the files/changes (for example, to sequence_scorer.py) differ between the two. Which one is the correct one to use, and why do the others exist?

FAISS Issue

I am encountering the following issue when running the k-nearest-neighbors script; can you please help?

(Screenshot attached: Screen Shot 2023-03-15 at 10 44 59 PM)

Missing fairseq code-diff

transformer_layer.py also has changes for the ret_ffn_inp flag that is introduced here, and these aren't listed in the readme.
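My reading of the flag name is that ret_ffn_inp makes the layer also return the input to the feed-forward block, which is the usual choice of key representation in kNN-MT. A toy sketch of that pattern (a simplified layer, not the repo's actual diff):

```python
import torch
import torch.nn as nn

class ToyDecoderLayer(nn.Module):
    """Simplified layer for illustration only; not the repo's transformer_layer.py."""

    def __init__(self, dim: int = 512, ffn_dim: int = 2048, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads)
        self.attn_norm = nn.LayerNorm(dim)
        self.fc1 = nn.Linear(dim, ffn_dim)
        self.fc2 = nn.Linear(ffn_dim, dim)
        self.ffn_norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, ret_ffn_inp: bool = False):
        attn_out, _ = self.self_attn(x, x, x)
        x = self.attn_norm(x + attn_out)
        ffn_inp = x  # hidden state right before the feed-forward block
        x = self.ffn_norm(x + self.fc2(torch.relu(self.fc1(x))))
        if ret_ffn_inp:
            return x, ffn_inp  # expose the pre-FFN representation as well
        return x
```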

How do you bypass memory limitations on the WMT dataset?

This isn't really related to your code, but: since the WMT dataset for vanilla kNN-MT has a pretty big datastore (I think in the ballpark of 900M keys), the corresponding FAISS index is quite large, and loading it onto a single GPU together with the model is a real problem. Naturally, you'd want to shard the index over multiple GPUs, but FAISS has trouble searching a sharded index with GPU tensors; see facebookresearch/faiss#2074 or facebookresearch/faiss#1997. Loading the index onto a separate GPU, transferring tensors there for searching, and moving the results back also runs into memory deallocation issues inside FAISS.
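For concreteness, the sharded setup I have in mind looks roughly like this (a minimal sketch assuming a prebuilt CPU index at a placeholder path, not your code):

```python
import faiss
import numpy as np

# Placeholder path for illustration; in practice this would be the prebuilt WMT datastore index.
cpu_index = faiss.read_index("wmt_index.faiss")

# Shard the index across all visible GPUs instead of replicating it,
# so each GPU only holds a slice of the datastore.
co = faiss.GpuMultipleClonerOptions()
co.shard = True
gpu_index = faiss.index_cpu_to_all_gpus(cpu_index, co=co)

# Searching with NumPy queries works; feeding GPU tensors to a sharded
# index is where the FAISS issues linked above come in.
queries = np.random.rand(64, cpu_index.d).astype("float32")
distances, neighbors = gpu_index.search(queries, 8)
```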

I wonder how you ran the experiments on that, or does your codebase have some neat trick that I didn't catch yet?

From my understanding you don't shard either, according to this line, and you also transfer the tensors you search on to CPU here (by the way, you can pass GPU tensors if you import faiss.contrib.torch_utils, which changes the function signatures to accept torch tensors instead of np arrays). Does your index for WMT with vanilla kNN-MT just fit in GPU memory?
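To make the torch_utils point concrete, here is a toy sketch (toy dimensionality and random data, not your actual datastore):

```python
import torch
import faiss
import faiss.contrib.torch_utils  # patches index methods to accept torch tensors, including GPU ones

dim = 64  # toy dimensionality, for illustration only
res = faiss.StandardGpuResources()
index = faiss.GpuIndexFlatL2(res, dim)

# Keys and queries can stay on the GPU; no round-trip through NumPy on the CPU.
keys = torch.randn(10000, dim, device="cuda")
index.add(keys)

queries = torch.randn(32, dim, device="cuda")
distances, neighbors = index.search(queries, 8)  # returned as torch tensors on the same device
```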

For the domain datasets, the datastores are relatively small, so this isn't a big issue and I got them set up just fine!
