
Comments (4)

qiyuangong commented on June 9, 2024

Hi @ElliottDyson,
Self-speculative decoding with vLLM is not available right now. We will let you know when it becomes available.

BTW, speculative decoding support in vLLM itself is also in progress (https://docs.google.com/document/d/1rE4pr3IdspRw97XbImY4fS9IWYuJJ3HGtL7AdIKGrw8/).


jason-dai commented on June 9, 2024

> Hello there,
>
> I was wondering whether it would be possible to have self-speculative decoding operate using IQ2 as the draft model and FP8 as the core model (it has been shown that FP8 is very rarely any different in accuracy from FP16).
>
> A look into integrating the following 1.58-bit quant method would also be interesting: ggerganov/llama.cpp#5999
>
> I was also curious whether llama.cpp quants other than 4-bit are compatible at all, as I noticed you only provided examples using 4-bit quantisations. My reason for being interested in this is the ability to offload x layers to the GPU and keep the remaining layers on the CPU, which is an incredibly useful feature for working with much larger models and/or longer context lengths.
>
> Thanks

@ElliottDyson currently we have only optimized IQ2 for memory size, not for speed yet, so using IQ2 as the draft model may not be faster than INT4; using FP8 as the target model may be possible.

And we do support llama.cpp-compatible IQ2 and IQ1 models using our cpp backend (see https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/llama_cpp_quickstart.html).
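
For the layer-offloading question above: the linked quickstart drives llama.cpp from the command line, where the number of layers placed on the GPU is chosen per run. As a rough Python illustration of the same GPU/CPU split (using the separate llama-cpp-python bindings, not the ipex-llm cpp backend itself; the GGUF filename is hypothetical):

```python
# Rough illustration of partial GPU offload using the llama-cpp-python bindings
# (NOT the ipex-llm cpp backend itself); the GGUF filename below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct.IQ2_XS.gguf",  # hypothetical IQ2-quantised GGUF
    n_gpu_layers=20,  # put 20 layers on the GPU, keep the remaining layers on the CPU
    n_ctx=4096,       # context window
)

out = llm("Explain speculative decoding in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

Layers beyond `n_gpu_layers` stay on the CPU, which is what lets larger models or longer contexts fit in limited GPU memory.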


ElliottDyson commented on June 9, 2024

Sorry, one more thing that I forgot to add: is it possible to use your self-speculative decoding method or your custom IQ2 quantisation with vLLM in any way? Only the typical low-bit quant was mentioned in the docs, and I can't seem to find the library link for "ipex_llm.vllm.entrypoints.llm" to figure this out myself. I also had a thought that may work better than configuring for the various custom llama.cpp quants such as IQ2: integrating CPU layer offloading directly into the core methods you are using here. It's just a possible alternative idea in case it is any easier.

Again, thank you for all the work your team have been doing here!

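For context on the "typical low-bit quant" path mentioned above, here is a hedged sketch of how the documented vLLM integration is usually invoked. The module path is taken from the comment above, and the LLM class signature, the load_in_low_bit value, and the device argument are all assumptions rather than confirmed API:

```python
# Hedged sketch only: assumes ipex_llm.vllm.entrypoints.llm exposes an LLM class that
# mirrors vLLM's, and that load_in_low_bit / device select the quantisation and the
# Intel GPU placement; none of these details are confirmed in this thread.
from vllm import SamplingParams
from ipex_llm.vllm.entrypoints.llm import LLM  # module path as mentioned in the comment above

llm = LLM(
    model="meta-llama/Llama-2-7b-chat-hf",  # hypothetical model choice
    load_in_low_bit="sym_int4",             # assumed name/value for the low-bit option
    device="xpu",                           # assumed device argument for Intel GPUs
)

sampling_params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["What is speculative decoding?"], sampling_params)
print(outputs[0].outputs[0].text)
```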

ElliottDyson commented on June 9, 2024

Just tried this combination. Thank you! FP8 as the target and INT4 as the draft worked very well. Looking forward to the potential of an even speedier lower-precision draft model! 😁
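
For anyone landing here later, a minimal sketch of the combination reported above, following the pattern of ipex-llm's self-speculative decoding examples; the exact argument names (load_in_low_bit, speculative) and the model path are assumptions to verify against the current docs:

```python
# Hedged sketch of self-speculative decoding with an FP8 target model, following the
# pattern of ipex-llm's speculative-decoding examples; the argument names
# (load_in_low_bit, speculative) and the model path are assumptions.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical model choice

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    optimize_model=True,
    torch_dtype=torch.float16,
    load_in_low_bit="fp8",  # FP8 target model, per the combination reported above
    speculative=True,       # self-speculative decoding with a low-bit draft model
    trust_remote_code=True,
    use_cache=True,
).to("xpu")

inputs = tokenizer("What is speculative decoding?", return_tensors="pt").to("xpu")
with torch.inference_mode():
    output = model.generate(inputs.input_ids, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```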

