
Comments (9)

MeouSker77 commented on July 16, 2024

We made a breaking change to Qwen-1.5's INT4 checkpoint format in the 2024-05-21 release: old INT4 checkpoints (generated by ipex-llm 20240520 or earlier) cannot be loaded with newer ipex-llm (20240521 or later). Please regenerate the INT4 checkpoint with ipex-llm 20240521 or later.
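For reference, regenerating a checkpoint is a short script. This is a minimal sketch assuming the usual ipex-llm transformers-style API (`load_in_4bit`, `save_low_bit`); the function name and both paths are placeholders:

```python
def regenerate_int4_checkpoint(model_path, save_path):
    """Re-quantize a Qwen-1.5 model to INT4 with ipex-llm 20240521 or later.

    model_path/save_path are placeholders; the import is deferred so the
    sketch only needs ipex-llm installed at call time.
    """
    from ipex_llm.transformers import AutoModelForCausalLM  # assumed ipex-llm API

    # Load the original weights and quantize to INT4 on the fly
    model = AutoModelForCausalLM.from_pretrained(
        model_path, load_in_4bit=True, trust_remote_code=True
    )
    # Save the new-format INT4 checkpoint; it can later be reloaded with
    # AutoModelForCausalLM.load_low_bit(save_path)
    model.save_low_bit(save_path)
```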

from bigdl.

grandxin commented on July 16, 2024

> We made a breaking change to Qwen-1.5's INT4 checkpoint format in the 2024-05-21 release: old INT4 checkpoints (generated by ipex-llm 20240520 or earlier) cannot be loaded with newer ipex-llm (20240521 or later). Please regenerate the INT4 checkpoint with ipex-llm 20240521 or later.

OK, got it.
Does the new version have any improvements, such as quantization accuracy or RAM usage?


MeouSker77 commented on July 16, 2024

> OK, got it.
> Does the new version have any improvements, such as quantization accuracy or RAM usage?

Yes, there should be some improvement in speed and RAM usage, but not much.


grandxin commented on July 16, 2024

> OK, got it.
> Does the new version have any improvements, such as quantization accuracy or RAM usage?

> Yes, there should be some improvement in speed and RAM usage, but not much.

I regenerated the Qwen-7B INT4 model and ran it on my laptop (Ultra 7 155H), but the warm-up stage takes a very long time (more than 5 minutes). Do you have any advice?


MeouSker77 commented on July 16, 2024

> I regenerated the Qwen-7B INT4 model and ran it on my laptop (Ultra 7 155H), but the warm-up stage takes a very long time (more than 5 minutes). Do you have any advice?

Did you set SYCL_CACHE_PERSISTENT=1? See https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#runtime-configuration


grandxin commented on July 16, 2024

> I regenerated the Qwen-7B INT4 model and ran it on my laptop (Ultra 7 155H), but the warm-up stage takes a very long time (more than 5 minutes). Do you have any advice?

> Did you set SYCL_CACHE_PERSISTENT=1? See https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#runtime-configuration

Yes, I have set it.
I found that warm-up is much faster in CPU mode (about 10-20 s) but slower in XPU mode.


MeouSker77 commented on July 16, 2024

> I found that warm-up is much faster in CPU mode (about 10-20 s) but slower in XPU mode.

The CPU doesn't need JIT compilation, while the GPU does.

On CPU: load model -> quantization -> inference

On GPU: load model -> quantization -> JIT compilation -> inference. This JIT compilation is what we call warm-up, and it can take about ten minutes.

Setting SYCL_CACHE_PERSISTENT=1 stores the GPU JIT code on disk so it doesn't need to be recompiled the next time you run.

If you are using powershell, please use CMD instead.

Could you check whether C:\Users\<user name>\AppData\Roaming\libsycl_cache exists? If it exists, please delete it. Then set SYCL_CACHE_PERSISTENT=1 and run inference (this run will take a long time, about 10 minutes, because it needs to regenerate the JIT code cache). After it finishes, you should see a regenerated C:\Users\<user name>\AppData\Roaming\libsycl_cache. With the cache in place, subsequent inference runs should have no warm-up. (Setting SYCL_CACHE_PERSISTENT=1 is still required.)
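Those manual steps can also be scripted. A minimal stdlib sketch, assuming the Windows cache location above (`reset_sycl_cache` is a hypothetical helper name; on Linux the cache typically lives under ~/.cache/libsycl_cache instead):

```python
import os
import shutil

def reset_sycl_cache(cache_dir=None):
    """Delete any existing SYCL JIT cache and enable persistent caching.

    Returns True if an old cache directory was found and removed.
    """
    if cache_dir is None:
        # Default Windows location: %APPDATA%\libsycl_cache
        cache_dir = os.path.join(
            os.environ.get("APPDATA", os.path.expanduser("~")), "libsycl_cache"
        )
    had_cache = os.path.isdir(cache_dir)
    if had_cache:
        shutil.rmtree(cache_dir)  # force the JIT code to regenerate once
    # Persist newly generated JIT code to disk for later runs
    os.environ["SYCL_CACHE_PERSISTENT"] = "1"
    return had_cache
```

Run it once before inference; the first run still pays the full compilation cost, but later runs pick up the on-disk cache.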


grandxin commented on July 16, 2024

> I found that warm-up is much faster in CPU mode (about 10-20 s) but slower in XPU mode.

> The CPU doesn't need JIT compilation, while the GPU does.

> On CPU: load model -> quantization -> inference

> On GPU: load model -> quantization -> JIT compilation -> inference. This JIT compilation is what we call warm-up, and it can take about ten minutes.

> Setting SYCL_CACHE_PERSISTENT=1 stores the GPU JIT code on disk so it doesn't need to be recompiled the next time you run.

> If you are using powershell, please use CMD instead.

> Could you check whether C:\Users\<user name>\AppData\Roaming\libsycl_cache exists? If it exists, please delete it. Then set SYCL_CACHE_PERSISTENT=1 and run inference (this run will take a long time, about 10 minutes, because it needs to regenerate the JIT code cache). After it finishes, you should see a regenerated C:\Users\<user name>\AppData\Roaming\libsycl_cache. With the cache in place, subsequent inference runs should have no warm-up. (Setting SYCL_CACHE_PERSISTENT=1 is still required.)

OK, I will try it, thank you very much.
If libsycl_cache exists, even after I finish the inference process, restart, and reload the model, is there no need for warm-up?


MeouSker77 commented on July 16, 2024

> If libsycl_cache exists, even after I finish the inference process, restart, and reload the model, is there no need for warm-up?

Yes.

