
Comments (8)

shwu-nyunai commented on May 23, 2024

I have resolved the issues using the following set of install scripts:
https://github.com/nyunAI/Faster-LLM-Survey/tree/A100TGIv2.0.1/scripts

Usually, if you have the required versions of cmake, libkineto, protobuf & rust installed, you can directly run

  1. scripts/install-tgi.sh, then
  2. scripts/parallel-install-extensions.sh (this installs all extensions in parallel - flash-attn, flash-attn-v2-cuda, vllm-cuda, exllamav2_kernels, etc.)

Use the other scripts in the directory as required.
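The usual flow could be sketched as follows (the clone command and tag name are inferred from the URLs above, so treat them as assumptions):

```shell
# Sketch: fetch the survey repo at the A100TGIv2.0.1 tag and run the
# install scripts (assumes cmake, libkineto, protobuf & rust are present)
git clone --branch A100TGIv2.0.1 https://github.com/nyunAI/Faster-LLM-Survey.git
cd Faster-LLM-Survey
bash scripts/install-tgi.sh
bash scripts/parallel-install-extensions.sh
```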

For other system and driver details, see https://github.com/nyunAI/Faster-LLM-Survey/blob/A100TGIv2.0.1/experiment_details.txt

P.S. The maintainer can close this; leaving it open for anyone facing a similar issue.

from text-generation-inference.

Semihal commented on May 23, 2024

Build and install the rotary and layer_norm extensions from https://github.com/Dao-AILab/flash-attention/tree/main/csrc.
This worked for me.
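A sketch of that build (assumes pip, nvcc, and a matching PyTorch are already installed; the subdirectory names come from the flash-attention repo's csrc tree):

```shell
# Build the rotary and layer_norm CUDA extensions from source
git clone https://github.com/Dao-AILab/flash-attention.git
pip install ./flash-attention/csrc/rotary
pip install ./flash-attention/csrc/layer_norm
```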


shuaills commented on May 23, 2024

You need to re-install vllm and flash-attention-v2:

```shell
cd text-generation-inference/server
rm -rf vllm
make install-vllm-cuda

rm -rf flash-attention-v2
make install-flash-attention-v2-cuda
```

They forgot to add this to the release notes about local installs.
#1738
I tried this and it solved my problem.


shwu-nyunai commented on May 23, 2024

I have been installing all of the extensions via those commands for 2 days now.
I also tried using the release v2.0.1 code zip.
Let me try this once more with a clean installation.


shuaills commented on May 23, 2024

> I have been installing all of the extensions via those commands for 2 days now; I also tried using the release v2.0.1 code zip; let me try this once more with a clean installation

I feel you; I did exactly the same, installing and deleting about 4 times.


boxiaowave commented on May 23, 2024

> I have been installing all of the extensions via those commands for 2 days now; I also tried using the release v2.0.1 code zip; let me try this once more with a clean installation

You can follow the steps in the Dockerfile: after compiling flash-attn with 'make install-flash..', the script moves the compiled files into Python's site-packages folder, like so:

```shell
cp -r /text-generation-inference/server/flash-attention-v2/build/lib.linux-x86_64-cpython-39/* /usr/local/lib/python3.10/site-packages/
```
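Note the command above copies a cpython-39 build directory into a python3.10 site-packages. A version-agnostic variant (a sketch; the build path is taken from the comment above and may differ on your machine) asks the active interpreter where its site-packages lives instead of hard-coding it:

```shell
# Resolve the active interpreter's site-packages directory
SITE_PKGS=$(python3 -c "import site; print(site.getsitepackages()[0])")
# Copy the compiled extension files there, matching whatever cpython ABI was built
cp -r /text-generation-inference/server/flash-attention-v2/build/lib.linux-x86_64-cpython-*/* "$SITE_PKGS"/
```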


for-just-we commented on May 23, 2024

> have resolved the issues using the following set of install-scripts; https://github.com/nyunAI/Faster-LLM-Survey/tree/A100TGIv2.0.1/scripts
>
> Usually, if you have the required versions of cmake, libkineto, protobuf & rust installed you can directly run
>
> 1. [scripts/install-tgi.sh](https://github.com/nyunAI/Faster-LLM-Survey/blob/A100TGIv2.0.1/scripts/install-tgi.sh), then
> 2. [scripts/parallel-install-extensions.sh](https://github.com/nyunAI/Faster-LLM-Survey/blob/A100TGIv2.0.1/scripts/parallel-install-extensions.sh) (this installs all extensions in parallel - flash-attn, flash-attn-v2-cuda, vllm-cuda, exllamav2_kernels, etc.)
>
> use other scripts in the directory as required.
>
> for other system and driver details see - https://github.com/nyunAI/Faster-LLM-Survey/blob/A100TGIv2.0.1/experiment_details.txt
>
> ps. maintainer can close this. leaving open for anyone facing a similar issue.

When installing vllm for TGI-2.0.1, I came across:

```
error: triton 2.3.0 is installed but triton==2.1.0 is required by {'torch'}
make: *** [Makefile-vllm:12: install-vllm-cuda] Error 1
```

Is this because I'm using the wrong vllm version? I didn't modify anything in the Makefile-* scripts.


shwu-nyunai commented on May 23, 2024

Your PyTorch version might be different. I faced this issue for the same reason: my PyTorch version was higher than torch==2.1.0, and hence the default triton that was installed was 2.2.0 (as far as I recall).
Nonetheless, use a fresh virtual env (maybe conda) and

install torch==2.1.0, or use install-tgi.sh.
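A minimal sketch of that fix (the env name is arbitrary; pinning torch to 2.1.0 pulls in triton==2.1.0 as its declared dependency, which is exactly what the error above complains about):

```shell
# Fresh environment so stale triton/torch wheels don't interfere
conda create -n tgi-2.0.1 python=3.10 -y
conda activate tgi-2.0.1
# torch 2.1.0 declares triton==2.1.0 as a dependency, resolving the conflict
pip install torch==2.1.0
```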

