
Comments (4)

tqchen commented on June 3, 2024

Do you mind trying out the Python API (https://llm.mlc.ai/docs/deploy/python_engine.html) and providing a reproducible script that triggers this error?

aaaaaad333 commented on June 3, 2024

Thank you. This script causes the error every time it's run:

from mlc_llm import MLCEngine

# Create engine
model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"
engine = MLCEngine(model)

# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": """What a profound and timeless question!

The meaning of life is a topic that has puzzled philosophers, theologians, and scientists for centuries. While there may not be a definitive answer, I can offer some perspectives and insights that might be helpful.

One approach is to consider the concept of purpose. What gives your life significance? What are your values, passions, and goals? For many people, finding meaning and purpose in life involves pursuing their values and interests, building meaningful relationships, and making a positive impact on the world.

Another perspective is to look at the human experience as a whole. We are social creatures, and our lives are intertwined with those of others. We have a natural desire for connection, community, and belonging. We also have a need for self-expression, creativity, and personal growth. These aspects of human nature can be seen as fundamental to our existence and provide a sense of meaning.

Some people find meaning in their lives through spirituality or religion. They may believe that their existence has a higher purpose, and that their experiences and challenges are part of a larger plan.

Others may find meaning through their work, hobbies, or activities that bring them joy and fulfillment. They may believe that their existence has a purpose because they are contributing to the greater good, making a positive impact, or leaving a lasting legacy.

Ultimately, the meaning of life is a highly personal and subjective concept. It can be influenced by our experiences, values, and perspectives. While there may not be a single, definitive answer, exploring these questions and reflecting on our own experiences can help us discover our own sense of purpose and meaning.

What are your thoughts on the meaning of life? What gives your life significance?
"""}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print("\n")

engine.terminate()

This is the full log:

(envformlc) 11:04:58 username@hostname:~/mlc$ python ./mlc.py 
[2024-05-12 11:09:23] INFO auto_device.py:88: Not found device: cuda:0
[2024-05-12 11:09:25] INFO auto_device.py:88: Not found device: rocm:0
[2024-05-12 11:09:26] INFO auto_device.py:88: Not found device: metal:0
[2024-05-12 11:09:28] INFO auto_device.py:79: Found device: vulkan:0
[2024-05-12 11:09:28] INFO auto_device.py:79: Found device: vulkan:1
[2024-05-12 11:09:30] INFO auto_device.py:88: Not found device: opencl:0
[2024-05-12 11:09:30] INFO auto_device.py:35: Using device: vulkan:0
[2024-05-12 11:09:30] INFO chat_module.py:362: Downloading model from HuggingFace: HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC
[2024-05-12 11:09:30] INFO download.py:133: Weights already downloaded: /home/username/.cache/mlc_llm/model_weights/mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC
[2024-05-12 11:09:30] INFO jit.py:43: MLC_JIT_POLICY = ON. Can be one of: ON, OFF, REDO, READONLY
[2024-05-12 11:09:30] INFO jit.py:160: Using cached model lib: /home/username/.cache/mlc_llm/model_lib/dc91913de42964b1f58e63f0d45a691e.so
[2024-05-12 11:09:30] INFO engine_base.py:124: The selected engine mode is local. We choose small max batch size and KV cache capacity to use less GPU memory.
[2024-05-12 11:09:30] INFO engine_base.py:149: If you don't have concurrent requests and only use the engine interactively, please select mode "interactive".
[2024-05-12 11:09:30] INFO engine_base.py:154: If you have high concurrent requests and want to maximize the GPU memory utilization, please select mode "server".
[11:09:30] /workspace/mlc-llm/cpp/serve/config.cc:601: Under mode "local", max batch size will be set to 4, max KV cache token capacity will be set to 8192, prefill chunk size will be set to 1024. 
[11:09:30] /workspace/mlc-llm/cpp/serve/config.cc:601: Under mode "interactive", max batch size will be set to 1, max KV cache token capacity will be set to 8192, prefill chunk size will be set to 1024. 
[11:09:30] /workspace/mlc-llm/cpp/serve/config.cc:601: Under mode "server", max batch size will be set to 80, max KV cache token capacity will be set to 41512, prefill chunk size will be set to 1024. 
[11:09:30] /workspace/mlc-llm/cpp/serve/config.cc:678: The actual engine mode is "local". So max batch size is 4, max KV cache token capacity is 8192, prefill chunk size is 1024.
[11:09:30] /workspace/mlc-llm/cpp/serve/config.cc:683: Estimated total single GPU memory usage: 5736.325 MB (Parameters: 4308.133 MB. KVCache: 1092.268 MB. Temporary buffer: 335.925 MB). The actual usage might be slightly larger than the estimated number.
Exception in thread Thread-1 (_background_loop):
Traceback (most recent call last):
  File "/home/username/miniconda3/envs/envformlc/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/home/username/miniconda3/envs/envformlc/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/home/username/miniconda3/envs/envformlc/lib/python3.12/site-packages/mlc_llm/serve/engine_base.py", line 482, in _background_loop
    self._ffi["run_background_loop"]()
  File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.__call__
  File "tvm/_ffi/_cython/./packed_func.pxi", line 263, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./packed_func.pxi", line 252, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL
  File "/home/username/miniconda3/envs/envformlc/lib/python3.12/site-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
  File "/workspace/mlc-llm/cpp/serve/threaded_engine.cc", line 168, in mlc::llm::serve::ThreadedEngineImpl::RunBackgroundLoop()
  File "/workspace/mlc-llm/cpp/serve/engine.cc", line 328, in mlc::llm::serve::EngineImpl::Step()
  File "/workspace/mlc-llm/cpp/serve/engine_actions/new_request_prefill.cc", line 233, in mlc::llm::serve::NewRequestPrefillActionObj::Step(mlc::llm::serve::EngineState)
  File "/workspace/mlc-llm/cpp/serve/sampler/cpu_sampler.cc", line 344, in mlc::llm::serve::CPUSampler::BatchRenormalizeProbsByTopP(tvm::runtime::NDArray, std::vector<int, std::allocator<int> > const&, tvm::runtime::Array<tvm::runtime::String, void> const&, tvm::runtime::Array<mlc::llm::serve::GenerationConfig, void> const&)
  File "/workspace/mlc-llm/cpp/serve/sampler/cpu_sampler.cc", line 560, in mlc::llm::serve::CPUSampler::CopyProbsToCPU(tvm::runtime::NDArray)
  File "/workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/ndarray.h", line 405, in tvm::runtime::NDArray::CopyFrom(tvm::runtime::NDArray const&)
tvm.error.InternalError: Traceback (most recent call last):
  10: mlc::llm::serve::ThreadedEngineImpl::RunBackgroundLoop()
        at /workspace/mlc-llm/cpp/serve/threaded_engine.cc:168
  9: mlc::llm::serve::EngineImpl::Step()
        at /workspace/mlc-llm/cpp/serve/engine.cc:328
  8: mlc::llm::serve::NewRequestPrefillActionObj::Step(mlc::llm::serve::EngineState)
        at /workspace/mlc-llm/cpp/serve/engine_actions/new_request_prefill.cc:233
  7: mlc::llm::serve::CPUSampler::BatchRenormalizeProbsByTopP(tvm::runtime::NDArray, std::vector<int, std::allocator<int> > const&, tvm::runtime::Array<tvm::runtime::String, void> const&, tvm::runtime::Array<mlc::llm::serve::GenerationConfig, void> const&)
        at /workspace/mlc-llm/cpp/serve/sampler/cpu_sampler.cc:344
  6: mlc::llm::serve::CPUSampler::CopyProbsToCPU(tvm::runtime::NDArray)
        at /workspace/mlc-llm/cpp/serve/sampler/cpu_sampler.cc:560
  5: tvm::runtime::NDArray::CopyFrom(tvm::runtime::NDArray const&)
        at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/ndarray.h:405
  4: tvm::runtime::NDArray::CopyFromTo(DLTensor const*, DLTensor*, void*)
  3: tvm::runtime::DeviceAPI::CopyDataFromTo(DLTensor*, DLTensor*, void*)
  2: tvm::runtime::vulkan::VulkanDeviceAPI::CopyDataFromTo(void const*, unsigned long, void*, unsigned long, unsigned long, DLDevice, DLDevice, DLDataType, void*)
  1: tvm::runtime::vulkan::VulkanStream::Synchronize()
  0: _ZN3tvm7runtime6deta
  File "/workspace/tvm/src/runtime/vulkan/vulkan_stream.cc", line 155
InternalError: Check failed: (res == VK_SUCCESS) is false: Vulkan Error, code=-4: VK_ERROR_DEVICE_LOST
^CTraceback (most recent call last):
  File "/home/username/mlc/./mlc.py", line 8, in <module>
    for response in engine.chat.completions.create(
  File "/home/username/miniconda3/envs/envformlc/lib/python3.12/site-packages/mlc_llm/serve/engine.py", line 1735, in _handle_chat_completion
    for delta_outputs in self._generate(prompts, generation_cfg, request_id):  # type: ignore
  File "/home/username/miniconda3/envs/envformlc/lib/python3.12/site-packages/mlc_llm/serve/engine.py", line 1858, in _generate
    delta_outputs = self.state.sync_output_queue.get()
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/username/miniconda3/envs/envformlc/lib/python3.12/queue.py", line 171, in get
    self.not_empty.wait()
  File "/home/username/miniconda3/envs/envformlc/lib/python3.12/threading.py", line 355, in wait
    waiter.acquire()
KeyboardInterrupt
^CException ignored in: <module 'threading' from '/home/username/miniconda3/envs/envformlc/lib/python3.12/threading.py'>
Traceback (most recent call last):
  File "/home/username/miniconda3/envs/envformlc/lib/python3.12/threading.py", line 1622, in _shutdown
    lock.acquire()
KeyboardInterrupt: 
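
As an aside, the engine log itself suggests mode "interactive" for single-user, non-concurrent use. Below is a minimal sketch of the same repro with the device and mode pinned explicitly; it assumes MLCEngine accepts the device and mode keyword arguments described in the Python API docs, and is not a confirmed fix:

from mlc_llm import MLCEngine

# Sketch: pin the auto-detected Vulkan device and use "interactive" mode
# (max batch size 1), as the engine log above recommends for interactive use.
# Assumes MLCEngine's `device` and `mode` keyword arguments.
model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"
engine = MLCEngine(model, device="vulkan:0", mode="interactive")

for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "Hello"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print()

engine.terminate()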

tqchen commented on June 3, 2024

Thank you. Do you also mind commenting on the GPU you have and its VRAM size?

aaaaaad333 commented on June 3, 2024

I don't have a separate GPU; I'm using a Celeron 5105 with an Intel UHD Graphics 24EU Mobile, which has no VRAM of its own.

lspci -v | grep VGA -A 12
00:02.0 VGA compatible controller: Intel Corporation JasperLake [UHD Graphics] (rev 01) (prog-if 00 [VGA controller])
	DeviceName: Onboard - Video
	Subsystem: Intel Corporation JasperLake [UHD Graphics]
	Flags: bus master, fast devsel, latency 0, IRQ 129
	Memory at 6000000000 (64-bit, non-prefetchable) [size=16M]
	Memory at 4000000000 (64-bit, prefetchable) [size=256M]
	I/O ports at 5000 [size=64]
	Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]
	Capabilities: <access denied>
	Kernel driver in use: i915
	Kernel modules: i915
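
For a quick cross-check of what the TVM runtime bundled with mlc_llm reports for that iGPU, here is a sketch using standard tvm.runtime.Device properties (device index 0 matches the vulkan:0 the engine auto-selected in the log):

import tvm

# Sketch: inspect the Vulkan device the engine auto-selected (vulkan:0).
dev = tvm.vulkan(0)
print(dev.exist)                        # True if the Vulkan runtime can use the device
print(dev.device_name)                  # reported adapter name, e.g. the JasperLake iGPU
print(dev.max_shared_memory_per_block)  # per-workgroup shared memory limit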
