openbmb / ollama
This project forked from ollama/ollama
Get up and running with Llama 3, Mistral, Gemma, and other large language models.
Home Page: https://ollama.com
License: MIT License
The cost of trying this out is a bit high. Could you provide an installer package so we can try it?
Please do not use this issue area; use the main repo's issue area instead.
Please raise an issue in the main repo with the "ollama" label. I will respond ASAP.
FROM ./MiniCPM-V-2_5/model/ggml-model-Q4_K_M.gguf
FROM ./MiniCPM-V-2_5/mmproj-model-f16.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER num_keep 4
PARAMETER num_ctx 2048
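The TEMPLATE above is Go text/template syntax over the Llama 3 chat format. As an illustration only (this is not the code Ollama itself runs), a small Python function shows what the rendered prompt looks like for a given system message and user prompt:

```python
def render_prompt(system=None, prompt=None):
    """Mimic the Modelfile TEMPLATE: an optional system block, an optional
    user block, then the assistant header that the model completes."""
    out = ""
    if system:
        out += "<|start_header_id|>system<|end_header_id|>\n%s<|eot_id|>" % system
    if prompt:
        out += "<|start_header_id|>user<|end_header_id|>\n%s<|eot_id|>" % prompt
    out += "<|start_header_id|>assistant<|end_header_id|>\n"
    return out

print(render_prompt("You are a helpful assistant.", "Describe this image."))
```

The three PARAMETER stop strings match the special tokens this template emits, so generation halts cleanly at the end of the assistant turn.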
The model file used is the CPU gguf. Originally there was an error that files on an external filesystem could not be browsed or accessed; after swapping the file, running `run` fails with: Error: llama runner process has terminated: exit status 0xc0000409
OS: No response
GPU: No response
CPU: Intel
Ollama version: No response
I built the ollama binary on macOS from source by following step 3 ("Rebuild ./ollama binary file") of the instructions. The build works, but the model is not able to extract information correctly.
Result on macOS:
extract product information from this image, /Users/jack.wu/Desktop/breville1.png, provide product name, description, price and list price
Added image '/Users/jack.wu/Desktop/breville1.png'
Product: Nespresso Coffee Machine & Accessories
Description: The image shows a range of Nespresso coffee machines and accessories that include the machine, capsules, and possibly related products. These
items are likely displayed in an online retail environment.
Price: The price information is not visible in the image provided.
List Price: Similarly, there is no list price visible in the image.
do you see any number in the image?
Yes, there are numbers visible in the image. The number "4.2" is seen next to a star rating graphic, which indicates that this product has a 4.2-star rating
based on 240 ratings. Additionally, there is a price of "$699.95" displayed for one of the Nespresso products shown in the image.
is this a discount of the original price? what is the original price?
Yes, this appears to be a discount of the original price. The original list price is "$699.95," and there's a strike-through indicating that the
OS: macOS
GPU: Apple
CPU: Apple
Ollama version: No response
Deploying version 2.6 on a Mac: when running `go generate ./...`, even though openblas is already installed, it keeps failing with: ollama/llm/llama.cpp/ggml-blas.cpp:12:13: fatal error: 'cblas.h' file not found. No matter how I install it, it doesn't work. How can I fix this?
OS: macOS
GPU: Apple
CPU: Apple
Ollama version: No response
An exception occurs when running `go generate ./...`. Below is the actual output, including the dependencies used:
go generate ./...
Already on 'minicpm-v2.5'
Your branch is up to date with 'origin/minicpm-v2.5'.
Submodule path '../llama.cpp': checked out 'd8974b8ea61e1268a4cad27f4f6e2cde3c5d1370'
Checking for MinGW...
CommandType Name Version Source
Application gcc.exe 0.0.0.0 w64devkit\bin\gcc.exe
Application mingw32-make.exe 0.0.0.0 w64devkit\bin\mingw32-make.exe
Building static library
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64_static -G MinGW Makefiles -DCMAKE_C_COMPILER=gcc.exe -DCMAKE_CXX_COMPILER=g++.exe -DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_F16C=off -DLLAMA_FMA=off
cmake version 3.29.3
CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- The C compiler identification is GNU 14.1.0
-- The CXX compiler identification is GNU 14.1.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: xxx/w64devkit/bin/gcc.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: xxx/w64devkit/bin/g++.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: xxx/Git/bin/git.exe (found version "2.30.0.windows.2")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- x86 detected
-- Configuring done (3.7s)
-- Generating done (1.4s)
-- Build files have been written to: xxx/llm/build/windows/amd64_static
building with: cmake --build ../build/windows/amd64_static --config Release --target llama --target ggml
[ 16%] Building C object CMakeFiles/ggml.dir/ggml.c.obj
[ 16%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.obj
[ 33%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.obj
[ 50%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.obj
[ 50%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.obj
[ 50%] Built target ggml
[ 66%] Building CXX object CMakeFiles/llama.dir/llama.cpp.obj
xxx\llm\llama.cpp\llama.cpp: In constructor 'llama_mmap::llama_mmap(llama_file*, size_t, bool)':
xxx\llm\llama.cpp\llama.cpp:1428:38: warning: cast between incompatible function types from 'FARPROC' {aka 'long long int ()()'} to 'BOOL ()(HANDLE, ULONG_PTR, PWIN32_MEMORY_RANGE_ENTRY, ULONG)' {aka 'int ()(void, long long unsigned int, _WIN32_MEMORY_RANGE_ENTRY*, long unsigned int)'} [-Wcast-function-type]
1428 | pPrefetchVirtualMemory = reinterpret_cast<decltype(pPrefetchVirtualMemory)> (GetProcAddress(hKernel32, "PrefetchVirtualMemory"));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
xxx\llm\llama.cpp\llama.cpp: In function 'float* llama_get_logits_ith(llama_context*, int32_t)':
xxx\llm\llama.cpp\llama.cpp:17331:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector::size_type' {aka 'long long unsigned int'} [-Wformat=]
17331 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector::size_type {aka long long unsigned int}
| %llu
xxx\llm\llama.cpp\llama.cpp: In function 'float* llama_get_embeddings_ith(llama_context*, int32_t)':
xxx\llm\llama.cpp\llama.cpp:17376:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector::size_type' {aka 'long long unsigned int'} [-Wformat=]
17376 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector::size_type {aka long long unsigned int}
| %llu
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.obj
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.obj
[100%] Linking CXX static library libllama.a
[100%] Built target llama
[100%] Built target ggml
Building LCD CPU
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -A x64 -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -DLLAMA_SERVER_VERBOSE=off -DCMAKE_BUILD_TYPE=Release
cmake version 3.29.3
CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- Building for: Visual Studio 16 2019
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.26217.
-- The C compiler identification is MSVC 19.29.30148.0
-- The CXX compiler identification is MSVC 19.29.30148.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: xxx//Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: xxx//Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: xxx/Git/bin/git.exe (found version "2.30.0.windows.2")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- CMAKE_GENERATOR_PLATFORM: x64
-- x86 detected
-- Configuring done (9.7s)
-- Generating done (1.2s)
CMake Warning:
Manually-specified variables were not used by the project:
LLAMA_F16C
-- Build files have been written to: xxx/ollama/llm/build/windows/amd64/cpu
building with: cmake --build ../build/windows/amd64/cpu --config Release --target ollama_llama_server
Microsoft (R) Build Engine version 16.11.2+f32259642 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.
MSBUILD : error MSB1009: Project file does not exist.
Switch: ollama_llama_server.vcxproj
llm\generate\generate_windows.go:3: running "powershell": exit status 1
OS: Windows
GPU: Nvidia
CPU: Intel
OpenBMB/ollama/tree/minicpm-v2.5
I followed the instructions (https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5) and ran
./ollama run minicpm-v2.5
I got this error:
Error: llama runner process has terminated: signal: abort trap
OS: macOS
GPU: Apple
CPU: Apple
Ollama version: Latest git commit (367ec3f)
I created an Ollama model (for the fp16 GGUF) based on this: https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5
When testing one of my sample form images, I get bad or wrong results when running the model locally via Ollama.
./ollama run minicpm-v2.5
>>> How is the Ending Balance? ./credit-card-statement.jpg
Added image './credit-card-statement.jpg'
The Ending Balance is 8,010.
I get perfect, correct answers when using the same form image in the online demo: https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5
What can we do to get the same quality locally?
OS: macOS
GPU: Apple
CPU: Apple
Ollama version: Latest git commit (367ec3f)
Hi:
I compiled Ollama locally following the docs and symlinked the resulting ollama executable into /usr/local/bin.
Ollama starts normally, and the model imports normally.
I see the file examples/minicpm-v2.5/slice_token_for_ollama.raw in the project; where should I put this file?
During chat inference I get the following error:
llm_load_vocab: missing pre-tokenizer type, using: 'default'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab:
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: ************************************
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: CONSIDER REGENERATING THE MODEL
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: ************************************
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab:
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: special tokens definition check successful ( 256/128256 ).
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: format = GGUF V3 (latest)
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: arch = llama
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: vocab type = BPE
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_vocab = 128256
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_merges = 280147
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_ctx_train = 8192
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd = 4096
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_head = 32
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_head_kv = 8
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_layer = 32
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_rot = 128
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_head_k = 128
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_head_v = 128
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_gqa = 4
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_k_gqa = 1024
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_v_gqa = 1024
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_norm_eps = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_logit_scale = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_ff = 14336
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_expert = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_expert_used = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: causal attn = 1
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: pooling type = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: rope type = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: rope scaling = linear
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: freq_base_train = 500000.0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: freq_scale_train = 1
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_yarn_orig_ctx = 8192
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: rope_finetuned = unknown
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_d_conv = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_d_inner = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_d_state = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_dt_rank = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model type = 8B
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model ftype = Q4_0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model params = 8.03 B
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: general.name = model
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: UNK token = 128002 ''
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: PAD token = 0 '!'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: LF token = 128 'Ä'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_tensors: ggml ctx size = 0.30 MiB
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: offloading 13 repeating layers to GPU
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: offloaded 13/33 layers to GPU
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: CPU buffer size = 4437.80 MiB
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: CUDA0 buffer size = 1521.41 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: .......................................................................................
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: n_ctx = 2048
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: n_batch = 512
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: n_ubatch = 512
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: flash_attn = 0
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: freq_base = 500000.0
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: freq_scale = 1
May 30 14:34:55 wbs-desktop ollama[651389]: llama_kv_cache_init: CUDA_Host KV buffer size = 152.00 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_kv_cache_init: CUDA0 KV buffer size = 104.00 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: CUDA_Host output buffer size = 0.50 MiB
May 30 14:34:55 wbs-desktop ollama[656763]: [1717050895] warming up the model with an empty run
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: CUDA0 compute buffer size = 677.48 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: graph nodes = 1030
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: graph splits = 213
May 30 14:34:56 wbs-desktop ollama[656763]: INFO [main] model loaded | tid="139943102652416" timestamp=1717050896
May 30 14:34:56 wbs-desktop ollama[651389]: time=2024-05-30T14:34:56.488+08:00 level=INFO source=server.go:567 msg="llama runner started in 6.06 seconds"
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896] slice_image: multiple 1
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896]
May 30 14:34:56 wbs-desktop ollama[656763]: encode_image_with_clip: image encoded in 9.25 ms by clip_image_preprocess.
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896]
May 30 14:34:56 wbs-desktop ollama[656763]: encode_image_with_clip: mm_patch_merge_type is flat.
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896] clip_image_build_graph: ctx->buf_compute_meta.size(): 884880
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896] clip_image_build_graph: load_image_size: 462 434
May 30 14:34:57 wbs-desktop ollama[656763]: [1717050897] encode_image_with_clip: image embedding created: 96 tokens
May 30 14:34:57 wbs-desktop ollama[656763]: [1717050897]
May 30 14:34:57 wbs-desktop ollama[656763]: encode_image_with_clip: image encoded in 1025.53 ms by CLIP ( 10.68 ms per image patch)
May 30 14:34:57 wbs-desktop ollama[656763]: [1717050897] llava_image_embed_make_with_clip_img_ollama: can't read file ./examples/minicpm-v2.5/slice_token_for_ollama.raw
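The path in the failing log line above is relative (`./examples/...`), so it is resolved against the server process's working directory, not the repository or model directory. A quick stdlib sketch (the cwd shown is hypothetical) illustrates where such a path lands:

```python
import os

def resolve_relative(path, cwd):
    """Show where a cwd-relative path, like the one in the log, resolves."""
    return os.path.normpath(os.path.join(cwd, path))

# For example, if `ollama serve` was started from /home/user:
print(resolve_relative("./examples/minicpm-v2.5/slice_token_for_ollama.raw",
                       "/home/user"))
# -> /home/user/examples/minicpm-v2.5/slice_token_for_ollama.raw
```

This suggests placing slice_token_for_ollama.raw under an examples/minicpm-v2.5/ directory relative to wherever the server was launched; that reading is inferred from the log, not from documentation.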
OS: Linux
GPU: Nvidia
CPU: Intel
Ollama version: No response
go generate ./...
Already on 'minicpm-v2.5'
Your branch is up to date with 'origin/minicpm-v2.5'.
Submodule path '../llama.cpp': checked out 'd8974b8ea61e1268a4cad27f4f6e2cde3c5d1370'
Checking for MinGW...
CommandType Name Version Source
Application gcc.exe 0.0.0.0 C:\w64devkit\bin\gcc.exe
Application mingw32-make.exe 0.0.0.0 C:\w64devkit\bin\mingw32-make.exe
Building static library
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64_static -G MinGW Makefiles -DCMAKE_C_COMPILER=gcc.exe -DCMAKE_CXX_COMPILER=g++.exe -DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_F16C=off -DLLAMA_FMA=off
cmake version 3.29.4
CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- The C compiler identification is GNU 13.2.0
-- The CXX compiler identification is GNU 13.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/w64devkit/bin/gcc.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/w64devkit/bin/g++.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.45.1.windows.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- x86 detected
-- Configuring done (1.7s)
-- Generating done (0.7s)
-- Build files have been written to: D:/projects/ollama/llm/build/windows/amd64_static
building with: cmake --build ../build/windows/amd64_static --config Release --target llama --target ggml
[ 16%] Building C object CMakeFiles/ggml.dir/ggml.c.obj
[ 16%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.obj
[ 33%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.obj
[ 50%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.obj
[ 50%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.obj
[ 50%] Built target ggml
[ 66%] Building CXX object CMakeFiles/llama.dir/llama.cpp.obj
D:\projects\ollama\llm\llama.cpp\llama.cpp: In constructor 'llama_mmap::llama_mmap(llama_file*, size_t, bool)':
D:\projects\ollama\llm\llama.cpp\llama.cpp:1428:38: warning: cast between incompatible function types from 'FARPROC' {aka 'long long int ()()'} to 'BOOL ()(HANDLE, ULONG_PTR, PWIN32_MEMORY_RANGE_ENTRY, ULONG)' {aka 'int ()(void, long long unsigned int, _WIN32_MEMORY_RANGE_ENTRY*, long unsigned int)'} [-Wcast-function-type]
1428 | pPrefetchVirtualMemory = reinterpret_cast<decltype(pPrefetchVirtualMemory)> (GetProcAddress(hKernel32, "PrefetchVirtualMemory"));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
D:\projects\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_logits_ith(llama_context*, int32_t)':
D:\projects\ollama\llm\llama.cpp\llama.cpp:17331:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector::size_type' {aka 'long long unsigned int'} [-Wformat=]
17331 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector::size_type {aka long long unsigned int}
| %llu
D:\projects\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_embeddings_ith(llama_context*, int32_t)':
D:\projects\ollama\llm\llama.cpp\llama.cpp:17376:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector::size_type' {aka 'long long unsigned int'} [-Wformat=]
17376 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector::size_type {aka long long unsigned int}
| %llu
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.obj
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.obj
[100%] Linking CXX static library libllama.a
[100%] Built target llama
[100%] Built target ggml
Building LCD CPU
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -A x64 -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -DLLAMA_SERVER_VERBOSE=off -DCMAKE_BUILD_TYPE=Release
cmake version 3.29.4
CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.22631.
-- The C compiler identification is MSVC 19.40.33811.0
-- The CXX compiler identification is MSVC 19.40.33811.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.40.33807/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.40.33807/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.45.1.windows.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- CMAKE_GENERATOR_PLATFORM: x64
-- x86 detected
-- Configuring done (3.8s)
-- Generating done (0.5s)
CMake Warning:
Manually-specified variables were not used by the project:
LLAMA_F16C
-- Build files have been written to: D:/projects/ollama/llm/build/windows/amd64/cpu
building with: cmake --build ../build/windows/amd64/cpu --config Release --target ollama_llama_server
MSBuild version 17.10.4+10fbfbf2e for .NET Framework
MSBUILD : error MSB1009: Project file does not exist.
Switch: ollama_llama_server.vcxproj
llm\generate\generate_windows.go:3: running "powershell": exit status 1
OS: Windows
GPU: Nvidia
CPU: Intel
Ollama version: No response
As the title says: having to set up a build environment just to try it out is a real cost. Could you release installer packages for a few common platforms?
Hi,
Would you like to create a model with visual function-calling abilities (such as cropping images, indexing items, and so on) beyond visual QA?
Thanks!
How can I make MiniCPM-V 2.6 in ollama support video understanding? Thanks in advance!
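Ollama has no native video input. One workaround (an assumption, not a feature of this fork) is to sample frames from the video and attach them as multiple base64-encoded images in a single request to the `/api/generate` endpoint, whose `images` field is part of the documented Ollama API. A sketch of building such a request body with the standard library (the model name and frame bytes are placeholders):

```python
import base64
import json

def build_video_request(model, prompt, frames):
    """Build an /api/generate JSON body carrying sampled video frames
    as base64-encoded images in the 'images' field."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(f).decode("ascii") for f in frames],
        "stream": False,
    })

# Hypothetical usage; real frames would come from a decoder such as ffmpeg.
body = build_video_request("minicpm-v2.6",
                           "Describe what happens across these frames.",
                           [b"<frame-1-bytes>", b"<frame-2-bytes>"])
```

How well this approximates video understanding depends on how many frames the model's context can hold; it is frame-by-frame image QA, not true temporal modeling.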
I checked out the git branch per the docs and built it successfully, then created the model from a Modelfile. When chatting, it fails with: Error: an unknown error was encountered while running the model
Below is the error log from the serve side:
2024/05/27 23:22:34 routes.go:1028: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-05-27T23:22:34.090+08:00 level=INFO source=images.go:729 msg="total blobs: 8"
time=2024-05-27T23:22:34.090+08:00 level=INFO source=images.go:736 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullModelHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateModelHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushModelHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyModelHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteModelHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).ProcessHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-05-27T23:22:34.091+08:00 level=INFO source=routes.go:1074 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-05-27T23:22:34.091+08:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3477894997/runners
time=2024-05-27T23:22:34.166+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2]"
time=2024-05-27T23:22:35.854+08:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-a93d53d5-add0-d73c-9800-83ba35515332 library=cuda compute=8.6 driver=12.4 name="NVIDIA GeForce RTX 3060 Ti" total="8.0 GiB" available="7.0 GiB"
[GIN] 2024/05/27 - 23:22:40 | 200 | 5.063803ms | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/27 - 23:22:40 | 200 | 2.428835ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/05/27 - 23:22:50 | 200 | 33.81µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/27 - 23:22:50 | 200 | 3.816911ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/27 - 23:22:50 | 200 | 223.798µs | 127.0.0.1 | POST "/api/show"
time=2024-05-27T23:22:52.868+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="7.0 GiB" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="256.0 MiB" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-05-27T23:22:52.868+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="7.0 GiB" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="256.0 MiB" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-05-27T23:22:52.869+08:00 level=INFO source=server.go:338 msg="starting llama server" cmd="/tmp/ollama3477894997/runners/cpu_avx2/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-010ec3ba94cb5ad2d9c8f95f46f01c6d80f83deab9df0a0831334ea45afff3e2 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 1 --port 60783"
time=2024-05-27T23:22:52.869+08:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
WARN [server_params_parse] Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support | n_gpu_layers=-1 tid="139932059617152" timestamp=1716823372
INFO [main] build info | build=2994 commit="8541e996" tid="139932059617152" timestamp=1716823372
time=2024-05-27T23:22:52.871+08:00 level=INFO source=server.go:525 msg="waiting for llama runner to start responding"
INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139932059617152" timestamp=1716823372 total_threads=12
time=2024-05-27T23:22:52.874+08:00 level=INFO source=server.go:562 msg="waiting for server to become available" status="llm server error"
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="60783" tid="139932059617152" timestamp=1716823372
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-010ec3ba94cb5ad2d9c8f95f46f01c6d80f83deab9df0a0831334ea45afff3e2 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = model
llama_model_loader: - kv 2: llama.vocab_size u32 = 128256
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.block_count u32 = 32
llama_model_loader: - kv 6: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 7: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 8: llama.attention.head_count u32 = 32
llama_model_loader: - kv 9: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 10: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: general.file_type u32 = 15
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.scores arr[f32,128256] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 128002
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
time=2024-05-27T23:22:53.126+08:00 level=INFO source=server.go:562 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
llm_load_vocab:
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.58 GiB (4.89 BPW)
llm_load_print_meta: general.name = model
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: UNK token = 128002 ''
llm_load_print_meta: PAD token = 0 '!'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_tensors: ggml ctx size = 0.15 MiB
llm_load_tensors: CPU buffer size = 4685.30 MiB
........................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.50 MiB
llama_new_context_with_model: CPU compute buffer size = 258.50 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="139932059617152" timestamp=1716823373
time=2024-05-27T23:22:53.629+08:00 level=INFO source=server.go:567 msg="llama runner started in 0.76 seconds"
[GIN] 2024/05/27 - 23:22:53 | 200 | 2.832875282s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/27 - 23:22:55 | 200 | 98.939441ms | 127.0.0.1 | POST "/api/chat"
time=2024-05-27T23:28:01.188+08:00 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=6.156460594
time=2024-05-27T23:28:02.666+08:00 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=7.634409778
time=2024-05-27T23:28:04.142+08:00 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=9.11041182
[GIN] 2024/05/27 - 23:29:13 | 200 | 6.560773ms | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/27 - 23:29:13 | 200 | 23.035463ms | 127.0.0.1 | GET "/api/tags"
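The derived values in the load log above are internally consistent with the header fields; for instance, the 256 MiB reported by `llama_kv_cache_init` follows from n_ctx, n_layer, and the grouped-query-attention head sizes. A minimal sanity check in Python, using values read off the log (an illustration only, not llama.cpp's actual code):

```python
# Values taken from the llm_load_print_meta / llama_new_context_with_model lines above.
n_ctx = 2048          # llama_new_context_with_model: n_ctx
n_layer = 32          # llm_load_print_meta: n_layer
n_head = 32           # llm_load_print_meta: n_head
n_head_kv = 8         # llm_load_print_meta: n_head_kv
n_embd_head_k = 128   # llm_load_print_meta: n_embd_head_k

# Grouped-query attention ratio and per-layer KV width.
n_gqa = n_head // n_head_kv                # matches "n_gqa = 4"
n_embd_k_gqa = n_embd_head_k * n_head_kv   # matches "n_embd_k_gqa = 1024"

# KV cache in f16: K and V tensors, 2 bytes per element.
kv_bytes = n_ctx * n_layer * n_embd_k_gqa * 2 * 2
print(kv_bytes / (1024 * 1024))  # 256.0, matches "KV self size = 256.00 MiB"
```

This also explains why halving `num_ctx` in the Modelfile halves the KV cache footprint: the cache size scales linearly with the context length.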
WSL2
Nvidia
AMD
0.0.0
docker exec -it ollama ollama run minicpm-v2.5
Error: llama runner process has terminated: signal: aborted (core dumped)
Docker
No response
Intel
0.1.43
talos:~/code/models/MiniCPM-Llama3-V-2_5-int4$ ollama run minicpm-llama3-v2.5-q6_k:latest
>>> Please describe this image: ./a3.jpg
Added image './a3.jpg'
>>>
Error: no slots available after 10 retries
After submitting the image there is a very long wait, and it eventually fails with the error above. This problem bothered me for a long time. But I then noticed that running ./ollama serve
from the cloned OpenBMB/ollama directory reads the image and produces output normally.
After countless retries, I found that the ollama command must be run from the directory where you built the new binary, otherwise it cannot read images correctly. That confirmed the cause, but what if we start Ollama as a system service?
Searching with Kimi for "how to make an Ubuntu service enter a specified directory before running a command" gives:
Set WorkingDirectory:
In the service unit file, add or modify the [Service] section to include a WorkingDirectory directive followed by the path of the directory you want the service to start in.
OK, my ollama.service configuration file ends up as follows:
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
# This is the directory where you built the new ollama binary
WorkingDirectory=/usr/local/ollama
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment="OLLAMA_HOST=0.0.0.0"
# Works around abnormal Qwen2 output (broken answers, repeated characters)
Environment="OLLAMA_FLASH_ATTENTION=1"
[Install]
WantedBy=default.target
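With the unit file updated, the change takes effect after reloading systemd and restarting the service (standard systemd commands; the service name `ollama` matches the unit above):

```shell
# Re-read unit files after editing /etc/systemd/system/ollama.service,
# then restart the service and confirm it is running from the new WorkingDirectory.
sudo systemctl daemon-reload
sudo systemctl restart ollama
systemctl status ollama --no-pager
```

Note that systemd only recognizes `#` comments at the start of a line, so keep any comments in the unit file on their own lines rather than after a directive's value.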
Hope this helps other developers who run into this problem~~~
ubuntu 22.04
No response
No response
Latest version
Building on Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-169-generic x86_64):
go generate ./...
seems to work, but go build .
gives:
# github.com/ollama/ollama/gpu
gpu/amd_linux.go:162:19: undefined: RocmComputeMin
gpu/amd_linux.go:232:20: undefined: IGPUMemLimit
gpu/amd_linux.go:253:19: undefined: rocmMinimumMemory
nvcc -V
gives:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:49:14_PDT_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0
Attaching full terminal output in a txt file:
go generate ./...
go build .
Linux
Nvidia
Intel
No response