Comments (3)
Can you be more specific about the issue you're actually facing?
I tested Windows builds from MSYS2 environments, CLANG64 and UCRT64, and I do not see any immediate issues.
przemoc@NUC11PHKi7C002 CLANG64 /d/git/github.com/ggerganov/whisper.cpp
$ rm -rf build && cmake -B build && cmake --build build -j $(nproc)
...
przemoc@NUC11PHKi7C002 CLANG64 /d/git/github.com/ggerganov/whisper.cpp
$ ./build/bin/main.exe -m models/ggml-large-v3.bin -f samples/jfk.wav
whisper_init_from_file_with_params_no_state: loading model from 'models/ggml-large-v3.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51866
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1280
whisper_model_load: n_text_head = 20
whisper_model_load: n_text_layer = 32
whisper_model_load: n_mels = 128
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs = 100
whisper_model_load: CPU total size = 3094.36 MB
whisper_model_load: model size = 3094.36 MB
whisper_init_state: kv self size = 220.20 MB
whisper_init_state: kv cross size = 245.76 MB
whisper_init_state: compute buffer (conv) = 36.26 MB
whisper_init_state: compute buffer (encode) = 926.66 MB
whisper_init_state: compute buffer (cross) = 9.38 MB
whisper_init_state: compute buffer (decode) = 209.26 MB
system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 0 | COREML = 0 | OPENVINO = 0
main: processing 'samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, 5 beams + best of 5, lang = en, task = transcribe, timestamps = 1 ...
[00:00:00.300 --> 00:00:09.000] And so, my fellow Americans, ask not what your country can do for you, ask what you
[00:00:09.000 --> 00:00:11.000] can do for your country.
whisper_print_timings: load time = 1932.32 ms
whisper_print_timings: fallbacks = 0 p / 0 h
whisper_print_timings: mel time = 13.86 ms
whisper_print_timings: sample time = 95.37 ms / 147 runs ( 0.65 ms per run)
whisper_print_timings: encode time = 32488.18 ms / 1 runs (32488.18 ms per run)
whisper_print_timings: decode time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: batchd time = 4972.74 ms / 145 runs ( 34.29 ms per run)
whisper_print_timings: prompt time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: total time = 39508.21 ms
przemoc@NUC11PHKi7C002 UCRT64 /d/git/github.com/ggerganov/whisper.cpp
$ rm -rf build && cmake -B build && cmake --build build -j $(nproc)
...
przemoc@NUC11PHKi7C002 UCRT64 /d/git/github.com/ggerganov/whisper.cpp
$ ./build/bin/main.exe -m models/ggml-large-v3.bin -f samples/jfk.wav
whisper_init_from_file_with_params_no_state: loading model from 'models/ggml-large-v3.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51866
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1280
whisper_model_load: n_text_head = 20
whisper_model_load: n_text_layer = 32
whisper_model_load: n_mels = 128
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs = 100
whisper_model_load: CPU total size = 3094.36 MB
whisper_model_load: model size = 3094.36 MB
whisper_init_state: kv self size = 220.20 MB
whisper_init_state: kv cross size = 245.76 MB
whisper_init_state: compute buffer (conv) = 36.26 MB
whisper_init_state: compute buffer (encode) = 926.66 MB
whisper_init_state: compute buffer (cross) = 9.38 MB
whisper_init_state: compute buffer (decode) = 209.26 MB
system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 0 | COREML = 0 | OPENVINO = 0
main: processing 'samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, 5 beams + best of 5, lang = en, task = transcribe, timestamps = 1 ...
[00:00:00.300 --> 00:00:09.000] And so, my fellow Americans, ask not what your country can do for you, ask what you
[00:00:09.000 --> 00:00:11.000] can do for your country.
whisper_print_timings: load time = 1010.02 ms
whisper_print_timings: fallbacks = 0 p / 0 h
whisper_print_timings: mel time = 21.38 ms
whisper_print_timings: sample time = 85.51 ms / 147 runs ( 0.58 ms per run)
whisper_print_timings: encode time = 32120.25 ms / 1 runs (32120.25 ms per run)
whisper_print_timings: decode time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: batchd time = 4907.33 ms / 145 runs ( 33.84 ms per run)
whisper_print_timings: prompt time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: total time = 38150.42 ms
from whisper.cpp.
Hi. On Windows, no matter how much memory your machine has, you cannot get large-v3 to run because malloc on Windows has a 4 GB limit. Would this be something you can test and fix? Cheers
Are you still using a 32-bit Intel CPU and an old desktop version of Windows? If not, this link explains it well: https://stackoverflow.com/questions/181050/can-you-allocate-a-very-large-single-chunk-of-memory-4gb-in-c-or-c?noredirect=1&lq=1
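For context, the 4 GB claim is easy to check directly. Below is a minimal sketch (mine, not from this thread; the file name allocbig.c is just illustrative) that asks malloc for a single 5 GiB block. Built as a 64-bit binary, e.g. with the same MSYS2 CLANG64 or UCRT64 toolchains used above, it should succeed on a machine with enough free memory, whereas a 32-bit process cannot address that much at all.

/* allocbig.c - hedged sketch: request a single block larger than 4 GB.
 * On a 64-bit build malloc has no 4 GB ceiling; a 32-bit build cannot
 * even represent this size in size_t. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t want = 5ULL * 1024 * 1024 * 1024;   /* 5 GiB, deliberately > 4 GB */
    unsigned char *p = malloc(want);
    if (!p) {
        printf("allocation of %zu bytes failed\n", want);
        return 1;
    }
    /* Touch the first and last byte so the pages are actually committed. */
    p[0] = 1;
    p[want - 1] = 1;
    printf("allocated and touched %zu bytes\n", want);
    free(p);
    return 0;
}

Note also that, going by the logs above, large-v3 needs about 3 GB for the model weights plus roughly 1.4 GB of KV and compute buffers, so around 4.5 GB of free memory is required in any case, independent of any malloc limit.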
from whisper.cpp.
Hey, I tried the new version and it seems to be fine with the latest build. Sorry, I no longer have the build version where this happened.
from whisper.cpp.