
Comments (8)

kuvaus commented on August 26, 2024

If basic llama builds using regular make, that's good news! I don't think I have any processor-specific code outside llama, so it should work in principle. So yeah, it's the CMakeLists.txt file that needs to get fixed.

I found this on the web:

On aarch64 -moutline-atomics has been turned on by default, and those symbols are solely in libgcc.a, not in libgcc_s.so.*.

The problem is that I use static linking (so that people can just copy their binary to another computer without worry), but that seems to cause problems here.

Now, you could try dynamic linking. Change these in the main CMakeLists.txt:

set(BUILD_SHARED_LIBS ON CACHE BOOL "Build shared libraries" FORCE)
#elseif(UNIX)
#  set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libgcc -static-libstdc++ -static") 

Or somehow disable outline-atomics. Maybe like this:

set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libgcc -static-libstdc++ -static -mno-outline-atomics")
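One caveat worth verifying: `-mno-outline-atomics` is a compile-time code-generation flag, so placed only in `CMAKE_EXE_LINKER_FLAGS` it won't affect objects that were already compiled with outlined atomics. A minimal sketch of where it probably needs to go instead, assuming GCC on aarch64 (hedged; untested without ARM64 hardware):

```cmake
# Sketch: disable outlined atomics at compile time on aarch64,
# while keeping the static link flags for portable binaries.
if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64")
  set(CMAKE_C_FLAGS   "${CMAKE_C_FLAGS} -mno-outline-atomics")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mno-outline-atomics")
endif()
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libgcc -static-libstdc++ -static")
```

With the flag applied at compile time, the compiler emits inline atomic instructions and never references the `__aarch64_*` helper symbols, so the missing-symbol link error should not occur in the first place.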

At the moment I have no other ideas...

If i stumble on any fix I'll surely let you know.

Thanks!

from llamagptj-chat.

kuvaus commented on August 26, 2024

Uh... this is a tough one, as I don't have a computer with an ARM64 processor.
Can you check whether basic llama.cpp builds?
If you can clone and build that separately, that would be useful info! (There are also fewer CMake files to regenerate. :)

If llama.cpp does not compile at all, then there's not much I can do, as it is needed as a submodule.

Thanks for testing. It would be nice to get this working on aarch64 too.


clort81 commented on August 26, 2024

Using llama.cpp's cmake:

[ 35%] Building CXX object tests/CMakeFiles/test-tokenizer-0.dir/test-tokenizer-0.cpp.o
/media/sd/Projects/Neural/llamma/llama.cpp/tests/test-tokenizer-0.cpp:19:2: warning: extra ';' [-Wpedantic]
   19 | };
      |  ^
[ 38%] Linking CXX executable ../bin/test-tokenizer-0
[ 38%] Built target test-tokenizer-0
[ 41%] Building CXX object examples/CMakeFiles/common.dir/common.cpp.o
[ 41%] Built target common
[ 45%] Building CXX object examples/main/CMakeFiles/main.dir/main.cpp.o
[ 48%] Linking CXX executable ../../bin/main
[ 48%] Built target main
[ 51%] Building CXX object examples/quantize/CMakeFiles/quantize.dir/quantize.cpp.o
[ 54%] Linking CXX executable ../../bin/quantize
[ 54%] Built target quantize
[ 58%] Building CXX object examples/quantize-stats/CMakeFiles/quantize-stats.dir/quantize-stats.cpp.o
[ 61%] Linking CXX executable ../../bin/quantize-stats
/usr/bin/ld: CMakeFiles/quantize-stats.dir/quantize-stats.cpp.o: in function `layer_included(quantize_stats_params, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
quantize-stats.cpp:(.text+0x278c): undefined reference to `__aarch64_ldadd4_acq_rel'
/usr/bin/ld: quantize-stats.cpp:(.text+0x279c): undefined reference to `__aarch64_ldadd4_acq_rel'
/usr/bin/ld: quantize-stats.cpp:(.text+0x27ac): undefined reference to `__aarch64_ldadd4_acq_rel'
/usr/bin/ld: quantize-stats.cpp:(.text+0x27bc): undefined reference to `__aarch64_ldadd4_acq_rel'
/usr/bin/ld: quantize-stats.cpp:(.text+0x284c): undefined reference to `__aarch64_ldadd4_acq_rel'
/usr/bin/ld: CMakeFiles/quantize-stats.dir/quantize-stats.cpp.o:quantize-stats.cpp:(.text+0x285c): more undefined references to `__aarch64_ldadd4_acq_rel' follow
collect2: error: ld returned 1 exit status
gmake[2]: *** [examples/quantize-stats/CMakeFiles/quantize-stats.dir/build.make:98: bin/quantize-stats] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:1315: examples/quantize-stats/CMakeFiles/quantize-stats.dir/all] Error 2
gmake: *** [Makefile:101: all] Error 2

Using regular make worked:
379944 May  9 19:46 main

thanks for making this though kuvaus! :)
If i stumble on any fix I'll surely let you know.


clort81 commented on August 26, 2024

Applying only your suggested changes, I was able to build successfully.

/LlamaGPTJ-chat/build$ make
-- CMAKE_SYSTEM_PROCESSOR: aarch64
-- ARM detected
-- Configuring done
-- Generating done
-- Build files have been written to: /dir/Neural/LlamaGPTJ-chat/build
[  7%] Built target BUILD_INFO
[ 14%] Building C object llmodel/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o
[ 14%] Built target ggml
[ 21%] Building CXX object llmodel/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o
[ 28%] Linking CXX shared library libllama.so
[ 28%] Built target llama
[ 35%] Building CXX object llmodel/CMakeFiles/llmodel.dir/gptj.cpp.o
[ 42%] Building CXX object llmodel/CMakeFiles/llmodel.dir/llamamodel.cpp.o
[ 50%] Building CXX object llmodel/CMakeFiles/llmodel.dir/llama.cpp/examples/common.cpp.o
[ 57%] Building CXX object llmodel/CMakeFiles/llmodel.dir/llmodel_c.cpp.o
[ 64%] Building CXX object llmodel/CMakeFiles/llmodel.dir/utils.cpp.o
[ 71%] Linking CXX shared library libllmodel.so
[ 71%] Built target llmodel
[ 78%] Linking CXX executable ../bin/chat
[100%] Built target chat

/LlamaGPTJ-chat/build$ bin/chat
LlamaGPTJ-chat (v. 0.1.7)
LlamaGPTJ-chat: loading ./models/ggml-vicuna-13b-1.1-q4_2.bin
LlamaGPTJ-chat: done loading!


> What is the airspeed velocity of an unladen swallow?
 

1. The airspeed velocity of an unladen swallow is 75 miles per hour.
2. I'm sorry ,^C

Salute!


kuvaus commented on August 26, 2024

That is great!

Which of the changes did you make? Or did you need both?

I want to add this to the CMakeLists.txt in the next version:

IF(UNIX)
    IF(${CMAKE_SYSTEM_PROCESSOR} MATCHES "aarch64")
        # changes here
    ENDIF()
ENDIF()

So it would build automatically in the future.
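For reference, a hedged sketch of what the filled-in guard might look like, combining the two changes discussed earlier in this thread (shared libraries plus disabling outlined atomics); the flags actually committed to the repo may differ:

```cmake
if(UNIX)
  if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64")
    # Change 1: build shared libraries on aarch64, where fully static
    # linking fails because the __aarch64_* atomics helpers live only
    # in libgcc.a, not in libgcc_s.so.*.
    set(BUILD_SHARED_LIBS ON CACHE BOOL "Build shared libraries" FORCE)
    # Change 2: emit inline atomics instead of calls to the helpers.
    add_compile_options(-mno-outline-atomics)
  endif()
endif()
```

Guarding on `CMAKE_SYSTEM_PROCESSOR` keeps the default static-linking behavior intact on x86-64, so the portable binaries there are unaffected.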


clort81 commented on August 26, 2024

I applied both.


kuvaus commented on August 26, 2024

Thanks! I put them in the CMakeLists.txt. I hope it now just builds without errors if you clone the repo.
I tried to set up GitHub Actions to build an Arm64 binary automatically, but no luck so far...


kuvaus commented on August 26, 2024

v0.2.0 comes with big changes:

  • Full Windows Visual Studio compatibility. Finally fixes issue 1:
    #1
  • Builds from source on aarch64 Linux. Fixes issue 3:
    #3
  • Full MPT support. Fixes issue 4:
    #4

Also, updates from the past few versions:

  • You can save chat logs with --save_log
  • The first response is a tiny bit faster if you turn on the no-animation flag.
  • Prints at startup whether your computer supports AVX1/AVX2
  • Backend updated to v0.1.1. Thanks to the GPT4All people. They were super fast on this!
  • Slightly better README.

Big thanks to everyone so far! You have been hugely helpful. :)

