
tc-build's Introduction

Toolchain build scripts

There are times when a tip of tree LLVM build has a fix you need but it isn't available to you, maybe because it isn't in a release or it isn't available through your distribution's package management system. At that point, to get that fix, LLVM needs to be compiled, which sounds scary but is rather simple. The build-llvm.py script takes it a step further by trying to optimize LLVM's build time by:

  • Trimming down a lot of things that kernel developers don't care about:
    • Documentation
    • LLVM tests
    • OCaml bindings
    • libfuzzer
  • Building with the fastest tools available (listed fastest to slowest):
    • clang + lld
    • clang/gcc + ld.gold
    • clang/gcc + ld.bfd

Getting started

These scripts have been tested in a Docker image of the following distributions with the following packages installed. LLVM has minimum host tool version requirements, so the latest stable version of the chosen distribution should be used whenever possible to ensure recent versions of the tools are available. Build errors from within LLVM are expected if a tool version is not recent enough, in which case that tool will need to be built from source or installed through other means.

  • Debian/Ubuntu

    apt install bc \
                binutils-dev \
                bison \
                build-essential \
                ca-certificates \
                ccache \
                clang \
                cmake \
                curl \
                file \
                flex \
                git \
                libelf-dev \
                libssl-dev \
                libstdc++-$(apt list libstdc++6 2>/dev/null | grep -Eos '[0-9]+\.[0-9]+\.[0-9]+' | head -1 | cut -d . -f 1)-dev \
                lld \
                make \
                ninja-build \
                python3-dev \
                texinfo \
                u-boot-tools \
                xz-utils \
                zlib1g-dev
    
  • Fedora

    dnf install bc \
                binutils-devel \
                bison \
                ccache \
                clang \
                cmake \
                elfutils-libelf-devel \
                flex \
                gcc \
                gcc-c++ \
                git \
                lld \
                make \
                ninja-build \
                openssl-devel \
                python3 \
                texinfo-tex \
                uboot-tools \
                xz \
                zlib-devel
    
  • Arch Linux / Manjaro

    pacman -S base-devel \
              bc \
              bison \
              ccache \
              clang \
              cpio \
              cmake \
              flex \
              git \
              libelf \
              lld \
              llvm \
              ninja \
              openssl \
              python3 \
              uboot-tools
    
  • Clear Linux

    swupd bundle-add c-basic \
                     ccache \
                     curl \
                     dev-utils \
                     devpkg-elfutils \
                     devpkg-openssl \
                     git \
                     python3-basic \
                     which
    

    Additionally, to build PowerPC kernels, you will need to build the U-Boot tools because there is no distribution package. The U-Boot tarballs can be found at https://ftp.denx.de/pub/u-boot/ and they can be built and used like so:

    $ curl -LSs https://ftp.denx.de/pub/u-boot/u-boot-2021.01.tar.bz2 | tar -xjf -
    $ cd u-boot-2021.01
    $ make -j"$(nproc)" defconfig tools-all
    ...
    $ sudo install -Dm755 tools/mkimage /usr/local/bin/mkimage
    $ mkimage -V
    mkimage version 2021.01
    

    Lastly, Clear Linux has ${CC}, ${CXX}, ${CFLAGS}, and ${CXXFLAGS} set in the environment, which messes with the script's heuristics for selecting a compiler. By default, the script will attempt to use clang and ld.lld, but the environment's values of ${CC} and ${CXX} are respected first, so gcc and g++ will be used. Clear Linux has optimized its gcc and g++ so this is fine, but if you would like to use clang and clang++ instead, invoke the script like so:

    $ CC=clang CFLAGS= CXX=clang++ CXXFLAGS= ./build-llvm.py ...
    

Python 3.5.3+ is recommended, as that is what the script has been tested against. These scripts should be distribution agnostic. Please feel free to add different distribution install commands here through a pull request.

build-llvm.py

By default, ./build-llvm.py will clone LLVM, grab the latest binutils tarball (for the LLVMgold.so plugin), build LLVM, clang, and lld, and install them into the install folder.

The script automatically clones and manages the llvm-project. If you would like to do this management yourself, such as downloading a release tarball from releases.llvm.org, doing a more aggressive shallow clone (versus what is done in the script via --shallow-clone), or doing a bisection of LLVM, you just need to make sure that your source is in an llvm-project folder within the root of this repository and pass --no-update into the script. See this comment for an example.
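
For example, dropping a release tarball in place might look roughly like this (the version, URL, and resulting folder name are illustrative; adjust them to the release you actually want):

$ curl -LSs https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.0/llvm-project-13.0.0.src.tar.xz | tar -xJf -
$ mv llvm-project-13.0.0.src llvm-project   # the script expects the source in llvm-project/
$ ./build-llvm.py --no-update ...           # --no-update tells the script not to touch your source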

Run ./build-llvm.py -h for more options and information.

build-binutils.py

This script builds a standalone copy of binutils. By default, ./build-binutils.py will download the latest stable version of binutils, build for all architectures we currently care about (see the help text or script for the full list), and install them into install. Run ./build-binutils.py -h for more options.

Building a standalone copy of binutils might be needed because certain distributions like Arch Linux (whose options the script uses) might symlink /usr/lib/LLVMgold.so to /usr/lib/bfd-plugins (source), which can cause issues when using the system's linker for LTO (even with LD_LIBRARY_PATH):

bfd plugin: LLVM gold plugin has failed to create LTO module: Unknown attribute kind (60) (Producer: 'LLVM9.0.0svn' Reader: 'LLVM 7.0.1')

Having a standalone copy of binutils (ideally in the same folder as the LLVM toolchain so that only one PATH modification is needed) works around this without any adverse side effects. Another workaround is bind mounting the new LLVMgold.so to /usr/lib/LLVMgold.so.
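
As a rough sketch (the install path is a placeholder for wherever you put the toolchains):

# Standalone binutils and LLVM installed into the same folder, so one PATH
# modification covers both
$ export PATH=/path/to/install/bin:$PATH

# Alternative workaround: bind mount the freshly built plugin over the
# distribution's copy (undo later with umount)
$ sudo mount --bind /path/to/install/lib/LLVMgold.so /usr/lib/LLVMgold.so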

Contributing

This repository openly welcomes pull requests! There are a few presubmit checks that run to make sure the code stays consistently formatted and free of bugs.

  1. All Python files must be passed through yapf. See the installation section for how to get it (it may also be available through your package manager).

  2. All shell files must be passed through shfmt (specifically shfmt -ci -i 4 -w) and emit no shellcheck warnings.

The presubmit checks will do these things for you and fail if the code is not formatted properly or has a shellcheck warning. Running these tools on the command line before submitting will make it easier to get your code merged.
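
For example (assuming yapf, shfmt, and shellcheck are installed; the file names are just examples):

$ yapf -i build-llvm.py build-binutils.py   # reformat Python in place
$ shfmt -ci -i 4 -w kernel/build.sh         # reformat shell in place
$ shellcheck kernel/build.sh                # must emit no warnings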

Additionally, please write a detailed commit message about why you are submitting your change.

Getting help

Please open an issue on this repo and include your distribution, shell, the command you ran, and the error output.


tc-build's Issues

investigate ELFHack

via https://glandium.org/blog/?p=1177 (@glandium).

I was thinking we might be able to use RELR
https://reviews.llvm.org/D48247
https://groups.google.com/g/generic-abi/c/bX460iggiKg
but @pcc had to implement relocation processing support in the Linux kernel for aarch64 to support this relocation type.
https://patchwork.kernel.org/project/linux-arm-kernel/patch/[email protected]/
So I doubt any existing dynamic linker could process these for x86-64 quite yet.

I'm not sure how much of this might be irrelevant though with #150 or with @MaskRay 's work on enabling -fno-semantic-interposition in Clang's CMake itself.
https://reviews.llvm.org/D102453

I don't know if ELFhack is easy to fetch/use outside of mozilla-central.
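
For what it's worth, a hedged way to check whether a given binary already carries RELR relocations (assuming llvm-readelf is available; the path is illustrative):

$ llvm-readelf -S install/bin/clang | grep -i relr   # a .relr.dyn section means RELR is in use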

Defaults for parallel compile and link jobs

With f917c5c you introduced custom options for parallel compile and link jobs.
I welcome this, thanks.

When I first had issues with compiling/linking an LLVM toolchain, LLVM folks recommended safe settings: 2 parallel compile jobs and 1 parallel link job.
I cannot say what the default is for linking in tc-build or LLVM upstream, but I see parallel compile jobs set to 4 (the number of available CPUs?) in tc-build on my Intel SandyBridge notebook.
What are the default settings?

Please note @nickdesaulniers tip in the below link.

[1] ClangBuiltLinux/llvm-dev-conf-2020#3 (comment)
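
For reference, the "safe" settings above expressed as the standard LLVM CMake cache variables; this is only a sketch of the underlying knobs, not necessarily how tc-build wires them up:

# Compile and link parallelism can be capped independently; linking LLVM
# is far more memory-hungry than compiling it
$ cmake -G Ninja \
        -DLLVM_PARALLEL_COMPILE_JOBS=2 \
        -DLLVM_PARALLEL_LINK_JOBS=1 \
        ...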

Adding --use-good-revision

I'm opening this to get general feedback on the concept before I go through and implement the option.

There are times where the LLVM tree may be in a broken state (build error or runtime failure). Given that this script is aimed more at kernel developers who want to try new LLVM/Clang features without having to bother with debugging/triaging unrelated LLVM/Clang issues, it might be nice to have an option like --use-good-revision that will automatically check out a known good working revision of LLVM that builds and works properly for the kernel's sake. This revision should be as close to tip of tree as possible so that the user is always getting the latest fixes.

Ideally, this option won't have to be used that often. The only problem that I can potentially foresee is a user cloning the script, never updating the script, and always using this option but I don't necessarily see that as a bad thing. If they ever run into a problem that we fixed in a newer version, we can just tell them to update the script to get that new revision.

Before bumping the revision, the user should run the kernel/build.sh script to qualify the revision being tested, ideally with --allyesconfig if #29 gets accepted to get the most coverage. I envision this revision bump happening every week or so.

Let me know if there are any potential issues I am not seeing. I am totally fine with not implementing this if people feel it is not a good idea but I think it would be beneficial to users who just want something that works.

Linker issue when building kernel: cannot find libbfd.so

Hi,

When building the toolchain with the unmodified build-llvm.py script, and when using that toolchain to build the kernel, I get an error message as mentioned in the title (I don't have the exact message right here, but can post later).
I'm on Manjaro which is based on Arch.

So I modified the script in two points to fix the issue, however I don't yet know which one of those two was responsible.
I guess it's the first one but I might be wrong:

  1. Changed line 618
    'LLVM_BINUTILS_INCDIR': dirs.root_folder.joinpath(utils.current_binutils(), "include").as_posix(),
    to
    'LLVM_BINUTILS_INCDIR': dirs.root_folder.joinpath("usr", "include").as_posix(),

My understanding is that with this change, it uses the system binutils.

  2. Enabled dylib linking:
    'LLVM_LINK_LLVM_DYLIB': 'ON',
    'CLANG_LINK_CLANG_DYLIB': 'ON',

This is what Arch has in its build script.

macOS support?

On my reasonably up-to-date macOS Mojave I get:


-- Performing Test CXX_SUPPORTS_CUSTOM_LINKER
-- Performing Test CXX_SUPPORTS_CUSTOM_LINKER - Failed
CMake Error at cmake/modules/HandleLLVMOptions.cmake:224 (message):
  Host compiler does not support
  '-fuse-ld=/usr/local/Cellar/llvm/9.0.0_1/bin/ld.lld'
Call Stack (most recent call first):
  CMakeLists.txt:656 (include)


-- Configuring incomplete, errors occurred!
See also "/Users/itaru/projects/tc-build/build/llvm/stage1/CMakeFiles/CMakeOutput.log".
See also "/Users/itaru/projects/tc-build/build/llvm/stage1/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
  File "./build-llvm.py", line 890, in <module>
    main()
  File "./build-llvm.py", line 886, in main
    do_multistage_build(args, dirs, env_vars)
  File "./build-llvm.py", line 857, in do_multistage_build
    invoke_cmake(args, dirs, env_vars, stage)
  File "./build-llvm.py", line 753, in invoke_cmake
    subprocess.run(cmake, check=True, cwd=cwd)
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['cmake', '-G', 'Ninja', '-Wno-dev', '-DCLANG_ENABLE_ARCMT=OFF', '-DCLANG_ENABLE_STATIC_ANALYZER=OFF', '-DCLANG_PLUGIN_SUPPORT=OFF', '-DLLVM_BINUTILS_INCDIR=/Users/itaru/projects/tc-build/binutils-2.33.1/include', '-DLLVM_ENABLE_BINDINGS=OFF', '-DLLVM_ENABLE_PLUGINS=ON', '-DLLVM_ENABLE_OCAMLDOC=OFF', '-DLLVM_ENABLE_TERMINFO=OFF', '-DLLVM_EXTERNAL_CLANG_TOOLS_EXTRA_SOURCE_DIR=', '-DLLVM_INCLUDE_DOCS=OFF', '-DLLVM_INCLUDE_EXAMPLES=OFF', '-DCMAKE_C_COMPILER=/usr/local/Cellar/llvm/9.0.0_1/bin/clang-9', '-DCMAKE_CXX_COMPILER=/usr/local/Cellar/llvm/9.0.0_1/bin/clang++', '-DLLVM_USE_LINKER=/usr/local/Cellar/llvm/9.0.0_1/bin/ld.lld', '-DLLVM_ENABLE_PROJECTS=clang;lld', '-DLLVM_TARGETS_TO_BUILD=host', '-DCMAKE_BUILD_TYPE=Release', '-DLLVM_ENABLE_BACKTRACES=OFF', '-DLLVM_ENABLE_WARNINGS=OFF', '-DLLVM_INCLUDE_TESTS=OFF', '-DLLVM_INCLUDE_UTILS=OFF', '-DCLANG_VENDOR=ClangBuiltLinux', '/Users/itaru/projects/tc-build/llvm-project/llvm']' returned non-zero exit status 1.
MacBook-Pro:tc-build itaru$


Ability to specify llvm sha

Describe the bug
Ability to specify llvm sha.

Expected behavior
Similar to --use-good-revision; I'd like to be able to specify the revision of LLVM we snap to. I think that might make the PGO data more portable for offline training.
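
Until such an option exists, a hedged workaround based on the README's --no-update flow (the SHA is a placeholder):

$ git -C llvm-project checkout <llvm-sha>   # pin the monorepo to the revision you want
$ ./build-llvm.py --no-update --pgo kernel-defconfig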

Building kernels: Show Linux version (when passing -L option)

When building a ThinLTO + PGO optimized LLVM toolchain I want to see the base of the Linux version.

From my last build-log:

========================
== Building kernels ==
========================
...
make -skj4 KCFLAGS=-Wno-error LLVM=1 O=out ARCH=x86_64 distclean defconfig all

real       62m27.790s
user       222m11.786s
sys        13m1.868s

The base here was Linux version 5.19-rc3.

Debug: Add verbosity to cmake (LLVM) and make (Linux) options

Last weekend I finished a marathon in building a ThinLTO and PGO optimized LLVM toolchain - optimized here means for Linux v5.9-rc2 with Clang-CFI enabled (details see [0] and especially my request on this one here in [1]).

While gasping at the following section:

==  Building kernels  ==
========================

+ for TARGET in "${TARGETS[@]}"
+ case ${TARGET} in
+ make -skj4 O=out LLVM=1 distclean defconfig bzImage modules

real    45m56.082s
user    166m53.758s
sys     8m11.487s

$ du -s -m ~/src/linux-kernel/git/out
540     /home/dileks/src/linux-kernel/git/out

For several minutes you do not get any feedback.

For me, watching the make V=1 lines of a Linux kernel build is psychologically fundamental - it calms me down - and it is also helpful to see what's going on, warnings, etc.

The same could be done for cmake and the LLVM build process (which I am personally and psychologically not interested in watching).

Can we rename Building kernels to Building Linux and remove the double space to be consistent with the other section names?

$ grep '^\==  Building' log_tc-build.txt
==  Building kernels  ==

$ grep '^\== Building' log_tc-build.txt
== Building LLVM stage 1 ==
== Building LLVM stage 2 ==
== Building PGO profiles ==
== Building LLVM stage 3 ==

Side-note: Shouldn't this be named "Generating PGO profiles"?

[0] #109
[1] #109 (comment)
[2] https://github.com/samitolvanen/linux/commits/clang-cfi
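
Until there is a dedicated option, verbose output can be had by hand; a rough sketch (run from the kernel and LLVM build directories respectively):

$ make -j"$(nproc)" V=1 O=out LLVM=1 ...   # print the full kernel build command lines
$ ninja -v                                 # echo every LLVM compile/link command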

Use sccache to speed up builds

When looking at LLVM release/15.x Git branch I noticed the usage of sccache:

commit 858ded9cba11aa108eaa67433983cb3af14f6fbf
"workflows: Use sccache to speed up CI builds"

cmake_args: '-GNinja -DLLVM_ENABLE_PROJECTS="${{ inputs.projects }}" -DCMAKE_BUILD_TYPE=Release -DLLDB_INCLUDE_TESTS=OFF -DCMAKE_C_COMPILER_LAUNCHER=sccache -DCMAKE_CXX_COMPILER_LAUNCHER=sccache'

Personally, I have no experience with it.

Maybe it helps us speed up at least stage1-only builds, where ccache is used.

I will try with planned LLVM version 15.0.0-rc1 this week.

Note: I have not looked at the current code of tc-build as I have archived my local Git checkout due to disk usage and other reasons.

[1] llvm/llvm-project@858ded9

pull-request: Document yapf requirements to format Python code

When dealing with issue #118 I wanted to offer a pull-request.

Unfortunately, the pull request showed errors in the yapf checks - clicking on the details showed no hints.
Ideally, the details should show the errors.

This was introduced by commit 41cc3c3 ("tc-build: Add GitHub Actions workflow").

Here on my Debian system I needed...

root# apt-get update 
root# apt-get install python3-yapf yapf3

...and checked the format of the utils.py file:

$ cd tc-build.dileks-github
$ yapf3 -i -vv utils.py
Reformatting utils.py

Unsure if there is more to be installed locally to fulfill any other checks before sending a pull request.

It is up to you to document it wherever you like.
Personally, I prefer README.md.

If there are other smart solutions, like activating the required GitHub workflows in one's own private GitHub account, please let me know and/or document that, too.

[1] #118
[2] 41cc3c3

figure out how to statically link clang itself

As part of putting a build of clang up on kernel.org, statically linking all dependencies would simplify distribution for the various linux distros (I suspect). I don't know how to do this today in LLVM's cmake; maybe we need to add some things to upstream LLVM to do so.

SymLink ld.lld -> ld.lld-9

I would like to have a SymLink in bin-dir:

cd /path/to/install/bin

PRG_SUFFIX_VER="9"

$ ln -sf ld.lld ld.lld-$PRG_SUFFIX_VER

build-llvm.py not working

========================
== Checking CC and LD ==
========================

CC: /home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang-10
CXX: /home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang++
LD: /home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/ld.lld

===========================
== Checking dependencies ==
===========================

/usr/local/bin/cmake
/usr/bin/curl
/usr/bin/git
/usr/local/bin/ninja

===================
== Updating LLVM ==
===================

Already on 'master'
Your branch is up to date with 'origin/master'.
From https://github.com/llvm/llvm-project
 * branch                    master     -> FETCH_HEAD
Already up to date.
Current branch master is up to date.

==============================
== Configuring LLVM stage 1 ==
==============================

-- The C compiler identification is Clang 10.0.0
-- The CXX compiler identification is Clang 10.0.0
-- The ASM compiler identification is Clang
-- Found assembler: /home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang-10
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang-10 - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - failed
-- Check for working CXX compiler: /home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang++
-- Check for working CXX compiler: /home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang++ - broken
CMake Error at /usr/local/share/cmake-3.18/Modules/CMakeTestCXXCompiler.cmake:59 (message):
  The C++ compiler

    "/home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang++"

  is not able to compile a simple test program.

  It fails with the following output:

    Change Dir: /home/betrfs/llvm-new/tc-build/build/llvm/stage1/CMakeFiles/CMakeTmp
    
    Run Build Command(s):/usr/local/bin/ninja cmTC_68162 && [1/2] Building CXX object CMakeFiles/cmTC_68162.dir/testCXXCompiler.cxx.o
    [2/2] Linking CXX executable cmTC_68162
    FAILED: cmTC_68162 
    : && /home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang++   CMakeFiles/cmTC_68162.dir/testCXXCompiler.cxx.o -o cmTC_68162   && :
    /usr/bin/ld: cannot find -lstdc++
    clang-10: error: linker command failed with exit code 1 (use -v to see invocation)
    ninja: build stopped: subcommand failed.
    
    

  

  CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
  CMakeLists.txt:49 (project)


-- Configuring incomplete, errors occurred!
See also "/home/betrfs/llvm-new/tc-build/build/llvm/stage1/CMakeFiles/CMakeOutput.log".
See also "/home/betrfs/llvm-new/tc-build/build/llvm/stage1/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
  File "./build-llvm.py", line 1061, in <module>
    main()
  File "./build-llvm.py", line 1057, in main
    do_multistage_build(args, dirs, env_vars)
  File "./build-llvm.py", line 1017, in do_multistage_build
    invoke_cmake(args, dirs, env_vars, stage)
  File "./build-llvm.py", line 902, in invoke_cmake
    subprocess.run(cmake, check=True, cwd=cwd)
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['cmake', '-G', 'Ninja', '-Wno-dev', '-DCLANG_ENABLE_ARCMT=OFF', '-DCLANG_ENABLE_STATIC_ANALYZER=OFF', '-DCLANG_PLUGIN_SUPPORT=OFF', '-DLLVM_BINUTILS_INCDIR=/home/betrfs/llvm-new/tc-build/binutils-2.35/include', '-DLLVM_ENABLE_BINDINGS=OFF', '-DLLVM_ENABLE_PLUGINS=ON', '-DLLVM_ENABLE_OCAMLDOC=OFF', '-DLLVM_ENABLE_TERMINFO=OFF', '-DLLVM_EXTERNAL_CLANG_TOOLS_EXTRA_SOURCE_DIR=', '-DLLVM_INCLUDE_DOCS=OFF', '-DLLVM_INCLUDE_EXAMPLES=OFF', '-DCMAKE_C_COMPILER=/home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang-10', '-DCMAKE_CXX_COMPILER=/home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang++', '-DLLVM_USE_LINKER=/home/betrfs/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/ld.lld', '-DLLVM_ENABLE_PROJECTS=clang;lld', '-DLLVM_TARGETS_TO_BUILD=host', '-DLLVM_CCACHE_BUILD=ON', '-DCMAKE_BUILD_TYPE=Release', '-DLLVM_ENABLE_BACKTRACES=OFF', '-DLLVM_ENABLE_WARNINGS=OFF', '-DLLVM_INCLUDE_TESTS=OFF', '-DLLVM_INCLUDE_UTILS=OFF', '-DCLANG_VENDOR=ClangBuiltLinux', '/home/betrfs/llvm-new/tc-build/llvm-project/llvm']' returned non-zero exit status 1.

tc-build supports building from tarballs

Does tc-build support building from tarballs?

If yes, where do I have to place/unpack the tarballs in terms of the directory structure?

Currently, I see version 10.0.0-rc6 and wanted to do a stage1 build with LLVM, Clang and LLD tarballs.

Thanks to the Universe, the Clang tarball no longer has cfe as a prefix.

[1] https://prereleases.llvm.org/10.0.0/
[2] https://prereleases.llvm.org/10.0.0/rc6/llvm-10.0.0rc6.src.tar.xz
[3] https://prereleases.llvm.org/10.0.0/rc6/clang-10.0.0rc6.src.tar.xz
[4] https://prereleases.llvm.org/10.0.0/rc6/lld-10.0.0rc6.src.tar.xz

When BRANCH is a tag --incremental does not work

Here we go:

$ ./scripts/build_tc-build.sh
projects...... clang;lld
targets....... X86
branch........ llvmorg-9.0.0-rc1
stage1-opts... --build-stage1-only --install-stage1-only
misc-opts..... --incremental
build-dir..... /home/sdi/src/llvm-toolchain/build
install-dir... /home/sdi/src/llvm-toolchain/install

========================
== Checking CC and LD ==
========================

CC: /usr/lib/llvm-9/bin/clang
CXX: /usr/lib/llvm-9/bin/clang++
LD: /usr/bin/ld.lld-9

===========================
== Checking dependencies ==
===========================

/usr/bin/cmake
/usr/bin/curl
/usr/bin/git
/usr/bin/ninja

===================
== Updating LLVM ==
===================

Note: checking out 'llvmorg-9.0.0-rc1'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 6aa75a25bdee Merging r367215: ------------------------------------------------------------------------ r367215 | hans | 2019-07-29 11:49:04 +0200 (Mon, 29 Jul 2019) | 66 lines
$ cd tc-build/llvm-project
$ git branch
* (HEAD detached at llvmorg-9.0.0-rc1)
  release/9.x

UPDATE: Add correct log in Updating LLVM section

figure out how to link clang itself against musl

Follow-on to #150, though perhaps less important. For kernel.org builds of LLVM, we should see if we can link clang itself against musl (dynamically; static linking I assume is orthogonal). @arndb had done some experiments with this (it would be good to share notes here). I've been fooling around with static linking against musl and LTO'ing in musl in here (though that's irrelevant to dynamically linking against musl; thought I'd post it somewhere so that I don't forget).

stage1-only: .gitignore file is copied into INSTALL_DIR

Yesterday, I switched in my stage1-only build to /opt/llvm-toolchain as INSTALL_DIR and see:

$ cd /path/to/tc-build/git
$ python3 ./build-llvm.py --no-update --build-type Release -p clang;lld -t X86;BPF;AArch64 --clang-vendor dileks -B /home/dileks/src/llvm-toolchain/build -I /opt/llvm-toolchain --check-targets clang lld --build-stage1-only --install-stage1-only

$ LC_ALL=C ls -alh /opt/llvm-toolchain/.gitignore
-rw-r--r-- 1 dileks dileks 1 Jan 14 20:48 /opt/llvm-toolchain/.gitignore

question: how to find out why a linking error happens?

When it reaches stage 3, somewhere I get an error like this:

===========================
== Building LLVM stage 3 ==
===========================

[3010/3596] Linking CXX executable bin/clang-ast-dump
FAILED: bin/clang-ast-dump
: && /mnt/volume_fra1_01/tc_build/build/llvm/stage1/bin/clang++  -fPIC -fno-semantic-interposition -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -w -fdiagnostics-color -ffunction-sections -fdata-sections -fprofile-instr-use="/mnt/volume_fra1_01/tc_build/build/llvm/profdata.prof" -flto=full -fno-common -Woverloaded-virtual -Wno-nested-anon-types -O3 -DNDEBUG  -fuse-ld=/mnt/volume_fra1_01/tc_build/build/llvm/stage1/bin/ld.lld -Wl,--color-diagnostics -fprofile-instr-use="/mnt/volume_fra1_01/tc_build/build/llvm/profdata.prof" -flto=full    -Wl,--gc-sections tools/clang/lib/Tooling/DumpTool/CMakeFiles/clang-ast-dump.dir/ASTSrcLocProcessor.cpp.o tools/clang/lib/Tooling/DumpTool/CMakeFiles/clang-ast-dump.dir/ClangSrcLocDump.cpp.o  -o bin/clang-ast-dump  -Wl,-rpath,"\$ORIGIN/../lib"  lib/libLLVMOption.a  lib/libLLVMFrontendOpenMP.a  lib/libLLVMSupport.a  -lpthread  lib/libclangAST.a  lib/libclangASTMatchers.a  lib/libclangBasic.a  lib/libclangDriver.a  lib/libclangFrontend.a  lib/libclangSerialization.a  lib/libclangToolingCore.a  lib/libclangDriver.a  lib/libLLVMWindowsDriver.a  lib/libLLVMOption.a  lib/libclangParse.a  lib/libclangSema.a  lib/libclangEdit.a  lib/libclangAnalysis.a  lib/libclangASTMatchers.a  lib/libclangAST.a  lib/libLLVMFrontendOpenMP.a  lib/libLLVMScalarOpts.a  lib/libLLVMAggressiveInstCombine.a  lib/libLLVMInstCombine.a  lib/libLLVMTransformUtils.a  lib/libLLVMAnalysis.a  lib/libLLVMProfileData.a  lib/libLLVMSymbolize.a  lib/libLLVMDebugInfoPDB.a  lib/libLLVMDebugInfoMSF.a  lib/libLLVMDebugInfoDWARF.a  lib/libLLVMObject.a  lib/libLLVMMCParser.a  lib/libLLVMMC.a  lib/libLLVMDebugInfoCodeView.a  lib/libLLVMTextAPI.a  lib/libLLVMBitReader.a  lib/libLLVMCore.a  lib/libLLVMBinaryFormat.a  lib/libLLVMRemarks.a  lib/libLLVMBitstreamReader.a  lib/libclangRewrite.a  lib/libclangLex.a  lib/libclangBasic.a  lib/libLLVMSupport.a  -lrt  -ldl  -lpthread  -lm  /usr/lib/x86_64-linux-gnu/libz.so  lib/libLLVMDemangle.a && :
clang-15: error: unable to execute command: Killed
clang-15: error: linker command failed due to signal (use -v to see invocation)
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/mnt/volume_fra1_01/tc_build/build-llvm.py", line 1291, in <module>
    main()
  File "/mnt/volume_fra1_01/tc_build/build-llvm.py", line 1287, in main
    do_multistage_build(args, dirs, env_vars)
  File "/mnt/volume_fra1_01/tc_build/build-llvm.py", line 1236, in do_multistage_build
    invoke_ninja(args, dirs, stage)
  File "/mnt/volume_fra1_01/tc_build/build-llvm.py", line 1137, in invoke_ninja
    subprocess.run('ninja', check=True, cwd=build_folder)
  File "/usr/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'ninja' returned non-zero exit status 1.

but I don't see what is wrong

Building on Ubuntu 20.04; installed the packages from https://github.com/ClangBuiltLinux/tc-build/blob/main/ci.sh#L28

Update known good revision

I am filing this to track updating the known good revision soon. I was going to do it at the same time as updating to Linux 5.12 like I usually do but this issue breaks certain workflows so we should only update once it is fixed.

Target triple option

Since I use tc-build I see in my logs:

$ grep warning: build-log_5.7.0-rc6-2-amd64-clang.txt 
dpkg-architecture: warning: specified GNU system type x86_64-linux-gnu does not match CC system type x86_64-unknown-linux-gnu, try setting a correct CC environment variable

My generated clang-10 says Target: x86_64-unknown-linux-gnu:

$ clang-10 -v
ClangBuiltLinux clang version 10.0.1 (https://github.com/llvm/llvm-project f79cd71e145c6fd005ba4dd1238128dfa0dc2cb6)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/dileks/src/llvm-toolchain/install/bin
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/10
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/8
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/9
Selected GCC installation: /usr/lib/gcc/x86_64-linux-gnu/10
Candidate multilib: .;@m64
Candidate multilib: 32;@m32
Candidate multilib: x32;@mx32
Selected multilib: .;@m64

Whereas Debian's gcc-10 says Target: x86_64-linux-gnu:

$ gcc-10 -v
Using built-in specs.
COLLECT_GCC=gcc-10
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/10/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none:amdgcn-amdhsa:hsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Debian 10.1.0-1' --with-bugurl=file:///usr/share/doc/gcc-10/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-10 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --enable-libphobos-checking=release --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none,amdgcn-amdhsa,hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 10.1.0 (Debian 10.1.0-1)

So an option to pass a "triple" for the target(s) is desired.
Unsure which target triple options are allowed in the LLVM world.

Thanks.

download llvm from tarball's rather than shallow clone?

Describe the bug
It looks like downloading LLVM is done via git clone. It might be faster to download the tarball? I think https://github.com/llvm/llvm-project/tarball/master works.
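
The suggested tarball download might look roughly like this (the unpacked folder name is an assumption about how GitHub names its tarballs):

$ curl -LSs https://github.com/llvm/llvm-project/tarball/master | tar -xzf -
$ mv llvm-llvm-project-* llvm-project   # GitHub tarballs unpack into an owner-repo-sha folder
$ ./build-llvm.py --no-update ...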

To Reproduce
Steps to reproduce the behavior:

$ ./build-llvm.py --pgo kernel-defconfig --no-ccache

Expected behavior
Downloads faster.

Screenshots
If applicable, add screenshots to help explain your problem.

Environment (please complete the following information):

  • Command you ran: ./build-llvm.py --pgo kernel-defconfig --no-ccache
  • Distribution: gLinux (Debian)
  • Python version: Python 3.9.12
  • Shell: zsh

Building with PGO and ThinLTO support

Impressed by the PGO related talks at LPC 2020 I wanted to try it in combination with ThinLTO.

[ My first question ]
Why is it possible to set stage1-only tc-build options when I do a PGO + ThinLTO build which is AFAICS a multi-stage build?

[ My second question ]
How can I verify that especially PGO (and ThinLTO) was enabled?

My build-llvm.py line looks like this:

python3 ./build-llvm.py --no-update --build-type Release -p clang;lld -t X86 --clang-vendor '' -B /home/dileks/src/llvm-toolchain/build -I /home/dileks/src/llvm-toolchain/install --pgo --lto thin --check-targets clang lld

Tried to verify like this:

$ cd build/stage1

$ grep -i pgo CMakeCache.txt | grep ^[A-Z]
LLVM_ENABLE_IR_PGO:BOOL=OFF
LLVM_ENABLE_IR_PGO-ADVANCED:INTERNAL=1

$ grep -i lto CMakeCache.txt | grep ^[A-Z]
BENCHMARK_ENABLE_LTO:BOOL=OFF
CMAKE_DLLTOOL:FILEPATH=/usr/lib/llvm-11/bin/llvm-dlltool
LLVMDlltoolDriver_LIB_DEPENDS:STATIC=general;LLVMObject;general;LLVMOption;general;LLVMSupport;
LLVMLTO_LIB_DEPENDS:STATIC=general;LLVMAggressiveInstCombine;general;LLVMAnalysis;general;LLVMBinaryFormat;general;LLVMBitReader;general;LLVMBitWriter;general;LLVMCodeGen;general;LLVMCore;general;LLVMExtensions;general;LLVMInstCombine;general;LLVMLinker;general;LLVMMC;general;LLVMObjCARCOpts;general;LLVMObject;general;LLVMPasses;general;LLVMRemarks;general;LLVMScalarOpts;general;LLVMSupport;general;LLVMTarget;general;LLVMTransformUtils;general;LLVMipo;
LLVM_ENABLE_LTO:STRING=OFF
LLVM_TOOL_LLVM_LTO2_BUILD:BOOL=ON
LLVM_TOOL_LLVM_LTO_BUILD:BOOL=ON
LLVM_TOOL_LTO_BUILD:BOOL=ON
LLVMgold_LIB_DEPENDS:STATIC=general;LLVMX86CodeGen;general;LLVMX86AsmParser;general;LLVMX86Desc;general;LLVMX86Disassembler;general;LLVMX86Info;general;LLVMLinker;general;LLVMLTO;general;LLVMBitWriter;general;LLVMipo;
LTO_LIB_DEPENDS:STATIC=general;LLVMX86AsmParser;general;LLVMX86CodeGen;general;LLVMX86Desc;general;LLVMX86Disassembler;general;LLVMX86Info;general;LLVMBitReader;general;LLVMCore;general;LLVMCodeGen;general;LLVMLTO;general;LLVMMC;general;LLVMMCDisassembler;general;LLVMSupport;general;LLVMTarget;
CMAKE_DLLTOOL-ADVANCED:INTERNAL=1
COMPILER_RT_HAS_FNO_LTO_FLAG:INTERNAL=1
LLVM_TOOL_LLVM_LTO2_BUILD-ADVANCED:INTERNAL=1
LLVM_TOOL_LLVM_LTO_BUILD-ADVANCED:INTERNAL=1
LLVM_TOOL_LTO_BUILD-ADVANCED:INTERNAL=1

$ egrep -i 'pgo|lto' ../../logs/log_tc-build.txt 
-- Performing Test COMPILER_RT_HAS_FNO_LTO_FLAG
-- Performing Test COMPILER_RT_HAS_FNO_LTO_FLAG - Success

Some information of my build-environment:

[ clang ]

$ clang-11 -v
Debian clang version 11.0.0-++20200827023347+522d80ab553-1~exp1~20200827124024.70
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/10
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/9
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/10
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/9
Selected GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/10
Candidate multilib: .;@m64
Candidate multilib: 32;@m32
Candidate multilib: x32;@mx32
Selected multilib: .;@m64

[ tc-build Git ]

$ git log -1 --oneline 
05e7f327cb3a (HEAD -> master, origin/master, origin/HEAD) Merge pull request #108 from nathanchance/kernel-script-update

[ llvm-project Git ]

$ git branch --show-current 
release/11.x
$ git log -1 --oneline 
97ac9e82002d (HEAD -> release/11.x, origin/release/11.x) [SSP] Restore setting the visibility of __guard_local to hidden for better code generation.

Installed Debian llvm-toolchain-11 packages:

VER="1:11.0.0~++20200827023347+522d80ab553-1~exp1~20200827124024.70"

root# dpkg -l | grep $VER | awk '/^ii/ {print $1 " " $2 " " $3}' | column -t
ii  clang-11                1:11.0.0~++20200827023347+522d80ab553-1~exp1~20200827124024.70
ii  libclang-common-11-dev  1:11.0.0~++20200827023347+522d80ab553-1~exp1~20200827124024.70
ii  libclang-cpp11          1:11.0.0~++20200827023347+522d80ab553-1~exp1~20200827124024.70
ii  libclang1-11            1:11.0.0~++20200827023347+522d80ab553-1~exp1~20200827124024.70
ii  libllvm11:amd64         1:11.0.0~++20200827023347+522d80ab553-1~exp1~20200827124024.70
ii  lld-11                  1:11.0.0~++20200827023347+522d80ab553-1~exp1~20200827124024.70
ii  llvm-11                 1:11.0.0~++20200827023347+522d80ab553-1~exp1~20200827124024.70
ii  llvm-11-runtime         1:11.0.0~++20200827023347+522d80ab553-1~exp1~20200827124024.70

Am I missing some Debian packages that need to be installed?

Do I need a host clang compiler with special support enabled?
How can I verify if my host clang compiler supports PGO and/or ThinLTO?

If this is a multi-stage build will I see PGO and/or (Thin)LTO settings later in a build/stage2 directory?
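
For what it's worth, one hedged way to check the later stages, assuming the build tree follows the stage1/stage2/stage3 layout seen in other logs here and that the PGO/LTO flags end up in the cached compiler flags:

# stage 3 is the final PGO-trained (and optionally LTO'd) build
$ grep -E '^LLVM_ENABLE_LTO|profile-instr-use' /path/to/build/stage3/CMakeCache.txt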

Ability to include tests for targets into build-dir

Hi,

while digging into CBL issue #619 I would like to be able to run tests for the clang and lld targets.

This is adapted from a previous version of my script.

[ run_tests-lld.sh ]

#!/bin/sh
  
export LANG=C
export LC_ALL=C

BUILD_DIR="$(pwd)/build/stage1"

LLD_TESTS="ELF/x86-64-retpoline-linkerscript.s ELF/x86-64-retpoline-znow-linkerscript.s ELF/x86-64-retpoline-znow.s ELF/x86-64-retpoline.s"

LLVM_LIT_OPTS="--verbose --echo-all-commands --show-all"

cd $BUILD_DIR

for t in $LLD_TESTS ; do ./bin/llvm-lit $LLVM_LIT_OPTS ./lld/test/$t ; done

I was not able to include the $LLD_TESTS from /path/to/tc-build/... to run this.

Option for BPF support when CONFIG_DEBUG_INFO_BTF=y

Debian-kernel team started to set CONFIG_DEBUG_INFO_BTF=y.

First, this needs pahole binary from dwarves package (minimum: v1.16).

$ pahole --version
v1.19

Second, your LLVM toolchain needs BPF support enabled.

My self-made LLVM toolchain v11.0.1 does not have this:

$ which llc
/home/dileks/src/llvm-toolchain/install/bin/llc

$ llc --version
LLVM (http://llvm.org/):
 LLVM version 11.0.1
 Optimized build.
 Default target: x86_64-unknown-linux-gnu
 Host CPU: sandybridge

 Registered Targets:
   x86    - 32-bit X86: Pentium-Pro and above
   x86-64 - 64-bit X86: EM64T and AMD64

Debian's LLC has BPF support:

$ /usr/bin/llc-11 --version | grep -i bpf
   bpf        - BPF (host endian)
   bpfeb      - BPF (big endian)
   bpfel      - BPF (little endian)

Is it possible to have an option to enable BPF support?

Thanks.

[1] https://salsa.debian.org/kernel-team/linux/-/commit/929891281c61ce4403ddd869664c949692644a2f
[2] https://www.kernel.org/doc/html/latest/bpf/bpf_devel_QA.html?highlight=pahole#llvm
[3] https://www.kernel.org/doc/html/latest/bpf/btf.html?highlight=pahole#btf-generation
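
For what it's worth, the --targets handling shown in other issues here suggests BPF can simply be added to the target list; a hedged example:

$ ./build-llvm.py --targets "X86;BPF"
$ /path/to/install/bin/llc --version | grep -i bpf   # should now list the bpf targets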

Linking CXX shared module lib/LLVMgold.so FAILED

Describe the bug
build-llvm.py fails at step 3 of 3 as described below. However, using --lto thin works as intended but --lto full fails.

To Reproduce
Steps to reproduce the behavior:

  1. build-llvm.py
    --targets "ARM;AArch64;X86"
    --use-good-revision
    --pgo
    --lto full

  2. Script fails at stage 3 of 3:
    [3405/4091] Linking CXX shared module lib/LLVMgold.so FAILED: lib/LLVMgold.so : && /home/danny/android/build-tools-llvm/build/llvm/stage1/bin/clang++ -fPIC -fPIC -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -w -fdiagnostics-color -ffunction-sections -fdata-sections -fprofile-instr-use="/home/danny/android/build-tools-llvm/build/llvm/profdata.prof" -flto=full -O3 -DNDEBUG -Wl,-z,nodelete -fuse-ld=/home/danny/android/build-tools-llvm/build/llvm/stage1/bin/ld.lld -Wl,--color-diagnostics -Wl,-O3 -Wl,--gc-sections -Wl,--version-script,"/home/danny/android/build-tools-llvm/build/llvm/stage3/tools/gold/LLVMgold.exports" -shared -o lib/LLVMgold.so tools/gold/CMakeFiles/LLVMgold.dir/gold-plugin.cpp.o -Wl,-rpath,"\$ORIGIN/../lib" lib/libLLVMARMCodeGen.a lib/libLLVMARMAsmParser.a lib/libLLVMARMDesc.a lib/libLLVMARMDisassembler.a lib/libLLVMARMInfo.a lib/libLLVMARMUtils.a lib/libLLVMAArch64CodeGen.a lib/libLLVMAArch64AsmParser.a lib/libLLVMAArch64Desc.a lib/libLLVMAArch64Disassembler.a lib/libLLVMAArch64Info.a lib/libLLVMAArch64Utils.a lib/libLLVMX86CodeGen.a lib/libLLVMX86AsmParser.a lib/libLLVMX86Desc.a lib/libLLVMX86Disassembler.a lib/libLLVMX86Info.a lib/libLLVMX86Utils.a lib/libLLVMLinker.a lib/libLLVMLTO.a lib/libLLVMBitWriter.a lib/libLLVMipo.a lib/libLLVMARMDesc.a lib/libLLVMARMInfo.a lib/libLLVMARMUtils.a lib/libLLVMAArch64Desc.a lib/libLLVMAArch64Info.a lib/libLLVMAArch64Utils.a lib/libLLVMAsmPrinter.a lib/libLLVMDebugInfoDWARF.a lib/libLLVMCFGuard.a lib/libLLVMGlobalISel.a lib/libLLVMSelectionDAG.a lib/libLLVMMCDisassembler.a lib/libLLVMObjCARCOpts.a lib/libLLVMPasses.a lib/libLLVMCodeGen.a lib/libLLVMTarget.a lib/libLLVMCoroutines.a lib/libLLVMipo.a lib/libLLVMLinker.a lib/libLLVMBitWriter.a lib/libLLVMFrontendOpenMP.a lib/libLLVMIRReader.a lib/libLLVMAsmParser.a lib/libLLVMInstrumentation.a lib/libLLVMVectorize.a lib/libLLVMScalarOpts.a lib/libLLVMAggressiveInstCombine.a lib/libLLVMInstCombine.a lib/libLLVMTransformUtils.a lib/libLLVMAnalysis.a lib/libLLVMProfileData.a lib/libLLVMObject.a lib/libLLVMMCParser.a lib/libLLVMMC.a lib/libLLVMDebugInfoCodeView.a lib/libLLVMDebugInfoMSF.a lib/libLLVMBitReader.a lib/libLLVMTextAPI.a lib/libLLVMCore.a lib/libLLVMBinaryFormat.a lib/libLLVMRemarks.a lib/libLLVMBitstreamReader.a lib/libLLVMSupport.a -lz -lrt -ldl -lpthread -lm lib/libLLVMDemangle.a && : clang-11: error: unable to execute command: Killed

Expected behavior
Script should build and install toolchain correctly.

Environment (please complete the following information):

  • Command you ran: see step 1
  • Distribution: Ubuntu 18.04 [amd64]
  • Python version: 2.7.17

Linux sources: Download XZ tarball, simplify decompressing and verify sign-file

For downloading, it is better to use the *.tar.xz tarball (here it saves approx. 50 MiB):

linux-5.2.tar.gz                                   08-Jul-2019 04:47    157M
linux-5.2.tar.xz                                   08-Jul-2019 04:47    102M

Sidenote: There is also a sign-file around. I have seen a signature-verification part in tc-build; maybe we can have that here for the Linux sources, too.

linux-5.2.tar.sign                                 08-Jul-2019 04:47     983

Simplify decompressing of tarballs (modern tar versions are smart enough):

tar -xf $tarball

Snippet:

diff --git a/kernel/build.sh b/kernel/build.sh
index 3a9a08576e17..cb8be792973c 100755
--- a/kernel/build.sh
+++ b/kernel/build.sh
@@ -47,7 +47,7 @@ if [[ -n ${SRC_FOLDER} ]]; then
     cd "${SRC_FOLDER}" || exit 1
 else
     LINUX=linux-5.1
-    LINUX_TARBALL=${TC_BLD}/kernel/${LINUX}.tar.gz
+    LINUX_TARBALL=${TC_BLD}/kernel/${LINUX}.tar.xz
     LINUX_PATCH=${TC_BLD}/kernel/${LINUX}.patch
 
     # If we don't have the source tarball, download it
@@ -55,7 +55,7 @@ else
 
     # If there is a patch to apply, remove the folder so that we can patch it accurately (we cannot assume it has already been patched)
     [[ -f ${LINUX_PATCH} ]] && rm -rf ${LINUX}
-    [[ -d ${LINUX} ]] || { tar -xzf "${LINUX_TARBALL}" || exit ${?}; }
+    [[ -d ${LINUX} ]] || { tar -xf "${LINUX_TARBALL}" || exit ${?}; }
     cd ${LINUX} || exit 1
     [[ -f ${LINUX_PATCH} ]] && patch -p1 < "${LINUX_PATCH}"
 fi

Just some little improvements.

[1] https://mirrors.edge.kernel.org/pub/linux/kernel/v5.x/

[request]Specify the number of compilation threads

When compiling in a chroot container on a mobile phone, the device gets very hot and memory is insufficient.
It would be nice to be able to specify the number of threads. A reasonable number of threads will neither cause overheating or thermal throttling nor exhaust memory.

tools/gold/X86/strip_names.ll fails after r367755

This is not directly related to the script but this repo seems like the best place to do it, rather than opening issues on the llvm-project repo and clogging up the main issue tracker since it isn't something that affects the kernel. This is more of an FYI issue, in case anyone is using --check-targets llvm like I am.

The test case tools/gold/X86/strip_names.ll fails after r367755:

$ ./build-llvm.py --build-stage1-only --check-targets llvm --projects llvm --targets X86
...
[394/395] Running the LLVM regression tests
FAIL: LLVM :: tools/gold/X86/strip_names.ll (30941 of 32869)
******************** TEST 'LLVM :: tools/gold/X86/strip_names.ll' FAILED ********************
Script:
--
: 'RUN: at line 1';   /home/nathan/cbl/git/tc-build/build/llvm/stage1/bin/llvm-as /home/nathan/cbl/git/tc-build/llvm-project/llvm/test/tools/gold/X86/strip_names.ll -o /home/nathan/cbl/git/tc-build/build/llvm/stage1/test/tools/gold/X86/Output/strip_names.ll.tmp.o
: 'RUN: at line 3';   /usr/bin/ld.gold -plugin /home/nathan/cbl/git/tc-build/build/llvm/stage1/./lib/LLVMgold.so     -m elf_x86_64     --plugin-opt=save-temps     -shared /home/nathan/cbl/git/tc-build/build/llvm/stage1/test/tools/gold/X86/Output/strip_names.ll.tmp.o -o /home/nathan/cbl/git/tc-build/build/llvm/stage1/test/tools/gold/X86/Output/strip_names.ll.tmp2.o
: 'RUN: at line 7';   /home/nathan/cbl/git/tc-build/build/llvm/stage1/bin/llvm-dis /home/nathan/cbl/git/tc-build/build/llvm/stage1/test/tools/gold/X86/Output/strip_names.ll.tmp2.o.0.2.internalize.bc -o - | /home/nathan/cbl/git/tc-build/build/llvm/stage1/bin/FileCheck /home/nathan/cbl/git/tc-build/llvm-project/llvm/test/tools/gold/X86/strip_names.ll
: 'RUN: at line 9';   /usr/bin/ld.gold -plugin /home/nathan/cbl/git/tc-build/build/llvm/stage1/./lib/LLVMgold.so     -m elf_x86_64     --plugin-opt=emit-llvm     -shared /home/nathan/cbl/git/tc-build/build/llvm/stage1/test/tools/gold/X86/Output/strip_names.ll.tmp.o -o /home/nathan/cbl/git/tc-build/build/llvm/stage1/test/tools/gold/X86/Output/strip_names.ll.tmp2.o
: 'RUN: at line 13';   /home/nathan/cbl/git/tc-build/build/llvm/stage1/bin/llvm-dis /home/nathan/cbl/git/tc-build/build/llvm/stage1/test/tools/gold/X86/Output/strip_names.ll.tmp2.o -o - | /home/nathan/cbl/git/tc-build/build/llvm/stage1/bin/FileCheck --check-prefix=NONAME /home/nathan/cbl/git/tc-build/llvm-project/llvm/test/tools/gold/X86/strip_names.ll
--
Exit Code: 1

Command Output (stderr):
--
/home/nathan/cbl/git/tc-build/llvm-project/llvm/test/tools/gold/X86/strip_names.ll:23:11: error: NONAME: expected string not found in input
; NONAME: @foo(i32)
          ^
<stdin>:6:18: note: scanning from here
@GlobalValueName = global i32 0
                 ^
<stdin>:8:12: note: possible intended match here
define i32 @foo(i32 %0) {
           ^

--

********************
Testing Time: 29.71s
********************
Failing Tests (1):
    LLVM :: tools/gold/X86/strip_names.ll

  Expected Passes    : 18707
  Expected Failures  : 49
  Unsupported Tests  : 14112
  Unexpected Failures: 1
FAILED: test/CMakeFiles/check-llvm 

I pushed a fix: https://reviews.llvm.org/D65726

Hopefully I did that right, just waiting for review and acceptance now.

Add make version 4.3 to speedup parallel build

Hi,

As pointed out at the first ClangBuiltLinux Meetup in Zurich, I suggest trying make version 4.3 to check whether we see some speedups in parallel builds.

The official release announcement [1] and a review in German [2] are linked below.

While dealing with GNU make, I came across this commit ("pipe: use exclusive waits when reading or writing") in Linus' tree [3], which is reported to significantly speed up parallel make jobs when building a Linux kernel.

Speaking for the Debian side, there is make (4.2.1-1.2) available in buster/testing/unstable.

[1] https://lists.gnu.org/archive/html/info-gnu/2020-01/msg00004.html
[2] https://www.heise.de/developer/meldung/Build-Tool-GNU-Make-4-3-verbessert-die-Performance-4641700.html
[3] https://git.kernel.org/linus/0ddad21d3e99c743a3aa473121dc5561679e26bb

Host Freezes while Building LLVM stage 3

Describe the bug
Compilation hangs at Building LLVM stage 3, job number 2257. It freezes the host PC/cloud instance. I tried on a PC and it froze; I tried on the cloud and the same thing happened.

To Reproduce
Steps to reproduce the behavior:

./build-llvm.py --lto=full --march=corei7 --pgo --targets=AArch64

Expected behavior
Expected it to pass and complete the build process.

Screenshots
Stage 3

Environment (please complete the following information):

  • Command i ran: [./build-llvm.py --lto=full --march=corei7 --pgo --targets=AArch64]
  • Distribution: [Ubuntu 19.04]
  • Python version: [2.7.16 & 3.7.3]

Allow empty string in --clang-vendor option

Currently it is not possible to pass an empty string to --clang-vendor option.

This brutal method works:

[ build-llvm.py ]

def parse_parameters(root_folder):
...
-                        default="ClangBuiltLinux")
+                        default="")

I have opened a separate issue (see my report in [1]).

[1] #92 (comment)

Execute build-llvm.py to build a cross toolchain of AArch64

Describe the bug

    Clang :: SemaCXX/large-array-init.cpp          
    Clang :: SemaCXX/warn-unused-local-typedef-serialize.cpp                                                                        
                                 
  Expected Passes    : 15029
  Expected Failures  : 14
  Unsupported Tests  : 1459
  Unexpected Passes  : 2
  Unexpected Failures: 84
[457/573] Building CXX object unittests/Support/CMakeFiles/SupportTests.dir/FileCollectorTest.cpp.o
FAILED: tools/clang/test/CMakeFiles/check-clang 
cd /home/lina/tc-build/build/llvm/stage1/tools/clang/test && /usr/bin/python /home/lina/tc-build/build/llvm/stage1/./bin/llvm-lit -sv --param clang_site_config=/home/lina/tc-build/build/llvm/stage1/tools/clang/test/lit.site.cfg --param USE_Z3_SOLVER=0 /home/lina/tc-build/build/llvm/stage1/tools/clang/test
[459/573] Building CXX object unittests/Support/CMakeFiles/SupportTests.dir/GlobPatternTest.cpp.o
[460/573] Building CXX object unittests/Support/CMakeFiles/SupportTests.dir/Host.cpp.o
[461/573] Building CXX object unittests/Support/CMakeFiles/SupportTests.dir/FileCheckTest.cpp.o
[462/573] Building CXX object unittests/Support/CMakeFiles/SupportTests.dir/ItaniumManglingCanonicalizerTest.cpp.o
[463/573] Building CXX object unittests/Support/CMakeFiles/SupportTests.dir/ErrorTest.cpp.o
[464/573] Building CXX object unittests/Support/CMakeFiles/SupportTests.dir/FormatVariadicTest.cpp.o
[465/573] Building CXX object unittests/Support/CMakeFiles/SupportTests.dir/JSONTest.cpp.o
ninja: build stopped: subcommand failed.

========================
== Checking CC and LD ==
========================

CC: /usr/bin/clang-10
CXX: /usr/bin/clang++
LD: /usr/bin/ld.lld

===========================
== Checking dependencies ==
===========================

/usr/bin/cmake
/usr/bin/curl
/usr/bin/git
/usr/bin/ninja

===================
== Updating LLVM ==
===================


==============================
== Configuring LLVM stage 1 ==
==============================


===========================
== Building LLVM stage 1 ==
===========================

Traceback (most recent call last):
  File "./build-llvm.py", line 1061, in <module>
    main()
  File "./build-llvm.py", line 1057, in main
    do_multistage_build(args, dirs, env_vars)
  File "./build-llvm.py", line 1018, in do_multistage_build
    invoke_ninja(args, dirs, stage)
  File "./build-llvm.py", line 954, in invoke_ninja
    subprocess.run(['ninja'] +
  File "/usr/lib64/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', 'check-clang', 'check-lld', 'check-llvm']' returned non-zero exit status 1.

To Reproduce
Steps to reproduce the behavior:

./build-llvm.py \
          --assertions \
          --branch "release/10.x" \
          --build-stage1-only \
          --check-targets clang lld llvm \
          --install-stage1-only \
          --projects "clang;lld" \
          --shallow-clone \
          --targets AArch64  2>&1 | tee T-build-llvm.sh.log

Expected behavior
Execute build-llvm.py to build a cross toolchain for AArch64

Environment (please complete the following information):

  • Command you ran: [e.g. ./build-llvm.py]
cat T-build-llvm.sh
#!/bin/bash
set -e
set -x

./build-llvm.py \
          --assertions \
          --branch "release/10.x" \
          --build-stage1-only \
          --check-targets clang lld llvm \
          --install-stage1-only \
          --projects "clang;lld" \
          --shallow-clone \
          --targets AArch64  2>&1 | tee T-build-llvm.sh.log
  • Distribution: Fedora32 x86_64
  • Python version: 3.8.2
  • Shell: bash

CMake build error in ./build-llvm.py on Ubuntu Bionic

Describe the bug
CMake Error at CMakeLists.txt:3 (cmake_minimum_required):
CMake 3.13.4 or higher is required. You are running version 3.10.2

The readme (https://github.com/ClangBuiltLinux/tc-build/blob/main/README.md) says that Ubuntu Bionic is supported, but the latest CMake version for Bionic is 3.10.2.

To Reproduce
Steps to reproduce the behavior:

  1. Run ./build-llvm.py
  2. See error

Expected behavior
Script success.

Environment (please complete the following information):

  • Command you ran: ./build-llvm.py
  • Distribution: Ubuntu Bionic 18.04.5
  • Python version: python3.6 2
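
One possible workaround (an assumption about the reporter's setup, not something the script does for you) is installing a newer CMake for the current user, e.g. via pip, and putting it first in PATH:

$ python3 -m pip install --user --upgrade cmake   # installs a recent cmake into ~/.local/bin
$ export PATH="$HOME/.local/bin:$PATH"
$ cmake --version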

Suppress version control revision info (Git revision id) is appended

Each time I build a new toolchain via tc-build the current Git revision id is appended and changes when llvm-project Git updates.

--- /boot/config-5.7.0-rc1-5-amd64-clang
+++ .config

-# Compiler: ClangBuiltLinux clang version 10.0.1 (https://github.com/llvm/llvm-project 61b6007157600d8080b87361397bb61ffbcfb196)
+# Compiler: ClangBuiltLinux clang version 10.0.1 (https://github.com/llvm/llvm-project edbe962459da6e3b7b4168118f93a77847b54e02)

The ccache benefit is gone as the Linux build system detects the change of compiler.

There is an LLVM_APPEND_VC_REV CMake option (default: ON) to suppress this:

LLVM_APPEND_VC_REV:BOOL
    Embed version control revision info (Git revision id). The version info is provided by the LLVM_REVISION macro in llvm/include/llvm/Support/VCSRevision.h. Developers using git who don’t need revision info can disable this option to avoid re-linking most binaries after a branch switch. Defaults to ON.

What do you think of integrating an option for this in tc-build?

[1] https://llvm.org/docs/CMake.html#llvm-specific-variables
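
In raw CMake terms, the suppression described above is just the following define; whether and how tc-build should expose it is the open question:

$ cmake -G Ninja ... -DLLVM_APPEND_VC_REV=OFF ...   # drop the Git revision from version strings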

kernel/build.sh error

I used kernel/build.sh to build kernel 5.8.3 on my VM. The command I used is ./build.sh --allyesconfig -t X86.
The building process completed successfully. I can even run make modules_install and make install successfully.
However, I could not boot the OS with the newly-compiled kernel. I got the boot log as shown below.
Could you please tell me how I should fix this?

[ 0.000000][ T0] Linux version 5.8.3 (lsm@lsm) (clang version 10.0.0 , LLD 10.0.0) #1 SMP Wed Sep 9 19:23:31 UTC 2020
[ 0.000000][ T0] Command line: BOOT_IMAGE=/vmlinuz-5.8.3 root=UUID=1b4b1ba4-7a4b-4ba1-9fd4-93576d223524 ro console=tty0 console=ttyS1,115200n8
[ 0.000000][ T0] KERNEL supported cpus:
[ 0.000000][ T0] Intel GenuineIntel
[ 0.000000][ T0] AMD AuthenticAMD
[ 0.000000][ T0] Hygon HygonGenuine
[ 0.000000][ T0] Centaur CentaurHauls
[ 0.000000][ T0] zhaoxin Shanghai
[ 0.000000][ T0] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[ 0.000000][ T0] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[ 0.000000][ T0] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[ 0.000000][ T0] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
[ 0.000000][ T0] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
[ 0.000000][ T0] BIOS-provided physical RAM map:
[ 0.000000][ T0] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000][ T0] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x0000000000100000-0x00000000bffd4fff] usable
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000bffd5000-0x00000000bfffffff] reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
[ 0.000000][ T0] NX (Execute Disable) protection: active
[ 0.000000][ T0] SMBIOS 2.8 present.
[ 0.000000][ T0] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
[ 0.000000][ T0] Hypervisor detected: KVM
[ 0.000000][ T0] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000004][ T0] kvm-clock: cpu 0, msr 21ce00001, primary cpu clock
[ 0.000004][ T0] kvm-clock: using sched offset of 523103859767 cycles
[ 0.000030][ T0] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[ 0.000075][ T0] tsc: Detected 1992.000 MHz processor
[ 0.005589][ T0] last_pfn = 0x240000 max_arch_pfn = 0x400000000
[ 0.005825][ T0] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
[ 0.005865][ T0] last_pfn = 0xbffd5 max_arch_pfn = 0x400000000
[ 0.014513][ T0] found SMP MP-table at [mem 0x000f6a00-0x000f6a0f]
[ 0.015579][ T0] check: Scanning 1 areas for low memory corruption
[ 0.155442][ T0] RAMDISK: [mem 0x3643f000-0x37216fff]
[ 0.155523][ T0] ACPI: Early table checksum verification disabled
[ 0.155548][ T0] ACPI: RSDP 0x00000000000F69B0 000014 (v00 BOCHS )
[ 0.155580][ T0] ACPI: RSDT 0x00000000BFFE13E7 00002C (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
[ 0.155620][ T0] ACPI: FACP 0x00000000BFFE1263 000074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
[ 0.155671][ T0] ACPI: DSDT 0x00000000BFFDFDC0 0014A3 (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001)
[ 0.155708][ T0] ACPI: FACS 0x00000000BFFDFD80 000040
[ 0.155739][ T0] ACPI: APIC 0x00000000BFFE1357 000090 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
[ 0.156093][ T0] No NUMA configuration found
[ 0.156116][ T0] Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
[ 0.156239][ T0] NODE_DATA(0) allocated [mem 0x23efc4000-0x23f002fff]
[ 0.183823][ T0] Zone ranges:
[ 0.183849][ T0] DMA [mem 0x0000000000001000-0x0000000000ffffff]
[ 0.183874][ T0] DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
[ 0.183897][ T0] Normal [mem 0x0000000100000000-0x000000023fffffff]
[ 0.183921][ T0] Device empty
[ 0.183942][ T0] Movable zone start for each node
[ 0.183966][ T0] Early memory node ranges
[ 0.183988][ T0] node 0: [mem 0x0000000000001000-0x000000000009efff]
[ 0.184010][ T0] node 0: [mem 0x0000000000100000-0x00000000bffd4fff]
[ 0.184033][ T0] node 0: [mem 0x0000000100000000-0x000000023fffffff]
[ 0.184083][ T0] Zeroed struct page in unavailable ranges: 141 pages
[ 0.184090][ T0] Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
[ 1.190381][ T0] kasan: KernelAddressSanitizer initialized
[ 1.190988][ T0] ACPI: PM-Timer IO Port: 0x608
[ 1.191046][ T0] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 1.191124][ T0] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 1.191162][ T0] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 1.191187][ T0] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 1.191211][ T0] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 1.191234][ T0] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 1.191258][ T0] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 1.191343][ T0] Using ACPI (MADT) for SMP configuration information
[ 1.191364][ T0] TSC deadline timer available
[ 1.191390][ T0] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[ 1.191581][ T0] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
[ 1.191611][ T0] PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 1.191632][ T0] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 1.191653][ T0] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 1.191682][ T0] PM: hibernation: Registered nosave memory: [mem 0xbffd5000-0xbfffffff]
[ 1.191703][ T0] PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[ 1.191723][ T0] PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[ 1.191744][ T0] PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[ 1.191765][ T0] PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[ 1.191799][ T0] [mem 0xc0000000-0xfeffbfff] available for PCI devices
[ 1.191818][ T0] Booting paravirtualized kernel on KVM
[ 1.191851][ T0] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
[ 1.313559][ T0] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
[ 1.330133][ T0] percpu: Embedded 581 pages/cpu s2342912 r8192 d28672 u4194304
[ 1.330573][ T0] KVM setup async PF for cpu 0
[ 1.330603][ T0] kvm-stealtime: cpu 0, msr 1cc23b0c0
[ 1.330643][ T0] Built 1 zonelists, mobility grouping on. Total pages: 2064221
[ 1.330663][ T0] Policy zone: Normal
[ 1.330689][ T0] Kernel command line: BOOT_IMAGE=/vmlinuz-5.8.3 root=UUID=1b4b1ba4-7a4b-4ba1-9fd4-93576d223524 ro console=tty0 console=ttyS1,115200n8
[ 1.339836][ T0] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[ 1.343711][ T0] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[ 1.343983][ T0] mem auto-init: stack:all, heap alloc:on, heap free:on
[ 1.344004][ T0] mem auto-init: clearing system memory may take some time...
[ 13.694263][ T0] Memory: 3196436K/8388044K available (204806K kernel code, 26932K rwdata, 77276K rodata, 10512K init, 72204K bss, 1985352K reserved, 0K cma-reserved)
[ 13.694321][ T0] random: get_random_u64 called from __kmem_cache_create+0x2a/0x7e0 with crng_init=0
[ 13.697134][ T0] random: get_random_u64 called from cache_random_seq_create+0x9b/0x1e0 with crng_init=0
[ 13.697241][ T0] random: get_random_u64 called from __kmem_cache_create+0x2a/0x7e0 with crng_init=0
[ 13.699467][ T0] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[ 13.699519][ T0] kmemleak: Kernel memory leak detector disabled
[ 13.711726][ T0] ODEBUG: selftest passed
[ 13.711931][ T0] Kernel/User page tables isolation: enabled
[ 13.763537][ T0] ftrace: allocating 273119 entries in 1067 pages
[ 14.188633][ T0] ftrace: allocated 1067 pages with 5 groups
[ 14.188696][ T0]
[ 14.188710][ T0] **********************************************************
[ 14.188723][ T0] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE **
[ 14.188736][ T0] ** **
[ 14.188749][ T0] ** trace_printk() being used. Allocating extra memory. **
[ 14.188762][ T0] ** **
[ 14.188780][ T0] ** This means that this is a DEBUG kernel and it is **
[ 14.188793][ T0] ** unsafe for production use. **
[ 14.188806][ T0] ** **
[ 14.188819][ T0] ** If you see this message and you are not debugging **
[ 14.188831][ T0] ** the kernel, report this immediately to your vendor! **
[ 14.188844][ T0] ** **
[ 14.188857][ T0] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE **
[ 14.188869][ T0] **********************************************************
[ 14.197956][ T0] Running RCU self tests
[ 14.198008][ T0] rcu: Hierarchical RCU implementation.
[ 14.198020][ T0] rcu: RCU event tracing is enabled.
[ 14.198032][ T0] rcu: RCU dyntick-idle grace-period acceleration is enabled.
[ 14.198044][ T0] rcu: RCU lockdep checking is enabled.
[ 14.198057][ T0] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
[ 14.198069][ T0] rcu: RCU callback double-/use-after-free debug enabled.
[ 14.198080][ T0] rcu: RCU debug extended QS entry/exit.
[ 14.198091][ T0] Trampoline variant of Tasks RCU enabled.
[ 14.198102][ T0] Rude variant of Tasks RCU enabled.
[ 14.198114][ T0] Tracing variant of Tasks RCU enabled.
[ 14.198126][ T0] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[ 14.198138][ T0] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
[ 14.258776][ T0] NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
[ 14.260745][ T0] random: crng done (trusting CPU's manufacturer)
[ 14.270295][ T0] Console: colour VGA+ 80x25
[ 14.358735][ T0] printk: console [tty0] enabled
[ 14.517694][ T0] printk: console [ttyS1] enabled
[ 14.519376][ T0] serial port 1 not yet initialized
[ 14.520859][ T0] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[ 14.523188][ T0] ... MAX_LOCKDEP_SUBCLASSES: 8
[ 14.524506][ T0] ... MAX_LOCK_DEPTH: 48
[ 14.525870][ T0] ... MAX_LOCKDEP_KEYS: 8192
[ 14.527411][ T0] ... CLASSHASH_SIZE: 4096
[ 14.529825][ T0] ... MAX_LOCKDEP_ENTRIES: 32768
[ 14.532239][ T0] ... MAX_LOCKDEP_CHAINS: 65536
[ 14.533786][ T0] ... CHAINHASH_SIZE: 32768
[ 14.535368][ T0] memory used by lock dependency info: 6813 kB
[ 14.537158][ T0] memory used for stack traces: 4224 kB
[ 14.538792][ T0] per task-struct memory footprint: 2688 bytes
[ 14.540524][ T0] ------------------------
[ 14.541845][ T0] | Locking API testsuite:
[ 14.543157][ T0] ----------------------------------------------------------------------------
[ 14.545715][ T0] | spin |wlock |rlock |mutex | wsem | rsem |
[ 14.550230][ T0] --------------------------------------------------------------------------
[ 14.552466][ T0] A-A deadlock: ok | ok | ok | ok | ok | ok | ok |
[ 14.584328][ T0] A-B-B-A deadlock: ok | ok | ok | ok | ok | ok | ok |
[ 14.623603][ T0] A-B-B-C-C-A deadlock: ok | ok | ok | ok | ok | ok | ok |
[ 14.663189][ T0] A-B-C-A-B-C deadlock: ok | ok | ok | ok | ok | ok | ok |
[ 14.695733][ T0] A-B-B-C-C-D-D-A deadlock: ok | ok | ok | ok | ok | ok | ok |
[ 14.731973][ T0] A-B-C-D-B-D-D-A deadlock: ok | ok | ok | ok | ok | ok | ok |
[ 14.768900][ T0] A-B-C-D-B-C-D-A deadlock: ok | ok | ok | ok | ok | ok | ok |
[ 14.806198][ T0] double unlock: ok | ok | ok | ok | ok | ok | ok |
[ 14.835136][ T0] initialize held: ok | ok | ok | ok | ok | ok | ok |
[ 14.862971][ T0] --------------------------------------------------------------------------
[ 14.867904][ T0] recursive read-lock: | ok | | ok |
[ 14.877732][ T0] recursive read-lock #2: | ok | | ok |
[ 14.889206][ T0] mixed read-write-lock: | ok | | ok |
[ 14.899164][ T0] mixed write-read-lock: | ok | | ok |
[ 14.909183][ T0] mixed read-lock/lock-write ABBA: | ok | | ok |
[ 14.919102][ T0] mixed read-lock/lock-read ABBA: |FAILED| | ok |
[ 14.930219][ T0] mixed write-lock/lock-write ABBA: | ok | | ok |
[ 14.941364][ T0] --------------------------------------------------------------------------
[ 14.946393][ T0] hard-irqs-on + irq-safe-A/12: ok | ok | ok |
[ 14.957426][ T0] soft-irqs-on + irq-safe-A/12: ok | ok | ok |
[ 14.968579][ T0] hard-irqs-on + irq-safe-A/21: ok | ok | ok |
[ 14.979596][ T0] soft-irqs-on + irq-safe-A/21: ok | ok | ok |
[ 14.990171][ T0] sirq-safe-A => hirqs-on/12: ok | ok | ok |
[ 15.001843][ T0] sirq-safe-A => hirqs-on/21: ok | ok | ok |
[ 15.015515][ T0] hard-safe-A + irqs-on/12: ok | ok | ok |
[ 15.025885][ T0] soft-safe-A + irqs-on/12: ok | ok | ok |
[ 15.036609][ T0] hard-safe-A + irqs-on/21: ok | ok | ok |
[ 15.046191][ T0] soft-safe-A + irqs-on/21: ok | ok | ok |
[ 15.061532][ T0] hard-safe-A + unsafe-B #1/123: ok | ok | ok |
[ 15.070994][ T0] soft-safe-A + unsafe-B #1/123: ok | ok | ok |
[ 15.085728][ T0] hard-safe-A + unsafe-B #1/132: ok | ok | ok |
[ 15.097188][ T0] soft-safe-A + unsafe-B #1/132: ok | ok | ok |
[ 15.106746][ T0] hard-safe-A + unsafe-B #1/213: ok | ok | ok |
[ 15.120259][ T0] soft-safe-A + unsafe-B #1/213: ok | ok | ok |
[ 15.131651][ T0] hard-safe-A + unsafe-B #1/231: ok | ok | ok |
[ 15.144309][ T0] soft-safe-A + unsafe-B #1/231: ok | ok | ok |
[ 15.155294][ T0] hard-safe-A + unsafe-B #1/312: ok | ok | ok |
[ 15.168262][ T0] soft-safe-A + unsafe-B #1/312: ok | ok | ok |
[ 15.182386][ T0] hard-safe-A + unsafe-B #1/321: ok | ok | ok |
[ 15.192718][ T0] soft-safe-A + unsafe-B #1/321: ok | ok | ok |
[ 15.206098][ T0] hard-safe-A + unsafe-B #2/123: ok | ok | ok |
[ 15.217838][ T0] soft-safe-A + unsafe-B #2/123: ok | ok | ok |
[ 15.231827][ T0] hard-safe-A + unsafe-B #2/132: ok | ok | ok |
[ 15.244147][ T0] soft-safe-A + unsafe-B #2/132: ok | ok | ok |
[ 15.259787][ T0] hard-safe-A + unsafe-B #2/213: ok | ok | ok |
[ 15.276159][ T0] soft-safe-A + unsafe-B #2/213: ok | ok | ok |
[ 15.291327][ T0] hard-safe-A + unsafe-B #2/231: ok | ok | ok |
[ 15.306414][ T0] soft-safe-A + unsafe-B #2/231: ok | ok | ok |
[ 15.319496][ T0] hard-safe-A + unsafe-B #2/312: ok | ok | ok |
[ 15.334591][ T0] soft-safe-A + unsafe-B #2/312: ok | ok | ok |
[ 15.349429][ T0] hard-safe-A + unsafe-B #2/321: ok | ok | ok |
[ 15.362212][ T0] soft-safe-A + unsafe-B #2/321: ok | ok | ok |
[ 15.377169][ T0] hard-irq lock-inversion/123: ok | ok | ok |
[ 15.392224][ T0] soft-irq lock-inversion/123: ok | ok | ok |
[ 15.407551][ T0] hard-irq lock-inversion/132: ok | ok | ok |
[ 15.421384][ T0] soft-irq lock-inversion/132: ok | ok | ok |
[ 15.436932][ T0] hard-irq lock-inversion/213: ok | ok | ok |
[ 15.452847][ T0] soft-irq lock-inversion/213: ok | ok | ok |
[ 15.467312][ T0] hard-irq lock-inversion/231: ok | ok | ok |
[ 15.482684][ T0] soft-irq lock-inversion/231: ok | ok | ok |
[ 15.497970][ T0] hard-irq lock-inversion/312: ok | ok | ok |
[ 15.513988][ T0] soft-irq lock-inversion/312: ok | ok | ok |
[ 15.526404][ T0] hard-irq lock-inversion/321: ok | ok | ok |
[ 15.542327][ T0] soft-irq lock-inversion/321: ok | ok | ok |
[ 15.554619][ T0] hard-irq read-recursion/123: ok |
[ 15.561523][ T0] soft-irq read-recursion/123: ok |
[ 15.567365][ T0] hard-irq read-recursion/132: ok |
[ 15.573296][ T0] soft-irq read-recursion/132: ok |
[ 15.580800][ T0] hard-irq read-recursion/213: ok |
[ 15.586322][ T0] soft-irq read-recursion/213: ok |
[ 15.591791][ T0] hard-irq read-recursion/231: ok |
[ 15.600226][ T0] soft-irq read-recursion/231: ok |
[ 15.605938][ T0] hard-irq read-recursion/312: ok |
[ 15.610816][ T0] soft-irq read-recursion/312: ok |
[ 15.618117][ T0] hard-irq read-recursion/321: ok |
[ 15.623292][ T0] soft-irq read-recursion/321: ok |
[ 15.629030][ T0] --------------------------------------------------------------------------
[ 15.631844][ T0] | Wound/wait tests |
[ 15.632998][ T0] ---------------------
[ 15.635227][ T0] ww api failures: ok | ok | ok |
[ 15.647862][ T0] ww contexts mixing: ok | ok |
[ 15.654951][ T0] finishing ww context: ok | ok | ok | ok |
[ 15.670724][ T0] locking mismatches: ok | ok | ok |
[ 15.682257][ T0] EDEADLK handling: ok | ok | ok | ok | ok | ok | ok | ok | ok | ok |
[ 15.715734][ T0] spinlock nest unlocked: ok |
[ 15.721670][ T0] -----------------------------------------------------
[ 15.724427][ T0] |block | try |context|
[ 15.726576][ T0] -----------------------------------------------------
[ 15.728095][ T0] context: ok | ok | ok |
[ 15.737873][ T0] try: ok | ok | ok |
[ 15.748552][ T0] block: ok | ok | ok |
[ 15.758972][ T0] spinlock: ok | ok | ok |
[ 15.771123][ T0] -------------------------------------------------------
[ 15.772795][ T0] Good, all 261 testcases passed! |
[ 15.774071][ T0] ---------------------------------
[ 15.775716][ T0] ACPI: Core revision 20200528
[ 15.777928][ T0] APIC: Switch to symmetric I/O mode setup
[ 15.780327][ T0] x2apic enabled
[ 15.784788][ T0] Switched APIC routing to physical x2apic.
[ 15.789576][ T0] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x396d519840e, max_idle_ns: 881590569543 ns
[ 15.792324][ T0] Calibrating delay loop (skipped) preset value.. 3984.00 BogoMIPS (lpj=7968000)
[ 15.794587][ T0] pid_max: default: 32768 minimum: 301
[ 15.796286][ T0] LSM: Security Framework initializing
[ 15.796286][ T0] Yama: becoming mindful.
[ 15.796286][ T0] LoadPin: ready to pin (currently enforcing)
[ 15.796286][ T0] SELinux: Initializing.
[ 15.796286][ T0] TOMOYO Linux initialized
[ 15.796286][ T0] LSM support for eBPF active
[ 15.796286][ T0] Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[ 15.796286][ T0] Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[ 15.796286][ T0] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 15.796286][ T0] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
[ 15.796286][ T0] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[ 15.796286][ T0] Spectre V2 : Mitigation: Full generic retpoline
[ 15.796286][ T0] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[ 15.796286][ T0] Spectre V2 : Enabling Restricted Speculation for firmware calls
[ 15.796286][ T0] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[ 15.796286][ T0] Speculative Store Bypass: Vulnerable
[ 15.796286][ T0] SRBDS: Unknown: Dependent on hypervisor status
[ 15.796286][ T0] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
[ 15.796286][ T0] debug: unmapping init [mem 0xffffffff9b74b000-0xffffffff9b77bfff]
[ 15.796286][ T1] smpboot: CPU0: Intel Core Processor (Broadwell, no TSX, IBRS) (family: 0x6, model: 0x3d, stepping: 0x2)
[ 15.799655][ T1] Performance Events: unsupported p6 CPU model 61 no PMU driver, software events only.
[ 15.802182][ T1] rcu: Hierarchical SRCU implementation.
[ 15.876196][ T1] NMI watchdog: Perf NMI watchdog permanently disabled
[ 15.878315][ T1] smp: Bringing up secondary CPUs ...
[ 15.897520][ T1] x86: Booting SMP configuration:
[ 15.899026][ T1] .... node #0, CPUs: #1
[ 1.535871][ T0] kvm-clock: cpu 1, msr 21ce00041, secondary cpu clock
[ 1.535871][ T0] smpboot: CPU 1 Converting physical 0 to logical die 1
[ 15.905109][ T15] KVM setup async PF for cpu 1
[ 15.908286][ T15] kvm-stealtime: cpu 1, msr 1cc63b0c0
[ 15.928423][ T1] #2
[ 1.535871][ T0] kvm-clock: cpu 2, msr 21ce00081, secondary cpu clock
[ 1.535871][ T0] smpboot: CPU 2 Converting physical 0 to logical die 2
[ 15.936716][ T21] KVM setup async PF for cpu 2
[ 15.938180][ T21] kvm-stealtime: cpu 2, msr 1cca3b0c0
[ 15.952963][ T1] #3
[ 1.535871][ T0] kvm-clock: cpu 3, msr 21ce000c1, secondary cpu clock
[ 1.535871][ T0] smpboot: CPU 3 Converting physical 0 to logical die 3
[ 15.960558][ T27] KVM setup async PF for cpu 3
[ 15.961763][ T27] kvm-stealtime: cpu 3, msr 1cce3b0c0
[ 15.963192][ T1] smp: Brought up 1 node, 4 CPUs
[ 15.964309][ T1] smpboot: Max logical packages: 4
[ 15.965746][ T1] ----------------
[ 15.966883][ T1] | NMI testsuite:
[ 15.967972][ T1] --------------------
[ 15.968321][ T1] remote IPI: ok |
[ 15.969500][ T1] local IPI: ok |
[ 15.970522][ T1] --------------------
[ 15.971474][ T1] Good, all 2 testcases passed! |
[ 15.972307][ T1] ---------------------------------
[ 15.973507][ T1] smpboot: Total of 4 processors activated (15936.00 BogoMIPS)
[ 15.980988][ T33] workqueue: round-robin CPU selection forced, expect performance impact
[ 18.054270][ T33] node 0 deferred pages initialised in 2072ms
[ 18.056368][ T33] pgdatinit0 (33) used greatest stack depth: 30000 bytes left
[ 18.061495][ T1] devtmpfs: initialized
[ 18.070532][ T1] x86/mm: Memory block size: 128MB
[ 18.304897][ T1] DMA-API: preallocated 65536 debug entries
[ 18.306057][ T1] DMA-API: debugging enabled by kernel config
[ 18.307219][ T1] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[ 18.308331][ T1] futex hash table entries: 1024 (order: 5, 131072 bytes, linear)
[ 18.310491][ T1] Running postponed tracer tests:
[ 18.312283][ T1] Testing tracer function: PASSED
[ 25.148306][ T1] Testing dynamic ftrace: PASSED
[ 26.008480][ T1] Testing dynamic ftrace ops #1:
[ 28.613753][ T1] (1 0 1 0 0)
[ 28.615386][ T1] (1 1 2 0 0)
[ 33.621979][ T1] (2 1 3 0 19494796)
[ 33.624320][ T1] (2 2 4 0 19495693) PASSED
[ 37.183332][ T1] Testing dynamic ftrace ops #2:
[ 42.625114][ T1] (1 0 1 19230610 0)
[ 42.625114][ T1] (1 1 2 19231395 0)
[ 63.876286][ C0] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:1]
[ 63.876420][ C0] Modules linked in:
[ 63.876420][ C0] irq event stamp: 9790254
[ 63.876420][ C0] hardirqs last enabled at (9790253): [] __text_poke+0x753/0x8b0
[ 63.876420][ C0] hardirqs last disabled at (9790254): [] idtentry_enter_cond_rcu+0x36/0x60
[ 63.876420][ C0] softirqs last enabled at (9789976): [] __do_softirq+0x384/0x3cd
[ 63.876420][ C0] softirqs last disabled at (9789969): [] asm_call_on_stack+0x12/0x20
[ 63.876420][ C0] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.8.3 #1
[ 63.876420][ C0] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
[ 63.876420][ C0] RIP: 0010:__text_poke+0x772/0x8b0
[ 63.876420][ C0] Code: 94 c6 4a 00 eb 29 e8 2d be 4a 00 48 c7 c7 c8 91 cf 97 e8 01 c8 7a 00 48 83 3d 01 88 51 11 00 0f 84 40 01 00 00 48 89 df 57 9d <0f> 1f 44 00 00 65 48 8b 04 25 28 00 00 00 48 3b 45 d0 0f 85 ee 00
[ 63.876420][ C0] RSP: 0000:ffffc9000006fa78 EFLAGS: 00000286
[ 63.876420][ C0] RAX: ffffffff97cf91cf RBX: 0000000000000286 RCX: ffffffff867e09bf
[ 63.876420][ C0] RDX: 0000000000000000 RSI: dffffc0000000000 RDI: 0000000000000286
[ 63.876420][ C0] RBP: ffffc9000006faf8 R08: dffffc0000000000 R09: fffffbfff37d9587
[ 63.876420][ C0] R10: fffffbfff37d9587 R11: 0000000000000000 R12: ffff888106cd9000
[ 63.876420][ C0] R13: 00002d96296c5000 R14: ffff888106ce9000 R15: ffffffff9144e7f1
[ 63.876420][ C0] FS: 0000000000000000(0000) GS:ffff8881cc000000(0000) knlGS:0000000000000000
[ 63.876420][ C0] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 63.876420][ C0] CR2: 0000000000000000 CR3: 0000000214ea4001 CR4: 00000000003606f0
[ 63.876420][ C0] Call Trace:
[ 63.876420][ C0] ? snd_hdsp_midi_input_close+0x1/0x130
[ 63.876420][ C0] ? snd_hdsp_midi_input_close+0x1/0x130
[ 63.876420][ C0] text_poke_bp_batch+0x15e/0x310
[ 63.876420][ C0] ? snd_hdspm_release+0xb0/0xb0
[ 63.876420][ C0] text_poke_queue+0x8e/0xf0
[ 63.876420][ C0] ftrace_replace_code+0x19f/0x250
[ 63.876420][ C0] ftrace_modify_all_code+0x148/0x4b0
[ 63.876420][ C0] ftrace_run_update_code+0x46/0x190
[ 63.876420][ C0] ftrace_startup+0x2cf/0x360
[ 63.876420][ C0] register_ftrace_function+0xd2/0xf0
[ 63.876420][ C0] trace_selftest_ops+0x737/0x9d0
[ 63.876420][ C0] trace_selftest_startup_function+0x4f1/0xbc6
[ 63.876420][ C0] run_tracer_selftest+0x3a7/0x4c0
[ 63.876420][ C0] init_trace_selftests+0x102/0x34b
[ 63.876420][ C0] ? kernel_init+0x16/0x2b0
[ 63.876420][ C0] ? latency_fsnotify_init+0x83/0x83
[ 63.876420][ C0] do_one_initcall+0x79/0x1f0
[ 63.876420][ C0] do_initcall_level+0xca/0xf5
[ 63.876420][ C0] do_initcalls+0x63/0xa8
[ 63.876420][ C0] kernel_init_freeable+0x248/0x2fc
[ 63.876420][ C0] ? rest_init+0x2a0/0x2a0
[ 63.876420][ C0] kernel_init+0x16/0x2b0
[ 63.876420][ C0] ? rest_init+0x2a0/0x2a0
[ 63.876420][ C0] ret_from_fork+0x22/0x30
[ 63.876420][ C0] Kernel panic - not syncing: softlockup: hung tasks
[ 63.876420][ C0] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G L 5.8.3 #1
[ 63.876420][ C0] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
[ 63.876420][ C0] Call Trace:
[ 63.876420][ C0]
[ 63.876420][ C0] dump_stack+0x184/0x27e
[ 63.876420][ C0] panic+0x232/0x62e
[ 63.876420][ C0] ? watchdog_timer_fn+0x371/0x3a0
[ 63.876420][ C0] ? panic+0x5/0x62e
[ 63.876420][ C0] watchdog_timer_fn+0x39f/0x3a0
[ 63.876420][ C0] ? proc_watchdog_cpumask+0xc0/0xc0
[ 63.876420][ C0] __hrtimer_run_queues+0x2db/0x4f0
[ 63.876420][ C0] hrtimer_run_queues+0x2d7/0x330
[ 63.876420][ C0] update_process_times+0x3d/0x140
[ 63.876420][ C0] tick_periodic+0xff/0x110
[ 63.876420][ C0] tick_handle_periodic+0x37/0x120
[ 63.876420][ C0] __sysvec_apic_timer_interrupt+0x84/0x150
[ 63.876420][ C0] asm_call_on_stack+0x12/0x20
[ 63.876420][ C0]
[ 63.876420][ C0] sysvec_apic_timer_interrupt+0x86/0xe0
[ 63.876420][ C0] asm_sysvec_apic_timer_interrupt+0x12/0x20
[ 63.876420][ C0] RIP: 0010:__text_poke+0x772/0x8b0
[ 63.876420][ C0] Code: 94 c6 4a 00 eb 29 e8 2d be 4a 00 48 c7 c7 c8 91 cf 97 e8 01 c8 7a 00 48 83 3d 01 88 51 11 00 0f 84 40 01 00 00 48 89 df 57 9d <0f> 1f 44 00 00 65 48 8b 04 25 28 00 00 00 48 3b 45 d0 0f 85 ee 00
[ 63.876420][ C0] RSP: 0000:ffffc9000006fa78 EFLAGS: 00000286
[ 63.876420][ C0] RAX: ffffffff97cf91cf RBX: 0000000000000286 RCX: ffffffff867e09bf
[ 63.876420][ C0] RDX: 0000000000000000 RSI: dffffc0000000000 RDI: 0000000000000286
[ 63.876420][ C0] RBP: ffffc9000006faf8 R08: dffffc0000000000 R09: fffffbfff37d9587
[ 63.876420][ C0] R10: fffffbfff37d9587 R11: 0000000000000000 R12: ffff888106cd9000
[ 63.876420][ C0] R13: 00002d96296c5000 R14: ffff888106ce9000 R15: ffffffff9144e7f1
[ 63.876420][ C0] ? snd_hdsp_midi_input_close+0x1/0x130
[ 63.876420][ C0] ? __text_poke+0x75f/0x8b0
[ 63.876420][ C0] ? snd_hdsp_midi_input_close+0x1/0x130
[ 63.876420][ C0] ? snd_hdsp_midi_input_close+0x1/0x130
[ 63.876420][ C0] text_poke_bp_batch+0x15e/0x310
[ 63.876420][ C0] ? snd_hdspm_release+0xb0/0xb0
[ 63.876420][ C0] text_poke_queue+0x8e/0xf0
[ 63.876420][ C0] ftrace_replace_code+0x19f/0x250
[ 63.876420][ C0] ftrace_modify_all_code+0x148/0x4b0
[ 63.876420][ C0] ftrace_run_update_code+0x46/0x190
[ 63.876420][ C0] ftrace_startup+0x2cf/0x360
[ 63.876420][ C0] register_ftrace_function+0xd2/0xf0
[ 63.876420][ C0] trace_selftest_ops+0x737/0x9d0
[ 63.876420][ C0] trace_selftest_startup_function+0x4f1/0xbc6
[ 63.876420][ C0] run_tracer_selftest+0x3a7/0x4c0
[ 63.876420][ C0] init_trace_selftests+0x102/0x34b
[ 63.876420][ C0] ? kernel_init+0x16/0x2b0
[ 63.876420][ C0] ? latency_fsnotify_init+0x83/0x83
[ 63.876420][ C0] do_one_initcall+0x79/0x1f0
[ 63.876420][ C0] do_initcall_level+0xca/0xf5
[ 63.876420][ C0] do_initcalls+0x63/0xa8
[ 63.876420][ C0] kernel_init_freeable+0x248/0x2fc
[ 63.876420][ C0] ? rest_init+0x2a0/0x2a0
[ 63.876420][ C0] kernel_init+0x16/0x2b0
[ 63.876420][ C0] ? rest_init+0x2a0/0x2a0
[ 63.876420][ C0] ret_from_fork+0x22/0x30
[ 63.876420][ C0]
[ 63.876420][ C0] =============================
[ 63.876420][ C0] [ BUG: Invalid wait context ]
[ 63.876420][ C0] 5.8.3 #1 Not tainted
[ 63.876420][ C0] -----------------------------
[ 63.876420][ C0] swapper/0/1 is trying to lock:
[ 63.876420][ C0] ffffffff9f7e0638 (&port->lock){....}-{3:3}, at: serial8250_console_write+0xfb/0x910
[ 63.876420][ C0] other info that might help us debug this:
[ 63.876420][ C0] context-{2:2}
[ 63.876420][ C0] 5 locks held by swapper/0/1:
[ 63.876420][ C0] #0: ffffffff983db380 (trace_types_lock){+.+.}-{4:4}, at: init_trace_selftests+0x2b/0x34b
[ 63.876420][ C0] #1: ffffffff983da220 (ftrace_lock){+.+.}-{4:4}, at: register_ftrace_function+0xc8/0xf0
[ 63.876420][ C0] #2: ffffffff97d2b000 (text_mutex){+.+.}-{4:4}, at: ftrace_arch_code_modify_prepare+0xe/0x20
[ 63.876420][ C0] #3: ffffffff97d34f00 (console_lock){+.+.}-{0:0}, at: vprintk_emit+0x304/0x400
[ 63.876420][ C0] #4: ffffffff97d35040 (console_owner){-...}-{0:0}, at: console_lock_spinning_enable+0x36/0x70
[ 63.876420][ C0] stack backtrace:
[ 63.876420][ C0] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.8.3 #1
[ 63.876420][ C0] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
[ 63.876420][ C0] Call Trace:
[ 63.876420][ C0]
[ 63.876420][ C0] dump_stack+0x184/0x27e
[ 63.876420][ C0] __lock_acquire+0x145d/0x1610
[ 63.876420][ C0] lock_acquire+0x133/0x480
[ 63.876420][ C0] ? serial8250_console_write+0xfb/0x910
[ 63.876420][ C0] ? serial8250_console_write+0xfb/0x910
[ 63.876420][ C0] _raw_spin_lock_irqsave+0x82/0xd0
[ 63.876420][ C0] ? serial8250_console_write+0xfb/0x910
[ 63.876420][ C0] serial8250_console_write+0xfb/0x910
[ 63.876420][ C0] ? do_raw_spin_unlock+0xbf/0x480
[ 63.876420][ C0] ? console_unlock+0x710/0xb20
[ 63.876420][ C0] ? univ8250_console_write+0x25/0x50
[ 63.876420][ C0] ? s8250_options+0x10/0x10
[ 63.876420][ C0] console_unlock+0x752/0xb20
[ 63.876420][ C0] ? vprintk_emit+0x318/0x400
[ 63.876420][ C0] vprintk_emit+0x31d/0x400
[ 63.876420][ C0] printk+0x7e/0xa9
[ 63.876420][ C0] ? printk+0x5/0xa9
[ 63.876420][ C0] watchdog_timer_fn+0x2eb/0x3a0
[ 63.876420][ C0] ? proc_watchdog_cpumask+0xc0/0xc0
[ 63.876420][ C0] __hrtimer_run_queues+0x2db/0x4f0
[ 63.876420][ C0] hrtimer_run_queues+0x2d7/0x330
[ 63.876420][ C0] update_process_times+0x3d/0x140
[ 63.876420][ C0] tick_periodic+0xff/0x110
[ 63.876420][ C0] tick_handle_periodic+0x37/0x120
[ 63.876420][ C0] __sysvec_apic_timer_interrupt+0x84/0x150
[ 63.876420][ C0] asm_call_on_stack+0x12/0x20
[ 63.876420][ C0]
[ 63.876420][ C0] sysvec_apic_timer_interrupt+0x86/0xe0
[ 63.876420][ C0] asm_sysvec_apic_timer_interrupt+0x12/0x20
[ 63.876420][ C0] RIP: 0010:__text_poke+0x772/0x8b0
[ 63.876420][ C0] Code: 94 c6 4a 00 eb 29 e8 2d be 4a 00 48 c7 c7 c8 91 cf 97 e8 01 c8 7a 00 48 83 3d 01 88 51 11 00 0f 84 40 01 00 00 48 89 df 57 9d <0f> 1f 44 00 00 65 48 8b 04 25 28 00 00 00 48 3b 45 d0 0f 85 ee 00
[ 63.876420][ C0] RSP: 0000:ffffc9000006fa78 EFLAGS: 00000286
[ 63.876420][ C0] RAX: ffffffff97cf91cf RBX: 0000000000000286 RCX: ffffffff867e09bf
[ 63.876420][ C0] RDX: 0000000000000000 RSI: dffffc0000000000 RDI: 0000000000000286
[ 63.876420][ C0] RBP: ffffc9000006faf8 R08: dffffc0000000000 R09: fffffbfff37d9587
[ 63.876420][ C0] R10: fffffbfff37d9587 R11: 0000000000000000 R12: ffff888106cd9000
[ 63.876420][ C0] R13: 00002d96296c5000 R14: ffff888106ce9000 R15: ffffffff9144e7f1
[ 63.876420][ C0] ? snd_hdsp_midi_input_close+0x1/0x130
[ 63.876420][ C0] ? __text_poke+0x75f/0x8b0
[ 63.876420][ C0] ? snd_hdsp_midi_input_close+0x1/0x130
[ 63.876420][ C0] ? snd_hdsp_midi_input_close+0x1/0x130
[ 63.876420][ C0] text_poke_bp_batch+0x15e/0x310
[ 63.876420][ C0] ? snd_hdspm_release+0xb0/0xb0
[ 63.876420][ C0] text_poke_queue+0x8e/0xf0
[ 63.876420][ C0] ftrace_replace_code+0x19f/0x250
[ 63.876420][ C0] ftrace_modify_all_code+0x148/0x4b0
[ 63.876420][ C0] ftrace_run_update_code+0x46/0x190
[ 63.876420][ C0] ftrace_startup+0x2cf/0x360
[ 63.876420][ C0] register_ftrace_function+0xd2/0xf0
[ 63.876420][ C0] trace_selftest_ops+0x737/0x9d0
[ 63.876420][ C0] trace_selftest_startup_function+0x4f1/0xbc6
[ 63.876420][ C0] run_tracer_selftest+0x3a7/0x4c0
[ 63.876420][ C0] init_trace_selftests+0x102/0x34b
[ 63.876420][ C0] ? kernel_init+0x16/0x2b0
[ 63.876420][ C0] ? latency_fsnotify_init+0x83/0x83
[ 63.876420][ C0] do_one_initcall+0x79/0x1f0
[ 63.876420][ C0] do_initcall_level+0xca/0xf5
[ 63.876420][ C0] do_initcalls+0x63/0xa8
[ 63.876420][ C0] kernel_init_freeable+0x248/0x2fc
[ 63.876420][ C0] ? rest_init+0x2a0/0x2a0
[ 63.876420][ C0] kernel_init+0x16/0x2b0
[ 63.876420][ C0] ? rest_init+0x2a0/0x2a0
[ 63.876420][ C0] ret_from_fork+0x22/0x30
[ 63.876420][ C0] ---[ end Kernel panic - not syncing: softlockup: hung tasks ]---

investigate --symbol-ordering-file

https://reviews.llvm.org/D26130

I'm not sure how @GeorgiiR was creating these input files; there's a comment on an earlier version of the patch about just using readelf -ws to get a randomized order, but I recall CrOS folks discussing this at one point. I wonder if they already have a file like this for clang?
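
For reference, and not taken from the linked review, a rough sketch of producing a file in the format lld's --symbol-ordering-file expects (one symbol name per line); the binary path is a placeholder, and the actual ordering strategy (profile-based, random, etc.) is the open question here:

    # Dump defined function symbols from an existing clang binary (path is a placeholder).
    llvm-readelf -sW /path/to/clang \
        | awk '$4 == "FUNC" && $7 != "UND" { print $8 }' \
        | sort -u > symbol-order.txt
    # Then feed the file to the link, e.g. via:
    #   -DCMAKE_EXE_LINKER_FLAGS="-Wl,--symbol-ordering-file=/path/to/symbol-order.txt"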

Compile fails on Manjaro

Describe the bug
Stage 2 exits with an error

To Reproduce
Clone tc-build and run build-llvm.py.

Expected behavior
It should compile cleanly.

Screenshots
[26/4067] Building CXX object lib/Support/CMakeFiles/LLVMSupport.dir/CodeGenCoverage.cpp.o
FAILED: lib/Support/CMakeFiles/LLVMSupport.dir/CodeGenCoverage.cpp.o
/home/kristof/android/tc-build/build/llvm/stage1/bin/clang++ -DGTEST_HAS_RTTI=0 -D_GNU_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -Ilib/Support -I/home/kristof/android/tc-build/llvm-project/llvm/lib/Support -I/usr/include/libxml2 -Iinclude -I/home/kristof/android/tc-build/llvm-project/llvm/include -fPIC -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -w -fdiagnostics-color -ffunction-sections -fdata-sections -O3 -DNDEBUG -std=c++14 -fno-exceptions -fno-rtti -MD -MT lib/Support/CMakeFiles/LLVMSupport.dir/CodeGenCoverage.cpp.o -MF lib/Support/CMakeFiles/LLVMSupport.dir/CodeGenCoverage.cpp.o.d -o lib/Support/CMakeFiles/LLVMSupport.dir/CodeGenCoverage.cpp.o -c /home/kristof/android/tc-build/llvm-project/llvm/lib/Support/CodeGenCoverage.cpp
fatal error: error in backend: Cannot emit physreg copy instruction
[31/4067] Building CXX object lib/Support/CMakeFiles/LLVMSupport.dir/CommandLine.cpp.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "./build-llvm.py", line 890, in <module>
    main()
  File "./build-llvm.py", line 886, in main
    do_multistage_build(args, dirs, env_vars)
  File "./build-llvm.py", line 858, in do_multistage_build
    invoke_ninja(args, dirs, stage)
  File "./build-llvm.py", line 793, in invoke_ninja
    subprocess.run('ninja', check=True, cwd=build_folder)
  File "/usr/lib/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'ninja' returned non-zero exit status 1.

Environment (please complete the following information):

  • Command you ran: ./build-llvm.py
  • Distribution: Manjaro Linux latest
  • Python version: 3.8.1

[BOLT] Minimum hardware requirements?

Hi,

Out of curiosity I gave BOLT (without PGO) a try, on a Zen2 platform with 32 GB of RAM, building from the 14.x branch.
Everything went well until the last step, when at some point the build process was SIGKILLed.
I was away from the keyboard and didn't save the build log, but I guess it was simply an OOM situation - the fdata profile was a whopping 128 GB after all. The profile merge log didn't show any errors.

So it seems that 32 GB is not enough, at least on Zen with llvm-bolt. The script is already well documented, but maybe a line or two about RAM requirements could be added.

On a side note, when using BOLT, does it matter whether the first stage is built with GCC or Clang, and does using ccache have an influence on the rest of the build process?

(EDIT: This computer only has swap via zram, no disk-based swap file or partition. Maybe adding more swap could help?)
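
Regarding the swap question, a minimal sketch of adding a temporary disk-backed swap file to ride out the memory peak (the 64G size is only an example, not a verified requirement):

    sudo fallocate -l 64G /swapfile   # or: sudo dd if=/dev/zero of=/swapfile bs=1G count=64
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # ... run the BOLT build ...
    sudo swapoff /swapfile && sudo rm /swapfile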

Missing output of "Version information:"

When building a new llvm-toolchain from release/11.x Git branch this way:

python3 ./build-llvm.py --shallow-clone --no-update -b release/11.x --build-type Release -p "clang;lld" -t X86 --clang-vendor '' --check-targets clang lld --build-stage1-only -B /home/dileks/src/llvm-toolchain/build --install-stage1-only -I /home/dileks/src/llvm-toolchain/install

The "Version information:" section displays no output:

===========================
== Building LLVM stage 1 ==
===========================


LLVM build duration: 0:18:26

LLVM toolchain installed to: /home/dileks/src/llvm-toolchain/install

To use, either run:

    $ export PATH=/home/dileks/src/llvm-toolchain/install/bin:${PATH}

or add:

    PATH=/home/dileks/src/llvm-toolchain/install/bin:${PATH}

to the command you want to use this toolchain.

Version information:

I'm unsure what information should be displayed here and whether --clang-vendor '' is the cause of this.
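
Whatever the script is supposed to print there, the installed tools' version strings can at least be checked manually against the install prefix used above; a quick sketch:

    /home/dileks/src/llvm-toolchain/install/bin/clang --version
    /home/dileks/src/llvm-toolchain/install/bin/ld.lld --version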

Allow shallow-clone to a specific branch or tag

I did a shallow clone, but this clones Git master.
Switching to the remote release/10.x branch is not possible from within such a clone (or I did not find an appropriate way to do it).

Instructions:

cd tc-build

TC_OPTS=" --build-type Release -p clang;lld -t X86 -b release/10.x --shallow-clone --no-update --check-targets clang lld --build-stage1-only -B /home/dileks/src/llvm-toolchain/build --install-stage1-only -I /home/dileks/src/llvm-toolchain/install"

./build-llvm.py ${TC_OPTS}

A manual shallow clone of a specific branch was successful here (see [1]):

$ git clone --branch release/10.x --depth 1 git://github.com/llvm/llvm-project
Cloning into 'llvm-project'...
remote: Enumerating objects: 94802, done.
remote: Counting objects: 100% (94802/94802), done.
remote: Compressing objects: 100% (83370/83370), done.
remote: Total 94802 (delta 15993), reused 53423 (delta 8081), pack-reused 0
Receiving objects: 100% (94802/94802), 127.46 MiB | 2.46 MiB/s, done.
Resolving deltas: 100% (15993/15993), done.
Updating files: 100% (90213/90213), done.

$ cd llvm-project/

$ git branch 
* release/10.x

$ git log --oneline -1
edbe962459da (grafted, HEAD -> release/10.x, origin/release/10.x) [COFF] Don't treat DWARF sections as GC roots

What is desired is for --branch BRANCH to work together with the --shallow-clone option.

[1] https://stackoverflow.com/questions/8932389/git-shallow-clone-to-specific-tag
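
As a possible stop-gap (my own sketch, not something build-llvm.py does), an existing shallow clone of master can usually be pointed at the release branch without re-cloning:

    cd llvm-project
    git remote set-branches origin release/10.x   # restrict the fetch refspec to the branch
    git fetch --depth 1 origin release/10.x
    git checkout -b release/10.x FETCH_HEAD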

utils.py: Bump binutils to version 2.35.1 and use sha512sum

"--use-good-revision" don't work now

Traceback (most recent call last):
  File "./build-llvm.py", line 946, in <module>
    main()
  File "./build-llvm.py", line 939, in main
    fetch_llvm_binutils(root_folder, not args.no_update, ref)
  File "./build-llvm.py", line 442, in fetch_llvm_binutils
    check=True)
  File "/usr/lib/python3.7/subprocess.py", line 487, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'clone', '-b', '957b9cdd2692178b9635cbbbcb94e78a5bc24473', 'git://github.com/llvm/llvm-project', '/llvm-project']' returned non-zero exit status 128.

I also ran the command locally and it fails as well:
git clone -b 957b9cdd2692178b9635cbbbcb94e78a5bc24473 git://github.com/llvm/llvm-project
Cloning into 'llvm-project'...
fatal: Remote branch 957b9cdd2692178b9635cbbbcb94e78a5bc24473 not found in upstream origin

I speculate that GitHub has imposed restrictions on this.

More info:
https://stackoverflow.com/questions/3489173/how-to-clone-git-repository-with-specific-revision-changeset
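
For what it's worth, git clone -b only accepts branch or tag names, never commit hashes, so the failure is expected regardless of GitHub. The Stack Overflow link above describes fetching a single revision instead; a minimal sketch (GitHub generally allows fetching reachable commits by hash):

    git init llvm-project && cd llvm-project
    git remote add origin https://github.com/llvm/llvm-project
    git fetch --depth 1 origin 957b9cdd2692178b9635cbbbcb94e78a5bc24473
    git checkout FETCH_HEAD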

INSTALL_FOLDER not recognized when building with --stage1-only?

My build_tc-build.sh looks like this:

#!/bin/sh

export LANG=C
export LC_ALL=C

PROJECTS="clang;lld"
TARGETS="X86"

BRANCH="release/9.x"

WORKING_DIR=$(pwd)
BUILD_DIR="${WORKING_DIR}/build"
INSTALL_DIR="${WORKING_DIR}/install"

cd tc-build

./build-llvm.py -p "${PROJECTS}" -t "${TARGETS}" -b "${BRANCH}" -B "${BUILD_DIR}" -I "${INSTALL_DIR}" --stage1-only

The build log says:

========================
== Checking CC and LD ==
========================

CC: /usr/lib/llvm-9/bin/clang
CXX: /usr/lib/llvm-9/bin/clang++
LD: /usr/bin/ld.lld-9

===========================
== Checking dependencies ==
===========================

/usr/bin/cmake
/usr/bin/curl
/usr/bin/git
/usr/bin/ninja

======================
== Downloading LLVM ==
======================

Cloning into '/home/sdi/src/llvm-toolchain/tc-build/llvm-project'...
remote: Enumerating objects: 51, done.
remote: Counting objects: 100% (51/51), done.
remote: Compressing objects: 100% (45/45), done.
remote: Total 3474257 (delta 15), reused 14 (delta 6), pack-reused 3474206
Receiving objects: 100% (3474257/3474257), 691.87 MiB | 21.60 MiB/s, done.
Resolving deltas: 100% (2850165/2850165), done.
Checking out files: 100% (84562/84562), done.

==============================
== Configuring LLVM stage 1 ==
==============================

-- The C compiler identification is Clang 9.0.0
-- The CXX compiler identification is Clang 9.0.0
-- The ASM compiler identification is Clang
-- Found assembler: /usr/lib/llvm-9/bin/clang
-- Check for working C compiler: /usr/lib/llvm-9/bin/clang
-- Check for working C compiler: /usr/lib/llvm-9/bin/clang -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/lib/llvm-9/bin/clang++
-- Check for working CXX compiler: /usr/lib/llvm-9/bin/clang++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- clang project is enabled
-- clang-tools-extra project is disabled
-- compiler-rt project is disabled
-- debuginfo-tests project is disabled
-- libclc project is disabled
-- libcxx project is disabled
-- libcxxabi project is disabled
-- libunwind project is disabled
-- lld project is enabled
-- lldb project is disabled
-- llgo project is disabled
-- openmp project is disabled
-- parallel-libs project is disabled
-- polly project is disabled
-- pstl project is disabled
-- Could NOT find Z3: Found unsuitable version "0.0.0", but required is at least "4.7.1" (found Z3_LIBRARIES-NOTFOUND)
-- Performing Test LLVM_LIBSTDCXX_MIN
-- Performing Test LLVM_LIBSTDCXX_MIN - Success
-- Performing Test LLVM_LIBSTDCXX_SOFT_ERROR
-- Performing Test LLVM_LIBSTDCXX_SOFT_ERROR - Success
-- Looking for dlfcn.h
-- Looking for dlfcn.h - found
-- Looking for errno.h
-- Looking for errno.h - found
-- Looking for fcntl.h
-- Looking for fcntl.h - found
-- Looking for link.h
-- Looking for link.h - found
-- Looking for malloc/malloc.h
-- Looking for malloc/malloc.h - not found
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for signal.h
-- Looking for signal.h - found
-- Looking for sys/ioctl.h
-- Looking for sys/ioctl.h - found
-- Looking for sys/mman.h
-- Looking for sys/mman.h - found
-- Looking for sys/param.h
-- Looking for sys/param.h - found
-- Looking for sys/resource.h
-- Looking for sys/resource.h - found
-- Looking for sys/stat.h
-- Looking for sys/stat.h - found
-- Looking for sys/time.h
-- Looking for sys/time.h - found
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for termios.h
-- Looking for termios.h - found
-- Looking for unistd.h
-- Looking for unistd.h - found
-- Looking for valgrind/valgrind.h
-- Looking for valgrind/valgrind.h - not found
-- Looking for zlib.h
-- Looking for zlib.h - found
-- Looking for fenv.h
-- Looking for fenv.h - found
-- Looking for FE_ALL_EXCEPT
-- Looking for FE_ALL_EXCEPT - found
-- Looking for FE_INEXACT
-- Looking for FE_INEXACT - found
-- Looking for mach/mach.h
-- Looking for mach/mach.h - not found
-- Looking for histedit.h
-- Looking for histedit.h - found
-- Looking for CrashReporterClient.h
-- Looking for CrashReporterClient.h - not found
-- Looking for linux/magic.h
-- Looking for linux/magic.h - found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Looking for pthread_getspecific in pthread
-- Looking for pthread_getspecific in pthread - found
-- Looking for pthread_rwlock_init in pthread
-- Looking for pthread_rwlock_init in pthread - found
-- Looking for pthread_mutex_lock in pthread
-- Looking for pthread_mutex_lock in pthread - found
-- Looking for dlopen in dl
-- Looking for dlopen in dl - found
-- Looking for clock_gettime in rt
-- Looking for clock_gettime in rt - found
-- Looking for pfm_initialize in pfm
-- Looking for pfm_initialize in pfm - not found
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Looking for compress2 in z
-- Looking for compress2 in z - found
-- Looking for el_init in edit
-- Looking for el_init in edit - found
-- Could NOT find LibXml2 (missing: LIBXML2_LIBRARY LIBXML2_INCLUDE_DIR) 
-- Looking for xar_open in xar
-- Looking for xar_open in xar - not found
-- Looking for arc4random
-- Looking for arc4random - not found
-- Looking for backtrace
-- Looking for backtrace - found
-- backtrace facility detected in default set of libraries
-- Found Backtrace: /usr/include  
-- Performing Test C_SUPPORTS_WERROR_UNGUARDED_AVAILABILITY_NEW
-- Performing Test C_SUPPORTS_WERROR_UNGUARDED_AVAILABILITY_NEW - Success
-- Looking for _Unwind_Backtrace
-- Looking for _Unwind_Backtrace - found
-- Looking for getpagesize
-- Looking for getpagesize - found
-- Looking for sysconf
-- Looking for sysconf - found
-- Looking for getrusage
-- Looking for getrusage - found
-- Looking for setrlimit
-- Looking for setrlimit - found
-- Looking for isatty
-- Looking for isatty - found
-- Looking for futimens
-- Looking for futimens - found
-- Looking for futimes
-- Looking for futimes - found
-- Looking for posix_fallocate
-- Looking for posix_fallocate - found
-- Looking for sigaltstack
-- Looking for sigaltstack - found
-- Looking for lseek64
-- Looking for lseek64 - found
-- Looking for mallctl
-- Looking for mallctl - not found
-- Looking for mallinfo
-- Looking for mallinfo - found
-- Looking for malloc_zone_statistics
-- Looking for malloc_zone_statistics - not found
-- Looking for getrlimit
-- Looking for getrlimit - found
-- Looking for posix_spawn
-- Looking for posix_spawn - found
-- Looking for pread
-- Looking for pread - found
-- Looking for sbrk
-- Looking for sbrk - found
-- Looking for strerror
-- Looking for strerror - found
-- Looking for strerror_r
-- Looking for strerror_r - found
-- Looking for strerror_s
-- Looking for strerror_s - not found
-- Looking for setenv
-- Looking for setenv - found
-- Looking for dlopen
-- Looking for dlopen - found
-- Looking for dladdr
-- Looking for dladdr - not found
-- Performing Test HAVE_STRUCT_STAT_ST_MTIMESPEC_TV_NSEC
-- Performing Test HAVE_STRUCT_STAT_ST_MTIMESPEC_TV_NSEC - Failed
-- Performing Test HAVE_STRUCT_STAT_ST_MTIM_TV_NSEC
-- Performing Test HAVE_STRUCT_STAT_ST_MTIM_TV_NSEC - Success
-- Looking for __GLIBC__
-- Looking for __GLIBC__ - found
-- Looking for sched_getaffinity
-- Looking for sched_getaffinity - found
-- Looking for CPU_COUNT
-- Looking for CPU_COUNT - found
-- Looking for pthread_getname_np
-- Looking for pthread_getname_np - found
-- Looking for pthread_setname_np
-- Looking for pthread_setname_np - found
-- Performing Test HAVE_STD_IS_TRIVIALLY_COPYABLE
-- Performing Test HAVE_STD_IS_TRIVIALLY_COPYABLE - Success
-- Performing Test HAVE_CXX_ATOMICS_WITHOUT_LIB
-- Performing Test HAVE_CXX_ATOMICS_WITHOUT_LIB - Success
-- Performing Test HAVE_CXX_ATOMICS64_WITHOUT_LIB
-- Performing Test HAVE_CXX_ATOMICS64_WITHOUT_LIB - Success
-- Performing Test LLVM_HAS_ATOMICS
-- Performing Test LLVM_HAS_ATOMICS - Success
-- Performing Test SUPPORTS_VARIADIC_MACROS_FLAG
-- Performing Test SUPPORTS_VARIADIC_MACROS_FLAG - Success
-- Performing Test SUPPORTS_GNU_ZERO_VARIADIC_MACRO_ARGUMENTS_FLAG
-- Performing Test SUPPORTS_GNU_ZERO_VARIADIC_MACRO_ARGUMENTS_FLAG - Success
-- Native target architecture is X86
-- Threads enabled.
-- Doxygen disabled.
-- Go bindings disabled.
-- Ninja version: 1.8.2
-- Found OCaml: /usr/bin/ocamlfind  
-- OCaml bindings disabled.
-- Could NOT find Python module pygments
-- Could NOT find Python module pygments.lexers.c_cpp
-- Could NOT find Python module yaml
-- LLVM host triple: x86_64-unknown-linux-gnu
-- LLVM default target triple: x86_64-unknown-linux-gnu
-- Performing Test CXX_SUPPORTS_CUSTOM_LINKER
-- Performing Test CXX_SUPPORTS_CUSTOM_LINKER - Success
-- Performing Test C_SUPPORTS_FPIC
-- Performing Test C_SUPPORTS_FPIC - Success
-- Performing Test CXX_SUPPORTS_FPIC
-- Performing Test CXX_SUPPORTS_FPIC - Success
-- Building with -fPIC
-- Performing Test SUPPORTS_FVISIBILITY_INLINES_HIDDEN_FLAG
-- Performing Test SUPPORTS_FVISIBILITY_INLINES_HIDDEN_FLAG - Success
-- Performing Test C_SUPPORTS_WERROR_DATE_TIME
-- Performing Test C_SUPPORTS_WERROR_DATE_TIME - Success
-- Performing Test CXX_SUPPORTS_WERROR_DATE_TIME
-- Performing Test CXX_SUPPORTS_WERROR_DATE_TIME - Success
-- Performing Test CXX_SUPPORTS_WERROR_UNGUARDED_AVAILABILITY_NEW
-- Performing Test CXX_SUPPORTS_WERROR_UNGUARDED_AVAILABILITY_NEW - Success
-- Performing Test CXX_SUPPORTS_CXX_STD
-- Performing Test CXX_SUPPORTS_CXX_STD - Success
-- Performing Test LINKER_SUPPORTS_COLOR_DIAGNOSTICS
-- Performing Test LINKER_SUPPORTS_COLOR_DIAGNOSTICS - Success
-- Performing Test C_SUPPORTS_FNO_FUNCTION_SECTIONS
-- Performing Test C_SUPPORTS_FNO_FUNCTION_SECTIONS - Success
-- Performing Test C_SUPPORTS_FFUNCTION_SECTIONS
-- Performing Test C_SUPPORTS_FFUNCTION_SECTIONS - Success
-- Performing Test CXX_SUPPORTS_FFUNCTION_SECTIONS
-- Performing Test CXX_SUPPORTS_FFUNCTION_SECTIONS - Success
-- Performing Test C_SUPPORTS_FDATA_SECTIONS
-- Performing Test C_SUPPORTS_FDATA_SECTIONS - Success
-- Performing Test CXX_SUPPORTS_FDATA_SECTIONS
-- Performing Test CXX_SUPPORTS_FDATA_SECTIONS - Success
-- Looking for os_signpost_interval_begin
-- Looking for os_signpost_interval_begin - not found
-- Found PythonInterp: /usr/bin/python3.7 (found version "3.7.3") 
-- Constructing LLVMBuild project information
-- Found Git: /usr/bin/git (found version "2.20.1") 
-- Linker detection: LLD
-- Targeting X86
-- Looking for sys/resource.h
-- Looking for sys/resource.h - found
-- Clang version: 9.0.0
-- Performing Test CXX_SUPPORTS_NO_NESTED_ANON_TYPES_FLAG
-- Performing Test CXX_SUPPORTS_NO_NESTED_ANON_TYPES_FLAG - Success
-- Looking for include file sys/inotify.h
-- Looking for include file sys/inotify.h - found
-- LLD version: 9.0.0
-- Failed to find LLVM FileCheck
-- Version: 0.0.0
-- Performing Test HAVE_CXX_FLAG_STD_CXX11
-- Performing Test HAVE_CXX_FLAG_STD_CXX11 - Success
-- Performing Test HAVE_CXX_FLAG_WALL
-- Performing Test HAVE_CXX_FLAG_WALL - Success
-- Performing Test HAVE_CXX_FLAG_WEXTRA
-- Performing Test HAVE_CXX_FLAG_WEXTRA - Success
-- Performing Test HAVE_CXX_FLAG_WSHADOW
-- Performing Test HAVE_CXX_FLAG_WSHADOW - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC
-- Performing Test HAVE_CXX_FLAG_PEDANTIC - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS - Success
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 - Success
-- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL
-- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL - Success
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_FNO_EXCEPTIONS
-- Performing Test HAVE_CXX_FLAG_FNO_EXCEPTIONS - Success
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_WD654
-- Performing Test HAVE_CXX_FLAG_WD654 - Failed
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY - Success
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES -- failed to compile
-- Performing Test HAVE_CXX_FLAG_COVERAGE
-- Performing Test HAVE_CXX_FLAG_COVERAGE - Success
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK -- success
-- Configuring done
-- Generating done
-- Build files have been written to: /home/sdi/src/llvm-toolchain/build/stage1

===========================
== Building LLVM stage 1 ==
===========================

[185/2639] Generating VCSRevision.h
-- Found Git: /usr/bin/git (found version "2.20.1") 
[1427/2639] Generating VCSVersion.inc
-- Found Git: /usr/bin/git (found version "2.20.1") 
[1644/2639] Generating VCSVersion.inc
-- Found Git: /usr/bin/git (found version "2.20.1") 
[2639/2639] Linking CXX static library lib/libbenchmark_main.a

LLVM build duration: 1:09:07

LLVM toolchain installed to: /home/sdi/src/llvm-toolchain/build/stage1

To use, either run:

    $ export PATH=/home/sdi/src/llvm-toolchain/build/stage1/bin:${PATH}

or add:

    PATH=/home/sdi/src/llvm-toolchain/build/stage1/bin:${PATH}

to the command you want to use this toolchain.
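
Judging by the invocations quoted elsewhere in this document, the -I prefix only takes effect when an install step actually runs; a sketch of the adjusted call, with flag names taken from those other reports rather than verified against this particular tc-build revision:

    ./build-llvm.py -p "${PROJECTS}" -t "${TARGETS}" -b "${BRANCH}" \
        -B "${BUILD_DIR}" -I "${INSTALL_DIR}" \
        --build-stage1-only --install-stage1-only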

[Question] Does build-llvm.py need an update for clang-14?

I ran into some CMake warnings when building from the 'release/14.x' branch:

CMake Warning at /path/tc-build/llvm-project/compiler-rt/cmake/Modules/CompilerRTUtils.cmake:352 (message):
  llvm-config finding testingsupport failed with status 1
Call Stack (most recent call first):
  CMakeLists.txt:28 (load_llvm_config)
CMake Warning:
  Manually-specified variables were not used by the project:

    CMAKE_CXX_COMPILER
    HAVE_LLVM_LIT
    LLVM_DEFAULT_TARGET_TRIPLE
    LLVM_HAVE_LINK_VERSION_SCRIPT
    LLVM_SOURCE_PREFIX
    LLVM_USE_RELATIVE_PATHS_IN_DEBUG_INFO
    LLVM_USE_RELATIVE_PATHS_IN_FILES
CMake Warning:
  Manually-specified variables were not used by the project:

    HAVE_LLVM_LIT
    LLVM_BUILD_TOOLS

The toolchain was built with Arch's clang 13.0.1, using the following build parameters:

-p "clang;lld" \
--lto thin \
--no-ccache \
--check-targets clang \
-t "X86" \
-D \
    CMAKE_C_FLAGS="-march=skylake -O2 -pipe" \
    CMAKE_CXX_FLAGS="-march=skylake -O2 -pipe" \
    LLVM_ENABLE_RUNTIMES="compiler-rt" \
-b release/14.x

N.B.: I added compiler-rt to LLVM_ENABLE_RUNTIMES because of this commit.
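
For completeness, the parameter fragment above corresponds to an invocation along these lines (assembled here for readability, not copied from the report):

    ./build-llvm.py \
        -p "clang;lld" \
        --lto thin \
        --no-ccache \
        --check-targets clang \
        -t "X86" \
        -D CMAKE_C_FLAGS="-march=skylake -O2 -pipe" \
           CMAKE_CXX_FLAGS="-march=skylake -O2 -pipe" \
           LLVM_ENABLE_RUNTIMES="compiler-rt" \
        -b release/14.x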

Two stage builds and ThinLTO

I am currently implementing the two-stage build style that @stephenhines mentioned in #2, building a smaller clang + ld.lld and using those to build the full toolchain. I have added the ability to use ThinLTO in the second stage to provide a bit of a speedup.

However, the issue I am running into is that there are points in the build where GNU ld is used, and my system symlinks /usr/lib/LLVMgold.so into /usr/lib/bfd-plugins, which is then used instead of the LLVMgold.so that is built as part of stage 1, resulting in really ugly errors:

bfd plugin: LLVM gold plugin has failed to create LTO module: Unknown attribute kind (60) (Producer: 'LLVM9.0.0svn' Reader: 'LLVM 8.0.0')
bfd plugin: LLVM gold plugin has failed to create LTO module: Unknown attribute kind (60) (Producer: 'LLVM9.0.0svn' Reader: 'LLVM 8.0.0')
bfd plugin: LLVM gold plugin has failed to create LTO module: Unknown attribute kind (60) (Producer: 'LLVM9.0.0svn' Reader: 'LLVM 8.0.0')
bfd plugin: LLVM gold plugin has failed to create LTO module: Unknown attribute kind (60) (Producer: 'LLVM9.0.0svn' Reader: 'LLVM 8.0.0')
bfd plugin: LLVM gold plugin has failed to create LTO module: Unknown attribute kind (60) (Producer: 'LLVM9.0.0svn' Reader: 'LLVM 8.0.0')
bfd plugin: LLVM gold plugin has failed to create LTO module: Unknown attribute kind (60) (Producer: 'LLVM9.0.0svn' Reader: 'LLVM 8.0.0')

Has anyone ever run into this and overcome it? I can't find anything on overriding bfd-plugins. I've tried adding -L <stage1_lib> to the linker flags via CMake variables, but that doesn't help.

At this point, the only viable workaround I can think of is just building a copy of ld and ld.gold standalone to ensure that everything works properly regardless of host setup.
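
One hedged way to confirm that the stale system plugin is the culprit is to look at what the bfd-plugins directory actually loads and temporarily swap in the stage-1 plugin; the stage-1 path below is an assumption based on the build layout shown earlier in this document:

    ls -l /usr/lib/bfd-plugins/                      # see which LLVMgold.so GNU ld auto-loads
    sudo ln -sf /path/to/tc-build/build/llvm/stage1/lib/LLVMgold.so \
                /usr/lib/bfd-plugins/LLVMgold.so     # temporary: point it at the stage-1 build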
