
archriscv-packages's Introduction

Arch Linux RISC-V Patches

This repository contains patches on top of Arch Linux package sources.

The long term goal is to upstream the patches as much as possible, so that riscv64 (riscv64gc) could be added to Arch Linux itself as an alternative architecture.

Homepage, image downloads and porting progress: https://archriscv.felixc.at

Detailed package status page: https://archriscv.felixc.at/.status/status.htm

Wiki/Contribution Guide: https://github.com/felixonmars/archriscv-packages/wiki

IRC: #archlinuxriscv at libera.chat

archriscv-packages's People

Contributors

a1ca7raz, aimixsaka, ast-x64, avimitin, bastiple, coelacanthushex, cubercsl, fantasquex, felixonmars, github-actions[bot], hack3ric, hexchain, iamtwz, ieast, khonoka, kxxt, moodyhunter, moui0, phanen, piggynl, qyl27, r-value, rapiz1, spriteovo, tinysnow, xctan, xeonacid, xiejiss, xunop, zenithalhourlyrate


archriscv-packages's Issues

gst-plugins-bad can't be installed because pipewire required libcamera version 0.0.3-64

I was making a new install for my Allwinner D1 and I need gst-plugins-bad (which has the V4L2 stateless codec plugins); pipewire is one of its dependencies. But the install can't proceed because pacman couldn't resolve some of its dependencies, as shown below:

warning: cannot resolve "libcamera-base.so=0.0.3-64", a dependency of "pipewire"
warning: cannot resolve "libcamera.so=0.0.3-64", a dependency of "pipewire"
warning: cannot resolve "libpipewire-0.3.so=0-64", a dependency of "fluidsynth"
warning: cannot resolve "fluidsynth", a dependency of "gst-plugins-bad"
:: The following package cannot be upgraded due to unresolvable dependencies:
      gst-plugins-bad

:: Do you want to skip the above package for this upgrade? [y/N] 

The problem is that libcamera is already at version 0.0.4-1 in the repo, while pipewire requires exactly version 0.0.3-64.
Adding a '>' in the PKGBUILD to allow newer versions of libcamera would probably solve this, but is there a reason it depends on one specific version?

EDIT: using -d with pacman to skip the dependency check could help anyone hitting the same problem, but it's not recommended.
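The workaround mentioned in the edit can be spelled out as follows. This is only a sketch of the stopgap, not a fix; the likely proper fix is rebuilding pipewire against the newer libcamera, which would regenerate its versioned .so dependency.

```shell
# Stopgap from the issue text, NOT recommended for normal use:
# -dd makes pacman skip dependency checks entirely, so the transaction
# can proceed even though the versioned .so dependency is unresolvable.
sudo pacman -S gst-plugins-bad -dd

# The proper fix is likely a pipewire rebuild against libcamera 0.0.4,
# which regenerates the libcamera-base.so=...-64 versioned dependency.
```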

Mesa useless on D1 due to "draw-use-llvm=false"

Since the addition of -D draw-use-llvm=false in ac8df15, I get roughly 10 px/s (pixels updated per second; I can literally watch the image being composed pixel by pixel 👀) on an Allwinner D1.
I'm aware this was changed due to other issues; I just want to point out that the D1 is currently unusable for X/Wayland. When compiling mesa with -D draw-use-llvm=true, X works as before.

glibc stdlib/tst-strfrom failure

This issue will track glibc testsuite failure stdlib/tst-strfrom and stdlib/tst-strfrom-locale.

Upstream link: https://sourceware.org/bugzilla/show_bug.cgi?id=29501

A small reproduction example:

/* strfromf() is exposed by <stdlib.h> when this macro is defined
   before the includes (ISO/IEC TS 18661-1). */
#define __STDC_WANT_IEC_60559_BFP_EXT__
#include <stdio.h>
#include <stdlib.h>
#include <math.h>   /* NAN */

int main(void) {
    char s[100];
    /* Format a negative NaN; the failing tests exercise this case. */
    int res = strfromf(s, sizeof(s), "%G", -NAN);
    printf("res: %d\n", res);
    printf("s: %s\n", s);
    return 0;
}

Default User

Sorry if this is not the right repo, but I didn't know where else to ask.

I downloaded the image from here: https://archriscv.felixc.at/ and it says:
Images (rootfs) (Default password: archriscv)

but it doesn't list a user. 😭

I've tried root, but then this default password doesn't work. I have it booted on a Pine64 Star64 SBC and connected to it via serial tty.

Please help or point me in the direction that I can get an answer 🙏

Sorry to bother you like this: your fcitx5-rime and librime packages have problems

Starting three years ago: https://www.v2ex.com/t/726643

Then two years ago: fcitx/fcitx5-rime#16

And now: fcitx/fcitx5-rime#53

The fcitx5-rime and librime you package have been shipped by Arch all along, and have spread to all kinds of Arch derivatives.

However, librime-lua has never worked properly. Early versions of fcitx5-rime had a manual loading path written into the source, so the plugin could still be loaded by hand; now that loading is automatic, there is no workable way left except rebuilding the whole project from scratch.

Whether you build librime standalone or build fcitx5-rime, please place librime-lua under librime/plugins as described in the librime-lua wiki; otherwise the librime engine will never correctly recognize the "lua" plugin.

Apologies for the intrusion.

Add Solomon SSD1307 framebuffer support on linux package

I tried to connect an SSD1306 to my board, but reading /proc/config.gz I got "# CONFIG_FB_SSD1307 is not set".
Could support be added?

[topwuther@archlinux ~]$ uname -a
Linux archlinux 6.7.0-arch3-1 #1 SMP PREEMPT_DYNAMIC Thu, 18 Jan 2024 16:15:25 +0000 riscv64 GNU/Linux

pyside2 needs to be rebuilt

Package pyside2 is missing the file typesystem_webenginecore.xml. The missing file causes another package, falkon, to fail to build.

  • x86_64
$ pacman -Fl pyside2 | grep 'typesystem_webenginecore.xml'
pyside2 usr/share/PySide2/typesystems/typesystem_webenginecore.xml
  • riscv64
$ pacman -Fl pyside2 | grep 'typesystem_webenginecore.xml'
...empty output...

bootstrapping aarch64-linux-gnu-gcc and aarch64-linux-gnu-glibc

I'm not quite sure if this is the right approach, but I tried bootstrapping aarch64-linux-gnu-gcc and aarch64-linux-gnu-glibc.
Because I don't have any experience with bootstrapping gcc whatsoever, I used a modified version of aarch64-gcc-bootstrap from the AUR, following the procedure outlined in the PKGBUILD.

Either way, I now have natively compiled packages for aarch64-linux-gnu-gcc and aarch64-linux-gnu-glibc, which could be used for compiling the 'official' versions of those packages. Both aarch64-linux-gnu-gcc and aarch64-linux-gnu-glibc built fine without any patches to their PKGBUILDs.
I am not sure what the right procedure for bootstrapping these packages is here, or whether I should share my binaries for this purpose.

vim errors on D1 Arch Linux RISC-V

[root@archlinux ~]# vim /etc/ssh/sshd_config 
vim: /usr/lib/libncursesw.so.6: no version information available (required by vim)
vim: /usr/lib/libc.so.6: version `GLIBC_2.33' not found (required by vim)
vim: /usr/lib/libc.so.6: version `GLIBC_2.33' not found (required by /usr/lib/libgpm.so.2)
[root@archlinux ~]# 

The vim errors were resolved after a rolling update.

PyQt5 needs a rebuild

>>> from PyQt5.Qt import QValidator
AttributeError: module 'PyQt5.Qt' has no attribute 'QValidator'

A rebuild fixes this.

Packaging jupyter-nbconvert and python-pytest-jupyter

Some tests of python-jupyter-client and jupyter-server still don't pass, so consider getting jupyter-nbconvert and python-pytest-jupyter built first. (jupyter-nbconvert is blocking quite a few packages.)

The dependency graph on the jupyter side is rather messy; landing jupyter-nbconvert and python-pytest-jupyter in the repository first also simplifies the subsequent packaging.

Handling jupyter-nbconvert

First build a nocheck version:

extra-riscv64-build -- -d "$CACHE_DIR:/var/cache/pacman/pkg/" -- --nocheck

Handling python-jupyter-client

It depends on python-ipykernel.

Note that python-ipykernel in turn depends on python-jupyter_client, so first run pacman -S python-ipykernel --assume-installed python-jupyter_client.

Then makepkg -s --nocheck produces a nocheck package.
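The two steps above can be sketched as one sequence (a sketch; the checkout path is illustrative):

```shell
# Break the cycle: install python-ipykernel while telling pacman to
# pretend python-jupyter_client is already installed.
sudo pacman -S python-ipykernel --assume-installed python-jupyter_client

# With the build dependency in place, build python-jupyter-client
# itself with its test suite disabled.
cd python-jupyter-client
makepkg -s --nocheck
```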

Handling jupyter-server

First build a nocheck version:

extra-riscv64-build -- -d "$CACHE_DIR:/var/cache/pacman/pkg/" -I ../../../python-jupyter-client-8.2.0-1-any.pkg.tar.zst -I ../../../jupyter-nbconvert-7.2.10-1-any.pkg.tar.zst -- --nocheck

Handling python-pytest-jupyter

extra-riscv64-build -- -d "$CACHE_DIR:/var/cache/pacman/pkg/" -I ../../../jupyter-server-2.5.0-1-any.pkg.tar.zst -I ../../../jupyter-nbconvert-7.2.10-1-any.pkg.tar.zst -I ../../../python-jupyter-client-8.2.0-1-any.pkg.tar.zst

Handling jupyter-nbconvert

Note that the previously built nocheck package must be injected here, otherwise the tests fail. (The test suite depends on the package's own files being installed in the system.)

extra-riscv64-build -- -d "$CACHE_DIR:/var/cache/pacman/pkg/" -I ../../../python-jupyter-client-8.2.0-1-any.pkg.tar.zst -I ../../../jupyter-nbconvert-7.2.10-1-any.pkg.tar.zst

The packages below can be left unbuilt for now.


Handling python-jupyter-client

extra-riscv64-build -- -d "$CACHE_DIR:/var/cache/pacman/pkg/" -I ../../../python-jupyter-client-8.2.0-1-any.pkg.tar.zst -I ../../../python-pytest-jupyter-0.7.0-1-any.pkg.tar.zst -I ../../../jupyter-server-2.5.0-1-any.pkg.tar.zst -I ../../../jupyter-nbconvert-7.2.10-1-any.pkg.tar.zst

Currently one test fails, and the cause has not been found yet (jupyter/jupyter_client#946):

tests/test_client.py::TestAsyncKernelClient::test_input_request FAILED   [ 16%]

Handling jupyter-server

extra-riscv64-build -- -d "$CACHE_DIR:/var/cache/pacman/pkg/" -I ../../../python-pytest-jupyter-0.7.0-1-any.pkg.tar.zst -l kxxt2 -I ../../../jupyter-server-2.5.0-1-any.pkg.tar.zst -I ../../../python-jupyter-client-8.2.0-1-any.pkg.tar.zst -I ../../../jupyter-nbconvert-7.2.10-1-any.pkg.tar.zst

Arch upstream skips the tests; enter the chroot and run them manually:

cd /build/jupyter-server/src/jupyter_server-2.5.0
pytest -v
Currently 49 tests fail:
FAILED tests/test_terminal.py::test_no_terminals - tornado.httpclient.HTTPClientError: HTTP 404: Not Found  
FAILED tests/test_terminal.py::test_terminal_create - tornado.httpclient.HTTPClientError: HTTP 404: Not Found  
FAILED tests/test_terminal.py::test_terminal_create_with_kwargs - tornado.httpclient.HTTPClientError: HTTP 404: Not Found  
FAILED tests/test_terminal.py::test_terminal_create_with_cwd - tornado.httpclient.HTTPClientError: HTTP 404: Not Found  
FAILED tests/test_terminal.py::test_culling_config - KeyError: 'terminal_manager'  
FAILED tests/test_terminal.py::test_culling - tornado.httpclient.HTTPClientError: HTTP 404: Not Found  
FAILED tests/test_terminal.py::test_shell_command_override[shell_command="['/path/to/shell', '-l']"-expected_shell0-5.4] - KeyError: 'terminal_manager'  
FAILED tests/test_terminal.py::test_shell_command_override[shell_command="/string/path/to/shell -l"-expected_shell1-5.1] - KeyError: 'terminal_manager'  
FAILED tests/test_terminal.py::test_importing_shims - ModuleNotFoundError: No module named 'jupyter_server_terminals'  
FAILED tests/auth/test_authorizer.py::test_authorized_requests[True-DELETE-/api/contents/{nbpath}-None] - assert 500 in {200, 201, 204, None}  
FAILED tests/auth/test_authorizer.py::test_authorized_requests[True-POST-/api/terminals-] - KeyError: 'terminal_manager'  
FAILED tests/auth/test_authorizer.py::test_authorized_requests[True-GET-/api/terminals-None] - KeyError: 'terminal_manager'  
FAILED tests/auth/test_authorizer.py::test_authorized_requests[True-GET-/terminals/websocket/{term_name}-None] - KeyError: 'terminal_manager'  
FAILED tests/auth/test_authorizer.py::test_authorized_requests[True-DELETE-/api/terminals/{term_name}-None] - KeyError: 'terminal_manager'  
FAILED tests/auth/test_authorizer.py::test_authorized_requests[False-POST-/api/terminals-] - KeyError: 'terminal_manager'  
FAILED tests/auth/test_authorizer.py::test_authorized_requests[False-GET-/api/terminals-None] - KeyError: 'terminal_manager'  
FAILED tests/auth/test_authorizer.py::test_authorized_requests[False-GET-/terminals/websocket/{term_name}-None] - KeyError: 'terminal_manager'  
FAILED tests/auth/test_authorizer.py::test_authorized_requests[False-DELETE-/api/terminals/{term_name}-None] - KeyError: 'terminal_manager'  
FAILED tests/extension/test_app.py::test_stop_extension - AssertionError: assert {'tests.exten...ckextensions'} == {'jupyter_ser...ckextensions'}  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager--inroot] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager-Directory with spaces in-inspace] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager-unicod\xe9-innonascii] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager-foo-a] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager-foo-b] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager-foo-name with spaces] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager-foo-unicod\xe9] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager-foo/bar-baz] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager-ordering-A] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager-ordering-b] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager-ordering-C] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[FileContentsManager-\xe5 b-\xe7 d] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager--inroot] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager-Directory with spaces in-inspace] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager-unicod\xe9-innonascii] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager-foo-a] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager-foo-b] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager-foo-name with spaces] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager-foo-unicod\xe9] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager-foo/bar-baz] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager-ordering-A] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager-ordering-b] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager-ordering-C] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete[AsyncFileContentsManager-\xe5 b-\xe7 d] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete_dirs[FileContentsManager] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete_dirs[AsyncFileContentsManager] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete_non_empty_dir[FileContentsManager] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_delete_non_empty_dir[AsyncFileContentsManager] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_regression_is_hidden[FileContentsManager] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
FAILED tests/services/contents/test_api.py::test_regression_is_hidden[AsyncFileContentsManager] - tornado.httpclient.HTTPClientError: HTTP 500: Internal Server Error  
==================================================================================== 49 failed, 826 passed, 15 skipped, 20 warnings in 433.70s (0:07:13) ====================================================================================

Tweak Rust package guidelines

I don't have any RISC-V boards, but poking around in here, one thing that seems to need patching a lot is something I put in the Rust packaging guidelines.

https://github.com/felixonmars/archriscv-packages/blob/82d8d619e862478a076ae657069c9caf2f1caed4/onefetch/riscv64.patch

This platform-limited fetch can mean huge savings in downloads when building packages (in some cases from hundreds of MB down to less than 10), so I am loath to drop it altogether.

Would it be helpful to modify the Rust recommendations like so:

- cargo fetch --target "$CARCH-unknown-linux-gnu"
+ cargo fetch --target "$CHOST"

...or is the issue here that you are cross-compiling and need to fetch for an architecture other than the host? If so, is there a variable we could suggest people use to get the benefit of only downloading required sources while not creating so much downstream patching work?

Packaging python-nltk and nltk-data

python-nltk checkdepends on nltk-data, and nltk-data makedepends on python-nltk.

Plan

  1. First build python-nltk with the --nocheck option.
  2. Use the package from step 1 to build nltk-data.
    • package() for this one can take an extremely long time; feel free to brew a few hundred cups of tea.
  3. Then use the package from step 2 to build python-nltk normally. (check() is currently commented out upstream, so this step can be skipped.)

Manual check procedure

  • python -c "import nltk; nltk.download('omw-1.4')"
  • pacman -S python-pytest-mock
  • pytest

Only one test fails, because it depends on matplotlib. (Note that upstream does not list matplotlib in checkdepends.)

The matplotlib currently in the repository is still built against Python 3.9. After installing matplotlib manually, that test passes:

git clone https://github.com/matplotlib/matplotlib
cd matplotlib
cat << EOF > mplsetup.cfg
[libs]
system_freetype = true
system_qhull = true
EOF
sudo pip install .

freeradius - built against an old libssl

I tried to launch the newly installed freeradius with radiusd -X and caught this error:

ArchVF2# radiusd -X
libssl version mismatch.  built: 30100040 linked: 30200010

Freeradius info:

Name            : freeradius
Version         : 3.2.3-5
Description     : The premier open source RADIUS server
Architecture    : riscv64
URL             : https://freeradius.org/
Licenses        : GPL
Groups          : None
Provides        : None
Depends On      : krb5  net-snmp  postgresql-libs  mariadb-libs  talloc  libpcap  libxcrypt  libcrypt.so=2-64  smbclient
Optional Deps   : curl: for REST [installed]
                  freetds: for Sybase and MS SQL
                  hiredis: for redis support [installed]
                  json-c: rlm_rest module [installed]
                  libmemcached: for memcached
                  perl: for Perl [installed]
                  python: for Python [installed]
                  sqlite: for sqlite [installed]
                  unixodbc: for ODBC
                  yubico-c-client: for yubicloud
Required By     : None
Optional For    : None
Conflicts With  : None
Replaces        : None
Installed Size  : 9.66 MiB
Packager        : Felix Yan <[email protected]>
Build Date      : Mon Dec 18 16:21:55 2023
Install Date    : Sat Apr 13 14:03:59 2024
Install Reason  : Explicitly installed
Install Script  : No
Validated By    : Signature

System:
Linux ArchVF2 5.15.2-cwt-5.11.3-2 #1 SMP PREEMPT Fri Mar 22 19:02:42 +07 2024 riscv64 GNU/Linux

perl 5.38.0-1 broken

After my regular pacman -Syu, pacman reports that the perl package may be broken.

Pacman reports:

WARNING: '/usr/lib/perl5/5.36' contains data from at least 3 packages which will NOT be used by the installed perl interpreter.
 -> Run the following command to get a list of affected packages: pacman -Qqo '/usr/lib/perl5/5.36'

Resulting in, for example, this when running vim:

[xx@xx pkg]$ vim
vim: error while loading shared libraries: libperl.so: cannot open shared object file: No such file or directory

Mirror in devtools problems (network conditions and rebuild correctness)

We now use https://archriscv.felixc.at/ as the default mirror in devtools. But this tier-0 origin has poor network conditions in some parts of the world, such as mainland China: it sometimes times out, and transfer speeds are often bad.
Fortunately, Arch Linux provides a mirror (https://riscv.mirror.pkgbuild.com/) that is synchronized promptly and has good network conditions worldwide.
Unfortunately, using it can break rebuilds because of the small sync gap between the mirror and the origin.

Yes. Builders using a not up-to-date mirror may rebuild against an old version of a just-updated dependency.
Originally posted by @felixonmars in #2574 (comment)

Felix Yan also mentioned that one solution is to support overriding the mirrors per builder.

[libdrm] files installed by libdrm in x86_64 and riscv64 are not the same.

After checking with pacman -Ql libdrm, I found that the files installed by libdrm on x86_64 and riscv64 differ. Specifically, /usr/lib/pkgconfig/libdrm_intel.pc is missing on riscv64. Since there is no patch for libdrm, I believe there is a mistake somewhere that should be investigated.

What's more, this issue blocks mesa-amber.

Stack overflow can't be correctly caught inside qemu-user

What happened

The gnulib tests test-c-stack.sh, test-sigsegv-catch-stackoverflow1 and test-sigsegv-catch-stackoverflow2 in the packages diffutils and m4 fail because of unexpected signal-handling behavior in qemu-user.

These three tests install a handler for SIGSEGV and check whether the signal was triggered by a stack overflow. For some unknown reason, the handler never catches the segmentation fault, so all of these tests fail.

How to reproduce:

Here is a small reproduction example: https://github.com/Avimitin/stackoverflow-recover.

git clone https://github.com/Avimitin/stackoverflow-recover.git
cd stackoverflow-recover
make run

Current behavior

  • On a normal x86_64 machine (works as expected)
starting recursion
Stack overflow caught.
All done
  • On SiFive Unmatched (works as expected)
starting recursion
Stack overflow caught.
All done
  • On qemu-user (failed)
starting recursion
Segmentation fault (core dumped)

Conclusion

The reason these tests fail is still unknown, but we can confirm that this unexpected behavior occurs only under qemu-user. In my opinion, we can just add those packages to the qemu-user blacklist for now, as it is really hard to find the root cause.

uncomplete desc file '.db/tesseract-data-kat_old-2:4.1.0-3/desc'!

I get this error when I try to sync from your repo via boxit:

:: Obtaining branch and repository errors...

-------------------------------------------------------------------------------
Errors of branch edge
-------------------------------------------------------------------------------
uncomplete desc file '/var/tmp/boxit-riscv/sessions/sync_session/.db/tesseract-data-kat_old-2:4.1.0-3/desc'!

Could you check the packaging?

CMake 3.30.1 Illegal instruction

I installed Arch Linux onto a Milk-V Duo 256M using this guide: https://xyzdims.com/3d-printers/misc-hardware-notes/iot-milk-v-duo-risc-v-esbc-running-linux/

Every time I try to run CMake, it dies with an illegal instruction.

$ fastfetch
                  -`                     aru@archlinux
                 .o+`                    -------------
                `ooo/                    OS: Arch Linux riscv64
               `+oooo:                   Host: Cvitek. CV181X ASIC. C906.
              `+oooooo:                  Kernel: Linux 5.10.4-tag-
              -+oooooo+:                 Uptime: 47 mins
            `/:-:++oooo+:                Packages: 223 (pacman)
           `/++++/+++++++:               Shell: bash 5.2.26
          `/++++++++++++++:              Terminal: dropbear
         `/+++ooooooooooooo/`            CPU: rv64gvcsu
        ./ooosssso++osssssso+`           Memory: 34.25 MiB / 240.61 MiB (14%)
       .oossssso-````/ossssss+`          Swap: Disabled
      -osssssso.      :ssssssso.         Disk (/): 2.16 GiB / 14.26 GiB (15%) - ext4
     :osssssss/        osssso+++.        Local IP (usb0): 192.168.42.1/24 *
    /ossssssss/        +ssssooo/-        Locale: C.UTF-8
  `/ossssso+/:-        -:/+osssso+-
 `+sso+:-`                 `.-/+oso:     ████████████████████████
`++:.                           `-/+/    ████████████████████████
.`                                 `/

$ cmake --version
cmake version 3.30.1

CMake suite maintained and supported by Kitware (kitware.com/cmake).
Illegal instruction (core dumped)

python-setuptools behaves differently between Arch RISC-V and Arch x86_64

Packages built with python-setuptools end up with a different build library path on Arch RISC-V than on Arch x86_64.

The reason is that python-setuptools changed the implementation of build-library-path generation in 62.1 (the latest version on Arch x86_64), while we are still on 61.3.1 with the old implementation.

The build library path can be set with the build-lib option. If the user doesn't set it manually, it defaults to the base build directory (build/) plus a platform specifier.
The platform specifier consists of the current OS name and a tag, and the tag's value is the key point of the whole issue:

  • In 61.3.1, python-setuptools uses sys.version_info[:2] as the tag value, which is (3, 10) on our riscv machine.
>>> import sys
>>> sys.version_info
sys.version_info(major=3, minor=10, micro=7, releaselevel='final', serial=0)
>>> sys.version_info[:2]
(3, 10)
  • However, in 62.3.4, python-setuptools uses sys.implementation.cache_tag as the tag value, so we get the following output on x86_64:
>>> import sys
>>> sys.implementation.cache_tag
'cpython-310'

So that's why there is no "cpython" component in the path when we build packages on Arch RV. For now, I suggest adding a patch to trim the path, and removing those patches once we have a newer python-setuptools.
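The two tag schemes can be reproduced directly from a shell; this is a sketch assuming python3 is on PATH (on the machines discussed here it was CPython 3.10, so the suffixes would be "3.10" and "cpython-310"):

```shell
# Old scheme (setuptools 61.x): the tag is sys.version_info[:2],
# producing a build directory like build/lib.linux-riscv64-3.10
old_tag=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')

# New scheme (setuptools 62.1+): the tag is sys.implementation.cache_tag,
# producing a build directory like build/lib.linux-riscv64-cpython-310
new_tag=$(python3 -c 'import sys; print(sys.implementation.cache_tag)')

echo "old-style suffix: $old_tag"
echo "new-style suffix: $new_tag"
```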


Cannot load fallback initramfs on Nezha

I tried to set up the latest linux package on my Nezha board. Since I prepared the rootfs on x86, I use the fallback initramfs to boot the system, but U-Boot seems to fail to allocate memory for the ramdisk.

Is there something I missed?

The U-Boot I use is from here, with mainline OpenSBI.

Boot log:

Hit any key to stop autoboot:  0 
=> setenv kernel_comp_addr_r 0x50000000
=> setenv kernel_comp_size   0x04000000
=> setenv fdt_addr_r    0x43000000
=> setenv kernel_addr_r 0x41000000
=> setenv ramdisk_addr_r 0x44000000
=> 
=> run distro_bootcmd
PLL reg = 0xf8216300, freq = 1200000000
switch to partitions #0, OK
mmc0 is current device
Scanning mmc 0:1...
Found /extlinux/extlinux.conf
Retrieving file: /extlinux/extlinux.conf
2:      Arch-fallback
Retrieving file: /initramfs-linux-fallback.img
Retrieving file: /vmlinuz-linux
append: console=ttyS0,115200 console=sbi ignore_loglevel rw root=UUID=604bf002-e34b-4fb1-9020-4263bbc36ac9 rootwait
Retrieving file: /dtbs/allwinner/sun20i-d1-nezha.dtb
   Uncompressing Kernel Image
Moving Image from 0x41000000 to 0x40200000, end=41ea9000
## Flattened Device Tree blob at 43000000
   Booting using the fdt blob at 0x43000000
Working FDT set to 43000000
ERROR: Failed to allocate 0x4a5ab77 bytes below 0x42e00000.
ramdisk - allocation error

extlinux.conf

default Arch-fallback

label Arch-fallback
    linux /vmlinuz-linux
    initrd /initramfs-linux-fallback.img
    devicetree /dtbs/allwinner/sun20i-d1-nezha.dtb
    append console=ttyS0,115200 console=sbi ignore_loglevel rw root=UUID=604bf002-e34b-4fb1-9020-4263bbc36ac9 rootwait

VisionFive 2

Hi,
I get an error that mkinitcpio and systemd are in conflict when trying to upgrade the system. Is there a fix yet?

Thanks.

Add CI test support for bazaar source

I successfully fixed libappindicator. This package fetches its source code from Launchpad via breezy,
but it failed the CI test: the test script does not support breezy or fetching from Launchpad via breezy.
I have fixed the test script and the CI workflow. Is it possible to merge the PR?

packaging: pari and related packages

First, build pari with --nocheck. (patched in #2570)

Then, build pari-seadata-small with the built pari package:

extra-riscv64-build -I ~/pari/repos/community-x86_64/pari-2.15.3-1-riscv64.pkg.tar.zst

Then, build pari-seadata:

extra-riscv64-build -I ~/pari/repos/community-x86_64/pari-2.15.3-1-riscv64.pkg.tar.zst -I ~/pari-seadata-small/repos/community-any/pari-seadata-small-20090618-3-any.pkg.tar.zst

Then build pari-elldata, pari-galdata and pari-galpol.

extra-riscv64-build -I ~/pari/repos/community-x86_64/pari-2.15.3-1-riscv64.pkg.tar.zst

Finally, build pari with checks enabled (no --nocheck this time):

echo ~/pari-*/repos/community-any/pari-*-any.pkg.tar.zst | xargs printf -- '-I %s \n' | xargs extra-riscv64-build -- -I ~/pari/repos/community-x86_64/pari-2.15.3-1-riscv64.pkg.tar.zst

dependency cycle python-zope-security and python-zope-component

python-zope-security depends on python-zope-component
python-zope-component checkdepends on python-zope-security

Building python-zope-component temporarily with --nocheck or BUILDENV+=('!check') solves this issue and both packages build fine on riscv64.
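The cycle break described above can be sketched as follows, assuming a normal Arch packaging checkout of both packages side by side (paths and package globs are illustrative):

```shell
# 1. Build python-zope-component without running its test suite,
#    which removes the checkdepends edge back to python-zope-security.
cd python-zope-component
makepkg -s --nocheck

# 2. Install the resulting package so it can satisfy dependencies.
sudo pacman -U python-zope-component-*.pkg.tar.zst

# 3. python-zope-security now builds (and runs its checks) normally.
cd ../python-zope-security
makepkg -s

# 4. Optionally rebuild python-zope-component with its tests enabled.
```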

ffcall check error in callback minitests

Well, this is a weird problem. At first I thought it was trivial, because a member of the Gentoo RISC-V team worked with upstream and solved the problem with PIC (see details here). However, even when using the patched tarball, I still failed to build the package.

Even weirder: when entering the build root with sudo chroot /path/to/buildroot, the check passes. Under systemd-nspawn, however, it fails with a segmentation fault, bus error or illegal instruction. Stracing the test suite under systemd-nspawn shows:

riscv_flush_icache(0x3f8716a030, 0x3f8716a040, 0) = -1 EPERM (Operation not permitted)                                                                                                                            

While outside systemd-nspawn, it is pretty fine.

According to the systemd manual:

--system-call-filter=
Alter the system call filter applied to containers. Takes a space-separated list of system call names or group names (the latter prefixed with "@", as listed by the syscall-filter command of systemd-analyze(1)). Passed system calls will be permitted. The list may optionally be prefixed by "~", in which case all listed system calls are prohibited. If this command line option is used multiple times the configured lists are combined. If both a positive and a negative list (that is one system call list without and one with the "~" prefix) are configured, the negative list takes precedence over the positive list. Note that systemd-nspawn always implements a system call allow list (as opposed to a deny list!), and this command line option hence adds or removes entries from the default allow list, depending on the "~" prefix. Note that the applied system call filter is also altered implicitly if additional capabilities are passed using --capabilities=.

systemd-nspawn has a default allow list of syscalls, defined here. There is no architecture-dependent syscall in it :(

As for solutions: one way is to ask the Arch Linux devtools to expose a configuration API that allows us to add RISC-V-specific syscalls to the allow list. Another is to ask systemd to add some architecture-dependent syscalls to the default allow list.
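If devtools grew such a knob, the underlying systemd-nspawn invocation would look roughly like this (a sketch; the container path and test command are placeholders):

```shell
# Extend systemd-nspawn's default syscall allow list with the RISC-V
# cache-maintenance syscall that the ffcall tests need.
sudo systemd-nspawn -D /path/to/buildroot \
    --system-call-filter=riscv_flush_icache \
    /bin/sh -c 'cd /build/ffcall && make check'
```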

Usage and contribution

Hi,
I want to see Arch Linux on RISC-V too.
How can I set up a build environment and contribute?
I tried to set up an environment with docker and qemu-static.

docker image

Hello,

right now, if you have a RISC-V computer, you can't run x86 Docker images on it.

So could you please provide a Docker image?

It would be very nice to use Arch in docker, podman, toolbox, and distrobox on RISC-V.

[FTBFS] libretro-mupen64plus-next

Error

  1. SSE
g++: error: unrecognized command-line option ‘-msse’
g++: error: unrecognized command-line option ‘-msse2’

The SSE options are not available on RISC-V, so we have to disable this build option. The PKGBUILD hard-codes WITH_DYNAREC=x86_64, which enables -msse and -msse2 in https://github.com/libretro/mupen64plus-libretro-nx/blob/c10546e333d57eb2e5a6ccef1e84cb6f9274c526/Makefile.common#L221

  2. build error

After removing the WITH_DYNAREC option, the package still fails to build because its component mupen64plus-rsp-paraLLEl uses some x86_64 SIMD code, so we need to set the build option HAVE_PARALLEL_RSP to 0. This is just a performance-improvement plugin, so it is safe to disable.

  3. link error

Besides, the package fails to link some dynarec_* functions, which are implemented in x64/x86 assembly files. We can disable them by adding -DNO_ASM to CFLAGS and CXXFLAGS.
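Putting the three fixes together, the make invocation would change roughly like this (a sketch; the variable names are the ones described above, and clearing WITH_DYNAREC stands in for removing the option from the PKGBUILD):

```shell
# Build without the x86-only dynarec, without the paraLLEl RSP plugin
# (x86_64 SIMD), and with the hand-written x86 assembly stubbed out.
make WITH_DYNAREC= \
     HAVE_PARALLEL_RSP=0 \
     CFLAGS="$CFLAGS -DNO_ASM" \
     CXXFLAGS="$CXXFLAGS -DNO_ASM"
```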
