altlinux / linux-arm
This project is a fork of torvalds/linux.
Linux kernel support patches for Russian ARM SoCs (BE-M1000, MCom-03, etc.).
License: Other
[ 1165.462498] panfrost 2a200000.gpu: js fault, js=1, status=DATA_INVALID_FAULT, head=0x61b0780, tail=0x61b0780
[ 1165.472405] panfrost 2a200000.gpu: gpu sched timeout, js=1, config=0x3700, status=0x58, head=0x61b0780, tail=0x61b0780, sched_job=000000001fcb7990
[ 1165.480837] audit: type=1701 audit(1637839393.603:123): auid=500 uid=500 gid=500 ses=3 pid=3313 comm="floaters" exe="/usr/libexec/mate-screensaver/floaters" sig=11 res=1
[ 1165.483229] Unable to handle kernel paging request at virtual address bfff000817e559c8
[ 1165.483233] Mem abort info:
[ 1165.483236] ESR = 0x96000004
[ 1165.483241] EC = 0x25: DABT (current EL), IL = 32 bits
[ 1165.483244] SET = 0, FnV = 0
[ 1165.483247] EA = 0, S1PTW = 0
[ 1165.483249] Data abort info:
[ 1165.483252] ISV = 0, ISS = 0x00000004
[ 1165.483255] CM = 0, WnR = 0
[ 1165.483258] [bfff000817e559c8] address between user and kernel address ranges
[ 1165.483265] Internal error: Oops: 96000004 [#1] SMP
[ 1165.483270] Modules linked in: af_packet rfkill nls_utf8 nls_cp866 vfat fat dwmac_generic psmouse serio_raw mali_kbase joydev micrel snd_soc_simple_card ahci_platform snd_soc_simple_card_utils libahci_platform crct10dif_ce dw_hdmi_ahb_audio panfrost libahci dwmac_baikal tp_serio snd_soc_nau8822 designware_i2s gpu_sched atr4 gpio_pcf857x stmmac_platform libata snd_soc_core stmmac scsi_mod pcs_xpcs snd_compress bt1_pvt phylink ac97_bus hwmon spi_dw_mmio snd_pcm_dmaengine snd_pcm spi_dw gpio_keys leds_gpio sch_fq_codel snd_seq_midi snd_seq_midi_event snd_seq snd_rawmidi snd_seq_device snd_timer snd soundcore fuse efi_pstore efivarfs ip_tables x_tables autofs4 input_leds hid_generic usbhid hid dwc3 udc_core roles xhci_plat_hcd xhci_hcd baikal_vdu_drm baikal_hdmi dw_hdmi drm_kms_helper cec rc_core drm dwc3_baikal evdev uio_pdrv_genirq uio
[ 1165.483398] CPU: 7 PID: 3313 Comm: floaters Not tainted 5.10.81+ #2
[ 1165.483401] Hardware name: Edelweiss TF307-MB-S-D/BM1BM1-D, BIOS 5.3 09/09/2021
[ 1165.483406] pstate: 80400005 (Nzcv daif +PAN -UAO -TCO BTYPE=--)
[ 1165.483419] pc : vma_interval_tree_remove+0x23c/0x2d0
[ 1165.483425] lr : __remove_shared_vm_struct+0x30/0xc0
[ 1165.483428] sp : ffff800016173b50
[ 1165.483431] x29: ffff800016173b50 x28: 000000000000000a
[ 1165.483437] x27: ffff80001141b000 x26: 000000000000000b
[ 1165.483442] x25: 000000000000000a x24: 0000000000000000
[ 1165.483447] x23: 0000000000000000 x22: ffff0008061ead38
[ 1165.483452] x21: ffff0008061ead70 x20: ffff00082a996320
** 3 printk messages dropped **
[ 1165.483471] x13: ffff00097f04b300 x12: 0000000000000009
[ 1165.483475] x11: 0000000000000000 x10: ffff800011434606
[ 1165.483480] x9 : ffff8000102c7f38 x8 : ffff0008217d7b48
[ 1165.483485] x7 : 0000000000000000 x6 : 0000ffffa7745000
[ 1165.483490] x5 : ffff000817f97e68 x4 : 0000ffffa7a06000
[ 1165.483495] x3 : bfff000817e559b8 x2 : bfff000817e559b9
[ 1165.483499] x1 : ffff0008061ead60 x0 : 0000000000000000
[ 1165.483505] Call trace:
[ 1165.483511] vma_interval_tree_remove+0x23c/0x2d0
[ 1165.483515] __remove_shared_vm_struct+0x30/0xc0
[ 1165.483520] unlink_file_vma+0x48/0x68
[ 1165.483524] free_pgtables+0xf4/0x150
[ 1165.483529] exit_mmap+0xf0/0x190
[ 1165.483535] mmput+0x90/0x1a8
[ 1165.483540] do_exit+0x34c/0xab8
[ 1165.483543] do_group_exit+0x4c/0xb0
[ 1165.483548] get_signal+0x178/0x878
[ 1165.483555] do_notify_resume+0x254/0x8b8
[ 1165.483560] work_pending+0xc/0x410
[ 1165.483572] Code: b5fff040 f9402e82 f27ef443 540002a0 (f9400864)
[ 1165.483577] ---[ end trace dec3dd3ddb6a75a8 ]---
Hi there!
I've just found that there is a serious degradation of SATA SSD performance on recent kernels on the TF307-MB-A-0.
We tried 5.10.60 initially and it was working fine (it still is). After upgrading to 5.10.101, things got more complicated.
On the same system, with the same SSD attached, I got roughly 4x lower linear (one-big-file) read/write performance: 5.10.60 achieves 440 MB/s read and 87 MB/s write, while 5.10.101 produces only 120 MB/s read and 17 MB/s write.
At the same time I also got ~100x slower performance when check-summing files simultaneously on multiple cores.
Nothing interesting shows up in dmesg.
After this unfortunate discovery I tried other branches. baikalm-5.15.y has exactly the same problem.
The baikalm-5.10.y-next, baikalm-5.15.y-next and baikalm-5.18.y-next branches perform better: all of them are still around 4x slower, but they show no catastrophic degradation with multicore simultaneous check-summing. Also, 5.18.9 may have slightly slower I/O than 5.10.123.
Perhaps something should be changed in the kernel config; I'm just using the default baikalm configs here.
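For reference, the kind of measurement described above (one-big-file transfers plus parallel check-summing) can be sketched as follows; the file path, size, and the single-file list are illustrative placeholders, not the exact commands used in the report:

```shell
#!/bin/sh
# Rough I/O sanity check. F and the size below are illustrative placeholders;
# point F at a file on the SSD under test and use a much larger size for real numbers.
F=/tmp/testfile

# Linear write of one big file; dd prints the achieved throughput on its last line.
dd if=/dev/zero of="$F" bs=1M count=64 conv=fsync 2>&1 | tail -n 1

# Drop the page cache so the read test actually hits the disk (needs root; ignored otherwise).
sync
(echo 3 > /proc/sys/vm/drop_caches) 2>/dev/null || true

# Linear read of the same file.
dd if="$F" of=/dev/null bs=1M 2>&1 | tail -n 1

# Multicore simultaneous check-summing: one md5sum per file, all running at once.
for f in "$F"; do
    md5sum "$f" &
done
wait

rm -f "$F"
```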
On a Baikal-M board with two M.2 PCIe disks, the second disk is not initialized.
Linux version 5.10.82-std-def-alt1 ([email protected]) (gcc-10 (GCC) 10.3.1 20210703 (ALT Sisyphus 10.3.1-alt2), GNU ld (GNU Binutils) 2.35.2.20210110) #1 SMP Fri Dec 3 14:50:06 UTC 2021
Here is the dmesg output for the case:
[ 1.763823] ------------[ cut here ]------------
[ 1.763886] WARNING: CPU: 0 PID: 134 at kernel/irq/manage.c:2036 request_threaded_irq+0x160/0x1b0
[ 1.763961] Modules linked in:
[ 1.763998] CPU: 0 PID: 134 Comm: kworker/u16:2 Not tainted 5.10.82-std-def-alt1 #1
[ 1.764063] Hardware name: Delta Computers Bober/Rhodeola, BIOS 5.3 01/11/2022
[ 1.764079] arm-ccn 9000000.ccn: No access to interrupts, using timer.
[ 1.764137] Workqueue: nvme-reset-wq nvme_reset_work
[ 1.764230] pstate: a0400005 (NzCv daif +PAN -UAO -TCO BTYPE=--)
[ 1.764283] pc : request_threaded_irq+0x160/0x1b0
[ 1.764327] lr : request_threaded_irq+0x80/0x1b0
[ 1.764368] sp : ffff800012d1bb60
[ 1.764400] x29: ffff800012d1bb60 x28: 0000000000000000
[ 1.764451] x27: ffff000800ed0500 x26: 0000000000000000
[ 1.764501] x25: 0000000000000000 x24: 0000000000000002
[ 1.764551] x23: ffff0008031b6800 x22: ffff80001093b8b0
[ 1.764601] x21: ffff000800107400 x20: 0000000000000080
[ 1.764707] NET: Registered protocol family 10
[ 1.766975] x19: ffff80001093b8b0 x18: 0000000000000020
[ 1.766981] x17: 0000000000000001 x16: 0000000000000019
[ 1.766986] x15: ffffffffffffffff x14: ffff000800ed0508
[ 1.767821] nvme nvme0: 1/0/0 default/read/poll queues
[ 1.769998] Segment Routing with IPv6
[ 1.771707] x13: ffffffffffffffff x12: 0000000000000040
[ 1.771712] x11: ffff000800400240 x10: ffff000800400242
[ 1.771721] x9 : ffff800011d88410
[ 1.774063] RPL Segment Routing with IPv6
[ 1.776351] x8 : ffff000800400268
[ 1.779041] registered taskstats version 1
[ 1.780901] x7 : 0000000000000000 x6 : ffff000800400270
[ 1.780906] x5 : ffff000800400240 x4 : ffff000800400278
[ 1.780910] x3 : 0000000000000000 x2 : 0000000000000000
[ 1.783190] Loading compiled-in X.509 certificates
[ 1.785446] x1 : 0000000000000002 x0 : 0000000000131600
[ 1.789262] Loaded X.509 cert 'Build time autogenerated kernel key: 4e78bc91b859ec08082639ec20b1616089bf8910'
[ 1.789999] Call trace:
[ 1.790009] request_threaded_irq+0x160/0x1b0
[ 1.796434] zswap: loaded using pool zstd/zbud
[ 1.796732] pci_request_irq+0xc0/0x110
[ 1.796744] queue_request_irq+0x78/0x8c
[ 1.799392] Key type ._fscrypt registered
[ 1.801240] nvme_reset_work+0x488/0x1580
[ 1.801247] process_one_work+0x1e4/0x4ac
[ 1.801250] worker_thread+0x170/0x524
[ 1.801255] kthread+0x130/0x13c
[ 1.801265] ret_from_fork+0x10/0x38
[ 1.803524] Key type .fscrypt registered
[ 1.805733] ---[ end trace fed6bc90951fb949 ]---
[ 1.805798] nvme nvme1: Removing after probe failure status: -22
The following dts configuration for PCIe was used:
	pcie0: pcie@2200000 { /* PCIe x4 #0 */
		compatible = "baikal,pcie-m", "snps,dw-pcie";
		reg = <0x0 0x02200000 0x0 0x1000>, /* RC config space */
		      <0x0 0x40100000 0x0 0x100000>; /* PCI config space */
		reg-names = "dbi", "config";
		interrupts = <GIC_SPI 426 IRQ_TYPE_LEVEL_HIGH>, /* AER */
			     <GIC_SPI 429 IRQ_TYPE_LEVEL_HIGH>; /* MSI */
		#interrupt-cells = <1>;
		baikal,pcie-lcru = <&pcie_lcru 0>;
		#address-cells = <3>;
		#size-cells = <2>;
		device_type = "pci";
		ranges = <0x81000000 0x0 0x00000000 0x0 0x40200000 0x0 0x100000>, /* I/O */
			 <0x82000000 0x0 0x40000000 0x4 0x00000000 0x0 0x40000000>; /* 32b non-prefetchable memory */
		msi-parent = <&its 0x0>;
		msi-map = <0x0 &its 0x0 0x10000>;
		num-lanes = <4>;
		num-viewport = <4>;
		bus-range = <0x0 0xff>;
		status = "disabled";
	};

	pcie1: pcie@2210000 { /* PCIe x4 #1 */
		compatible = "baikal,pcie-m", "snps,dw-pcie";
		reg = <0x0 0x02210000 0x0 0x1000>, /* RC config space */
		      <0x0 0x50100000 0x0 0x100000>; /* PCI config space */
		reg-names = "dbi", "config";
		interrupts = <GIC_SPI 402 IRQ_TYPE_LEVEL_HIGH>, /* AER */
			     <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>; /* MSI */
		#interrupt-cells = <1>;
		baikal,pcie-lcru = <&pcie_lcru 1>;
		#address-cells = <3>;
		#size-cells = <2>;
		device_type = "pci";
		ranges = <0x81000000 0x0 0x00100000 0x0 0x50200000 0x0 0x100000>, /* I/O */
			 <0x82000000 0x0 0x40000000 0x5 0x00000000 0x0 0x40000000>; /* 32b non-prefetchable memory */
		msi-parent = <&its 0x0>;
		msi-map = <0x0 &its 0x0 0x10000>;
		num-lanes = <4>;
		num-viewport = <4>;
		bus-range = <0x0 0xff>;
		status = "disabled";
	};

	pcie2: pcie@2220000 { /* PCIe x8 */
		compatible = "baikal,pcie-m", "snps,dw-pcie";
		reg = <0x0 0x02220000 0x0 0x1000>, /* RC config space */
		      <0x0 0x60000000 0x0 0x100000>; /* PCI config space */
		reg-names = "dbi", "config";
		interrupts = <GIC_SPI 378 IRQ_TYPE_LEVEL_HIGH>, /* AER */
			     <GIC_SPI 381 IRQ_TYPE_LEVEL_HIGH>; /* MSI */
		#interrupt-cells = <1>;
		baikal,pcie-lcru = <&pcie_lcru 2>;
		#address-cells = <3>;
		#size-cells = <2>;
		device_type = "pci";
		ranges = <0x81000000 0x0 0x00200000 0x0 0x60100000 0x0 0x100000>, /* I/O */
			 <0x82000000 0x0 0x80000000 0x6 0x00000000 0x0 0x80000000>; /* 32b non-prefetchable memory */
		msi-parent = <&its 0x0>;
		msi-map = <0x0 &its 0x0 0x10000>;
		num-lanes = <8>;
		num-viewport = <4>;
		bus-range = <0x0 0xff>;
		status = "disabled";
	};
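Note that all three nodes carry status = "disabled" in the listing above, which is the usual devicetree convention for an SoC-level description: the board-level dts is expected to override the nodes it actually uses. A minimal sketch of such an override (the pcie0/pcie1 labels come from the listing above; whether the affected board's dts enables both nodes is worth checking):

```dts
/* Board-level .dts fragment: enable both x4 root complexes. */
&pcie0 {
	status = "okay";
};

&pcie1 {
	status = "okay";
};
```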
The M.2 disks use pcie0 and pcie1 (both x4).
With a single M.2 disk installed, it works in either PCIe slot (pcie0 or pcie1); this does not depend on the vendor of the M.2 disk.
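To narrow down which controller's IRQ request failed (the trace above ends with nvme1 being removed with status -22), it can help to compare what each controller actually enumerated and which interrupts were handed out. A hedged set of diagnostic commands, using only standard procfs/sysfs paths, nothing board-specific:

```shell
#!/bin/sh
# Which NVMe controllers registered at all (empty if the probe failed early).
ls /sys/class/nvme/ 2>/dev/null || echo "no nvme controllers registered"

# IRQ/MSI lines the NVMe driver actually obtained (empty if allocation failed).
grep -i nvme /proc/interrupts || echo "no nvme interrupts found"

# PCI topology of both root complexes and the devices behind them (needs pciutils).
lspci -tv 2>/dev/null || echo "lspci not available"
```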
BR, Ilya.