Comments (15)
If there is a segfault, open a new report, please. We are discussing a different issue here.
> And while it might unavoidable to change netdata.conf

I think I didn't word it correctly. What I mean is that we change the internal default `netdata.conf`, the one you get with new installs or when accessing http://IP:19999/netdata.conf. We don't change the `netdata.conf` file in `/etc/netdata`.
Just in case tagging @netdata/agent-sre.
from netdata.
Hi, @Pingger. I followed your "Steps to reproduce" but couldn't reproduce the issue: updating Netdata doesn't overwrite `netdata.conf` (nor any other user files in `/etc/netdata/`). Make sure you are using the files in the user directory and not in stock:

```
User Configurations ________________________________________ : /etc/netdata
Stock Configurations _______________________________________ : /usr/lib/netdata/conf.d
```
from netdata.
Yes, I am using the config in `/etc/netdata`.
As you might have noticed, I stated that the config is not reset on *every* nightly update. My guess is that the issue is caused by a lingering feature that was meant to enable storage tiers when they were first introduced and considered stable.

I also have some further information on the data that goes missing: for some reason netdata segfaults at somewhat random intervals, but always aligned to 30 minutes (i.e. at ??:00 or ??:30; the time between crashes varies from a few minutes to several days):
```
netdata.service: Main process exited, code=killed, status=11/SEGV
```
The issue occurs:

- on the main netdata with a 32 GiB `dbengine`
- with a test instance on a different machine that gets the data streamed to it, resulting in:
  - `dbengine`: 8 GiB (8 GiB max)
  - `dbengine-tier1`: 6 GiB (8 GiB max)
  - `dbengine-tier2`: 567 MiB (8 GiB max)
The issue does not occur on the 'leaf' nodes (the ones that have no other instances streaming to them). The issue does not occur when `storage tiers` is set to 0. The issue can be forced by setting `storage tiers` to 5 and setting tiers 1, 2 and 3 to `dbengine tier <> update every iterations = 60`.

This also triggers a log message that tier 3 is more than 65535 times tier 0 and is being disabled. My guess is that this is not handled properly somewhere in the code: the code still accesses the disabled tier 3 and tier 4 databases, and thus segfaults.
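The 65535 limit mentioned in that log message can be sanity-checked with a bit of arithmetic (an assumption on my part: each tier's effective granularity relative to tier 0 is the product of the per-tier `update every iterations` values, here 60 for every tier):

```shell
# granularity of each tier relative to tier 0, with
# "dbengine tier <N> update every iterations = 60" for every tier
g=1
for tier in 1 2 3 4; do
  g=$((g * 60))
  if [ "$g" -le 65535 ]; then
    echo "tier $tier: ${g}x tier 0 -> ok"
  else
    echo "tier $tier: ${g}x tier 0 -> disabled (exceeds 65535)"
  fi
done
```

Tier 3 comes out at 216000x tier 0, which would explain why tiers 3 and 4 get disabled while tiers 1 and 2 survive.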
The `daemon.log` just ends in those cases, with no discernible similarities. Often there are several minutes of silence before the crash; sometimes there is an immediate `thread=RCVR[...] ...: receive thread ended (task id ...)` just before the crash.
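For anyone trying to correlate the crashes, a rough way to pull the relevant records out of the journal (assuming netdata runs as the `netdata` systemd unit and logs the messages quoted above):

```
$ journalctl -u netdata --no-pager | grep -E 'SEGV|receive thread ended'
```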
from netdata.
I was referring to your "Steps to reproduce":

> Config reverting changes:
> Install netdata with nightly and auto-updates
> check after each nightly-update, whether your config is still untouched.

and "Expected behavior":

> I expect my settings to remain untouched

My understanding of that plus the issue title was: `netdata.conf` is being overwritten on every update.
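The "check after each nightly-update" step can be automated with a checksum; a minimal sketch using plain coreutils (the helper names `baseline`/`check` are mine, not netdata's):

```shell
# record a baseline checksum of a config file, then re-check it later;
# `check` exits non-zero iff the file changed since `baseline` was run
baseline() { sha256sum "$1" > "$1.sha256"; }
check()    { sha256sum -c --quiet "$1.sha256"; }

# usage, with the path from this thread:
#   baseline /etc/netdata/netdata.conf
#   ... nightly update runs ...
#   check /etc/netdata/netdata.conf || echo "netdata.conf was modified!"
```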
And now you are saying:

> I stated, that the config is not reset on every nightly update

If the config/updates are not relevant, could you please remove this info from the description? It is not clear what the issue is.
from netdata.
> I have set the storage tiers setting to 1 at least 3 times over the last month, but netdata keeps changing it back and destroying data irrecoverably.

That means that over the course of ~30 nightly updates I discovered the reset 3 times, and there might have been a reset that went unnoticed. That is less than 25% of the updates, but still more than the 0% I would like to have, and very far from "on every update". I very specifically phrased it the way I did, and not just "on every update", because that would have been very easy to reproduce and would have made me notice the culprit much earlier.
And it is not reverting all settings, only the one setting `[db] storage tiers`.

I agree that I phrased it somewhat unclearly. The expectation that settings remain untouched applies to all settings, not just the specific one in this instance. I know that in the past other settings have been updated in `netdata.conf` that didn't cause issues for me, but were still changes that I discovered later by accident. (For example, at one point the database settings were moved from the `[netdata]` section into their own `[db]` section. Also something I explicitly do not want done automatically!)
Add a popup to the dashboard, alert all health notification receivers, just prevent netdata from starting with a log notification, whatever: just don't touch the configs without properly informing me. I do not read every single nightly changelog; I was expecting the monitoring software to inform me that something has become wonky with the current setup and that I need to take steps, or at the very least pay a bit more attention to the server(s), instead of just letting them do their thing while the TV on the wall shows "all green".
from netdata.
- Yes, we change `netdata.conf`. This is unavoidable.
- But we keep backward compatibility, so database settings that were in `[global]` still work.
- The Netdata updater doesn't overwrite the `netdata.conf` file.
- We do make non-backward-compatible changes in general, that is true. But we announce them in the release notes. See the v1.44.0 deprecation notice, for example.

> I do not read every single nightly changelog

Consider using the stable version.
from netdata.
That is what I changed to a few days ago, in the hope that it works better. The segmentation fault I mentioned occurs in the stable version as well.

```
kickstart.sh --reinstall --stable-channel --auto-update --disable-cloud
```

and on some nodes

```
kickstart.sh --reinstall --stable-channel --auto-update --disable-cloud --build-only
```

because the static build wouldn't even start without segfaulting...

> And while it might unavoidable to change netdata.conf

Please, for the love of god, at least create a backup before changing stuff! As I have mentioned earlier, my netdata configs are now all flagged immutable, because I apparently can't trust netdata with its config housekeeping.
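For reference, flagging the configs immutable looks like this on ext4/btrfs (uses `chattr`/`lsattr` from e2fsprogs; needs root):

```
$ sudo chattr +i /etc/netdata/netdata.conf   # set the immutable flag
$ lsattr /etc/netdata/netdata.conf           # the 'i' flag should now be listed
$ sudo chattr -i /etc/netdata/netdata.conf   # drop the flag before editing yourself
```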
from netdata.
We are discussing that the automatic re-enabling causes data loss in the database: some updates re-enable storage tiers, which then causes segmentation faults. So yes, we are discussing the segmentation fault.
from netdata.
Ok, the "re-enabling" issue I can't reproduce, and unfortunately I am not sure how I can help you with it. I am pretty sure there is some misunderstanding; I am certain that updating Netdata doesn't overwrite netdata.conf. If it does, that is a severe bug. That is why I am tagging @netdata/agent-sre. If they are able to help you, good.
@Pingger changing the title won't do. If you have segfaults, please do the following:

- open a new ticket.
- provide a crash dump. You can do that by installing the `netdata-dbgsym` package and using `coredumpctl` to get the crash dump.
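On Debian-based systems that flow would look roughly like this (a sketch: `coredumpctl` comes from the `systemd-coredump` package, and as the later comments show, `netdata-dbgsym` only exists for Debian-based distros):

```
$ sudo apt install systemd-coredump netdata-dbgsym
# after the next crash:
$ coredumpctl list netdata
$ coredumpctl info netdata                     # metadata plus a backtrace
$ coredumpctl dump netdata --output netdata.core
```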
from netdata.
Closing this one because the author says that #16888 is the same issue. Couldn't reproduce the OP issue (config overwrite). @netdata/agent-sre reopen if needed.
from netdata.
> @Pingger changing the title won't do. If you have segfaults, please do the following:
>
> * open a new ticket.
> * provide crashdump. You can do it by installing `netdata-dbgsym` package and using `coredumpctl` to get crashdump.
ubuntu

```
$ apt install netdata-dbgsym
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package netdata-dbgsym
```

alpine

```
$ apk add netdata-dbgsym
fetch https://dl-cdn.alpinelinux.org/alpine/edge/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
ERROR: unable to select packages:
  netdata-dbgsym (no such package):
    required by: world[netdata-dbgsym]
```

Manjaro

```
$ pacman -Sy netdata-dbgsym
:: Synchronizing package databases...
 core is up to date
 extra is up to date
 community is up to date
 multilib is up to date
 chaotic-aur is up to date
error: target not found: netdata-dbgsym
```
Sooo ... how exactly do I get that package?! @ilyam8
from netdata.
Hello @Pingger,

- `dbgsym` packages are available only for Debian distros ATM.
- I read your previous messages and am sorry you were disappointed. Let's keep it calm and take it one step at a time so we can resolve this issue.

First, we focus on one node, the node that has been hurt the most: the Ubuntu 20.04 machine.

1. Double-check the version of the Agent running (post it here).
2. Disable auto-updating (remove the netdata symlink under /etc/cron.daily).
3. If your database was corrupted you shouldn't have seen these gaps; I have seen gaps like these when two netdata instances were running, or two netdata instances were installed. So double-check how many netdata instances are running. Remember the netdata daemon spawns multiple threads, so you should see something like this:
```
$ ps -e --forest
. . . .
 2908 ?        00:00:34 netdata
 3013 ?        00:00:00  \_ netdata
 3286 ?        00:00:00      \_ bash
 3303 ?        00:00:32      \_ apps.plugin
 3309 ?        00:00:00      \_ debugfs.plugin
 3313 ?        00:00:02      \_ ebpf.plugin
 3314 ?        00:00:09      \_ go.d.plugin
 3315 ?        00:00:00      \_ nfacct.plugin
 3318 ?        00:00:06      \_ python3
 3351 ?        00:00:00      \_ sd-jrnl.plugin
 3352 ?        00:00:00      \_ NETWORK-VIEWER
```

but only once.
4. Also check the netdata deployments (`whereis netdata`). Feel free to post any output here, but keep in mind to redact any information you want.
5. If we verify 3 and 4 together, now reapply your changes under `/etc/netdata`; make use of our script `./edit-config` to do so. Cat the file `/etc/netdata/netdata.conf` and check that the changes have been applied (maybe post them here to have them as a reference). Relax and observe it for a day or two, or even more; this kind of issue takes time to resolve.
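Step 5 sketched with the paths used in this thread (needs root; the `[db]` section name is from the earlier comments):

```
$ cd /etc/netdata
$ sudo ./edit-config netdata.conf
# verify the change landed:
$ grep -A 3 '^\[db\]' /etc/netdata/netdata.conf
```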
from netdata.
1. As stated, I switched to stable since posting the issue:
```
$ netdata -W buildinfo
Packaging:
    Netdata Version ________________________________________ : v1.44.1
    Installation Type ______________________________________ : binpkg-deb
    Package Architecture ___________________________________ : x86_64
Default Directories:
    User Configurations ____________________________________ : /etc/netdata
    Stock Configurations ___________________________________ : /usr/lib/netdata/conf.d
    Permanent Databases ____________________________________ : /var/lib/netdata
Operating System:
    Kernel Version _________________________________________ : 5.4.0-167-generic
    Operating System _______________________________________ : Ubuntu
    Operating System Version _______________________________ : 20.04.6 LTS (Focal Fossa)
Features:
    Tiering (multiple dbs with different metrics resolution) : YES (5)
...
```
2. Done.
3. You will notice there is the one "primary" netdata (the one I opened the issue with) and there are 2 netdata instances in incus (lxd) containers. One is the parent of the primary netdata and is used as a backup. The other is a "web-monitor" that does HTTP checks, but far less frequently and with different dbengine settings. (Interestingly, the "web-monitor" has yet to crash even once.)
`ps -e --forest`; I suggest scanning the list from the bottom:

```
    PID TTY          TIME CMD
...
   2633 ?       00:00:09 incusd
   2772 ?       00:00:00  \_ init
...
 130597 ?       00:02:53  \_ netdata
 130601 ?       00:00:00      \_ netdata
 130858 ?       00:01:45      \_ go.d.plugin
...
```
00:00:00 \_ systemd-udevd 1650628 ? 00:00:16 \_ systemd-network 1650647 ? 00:00:00 \_ cron 1650648 ? 00:00:00 \_ dbus-daemon 1650659 ? 00:00:00 \_ networkd-dispat 1650665 ? 00:00:01 \_ rsyslogd 1650679 ? 00:00:00 \_ systemd-logind 1650680 ? 00:00:02 \_ systemd-resolve 1650682 ? 02:44:00 \_ java 1650793 ? 00:00:04 \_ sshd 1650808 pts/0 00:00:00 \_ agetty 1650840 ? 00:54:25 \_ Xtigervnc 1650850 ? 00:00:00 \_ vncserver 1650851 ? 00:00:00 | \_ Xvnc-session 1650859 ? 00:00:00 | \_ vncconfig 1650860 ? 00:00:20 | \_ x-window-manage 1651017 ? 00:00:01 | \_ ssh-agent 1651143 ? 00:15:04 \_ Xtigervnc 1651152 ? 00:00:00 \_ vncserver 1651153 ? 00:00:00 | \_ Xvnc-session 1651154 ? 00:00:00 | \_ vncconfig 1651155 ? 00:00:17 | \_ x-window-manage 1651224 ? 00:00:01 | \_ ssh-agent 1651477 ? 00:00:02 \_ Xtigervnc 1651481 ? 00:00:00 \_ vncserver 1651482 ? 00:00:00 | \_ Xvnc-session 1651483 ? 00:00:00 | \_ vncconfig 1651484 ? 00:00:16 | \_ x-window-manage 1651569 ? 00:00:01 | \_ ssh-agent 2312655 ? 01:20:05 \_ firefox 2312798 ? 00:00:00 \_ Socket Process 2312873 ? 00:00:00 \_ Privileged Cont 2312989 ? 00:39:48 \_ WebExtensions 2313137 ? 00:24:22 \_ Isolated Web Co 2313276 ? 00:00:00 \_ Utility Process 2313285 ? 00:19:42 \_ Isolated Web Co 2313334 ? 00:05:13 \_ Isolated Web Co 2313947 ? 00:19:23 \_ Isolated Web Co 2314418 ? 00:04:50 \_ Isolated Web Co 2314450 ? 00:06:12 \_ Isolated Web Co 244835 ? 00:00:00 \_ RDD Process 253943 ? 00:04:04 \_ Isolated Web Co 254605 ? 00:10:42 \_ Isolated Web Co 254773 ? 00:00:00 \_ Web Content 1650506 ? 00:00:33 incusd 1650646 ? 00:00:18 incusd 1650760 ? 00:00:05 incusd 1650900 ? 00:00:00 incusd 2484018 ? 00:01:31 systemd-udevd 2484390 ? 00:00:20 systemd-network 2484438 ? 00:27:12 systemd-resolve 2484448 ? 00:01:47 systemd-journal 2484672 ? 00:00:00 systemd-timesyn 2486499 ? 00:09:51 incusd 2486684 ? 00:14:01 \_ dnsmasq 2486721 ? 00:00:00 \_ dnsmasq 2486761 ? 00:00:45 \_ dnsmasq 616556 ? 
00:00:00 \_ incusd 616565 pts/1 00:00:00 | \_ fish 3839004 pts/1 00:00:41 | \_ watch 626463 ? 00:00:00 \_ incusd 626468 pts/2 00:00:01 \_ fish 526620 ? 00:00:00 incusd 526634 ? 00:00:00 \_ init 526960 ? 00:00:00 \_ syslogd 526996 ? 00:00:00 \_ crond 527097 ? 00:00:00 \_ udhcpc 527156 pts/0 00:00:00 \_ getty 3263170 ? 00:26:41 \_ netdata 3263184 ? 00:00:00 \_ netdata 3263590 ? 00:00:27 \_ go.d.plugin 741603 ? 19:32:16 netdata 741612 ? 00:00:00 \_ netdata 742065 ? 00:43:09 \_ python3 790401 ? 00:00:00 | \_ uptime <defunct> 742067 ? 00:14:47 \_ fping 742072 ? 00:47:06 \_ go.d.plugin 742100 ? 06:01:55 \_ apps.plugin 742101 ? 00:10:34 \_ ebpf.plugin 742108 ? 00:02:39 \_ SDMAIN 2278477 ? 00:00:06 \_ debugfs.plugin 2278953 ? 00:00:08 \_ nfacct.plugin 740023 ? 00:00:02 \_ bash 741604 ? 00:01:39 systemd-journal 2006287 ? 00:00:00 incusd 2006304 ? 00:00:01 \_ init 2006559 ? 00:00:01 \_ syslogd 2006593 ? 00:00:00 \_ crond 2006701 ? 00:00:00 \_ udhcpc 2006740 ? 00:00:00 \_ supervise-daemo 2006741 ? 00:03:49 | \_ gitea 2006813 ? 00:00:04 \_ sshd 2006818 pts/0 00:00:00 \_ getty 2237930 ? 00:00:00 systemd 2237937 ? 00:00:00 \_ (sd-pam) 3274626 ? 00:00:13 ssh 236588 ? 00:00:18 java 384083 ? 10:37:49 \_ ffmpeg 680409 ? 00:00:00 fwupd
whereis netdata:
netdata: /usr/sbin/netdata /usr/lib/netdata /etc/netdata /usr/libexec/netdata /usr/share/netdata
Additional Information:
The config of the mentioned "web-monitor", which hasn't crashed yet:
web-monitor: netdata.conf
[global]
# run as user = netdata
# glibc malloc arena max for plugins = 1
# cpu cores = 8
# libuv worker threads = 48
# host access prefix =
hostname = <hostname>
# enable metric correlations = yes
# metric correlations method = ks2
# timezone = UTC
# OOM score = 0
# process scheduling policy = batch
# process nice level = 19
# pthread stack size = 131072
[db]
update every = 10
mode = dbengine
dbengine page cache size MB = 32
# dbengine extent cache size MB = 0
# dbengine enable journal integrity check = no
dbengine disk space MB = 4096
# dbengine multihost disk space MB = 256
# memory deduplication (ksm) = yes
# cleanup obsolete charts after secs = 3600
# gap when lost iterations above = 1
# enable replication = yes
# seconds to replicate = 86400
# seconds per replication step = 600
# cleanup orphan hosts after secs = 3600
# dbengine use direct io = yes
# dbengine pages per extent = 64
storage tiers = 1
dbengine parallel initialization = yes
dbengine tier 1 multihost disk space MB = 128
dbengine tier 1 update every iterations = 60
dbengine tier 1 backfill = new
dbengine tier 2 multihost disk space MB = 64
dbengine tier 2 update every iterations = 60
dbengine tier 2 backfill = new
delete obsolete charts files = yes
delete orphan hosts files = yes
enable zero metrics = yes
# replication threads = 1
[directories]
# config = /opt/netdata/etc/netdata
# stock config = /opt/netdata/usr/lib/netdata/conf.d
# log = /opt/netdata/var/log/netdata
# web = /opt/netdata/usr/share/netdata/web
# cache = /opt/netdata/var/cache/netdata
# lib = /opt/netdata/var/lib/netdata
# home = /root
# lock = /opt/netdata/var/lib/netdata/lock
# plugins = "/opt/netdata/usr/libexec/netdata/plugins.d" "/opt/netdata/etc/netdata/custom-plugins.d"
# registry = /opt/netdata/var/lib/netdata/registry
# stock health config = /opt/netdata/usr/lib/netdata/conf.d/health.d
# health config = /opt/netdata/etc/netdata/health.d
[logs]
# debug flags = 0x0000000000000000
# facility = daemon
# logs flood protection period = 60
# logs to trigger flood protection = 1000
# level = info
# debug = /opt/netdata/var/log/netdata/debug.log
# daemon = /opt/netdata/var/log/netdata/daemon.log
# collector = /opt/netdata/var/log/netdata/collector.log
# access = /opt/netdata/var/log/netdata/access.log
# health = /opt/netdata/var/log/netdata/health.log
[environment variables]
# PATH = /opt/netdata/bin:/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin
# PYTHONPATH =
# TZ = :/etc/localtime
[host labels]
# name = value
[sqlite]
# auto vacuum = INCREMENTAL
# synchronous = NORMAL
# journal mode = WAL
# temp store = MEMORY
# journal size limit = 16777216
# cache size = -2000
[cloud]
# conversation log = no
# proxy = env
# query thread count = 4
[ml]
enabled = no
# maximum num samples to train = 21600
# minimum num samples to train = 900
# train every = 10800
# number of models per dimension = 18
# delete models older than = 604800
# num samples to diff = 1
# num samples to smooth = 3
# num samples to lag = 5
# random sampling ratio = 0.20000
# maximum number of k-means iterations = 1000
# dimension anomaly score threshold = 0.99000
# host anomaly rate threshold = 1.00000
# anomaly detection grouping method = average
# anomaly detection grouping duration = 300
# num training threads = 4
# flush models batch size = 128
# dimension anomaly rate suppression window = 900
# dimension anomaly rate suppression threshold = 450
# enable statistics charts = yes
# hosts to skip from training = !*
# charts to skip from training = netdata.*
# stream anomaly detection charts = yes
[health]
# silencers file = /opt/netdata/var/lib/netdata/health.silencers.json
# enabled = yes
# is ephemeral = no
# has unstable connection = no
# run at least every seconds = 10
# postpone alarms during hibernation for seconds = 60
# default repeat warning = never
# default repeat critical = never
# in memory max health log entries = 1000
# health log history = 432000
# enabled alarms = *
# script to execute on alarm = /opt/netdata/usr/libexec/netdata/plugins.d/alarm-notify.sh
# use summary for notifications = yes
# enable stock health configuration = yes
[web]
# ssl key = /opt/netdata/etc/netdata/ssl/key.pem
# ssl certificate = /opt/netdata/etc/netdata/ssl/cert.pem
# tls version = 1.3
# tls ciphers = none
# ses max tg_des_window = 15
# des max tg_des_window = 15
# mode = static-threaded
# listen backlog = 4096
# default port = 19999
# bind to = *
# disconnect idle clients after seconds = 60
# timeout for first request = 60
# accept a streaming request every seconds = 0
# respect do not track policy = no
# x-frame-options response header =
# allow connections from = localhost *
# allow connections by dns = heuristic
# allow dashboard from = localhost *
# allow dashboard by dns = heuristic
# allow badges from = *
# allow badges by dns = heuristic
# allow streaming from = *
# allow streaming by dns = heuristic
# allow netdata.conf from = localhost fd* 10.* 192.168.* 172.16.* 172.17.* 172.18.* 172.19.* 172.20.* 172.21.* 172.22.* 172.23.* 172.24.* 172.25.* 172.26.* 172.27.* 172.28.* 172.29.* 172.30.* 172.31.* UNKNOWN
# allow netdata.conf by dns = no
# allow management from = localhost
# allow management by dns = heuristic
# enable gzip compression = yes
# gzip compression strategy = default
# gzip compression level = 3
# ssl skip certificate verification = no
# web server threads = 6
# web server max sockets = 262144
[httpd]
# enabled = no
[registry]
enabled = no
# netdata unique id file = /opt/netdata/var/lib/netdata/registry/netdata.public.unique.id
# registry db file = /opt/netdata/var/lib/netdata/registry/registry.db
# registry log file = /opt/netdata/var/lib/netdata/registry/registry-log.db
# registry save db every new entries = 1000000
# registry expire idle persons days = 365
# registry domain =
# registry to announce = https://registry.my-netdata.io
# registry hostname = web-monitor
# verify browser cookies support = yes
# enable cookies SameSite and Secure = yes
# max URL length = 1024
# max URL name length = 50
# use mmap = no
# netdata management api key file = /opt/netdata/var/lib/netdata/netdata.api.key
# allow from = *
# allow by dns = heuristic
[global statistics]
update every = 1
[plugins]
timex = no
idlejitter = no
netdata monitoring = yes
profile = no
tc = no
diskspace = no
proc = no
cgroups = no
statsd = no
enable running new plugins = no
# check for new plugins every = 60
slabinfo = no
apps = no
charts.d = no
debugfs = no
ebpf = no
go.d = yes
ioping = no
nfacct = no
perf = no
python.d = no
[...]
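For reference, the tier options above (dbengine tier 1/2 ...) only take effect when storage tiers is greater than 1; with storage tiers = 1 they are read but unused. A hedged sketch of what the same [db] section would look like with three tiers actually enabled (values are illustrative, taken from the config above, not a recommendation):

```ini
[db]
    mode = dbengine
    storage tiers = 3
    # tier 0: base resolution (one point per "update every")
    dbengine multihost disk space MB = 4096
    # tier 1: one point per 60 tier-0 iterations
    dbengine tier 1 multihost disk space MB = 128
    dbengine tier 1 update every iterations = 60
    dbengine tier 1 backfill = new
    # tier 2: one point per 60 tier-1 iterations
    dbengine tier 2 multihost disk space MB = 64
    dbengine tier 2 update every iterations = 60
    dbengine tier 2 backfill = new
```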
Another issue:
The "backup" container can't use the 'edit-config' tool. I have been copying the configs from 'orig' (/usr/lib/netdata/conf.d) to /etc/netdata and editing them manually with nano.
ERROR: Unable to find a usable container tool stack. I support Docker and Podman.
I find that error somewhat weird: while I am running in an incus/lxd container, netdata should behave as if it were not running in a container at all, since incus/lxd containers are full system containers, not application containers like Docker's.
Looking at the code of edit-config, it appears that it attempts to access a nested container. There are no nested containers, and nesting is, in fact, not even enabled.
It might be confused by the container's hostname, "netdata-backup-storage".
The separate commands in the running_in_container function all error out, so no return 0 is ever reached ...
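For illustration, container-detection helpers like running_in_container typically chain several probes and return 0 on the first one that succeeds. A minimal sketch of that pattern (this is not netdata's actual code; the probe commands are common examples of the technique):

```shell
#!/bin/sh
# Chain-of-probes container check: each probe either succeeds
# (return 0 = "running in a container") or falls through to the next.
running_in_container() {
  # systemd-detect-virt exits 0 and prints the type (e.g. "lxc") in a container
  if command -v systemd-detect-virt >/dev/null 2>&1; then
    systemd-detect-virt --container >/dev/null 2>&1 && return 0
  fi
  # Docker creates /.dockerenv inside its containers
  [ -f /.dockerenv ] && return 0
  # cgroup paths of PID 1 often name the runtime (docker, lxc, incus, ...)
  grep -qE '(docker|lxc|incus)' /proc/1/cgroup 2>/dev/null && return 0
  # every probe failed: assume bare metal or a VM
  return 1
}
```

In the report above, all such probes error out, so the function never returns 0, which is why the caller then misbehaves.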
Edit: edit-config.sh was fixed in #16825.
from netdata.
Wow, OK, there are a ton of things to take into account to have multiple netdata instances running on the same host, even as containers.
I'm not an expert with lxd, but I think the principles are nearly the same. First of all, off the top of my head, you need:
- Each deployment on a different port,
- If an instance uses storage (dbengine), make sure it has its own isolated directories to manage.
Q1: Why do you have a backup on the same machine? I mean, there is no use for it except in the case of something like an EBS/Ceph volume as storage.
Q2: How does the web-monitor container communicate with the main deployment? Or doesn't it, and how do you inspect those metrics?
Q3: Could you share a bit more about the "backup" container? I mean, is it the netdata container that we ship on our Docker Hub that . . . . . and how do you implement backups there...?
- Yes, each instance gets a different port: 19997, 19998, 19999. And even then, each container has its own virtual interface, distinct from the host interface, which routes through a virtual incus/lxd router for internet access, if configured. (https://linuxcontainers.org/incus/docs/main/networks/)
- That is handled automatically, because each container has its own rootfs.
Q1. Because of the data loss caused by the segfaults: streaming the data between two instances keeps it alive, because it is restored from the one that didn't segfault.
An additional benefit is that I can reinstall one of the instances without losing the historic data.
Q2. Very easily. Effectively, you have a virtual host-only LAN between container groups and the host. The x.x.x.1 is the host, so streaming to x.x.x.1:19999 works flawlessly. (e.g. in a 10.1.1.0/24 container subnet, 10.1.1.1 is the host, while 10.1.1.2 - 10.1.1.254 are assigned to containers.)
Also, the web-monitor is accessible from the outside through an apache2 reverse proxy (the apache2 instance is in the same incus network and can access the web-monitor as if it were on the same physical LAN).
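The child-to-parent streaming described above is configured via stream.conf on both sides. A minimal sketch, with a placeholder API key (generate your own, e.g. with uuidgen) and the 10.1.1.1 host address from the example:

```ini
# on the child (leaf node): stream.conf
[stream]
    enabled = yes
    destination = 10.1.1.1:19999
    api key = 00000000-0000-0000-0000-000000000000

# on the parent (e.g. the web-monitor): stream.conf
[00000000-0000-0000-0000-000000000000]
    enabled = yes
```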
Q3. Backup container: a netdata instance with everything disabled except netdata monitoring and the go.d/filecheck plugin (for the dbengine directory size), running in an Alpine container. Unlike docker, where you start application images, in incus/lxd you start OS images and customize them like you would a bare-metal server. That backup container is also accessible from the outside through an apache2 reverse proxy.
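The dbengine-directory-size monitoring mentioned for the backup container would be a go.d/filecheck job along these lines (a hedged sketch; the path assumes a default package install and may differ on other setups):

```yaml
# go.d/filecheck.conf
jobs:
  - name: dbengine
    dirs:
      collect_dir_size: yes
      include:
        - /var/cache/netdata/dbengine
```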