Recently I have been running OVN performance stress tests, this round against the OVN databases deployed in standalone (non-clustered) mode.
After the stress test, the CPU utilization of the ovn-northd process stayed very high, close to 100%, while the CPU utilization of the OVN-NB/OVN-SB ovsdb-server processes was very low, close to 0.
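To confirm which process is burning CPU without watching top interactively, per-process CPU can be sampled straight from /proc. A minimal Linux-only sketch (not an OVN tool; pass it whatever pids `pidof ovn-northd` and `pidof ovsdb-server` report):

```python
import os
import time

def cpu_percent(pid, interval=1.0):
    """Approximate CPU % of one process by sampling /proc/<pid>/stat twice.

    utime and stime (fields 14 and 15 of the stat file) are the process's
    cumulative user and system CPU time in clock ticks.
    """
    hz = os.sysconf('SC_CLK_TCK')  # clock ticks per second

    def ticks():
        with open(f'/proc/{pid}/stat') as f:
            # Split after the closing ')' of the comm field, which may
            # itself contain spaces and parentheses.
            fields = f.read().rsplit(')', 1)[1].split()
        return int(fields[11]) + int(fields[12])  # utime + stime

    before = ticks()
    time.sleep(interval)
    return 100.0 * (ticks() - before) / hz / interval

# Example (hypothetical pid): cpu_percent(75974) for ovn-northd.
```

A value pinned near 100 for ovn-northd while both ovsdb-server pids report roughly 0 matches the symptom described above.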
ovsdb-server log for the OVN-NB database:

2020-06-29T02:01:09.667Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.12.0
2020-06-29T02:01:19.677Z|00003|memory|INFO|6656 kB peak resident set size after 10.0 seconds
2020-06-29T02:01:19.677Z|00004|memory|INFO|cells:31 monitors:2 sessions:1
2020-06-29T02:48:42.312Z|00005|memory|INFO|peak resident set size grew 56% in last 2842.6 seconds, from 6656 kB to 10352 kB
2020-06-29T02:48:42.312Z|00006|memory|INFO|cells:34572 monitors:2 sessions:2
2020-06-29T02:50:32.331Z|00007|memory|INFO|peak resident set size grew 54% in last 110.0 seconds, from 10352 kB to 15956 kB
2020-06-29T02:50:32.332Z|00008|memory|INFO|cells:139273 monitors:2 sessions:2
2020-06-29T02:53:52.381Z|00009|memory|INFO|peak resident set size grew 52% in last 200.0 seconds, from 15956 kB to 24316 kB
2020-06-29T02:53:52.381Z|00010|memory|INFO|cells:299166 monitors:2 sessions:2
2020-06-29T03:00:22.523Z|00011|memory|INFO|peak resident set size grew 52% in last 390.1 seconds, from 24316 kB to 36872 kB
2020-06-29T03:00:22.523Z|00012|memory|INFO|cells:537462 monitors:2 sessions:2
2020-06-29T03:00:32.532Z|00013|memory|INFO|peak resident set size grew 116% in last 10.0 seconds, from 36872 kB to 79768 kB
2020-06-29T03:00:32.532Z|00014|memory|INFO|cells:542471 monitors:2 sessions:2
2020-06-29T03:16:32.928Z|00015|memory|INFO|peak resident set size grew 79% in last 960.4 seconds, from 79768 kB to 142532 kB
2020-06-29T03:16:32.928Z|00016|memory|INFO|cells:918878 monitors:2 sessions:2
2020-06-29T04:03:20.635Z|00017|poll_loop|INFO|wakeup due to 0-ms timeout at ovsdb/jsonrpc-server.c:599 (52% CPU usage)
2020-06-29T04:03:20.651Z|00018|poll_loop|INFO|wakeup due to 0-ms timeout at ovsdb/jsonrpc-server.c:599 (52% CPU usage)
2020-06-29T04:03:20.651Z|00019|poll_loop|INFO|wakeup due to [POLLIN] on fd 22 (/var/run/ovn/ovnnb_db.sock<->) at lib/stream-fd.c:157 (52% CPU usage)
2020-06-29T04:03:20.662Z|00020|poll_loop|INFO|wakeup due to 0-ms timeout at ovsdb/jsonrpc-server.c:599 (52% CPU usage)
2020-06-29T04:03:20.707Z|00021|poll_loop|INFO|wakeup due to [POLLIN] on fd 22 (/var/run/ovn/ovnnb_db.sock<->) at lib/stream-fd.c:157 (52% CPU usage)
2020-06-29T04:03:20.713Z|00022|poll_loop|INFO|wakeup due to 0-ms timeout at ovsdb/jsonrpc-server.c:599 (52% CPU usage)
2020-06-29T04:03:20.727Z|00023|poll_loop|INFO|wakeup due to [POLLIN] on fd 22 (/var/run/ovn/ovnnb_db.sock<->) at lib/stream-fd.c:157 (52% CPU usage)
2020-06-29T04:03:20.748Z|00024|poll_loop|INFO|wakeup due to 0-ms timeout at ovsdb/jsonrpc-server.c:599 (52% CPU usage)
2020-06-29T04:03:20.748Z|00025|poll_loop|INFO|wakeup due to [POLLIN] on fd 22 (/var/run/ovn/ovnnb_db.sock<->) at lib/stream-fd.c:157 (52% CPU usage)
2020-06-29T04:03:20.763Z|00026|poll_loop|INFO|wakeup due to 0-ms timeout at ovsdb/jsonrpc-server.c:599 (52% CPU usage)
2020-06-29T04:03:25.019Z|00027|memory|INFO|peak resident set size grew 54% in last 2812.1 seconds, from 142532 kB to 219124 kB
2020-06-29T04:03:25.019Z|00028|memory|INFO|cells:1599256 monitors:2 sessions:2
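The NB server's resident set tracks its cell count almost linearly. To eyeball the ratio, the paired memory/cells lines can be parsed; a throwaway sketch (the embedded lines are copied from the log above, and nothing here is an OVN utility):

```python
import re

# Two sample line pairs copied from the ovsdb-server log above.
LOG = """\
2020-06-29T02:01:19.677Z|00003|memory|INFO|6656 kB peak resident set size after 10.0 seconds
2020-06-29T02:01:19.677Z|00004|memory|INFO|cells:31 monitors:2 sessions:1
2020-06-29T04:03:25.019Z|00027|memory|INFO|peak resident set size grew 54% in last 2812.1 seconds, from 142532 kB to 219124 kB
2020-06-29T04:03:25.019Z|00028|memory|INFO|cells:1599256 monitors:2 sessions:2
"""

def rss_per_cell(log_text):
    """Yield (rss_kb, cells, kb_per_cell) for each RSS/cells line pair.

    RSS lines come in two shapes: "<N> kB peak resident set size ..." and
    "... grew X% ..., from <M> kB to <N> kB"; both are matched below.
    """
    rss = None
    for line in log_text.splitlines():
        m = re.search(r'(\d+) kB$|(\d+) kB peak', line)
        if m:
            rss = int(m.group(1) or m.group(2))
        m = re.search(r'cells:(\d+)', line)
        if m and rss is not None:
            yield rss, int(m.group(1)), rss / int(m.group(1))
            rss = None

for rss, cells, ratio in rss_per_cell(LOG):
    print(f'{cells:>8} cells -> {rss:>7} kB ({ratio:.3f} kB/cell)')
    # settles at roughly 0.14 kB per cell once the database is large
```

Interpreting the numbers this way suggests the NB server's memory growth is simply proportional to the database contents rather than a leak.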
ovsdb-server log for the OVN-SB database:

2020-06-29T02:01:09.728Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.12.0
2020-06-29T02:01:19.738Z|00003|memory|INFO|6656 kB peak resident set size after 10.0 seconds
2020-06-29T02:01:19.738Z|00004|memory|INFO|cells:227 monitors:2 sessions:1
2020-06-29T02:37:25.624Z|00005|jsonrpc|WARN|unix#2: send error: Broken pipe
2020-06-29T02:37:25.625Z|00006|reconnect|WARN|unix#2: connection dropped (Broken pipe)
2020-06-29T02:37:27.489Z|00007|jsonrpc|WARN|unix#3: send error: Broken pipe
2020-06-29T02:37:27.490Z|00008|reconnect|WARN|unix#3: connection dropped (Broken pipe)
2020-06-29T02:39:56.123Z|00009|jsonrpc|WARN|unix#6: receive error: Connection reset by peer
2020-06-29T02:39:56.123Z|00010|reconnect|WARN|unix#6: connection dropped (Connection reset by peer)
2020-06-29T02:40:46.428Z|00011|jsonrpc|WARN|unix#8: receive error: Connection reset by peer
2020-06-29T02:40:46.428Z|00012|reconnect|WARN|unix#8: connection dropped (Connection reset by peer)
2020-06-29T02:48:12.367Z|00013|memory|INFO|peak resident set size grew 90% in last 2812.6 seconds, from 6656 kB to 12628 kB
2020-06-29T02:48:12.367Z|00014|memory|INFO|cells:46047 monitors:3 sessions:34
2020-06-29T02:48:22.370Z|00015|memory|INFO|peak resident set size grew 151% in last 10.0 seconds, from 12628 kB to 31644 kB
2020-06-29T02:48:22.370Z|00016|memory|INFO|cells:97656 monitors:3 sessions:34
2020-06-29T02:49:12.608Z|00017|memory|INFO|peak resident set size grew 67% in last 50.2 seconds, from 31644 kB to 52816 kB
2020-06-29T02:49:12.608Z|00018|memory|INFO|cells:419671 monitors:3 sessions:34
2020-06-29T02:50:03.187Z|00019|memory|INFO|peak resident set size grew 62% in last 50.6 seconds, from 52816 kB to 85820 kB
2020-06-29T02:50:03.187Z|00020|memory|INFO|cells:729595 monitors:3 sessions:34
2020-06-29T02:51:14.654Z|00021|memory|INFO|peak resident set size grew 52% in last 71.5 seconds, from 85820 kB to 130108 kB
2020-06-29T02:51:14.654Z|00022|memory|INFO|cells:1116719 monitors:3 sessions:34
2020-06-29T02:53:34.956Z|00023|memory|INFO|peak resident set size grew 55% in last 140.3 seconds, from 130108 kB to 201172 kB
2020-06-29T02:53:34.957Z|00024|memory|INFO|cells:1813179 monitors:3 sessions:34
2020-06-29T02:57:27.556Z|00025|memory|INFO|peak resident set size grew 52% in last 232.6 seconds, from 201172 kB to 305564 kB
2020-06-29T02:57:27.556Z|00026|memory|INFO|cells:2789603 monitors:3 sessions:34
2020-06-29T03:04:24.792Z|00027|timeval|WARN|Unreasonably long 7210ms poll interval (6019ms user, 674ms system)
2020-06-29T03:04:24.792Z|00028|timeval|WARN|faults: 315533 minor, 0 major
2020-06-29T03:04:24.792Z|00029|timeval|WARN|disk: 0 reads, 304840 writes
2020-06-29T03:04:24.792Z|00030|timeval|WARN|context switches: 1376 voluntary, 602 involuntary
2020-06-29T03:04:24.792Z|00031|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=db894c43:
2020-06-29T03:04:24.792Z|00032|coverage|INFO|hmap_pathological 7.4/sec 10.200/sec 4.8067/sec total: 17306
2020-06-29T03:04:24.792Z|00033|coverage|INFO|hmap_expand 42731.0/sec 8333.533/sec 757.0369/sec total: 2726445
2020-06-29T03:04:24.792Z|00034|coverage|INFO|lockfile_lock 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-06-29T03:04:24.792Z|00035|coverage|INFO|poll_create_node 129.2/sec 218.500/sec 145.9342/sec total: 525996
2020-06-29T03:04:24.792Z|00036|coverage|INFO|poll_zero_timeout 0.0/sec 0.000/sec 0.0261/sec total: 99
2020-06-29T03:04:24.792Z|00037|coverage|INFO|seq_change 0.0/sec 0.000/sec 0.0000/sec total: 3
2020-06-29T03:04:24.792Z|00038|coverage|INFO|pstream_open 0.0/sec 0.000/sec 0.0003/sec total: 3
2020-06-29T03:04:24.792Z|00039|coverage|INFO|util_xalloc 2314482.0/sec 445723.267/sec 41307.0894/sec total: 148731186
2020-06-29T03:04:24.792Z|00040|coverage|INFO|71 events never hit
2020-06-29T03:04:24.792Z|00041|poll_loop|INFO|wakeup due to [POLLIN] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (92% CPU usage)
2020-06-29T03:04:24.792Z|00042|memory|INFO|peak resident set size grew 405% in last 417.2 seconds, from 305564 kB to 1541964 kB
2020-06-29T03:04:24.792Z|00043|memory|INFO|cells:4077200 monitors:3 sessions:34
2020-06-29T03:04:24.793Z|00044|poll_loop|INFO|wakeup due to [POLLIN] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (92% CPU usage)
2020-06-29T03:04:24.794Z|00045|poll_loop|INFO|wakeup due to [POLLIN] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (92% CPU usage)
2020-06-29T03:04:24.795Z|00046|poll_loop|INFO|wakeup due to [POLLIN] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (92% CPU usage)
2020-06-29T03:04:24.795Z|00047|poll_loop|INFO|wakeup due to [POLLIN] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (92% CPU usage)
2020-06-29T03:04:24.796Z|00048|poll_loop|INFO|wakeup due to [POLLIN] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (92% CPU usage)
2020-06-29T03:04:24.797Z|00049|poll_loop|INFO|wakeup due to [POLLIN] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (92% CPU usage)
2020-06-29T03:04:24.798Z|00050|poll_loop|INFO|wakeup due to [POLLIN] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (92% CPU usage)
2020-06-29T03:04:24.798Z|00051|poll_loop|INFO|wakeup due to [POLLIN] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (92% CPU usage)
2020-06-29T03:04:24.799Z|00052|poll_loop|INFO|wakeup due to [POLLIN] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (92% CPU usage)
2020-06-29T03:34:28.261Z|00053|timeval|WARN|Unreasonably long 13569ms poll interval (11639ms user, 1074ms system)
2020-06-29T03:34:28.262Z|00054|timeval|WARN|faults: 614892 minor, 0 major
2020-06-29T03:34:28.262Z|00055|timeval|WARN|disk: 0 reads, 573808 writes
2020-06-29T03:34:28.262Z|00056|timeval|WARN|context switches: 2355 voluntary, 1133 involuntary
2020-06-29T03:34:28.262Z|00057|coverage|INFO|Skipping details of duplicate event coverage for hash=db894c43
2020-06-29T03:34:28.262Z|00058|poll_loop|INFO|Dropped 63 log messages in last 1804 seconds (most recently, 1801 seconds ago) due to excessive rate
2020-06-29T03:34:28.262Z|00059|poll_loop|INFO|wakeup due to [POLLIN] on fd 18 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:274 (94% CPU usage)
2020-06-29T03:34:28.262Z|00060|poll_loop|INFO|wakeup due to [POLLIN][POLLHUP] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (94% CPU usage)
2020-06-29T03:34:28.262Z|00061|memory|INFO|peak resident set size grew 87% in last 1803.5 seconds, from 1541964 kB to 2888308 kB
2020-06-29T03:34:28.262Z|00062|memory|INFO|cells:7666606 monitors:3 sessions:34
2020-06-29T03:34:28.263Z|00063|poll_loop|INFO|wakeup due to [POLLIN][POLLHUP] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (94% CPU usage)
2020-06-29T03:34:28.264Z|00064|poll_loop|INFO|wakeup due to [POLLIN][POLLHUP] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (94% CPU usage)
2020-06-29T03:34:28.264Z|00065|poll_loop|INFO|wakeup due to [POLLIN] on fd 17 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (94% CPU usage)
2020-06-29T03:34:28.265Z|00066|poll_loop|INFO|wakeup due to [POLLIN][POLLHUP] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (94% CPU usage)
2020-06-29T03:34:28.266Z|00067|poll_loop|INFO|wakeup due to [POLLIN][POLLHUP] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (94% CPU usage)
2020-06-29T03:34:37.858Z|00068|timeval|WARN|Unreasonably long 9593ms poll interval (9501ms user, 58ms system)
2020-06-29T03:34:37.858Z|00069|timeval|WARN|faults: 919 minor, 0 major
2020-06-29T03:34:37.858Z|00070|timeval|WARN|context switches: 0 voluntary, 10 involuntary
2020-06-29T03:34:37.858Z|00071|coverage|INFO|Skipping details of duplicate event coverage for hash=db894c43
2020-06-29T03:34:37.858Z|00072|poll_loop|INFO|wakeup due to [POLLIN][POLLHUP] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (100% CPU usage)
2020-06-29T03:34:37.859Z|00073|poll_loop|INFO|wakeup due to [POLLIN][POLLHUP] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (100% CPU usage)
2020-06-29T03:34:37.861Z|00074|poll_loop|INFO|wakeup due to [POLLIN][POLLHUP] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (100% CPU usage)
2020-06-29T03:34:37.862Z|00075|poll_loop|INFO|wakeup due to [POLLIN][POLLHUP] on fd 21 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:157 (100% CPU usage)
2020-06-29T03:34:40.262Z|00076|poll_loop|INFO|Dropped 442 log messages in last 3 seconds (most recently, 0 seconds ago) due to excessive rate
2020-06-29T03:34:40.262Z|00077|poll_loop|INFO|wakeup due to [POLLOUT] on fd 17 (/var/run/ovn/ovnsb_db.sock<->) at lib/stream-fd.c:153 (100% CPU usage)
ovn-northd log (later in the run; note the client-side connections to both the NB and SB sockets):

2020-06-29T04:32:02.261Z|07385|timeval|WARN|faults: 4 minor, 0 major
2020-06-29T04:32:02.261Z|07386|timeval|WARN|context switches: 24 voluntary, 5 involuntary
2020-06-29T04:32:02.261Z|07387|poll_loop|INFO|wakeup due to [POLLIN] on fd 15 (<->/var/run/ovn/ovnsb_db.sock) at lib/stream-fd.c:157 (99% CPU usage)
2020-06-29T04:32:06.519Z|07388|timeval|WARN|Unreasonably long 4259ms poll interval (4245ms user, 0ms system)
2020-06-29T04:32:06.519Z|07389|timeval|WARN|faults: 128 minor, 0 major
2020-06-29T04:32:06.519Z|07390|timeval|WARN|context switches: 16 voluntary, 1 involuntary
2020-06-29T04:32:06.520Z|07391|poll_loop|INFO|wakeup due to 0-ms timeout at lib/reconnect.c:643 (101% CPU usage)
2020-06-29T04:32:10.769Z|07392|timeval|WARN|Unreasonably long 4250ms poll interval (4236ms user, 0ms system)
2020-06-29T04:32:10.770Z|07393|timeval|WARN|faults: 128 minor, 0 major
2020-06-29T04:32:10.770Z|07394|timeval|WARN|context switches: 12 voluntary, 3 involuntary
2020-06-29T04:32:15.024Z|07395|timeval|WARN|Unreasonably long 4255ms poll interval (4238ms user, 0ms system)
2020-06-29T04:32:15.024Z|07396|timeval|WARN|faults: 4 minor, 0 major
2020-06-29T04:32:15.024Z|07397|timeval|WARN|context switches: 14 voluntary, 1 involuntary
2020-06-29T04:32:15.024Z|07398|poll_loop|INFO|Dropped 1 log messages in last 5 seconds (most recently, 5 seconds ago) due to excessive rate
2020-06-29T04:32:15.024Z|07399|poll_loop|INFO|wakeup due to [POLLIN] on fd 15 (<->/var/run/ovn/ovnsb_db.sock) at lib/stream-fd.c:157 (100% CPU usage)
2020-06-29T04:32:19.266Z|07400|timeval|WARN|Unreasonably long 4241ms poll interval (4227ms user, 0ms system)
2020-06-29T04:32:19.266Z|07401|timeval|WARN|faults: 126 minor, 0 major
2020-06-29T04:32:19.266Z|07402|timeval|WARN|context switches: 18 voluntary, 1 involuntary
2020-06-29T04:32:19.266Z|07403|poll_loop|INFO|wakeup due to 0-ms timeout at lib/reconnect.c:643 (100% CPU usage)
2020-06-29T04:32:23.501Z|07404|timeval|WARN|Unreasonably long 4236ms poll interval (4219ms user, 0ms system)
2020-06-29T04:32:23.501Z|07405|timeval|WARN|faults: 4 minor, 0 major
2020-06-29T04:32:23.501Z|07406|timeval|WARN|context switches: 16 voluntary, 7 involuntary
2020-06-29T04:32:23.501Z|07407|coverage|INFO|Dropped 13 log messages in last 55 seconds (most recently, 4 seconds ago) due to excessive rate
2020-06-29T04:32:23.501Z|07408|coverage|INFO|Skipping details of duplicate event coverage for hash=1a8dc4b6
2020-06-29T04:32:27.754Z|07409|timeval|WARN|Unreasonably long 4252ms poll interval (4238ms user, 0ms system)
2020-06-29T04:32:27.754Z|07410|timeval|WARN|faults: 78 minor, 0 major
2020-06-29T04:32:27.754Z|07411|timeval|WARN|context switches: 10 voluntary, 3 involuntary
2020-06-29T04:32:27.754Z|07412|poll_loop|INFO|Dropped 1 log messages in last 4 seconds (most recently, 4 seconds ago) due to excessive rate
2020-06-29T04:32:27.754Z|07413|poll_loop|INFO|wakeup due to [POLLIN] on fd 15 (<->/var/run/ovn/ovnsb_db.sock) at lib/stream-fd.c:157 (100% CPU usage)
2020-06-29T04:32:31.996Z|07414|timeval|WARN|Unreasonably long 4242ms poll interval (4227ms user, 1ms system)
2020-06-29T04:32:31.996Z|07415|timeval|WARN|context switches: 18 voluntary, 1 involuntary
2020-06-29T04:32:31.996Z|07416|poll_loop|INFO|wakeup due to 0-ms timeout at lib/reconnect.c:643 (100% CPU usage)
2020-06-29T04:32:36.253Z|07417|timeval|WARN|Unreasonably long 4258ms poll interval (4243ms user, 0ms system)
2020-06-29T04:32:36.253Z|07418|timeval|WARN|faults: 92 minor, 0 major
2020-06-29T04:32:36.253Z|07419|timeval|WARN|context switches: 12 voluntary, 3 involuntary
2020-06-29T04:32:36.254Z|07420|poll_loop|INFO|wakeup due to [POLLIN] on fd 14 (<->/var/run/ovn/ovnnb_db.sock) at lib/stream-fd.c:157 (101% CPU usage)
2020-06-29T04:32:40.547Z|07421|timeval|WARN|Unreasonably long 4294ms poll interval (4279ms user, 0ms system)
2020-06-29T04:32:40.547Z|07422|timeval|WARN|context switches: 18 voluntary, 2 involuntary
2020-06-29T04:32:44.777Z|07423|timeval|WARN|Unreasonably long 4229ms poll interval (4213ms user, 1ms system)
2020-06-29T04:32:44.777Z|07424|timeval|WARN|faults: 62 minor, 0 major
2020-06-29T04:32:44.777Z|07425|timeval|WARN|context switches: 11 voluntary, 3 involuntary
2020-06-29T04:32:44.777Z|07426|poll_loop|INFO|Dropped 1 log messages in last 4 seconds (most recently, 4 seconds ago) due to excessive rate
2020-06-29T04:32:44.777Z|07427|poll_loop|INFO|wakeup due to 0-ms timeout at lib/reconnect.c:643 (100% CPU usage)
perf trace of the ovn-northd process:

0.046 ( 0.004 ms): getrusage(who: 1, ru: 0x7ffef8a0deb0 ) = 0
0.068 ( 0.028 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 109) = 109
0.101 ( 0.003 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 89) = 89
0.107 ( 0.002 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 141) = 141
0.129 ( 0.002 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 108) = 108
0.134 ( 0.006 ms): poll(ufds: 0x5560db42dbf0, nfds: 4 ) = 0 Timeout
0.141 ( 0.002 ms): getrusage(who: 1, ru: 0x55609741ca18 ) = 0
0.147 ( 0.012 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 140) = 140
0.165 ( 0.002 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 113) = 113
0.511 ( 0.014 ms): sendto(fd: 14<socket:[552516759]>, buff: 0x55611315ff60, len: 41 ) = 41
0.528 ( 0.002 ms): recvfrom(fd: 14<socket:[552516759]>, ubuf: 0x55609741d4e4, size: 1532 ) = -1 EAGAIN Resource temporarily unavailable
0.533 ( 0.002 ms): recvfrom(fd: 15<socket:[554039164]>, ubuf: 0x55609741e1fe, size: 2914 ) = -1 EAGAIN Resource temporarily unavailable
4095.413 ( 0.021 ms): accept(fd: 12<socket:[552516757]>, upeer_sockaddr: 0x7ffef8a0de60, upeer_addrlen: 0x7ffef8a0de5c) = -1 EAGAIN Resource temporarily unavailable
4095.458 ( 0.003 ms): getrusage(who: 1, ru: 0x7ffef8a0deb0 ) = 0
4095.477 ( 0.025 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 109) = 109
4095.508 ( 0.003 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 70) = 70
4095.514 ( 0.002 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 68) = 68
4095.519 ( 0.002 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 89) = 89
4095.525 ( 0.005 ms): poll(ufds: 0x5560e3a55ad0, nfds: 4 ) = 1
4095.531 ( 0.002 ms): getrusage(who: 1, ru: 0x55609741ca18 ) = 0
4095.535 ( 0.003 ms): fstat(fd: 14<socket:[552516759]>, statbuf: 0x7ffef8a0da60 ) = 0
4095.540 ( 0.002 ms): getsockname(fd: 14<socket:[552516759]>, usockaddr: 0x7ffef8a0d970, usockaddr_len: 0x7ffef8a0d8f8) = 0
4095.545 ( 0.003 ms): getpeername(fd: 14<socket:[552516759]>, usockaddr: 0x7ffef8a0d970, usockaddr_len: 0x7ffef8a0d8f8) = 0
4095.556 ( 0.003 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 150) = 150
4095.564 ( 0.006 ms): recvfrom(fd: 14<socket:[552516759]>, ubuf: 0x55609741d4e4, size: 1532 ) = 38
4095.586 ( 0.012 ms): sendto(fd: 15<socket:[554039164]>, buff: 0x5560efee8e80, len: 41 ) = 41
4095.601 ( 0.002 ms): recvfrom(fd: 15<socket:[554039164]>, ubuf: 0x55609741e1fe, size: 2914 ) = -1 EAGAIN Resource temporarily unavailable
8211.797 ( 0.023 ms): accept(fd: 12<socket:[552516757]>, upeer_sockaddr: 0x7ffef8a0de60, upeer_addrlen: 0x7ffef8a0de5c) = -1 EAGAIN Resource temporarily unavailable
8211.967 ( 0.005 ms): getrusage(who: 1, ru: 0x7ffef8a0deb0 ) = 0
8211.992 ( 0.025 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 109) = 109
8212.023 ( 0.004 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 89) = 89
8212.030 ( 0.005 ms): poll(ufds: 0x5560df0002e0, nfds: 4, timeout_msecs: 883 ) = 1
8212.037 ( 0.002 ms): getrusage(who: 1, ru: 0x55609741ca18 ) = 0
8212.044 ( 0.003 ms): recvfrom(fd: 14<socket:[552516759]>, ubuf: 0x55609741d50a, size: 1494 ) = -1 EAGAIN Resource temporarily unavailable
8212.050 ( 0.006 ms): recvfrom(fd: 15<socket:[554039164]>, ubuf: 0x55609741e1fe, size: 2914 ) = 38
12381.888 ( 0.021 ms): accept(fd: 12<socket:[552516757]>, upeer_sockaddr: 0x7ffef8a0de60, upeer_addrlen: 0x7ffef8a0de5c) = -1 EAGAIN Resource temporarily unavailable
12381.931 ( 0.003 ms): getrusage(who: 1, ru: 0x7ffef8a0deb0 ) = 0
12381.953 ( 0.028 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 109) = 109
12381.987 ( 0.003 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 70) = 70
12381.993 ( 0.002 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 89) = 89
12381.998 ( 0.004 ms): poll(ufds: 0x5560e3a526d0, nfds: 4 ) = 0 Timeout
12382.004 ( 0.002 ms): getrusage(who: 1, ru: 0x55609741ca18 ) = 0
12382.010 ( 0.002 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 140) = 140
12382.018 ( 0.002 ms): write(fd: 5</var/log/ovn/ovn-northd.log>, buf: 0x5560d3e1f310, count: 113) = 113
12382.386 ( 0.015 ms): sendto(fd: 14<socket:[552516759]>, buff: 0x5560f94f4ef0, len: 41 ) = 41
12382.406 ( 0.003 ms): recvfrom(fd: 14<socket:[552516759]>, ubuf: 0x55609741d50a, size: 1494 ) = -1 EAGAIN Resource temporarily unavailable
12382.414 ( 0.002 ms): recvfrom(fd: 15<socket:[554039164]>, ubuf: 0x55609741e224, size: 2876 ) = -1 EAGAIN Resource temporarily unavailable
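The first column of `perf trace` output is milliseconds since the trace began, so the spacing between successive syscall bursts gives the length of each main-loop iteration. In this trace every burst opens with an accept() on the listening socket (fd 12), so measuring the gaps between accept() calls (a rough heuristic, assuming one accept per wakeup as seen above; the sample lines below are abridged copies from the trace) recovers the ~4.2 s poll intervals the log complained about:

```python
def iteration_gaps(trace_text, marker='accept('):
    """Gaps in ms between successive trace lines containing `marker`.

    The leading column of `perf trace` output is ms since trace start.
    """
    times = [float(line.split()[0])
             for line in trace_text.splitlines() if marker in line]
    return [round(b - a, 3) for a, b in zip(times, times[1:])]

# Abridged lines from the trace above.
TRACE = """\
0.046 ( 0.004 ms): getrusage(who: 1, ...) = 0
4095.413 ( 0.021 ms): accept(fd: 12<socket:[552516757]>, ...) = -1 EAGAIN
8211.797 ( 0.023 ms): accept(fd: 12<socket:[552516757]>, ...) = -1 EAGAIN
12381.888 ( 0.021 ms): accept(fd: 12<socket:[552516757]>, ...) = -1 EAGAIN
"""

print(iteration_gaps(TRACE))  # each iteration takes roughly 4.1-4.2 s
```

The gaps line up with the "Unreasonably long 42xxms poll interval" warnings, almost all of it user time: ovn-northd spends each iteration recomputing, not blocked on I/O.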
top output after the stress test:

Tasks: 3 total, 1 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.7 us, 0.7 sy, 0.0 ni, 96.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 257572.1 total, 184899.2 free, 40058.3 used, 32614.6 buff/cache
MiB Swap: 16384.0 total, 16384.0 free, 0.0 used. 211729.8 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
75974 openvsw+ 10 -10 2107672 2.0g 3116 R 99.3 0.8 202:51.65 ovn-northd
75949 openvsw+ 20 0 420572 237748 4924 S 0.0 0.1 30:20.48 ovsdb-server
75965 openvsw+ 20 0 2776408 2.5g 4860 S 0.0 1.0 2:11.96 ovsdb-server