datalust / seq-input-gelf

Ingest GELF payloads into Seq

Home Page: https://datalust.co/seq
License: Apache License 2.0
E.g. replacing `eprintln` calls in `server.rs`.
For non-Docker environments (Windows and, in the future, macOS or Linux), the GELF input will be packaged as a Seq App. Some new features in Seq 5.1 will improve performance and ergonomics for Seq Apps that act as inputs, but we expect the app will be installable into Seq 4.2 and later versions.
The plug-in app should support Docker in theory (we include Linux binaries), but even with the GELF port mapped, events aren't currently ingested. This needs investigation. (Under Docker, using `datalust/sqelf` is recommended anyway, so this is not a critical issue.)
If you have a fork or local clone from before 9 July 2020, you will need to run the following commands to update your forked/cloned repository:
```shell
git checkout master
git branch -m master release
git fetch
git branch --unset-upstream
# if this is a fork: git push -u origin release
git branch -u origin/release
git symbolic-ref refs/remotes/origin/HEAD refs/remotes/origin/release
git pull upstream release
```
You can now safely delete the `master` branch in your forked repo.
Easier option: delete your fork and fork a fresh version
There are some new versions of libraries to consider upgrading:
- `tokio`: 0.2-alpha -> 0.2
- `bytes`: 0.4 -> 0.5
- `futures-preview` -> `futures`
It looks like `tokio` is still based on the 0.6 version of `mio`, so on Windows it doesn't include the switch to `wepoll`.
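Assuming a standard Cargo setup, the version bumps listed above might look roughly like this in `Cargo.toml` (a sketch only; the exact feature flags the project needs are not shown, and `futures = "0.3"` is the stabilised release that replaced `futures-preview`):

```toml
[dependencies]
# tokio 0.2-alpha -> 0.2 (the stabilised async runtime)
tokio = "0.2"
# bytes 0.4 -> 0.5 (matches the tokio 0.2 ecosystem)
bytes = "0.5"
# futures-preview -> the stabilised futures crate
futures = "0.3"
```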
The `server` module is the only place we have any futures-aware code, but there's a bit of control flow buried in there that would probably be clearer if it was based on async-await syntax rather than combinators.
The second formulation is needed in order for the app to receive the setting when hosted as a process under Seq. We should probably accept the nicer `GELF_ADDRESS` name for other purposes :-) The default when not specified is `0.0.0.0:12201`.
A `sqelf` container (`datalust/sqelf-ci:1.0.228-dev`) barely uses CPU. When upgraded to `datalust/sqelf:latest`, or to the 2.x CI builds, it runs at 100% on one core. This only happens when using TCP.
```yaml
version: '3'
services:
  sqelf:
    image: datalust/sqelf-ci:1.0.228-dev
    # image: datalust/sqelf
    depends_on:
      - seq
    environment:
      SEQ_ADDRESS: "http://seq:5341"
      GELF_ADDRESS: "tcp://0.0.0.0:12201"
    restart: unless-stopped
    networks:
      - seq
    ports:
      - "12201:12201/tcp"
  seq:
    image: datalust/seq
    restart: unless-stopped
    environment:
      ACCEPT_EULA: Y
    volumes:
      - data:/data
    ports:
      - "30080:80"
    networks:
      - seq
```
I am currently using GELF to forward logs from a GitLab instance I am running under Docker. I am running this under Seq itself as a NuGet package, with the following settings:
```json
{
  "gelfAddress": "udp://172.18.0.11:12202",
  "certificatePath": "",
  "certificatePrivateKeyPath": "",
  "enableDiagnostics": "False"
}
```
The logs are being received just fine, and the Docker logging driver looks to be configured correctly; however, some of the logs contain no information. An "empty" row with a `\r` value in it looks like this when exported as JSON:
```json
{
  "@t": "2022-11-09T00:10:53.4760000Z",
  "@mt": "\r",
  "@m": "\r",
  "@i": "e1221a99",
  "container_id": "5923b89cf3bbc470cdcb0134907aaed7d9099787907b5a2b9acac7549aadb32a",
  "created": "2022-11-08T23:43:01.931459734Z",
  "image_id": "sha256:eec20347402c7c4395f066925eb4a92702de8b12c781c75542f0ad17dec4f13a",
  "tag": "5923b89cf3bb",
  "image_name": "gitlab/gitlab-ce:latest",
  "command": "/assets/wrapper",
  "host": "xxxx",
  "container_name": "gitlab",
  "Application": "GitLab",
  "IsDocker": "true"
}
```
I was hoping I could leverage the "filter" field; however, applying a filter seems to stop logs coming through entirely. I tried `<>`, `!=`, and plain `=`, just to cover all the options:
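One detail worth noting, visible in the JSON export above: the "empty" message isn't actually an empty string, it's a lone carriage return (`"\r"`), so an equality test against the empty string will never match it. The distinction can be sketched in Python (illustrative only; the exact Seq filter syntax to use is a separate question):

```python
# The exported event's message is a lone carriage return, not an empty string.
message = "\r"

# An equality test against "" misses it...
assert message != ""

# ...but a whitespace-only test catches it.
def is_blank(msg: str) -> bool:
    """True for messages that contain nothing visible."""
    return msg.strip() == ""

assert is_blank("\r")
assert not is_blank("Starting GitLab services")
```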
And if I enable diagnostics, I can see the messages still coming in but they don't show in the UI:
```json
{
  "@t": "2022-11-09T00:24:22.5582479Z",
  "@mt": "Collected GELF server metrics",
  "@m": "Collected GELF server metrics",
  "@i": "1fcf524e",
  "@l": "DEBUG",
  "process": {
    "msg": 168
  },
  "receive": {
    "chunk": 0,
    "msg_chunked": 0,
    "msg_incomplete_chunk_overflow": 0,
    "msg_unchunked": 168
  },
  "server": {
    "process_err": 0,
    "process_ok": 168,
    "receive_err": 0,
    "receive_ok": 168,
    "tcp_conn_accept": 0,
    "tcp_conn_close": 0,
    "tcp_conn_timeout": 0,
    "tcp_msg_overflow": 0
  },
  "AppId": "hostedapp-36",
  "AppInstanceId": "appinstance-66"
}
```
I was wondering if you could advise on how to filter out these empty logs?
I use gelflib to send logs from systemd-journald to seq-input-gelf. systemd-journald uses the `__REALTIME_TIMESTAMP` entry, which is in microseconds since the Unix epoch (UTC), but gelflib's `setTime` function just uses `time_t`, which is in seconds, so my logs show up in Seq events like this. After I changed the timestamp key in gelflib to a double value, my logs show up in Seq events like this, and it causes seq-input-gelf to stop running. So I want to send the `ms_timestamp` value under another key, and select that key as the default timestamp column.
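The unit conversion being described can be sketched in Python (assuming `__REALTIME_TIMESTAMP` is microseconds since the Unix epoch, and that the GELF `timestamp` field takes seconds since the epoch with an optional decimal fraction):

```python
def journal_to_gelf_timestamp(realtime_timestamp_us: int) -> float:
    """Convert systemd-journald's __REALTIME_TIMESTAMP (microseconds
    since the Unix epoch, UTC) to a GELF timestamp (seconds since the
    epoch, with a fractional part for sub-second precision)."""
    return realtime_timestamp_us / 1_000_000

# 2022-11-09T00:10:53.476Z expressed in journald microseconds:
us = 1667952653476000
assert journal_to_gelf_timestamp(us) == 1667952653.476
```

Truncating to whole seconds via `time_t`, as described above, silently drops the fractional part, which is why the events appear with coarse timestamps.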
Currently the app outputs a "Termination signal received..." event every time it's shut down, which ends up as noise in the Seq log stream. We could enable this only on receiving `--verbose` or similar on the command line?
I've been running both seq and sqelf in docker, separate containers.
Sometimes I get the feeling that events from another Docker host don't reach Seq anymore. A restart of the container running sqelf fixes the issue. It seems to crash, but there isn't any message in the log.
Maybe there could be some kind of heartbeat functionality in the sqelf container, so that it can automatically restart when it hangs. The process doesn't seem to exit, because that would stop the container (and then automatically restart it).
The connection to Seq using GELF is refused even though all ports are open. It works with Docker on Windows, but not on Ubuntu using Linux containers. I have also tried http://trakvisa-seq:5341 and get the same error.
```
trakvisa-seq-input-gelf | at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
trakvisa-seq-input-gelf | at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token)
trakvisa-seq-input-gelf | at System.Net.Sockets.Socket.<ConnectAsync>g__WaitForConnectWithCancellation|281_0(AwaitableSocketAsyncEventArgs saea, ValueTask connectTask, CancellationToken cancellationToken)
trakvisa-seq-input-gelf | at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken)
trakvisa-seq-input-gelf | --- End of inner exception stack trace ---
trakvisa-seq-input-gelf | at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken)
trakvisa-seq-input-gelf | at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
trakvisa-seq-input-gelf | at System.Net.Http.HttpConnectionPool.CreateHttp11ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
trakvisa-seq-input-gelf | at System.Net.Http.HttpConnectionPool.AddHttp11ConnectionAsync(QueueItem queueItem)
trakvisa-seq-input-gelf | at System.Threading.Tasks.TaskCompletionSourceWithCancellation`1.WaitWithCancellationAsync(CancellationToken cancellationToken)
trakvisa-seq-input-gelf | at System.Net.Http.HttpConnectionPool.HttpConnectionWaiter`1.WaitForConnectionAsync(Boolean async, CancellationToken requestCancellationToken)
trakvisa-seq-input-gelf | at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
trakvisa-seq-input-gelf | at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
trakvisa-seq-input-gelf | at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
```
```yaml
services:
  trakvisa-seq:
    image: datalust/seq:latest
    container_name: trakvisa-seq
    restart: always
    environment:
      - ACCEPT_EULA=Y
    ports:
      - '5341:80'
    networks:
      - trakvisa-network
    volumes:
      - seq_data:/data
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
        max-file: "10"
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1G
  trakvisa-seq-input-gelf:
    image: datalust/seq-input-gelf:latest
    container_name: trakvisa-seq-input-gelf
    restart: always
    environment:
      SEQ_ADDRESS: "http://trakvisa-seq:80"
    depends_on:
      - trakvisa-seq
    ports:
      - "12201:12201/udp"
    networks:
      - trakvisa-network
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
        max-file: "10"
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1G
```
After some time, the memory accumulates to 300-400 MB RAM. At the beginning, `seq-input-gelf` sits around 40 MB RAM. My question is: is 300-400 MB RAM considered normal? What would happen if I limited the container's memory to, say, 100 MB?
This happens on our production server and also locally when testing with Docker Desktop.
I'm using TCP with non-blocking mode, HTTPS and Traefik as reverse proxy.
Also, the CPU for seq-input-gelf is always between 2-4% when everything is idle.
This is our stripped-down `docker-compose.yml` file:
```yaml
version: "3.9"

x-deploy: &hardware-limits
  resources:
    limits:
      memory: 2G

volumes:
  seq:

services:
  seq:
    container_name: seq
    image: datalust/seq:2023.4
    deploy: *hardware-limits
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: seqsvr/Client/seqcli node health -s http://127.0.0.1/
      timeout: 5s
      start_period: 30s
      interval: 5s
      retries: 10
    ports:
      - "5341:5341"
    environment:
      ACCEPT_EULA: Y
    volumes:
      - seq:/data
  seq-gelf-monitor:
    container_name: seq-gelf-monitor
    image: datalust/seq-input-gelf:3.0
    deploy: *hardware-limits
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    depends_on:
      seq:
        condition: service_healthy
    stop_grace_period: 1s
    ports:
      - "127.0.0.1:12202:12201"
    environment:
      GELF_ADDRESS: tcp://0.0.0.0:12201
      SEQ_ADDRESS: http://seq:5341
```
Hi, most libraries that send logs to Graylog or GELF logging servers can do so over HTTP. This allows logging to pass through a reverse proxy, and HTTPS can be implemented simply, with automatic certificate renewal. Can we please add support for http://0.0.0.0:12201 as a protocol for GELF ingest?
Following #30
We've got some reports of the GELF server silently not processing events. This could be a problem with an error case not being caught that's causing the server to begin shutting down, but it never finishes.
We should add:
These could be controlled through a Seq setting or environment variable.
Related to #43
We've had some reports of instability in Windows environments. At some point over time, we seem to lose our UDP socket, and will block indefinitely in `GetQueuedCompletionStatusEx` deep down in `mio`'s IOCP translation layer. No more events will be logged unless the process is recycled.
After very thoroughly reviewing our side of things, it doesn't look like we're triggering a shutdown that closes the socket but blocks before the process can exit.
The version of `mio` that we're using via `tokio` isn't the latest one, so it's possible there have been some changes somewhere in that stack. It's also possible UDP sockets on Windows just haven't received a lot of attention yet in `mio`/`tokio`, so we'd like to implement an alternative TCP-based GELF server that can be used when running as a Seq app on Windows.
Now that we have a better integration test harness we can also make sure various chunking scenarios don't cause the server to block.
In the meantime, folks that are affected by this can try using the `sqelf` Linux Docker container.
I used `datalust/sqelf:1.0.57.0` several days ago to test log transfer from PHP services to Seq. Everything worked fine after some tweaking. Here is my seq/sqelf-related docker-compose section:
```yaml
version: '2'
services:
  sqelf:
    image: datalust/sqelf:1.0.57.0
    depends_on:
      - seq
    ports:
      - "12201:12201/udp"
    environment:
      SEQ_ADDRESS: "http://seq:5341"
    restart: unless-stopped
    networks:
      - seqnet
  seq:
    image: datalust/seq:latest
    ports:
      - "5341:80"
    environment:
      ACCEPT_EULA: Y
    restart: unless-stopped
    volumes:
      - ../data/seq:/data
    networks:
      - seqnet
```
I created a separate network to prevent port collisions with other running containers:
```yaml
networks:
  frontend:
    driver: "bridge"
  backend:
    driver: "bridge"
  seqnet:
    driver: "bridge"
```
Everything starts up and runs as expected. I can use PHP logging (Monolog) and configure it to send GELF messages to sqelf, from containers that joined the same network.
If I switch to `datalust/sqelf:1.0.118`, the sqelf container no longer gets a network link. It joins the network and enters a restart loop. Nothing can be sent to sqelf, as one might expect with no network available.
All I see in the container log is a bunch of entries like these:
```
{"@t":"2019-02-01T19:29:49.952835500Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:29:54.201928200Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:30:01.594420500Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:30:15.282527100Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:30:41.795996400Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:31:33.897595400Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:32:34.840344600Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:33:35.848725100Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:55:10.428170700Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:56:11.375723600Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:57:12.274970500Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:58:13.233399700Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T19:59:14.096769Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T20:00:15.041764700Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T20:01:15.949245Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
{"@t":"2019-02-01T20:02:16.783862200Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
```
It seems not to be related to my network config, as the same happens if I just start seq/sqelf without my custom config, using the `docker-compose.yml` shown here.
```
dc -p seq up
Creating seq_seq_1 ... done
Creating seq_sqelf_1 ... done
Attaching to seq_seq_1, seq_sqelf_1
sqelf_1 | {"@t":"2019-02-01T20:12:40.171856300Z","@l":"DEBUG","@mt":"Termination signal received; shutting down"}
seq_1 | ────────────────────────────────────────
seq_1 | Seq ♦ Machine data, for humans.
seq_1 | ─────────── © 2019 Datalust Pty Ltd ────
seq_1 |
seq_1 | Running as server; press Ctrl+C to exit.
seq_1 |
seq_1 | [20:12:41 INF] Seq "5.0.2562" running on OS "Linux 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018"
seq_1 | [20:12:41 INF] Seq listening on ["http://localhost/", "http://localhost:5341/"]
seq_1 | [20:12:41 INF] Opening document store "/data/Documents/documents.lmdb"
seq_1 | [20:12:42 INF] Opening event store at "/data/Extents"
seq_1 | [20:12:42 INF] Available storage engines in order of preference are ["FLARE"]
seq_1 | [20:12:42 INF] Storage subsystem available
seq_sqelf_1 exited with code 0
seq_sqelf_1 exited with code 0
seq_sqelf_1 exited with code 0
seq_sqelf_1 exited with code 0
...
```
Any idea what causes this? Currently I'm a bit clueless.
Currently when `sqelf` starts, it prints some useful info to STDERR. Seq interprets this as event output and warns that it's not valid CLEF.
Hello,
Maybe I am doing something wrong, but I've just taken my first steps with the seq-input-gelf container. Everything works like a charm and probably as expected.
The seq-input-gelf container forwards to the seq instance and my test container logs to the gelf container.
But the message feels a little ugly.
I made my first attempt with a traefik container. The message shown in Seq is the same one I get from docker logs:
time="2022-04-03T19:10:20Z" level=info msg="Starting provider *acme.ChallengeTLSALPN"
So that is indeed the message traefik writes to the log. But as the information about time and log level is already there, I would like to transform the message to
Starting provider *acme.ChallengeTLSALPN
Is there any good way to achieve this?
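As far as I know there is nothing built into seq-input-gelf for this, but one option is to strip the logfmt wrapper yourself before (or after) ingestion. A rough Python sketch of extracting just the `msg` field from a traefik-style line (a simplification: the regex ignores escaped quotes inside the message):

```python
import re

# traefik emits logfmt lines such as:
line = 'time="2022-04-03T19:10:20Z" level=info msg="Starting provider *acme.ChallengeTLSALPN"'

# Pull out the quoted msg value.
MSG_PATTERN = re.compile(r'msg="([^"]*)"')

def extract_msg(logfmt_line: str) -> str:
    """Return the msg field if present, otherwise the whole line unchanged."""
    match = MSG_PATTERN.search(logfmt_line)
    return match.group(1) if match else logfmt_line

assert extract_msg(line) == "Starting provider *acme.ChallengeTLSALPN"
```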
Despite the README's claim that it can ingest GELF via UDP or TCP, the Docker image only listens on the UDP socket.
```
$ k exec -it -n adt-system seq-gelf-translator-64b78f6559-tvxhl -- bash -c "apt update; apt install -y iproute2; ss -nlp | grep sqelf; echo Done"
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
31 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree
Reading state information... Done
iproute2 is already the newest version (4.15.0-2ubuntu1.3).
0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded.
udp UNCONN 0 0 0.0.0.0:12201 0.0.0.0:* users:(("sqelf",pid=7,fd=6))
Done
```
Hi,
The Docker Compose example isn't working for me (latest Docker Desktop, macOS M1) ... the gelf container won't start successfully:
```
seq-seq-input-gelf-1 | /run.sh: line 10: bin/sqelf: No such file or directory
seq-seq-input-gelf-1 | /run.sh: line 10: bin/seqcli/seqcli: No such file or directory
```
The Compose file is:
```yaml
version: '3'
services:
  seq-input-gelf:
    image: datalust/seq-input-gelf:latest
    depends_on:
      - seq
    ports:
      - "12201:12201/udp"
    environment:
      SEQ_ADDRESS: "http://seq:5341"
    restart: unless-stopped
  seq:
    image: datalust/seq:latest
    ports:
      - "5341:80"
    environment:
      ACCEPT_EULA: "Y"
    restart: unless-stopped
    volumes:
      - ./seq-data:/data
```
Full output
```
(base) adam@adams-mac-mini seq % docker compose up
[+] Running 3/2
⠿ Network seq_default Created 0.0s
⠿ Container seq-seq-1 Created 0.0s
⠿ Container seq-seq-input-gelf-1 Created 0.0s
Attaching to seq-seq-1, seq-seq-input-gelf-1
seq-seq-input-gelf-1 | /run.sh: line 10: bin/sqelf: No such file or directory
seq-seq-input-gelf-1 | /run.sh: line 10: bin/seqcli/seqcli: No such file or directory
seq-seq-input-gelf-1 exited with code 127
seq-seq-input-gelf-1 | /run.sh: line 10: bin/sqelf: No such file or directory
seq-seq-input-gelf-1 | /run.sh: line 10: bin/seqcli/seqcli: No such file or directory
seq-seq-input-gelf-1 exited with code 127
seq-seq-1 | ────────────────────────────────────────
seq-seq-1 | Seq ♦ Machine data, for humans.
seq-seq-1 | ─────────── © 2023 Datalust Pty Ltd ────
seq-seq-1 |
seq-seq-1 | Running as server; press Ctrl+C to exit.
seq-seq-1 |
seq-seq-input-gelf-1 | /run.sh: line 10: bin/seqcli/seqcli: No such file or directory
seq-seq-input-gelf-1 | /run.sh: line 10: bin/sqelf: No such file or directory
seq-seq-input-gelf-1 exited with code 127
seq-seq-1 | [12:30:07 INF] Seq "2023.1.8876" running on OS "Linux 5.15.49-linuxkit #1 SMP PREEMPT Tue Sep 13 07:51:32 UTC 2022"
seq-seq-1 | [12:30:07 INF] Seq detected 4124.512256 MB of RAM
seq-seq-1 | [12:30:07 INF] Opening event store at "/data/Stream/stream.flare"
seq-seq-1 | [12:30:07 INF] Ingestion enabled
seq-seq-1 | [12:30:07 INF] Opening metastore "/data/Documents/metastore.flare"
seq-seq-1 | [12:30:07 INF] Storage subsystem available
seq-seq-1 | [12:30:07 INF] Seq listening on ["http://localhost/", "https://localhost/", "http://localhost:5341/", "https://localhost:45341/"]
seq-seq-input-gelf-1 | /run.sh: line 10: bin/sqelf: No such file or directory
seq-seq-input-gelf-1 | /run.sh: line 10: bin/seqcli/seqcli: No such file or directory
seq-seq-input-gelf-1 exited with code 127
seq-seq-input-gelf-1 | /run.sh: line 10: bin/sqelf: No such file or directory
seq-seq-input-gelf-1 | /run.sh: line 10: bin/seqcli/seqcli: No such file or directory
seq-seq-input-gelf-1 exited with code 127
^CGracefully stopping... (press Ctrl+C again to force)
Aborting on container exit...
[+] Running 2/2
⠿ Container seq-seq-input-gelf-1 Stopped 0.0s
⠿ Container seq-seq-1 Stopped 2.3s
canceled
```
Can't see what I'm doing wrong here ... any ideas?
Thanks!
I'm trying to collect logs from docker containers. All containers are running on the same host.
```yaml
seq:
  image: datalust/seq:latest
  container_name: seq
  restart: unless-stopped
  environment:
    - ACCEPT_EULA=Y
  ports:
    - "81:80"
    - "5341:5341"
  volumes:
    - seq-logs:/data
seq-gelf:
  image: datalust/seq-input-gelf:latest
  container_name: seq-gelf
  restart: unless-stopped
  environment:
    - ACCEPT_EULA=Y
    - GELF_ENABLE_DIAGNOSTICS=True
    - SEQ_ADDRESS="http://seq:5341"
    # Same errors with:
    # - SEQ_ADDRESS="seq:5341"
    # - SEQ_ADDRESS="http://host.docker.internal:5341"
    # - SEQ_ADDRESS="http://localhost:5341"
    # - SEQ_ADDRESS="localhost:5341"
    # - SEQ_ADDRESS="127.0.0.1:5341"
  depends_on:
    - seq
  ports:
    - "12201:12201/udp"
nginx:
  ...
  logging:
    driver: "gelf"
    options:
      gelf-address: "udp://host.docker.internal:12201"
      # gelf-address: "udp://seq-gelf:12201"
```
Error message in seq-gelf (repeated):
```
Ingestion failed: Invalid URI: The URI scheme is not valid.
System.UriFormatException: Invalid URI: The URI scheme is not valid.
   at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind, UriCreationOptions& creationOptions)
   at System.Uri..ctor(String uriString)
   at Seq.Api.Client.SeqApiClient..ctor(String serverUrl, String apiKey, Action`1 configureHttpClientHandler)
   at Seq.Api.SeqConnection..ctor(String serverUrl, String apiKey, Action`1 configureHttpClientHandler)
   at SeqCli.Connection.SeqConnectionFactory.Connect(ConnectionFeature connection) in /home/appveyor/projects/seqcli/src/SeqCli/Connection/SeqConnectionFactory.cs:line 36
   at SeqCli.Cli.Commands.IngestCommand.Run() in /home/appveyor/projects/seqcli/src/SeqCli/Cli/Commands/IngestCommand.cs:line 96
thread 'main' panicked at 'failed printing to stdout: Broken pipe (os error 32)', library/std/src/io/stdio.rs:1193:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
{"@t":"2022-05-24T07:18:41.833597300Z","@l":"ERROR","@mt":"GELF input failed","@x":"failed printing to stdout: Broken pipe (os error 32)"}
```
Do I understand correctly that the error "Invalid URI: The URI scheme is not valid" means that seq-gelf cannot find the seq service?
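One thing worth checking (an assumption on my part, not a confirmed diagnosis): in the list form of `environment:`, the double quotes in `- SEQ_ADDRESS="http://seq:5341"` become part of the value itself, and a leading quote prevents any URI scheme from being parsed. The effect is easy to reproduce in Python:

```python
from urllib.parse import urlparse

# Map-form YAML (SEQ_ADDRESS: "http://seq:5341") strips the quotes:
good = 'http://seq:5341'
# List-form YAML (- SEQ_ADDRESS="http://seq:5341") keeps them literally:
bad = '"http://seq:5341"'

# A valid scheme is parsed from the clean value...
assert urlparse(good).scheme == 'http'

# ...but the leading quote character means no scheme is recognised at all.
assert urlparse(bad).scheme == ''
```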
Looks like the seq container itself works fine:
```
[07:06:02 INF] Seq "2022.1.7449" running on OS "Linux 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022"
[07:06:02 INF] Seq detected 6543.028224 MB of RAM
[07:06:04 INF] Opening event store at "/data/Stream/stream.flare"
[07:06:04 INF] Ingestion enabled
[07:06:04 INF] Opening metastore "/data/Documents/metastore.flare"
[07:06:04 INF] Storage subsystem available
[07:06:04 INF] Seq listening on ["http://localhost/", "https://localhost/", "http://localhost:5341/", "https://localhost:45341/"]
[07:07:04 INF] 1 more generation 2 garbage collection(s) occurred
[07:11:04 INF] Metrics sampled
[07:16:04 INF] Metrics sampled
[07:16:04 INF] Applying 0 retention policies
[07:16:04 INF] Retention processing and compaction took 16.2288 ms; allocating 599983.7712 ms for indexing
```
I can connect to its web interface (with no captured events inside).
What's wrong with my config?
We've changed the CI infrastructure behind this project, which now requires some updates to the build process. Instead of a single cross-platform build, the project now needs to use a build matrix targeting Windows (Rust Windows + Linux targets for the app) and Linux (Docker container).
Error logs originating from PHP appear in Seq with level `(err)`. Because of that, they are missing the red dot and the red background, as compared with .NET logs, where the level is `(error)`.
Not sure if this is something that needs tweaking in the log producer or in Sqelf.
I cannot prove it, but I'm fairly sure that the behavior was different in an older Sqelf version receiving the same logs.
Currently running
Seq: 5.0.2562
Sqelf: sqelf:latest
Docker secrets is a Docker Swarm feature. It mounts a file named `/run/secrets/<secret_name>` (by default) in the container, containing the secret's value. The file can then be used by the container entrypoint to access sensitive configuration such as passwords or keys, without storing them in an image layer, an environment variable, or the Docker stack file. Docker secrets can be managed manually through `docker secret` commands, or can be set to point to a file on the host machine.
Using Docker secrets to store the API key would remove that sensitive information from the stack file. Storing the API key in the stack file is a problem for people who want to version their stack file, which is common practice in GitOps organizations. It would require making the entrypoint (`run.sh`) able to read the API key from a file rather than from an environment variable. Image authors usually do this with another environment variable such as `SEQ_API_KEY_FILE`.
Official documentation on docker secrets : https://docs.docker.com/engine/swarm/secrets/
Example docker image supporting docker secrets : https://hub.docker.com/_/mariadb/
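As a sketch of what such `run.sh` support might look like (the `SEQ_API_KEY_FILE` variable name is the convention proposed above, not something the image currently implements):

```shell
#!/bin/sh
# Hypothetical entrypoint fragment: if SEQ_API_KEY_FILE is set and points
# at a readable file (e.g. a Docker secret mounted at
# /run/secrets/seq_api_key), load the API key from that file instead of
# requiring it directly in the environment.
load_api_key_from_file() {
  if [ -n "${SEQ_API_KEY_FILE:-}" ] && [ -r "$SEQ_API_KEY_FILE" ]; then
    SEQ_API_KEY="$(cat "$SEQ_API_KEY_FILE")"
    export SEQ_API_KEY
  fi
}

load_api_key_from_file
```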
Hi guys,
We have a high-volume GELF stream from a MuleSoft instance ... not the hugest fan of receiving the logs this way, since it encapsulates the app's JSON logs in the GELF packet, but we're working towards a Log4j appender for Seq that will hopefully address that.
In the past couple of days, this has started encountering a "GELF processing failed irrecoverably" error, which hasn't happened in the ~2 years (roughly) that we've been using it. I can see the immediate cause is the break in the code:
```rust
// An unrecoverable error occurred receiving a chunk
Some(Err(err)) => {
    increment!(server.receive_err);
    emit_err(&err, "GELF processing failed irrecoverably");
    break;
},
```
and this is the only apparent case that would cause the receiver to stop. It's not clear to me that this is the 'right' behaviour, given that other errors allow processing to continue; possibly there should at least be some retry behaviour?
The last thing that was done with this input was to update to v2.0.303-dev, but this was back in August. Nonetheless, I've reverted it to v2.0.298 in case it's an actual bug.
For reference, the relevant logs are:
{"@t":"2021-10-07T09:40:47.8312840Z","@mt":"GELF processing failed irrecoverably","@m":"GELF processing failed irrecoverably","@i":"1e99ac5d","@l":"ERROR","@x":"An existing connection was forcibly closed by the remote host. (os error 10054)","AppId":"hostedapp-43","AppInstanceId":"appinstance-45"}
which is followed by the final server metrics that indicate a receive error and TCP connection timeout;
{"@t":"2021-10-07T09:40:47.8848529Z","@mt":"Collected GELF server metrics","@m":"Collected GELF server metrics","@i":"1fcf524e","@l":"DEBUG","process":{"msg":422},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":422},"server":{"process_err":0,"process_ok":423,"receive_err":1,"receive_ok":422,"tcp_conn_accept":205,"tcp_conn_close":636,"tcp_conn_timeout":1,"tcp_msg_overflow":0},"AppId":"hostedapp-43","AppInstanceId":"appinstance-45"}
Cheers,
Matt
Hi,
I'm trying to install this app on my installation of Seq; however, when I add an instance of the app it says "The app process is stopped". Searching the logs for the AppInstanceId, I see the following (and only this) in the results:
Nonzero exit code -1073741515 returned from app process
I've opened the correct port on the firewall and enabled diagnostics, but it's giving me nothing (this is a Windows install, not via Docker).
Any ideas?
Attempting to send errors from Laravel framework through its Logging engine as GELF to Seq.
Docker compose setup:
```yaml
seq-input-gelf:
  image: datalust/seq-input-gelf:latest
  depends_on:
    - seq
  environment:
    GELF_ADDRESS: "tcp://0.0.0.0:12201"
    SEQ_ADDRESS: "http://seq:5341"
    GELF_ENABLE_DIAGNOSTICS: "True"
  ports:
    - "12201:12201"
  restart: unless-stopped
seq:
  container_name: seq
  image: datalust/seq:latest
  restart: unless-stopped
  environment:
    - ACCEPT_EULA=y
  ports:
    - "9595:80"
    - "5341:5341"
  networks:
    - frontend
    - backend
```
Ports are up:
```
sudo netstat -tunlp | grep docker
tcp6 0 0 :::8080  :::* LISTEN 967/docker-proxy
tcp6 0 0 :::81    :::* LISTEN 1622/docker-proxy
tcp6 0 0 :::85    :::* LISTEN 1636/docker-proxy
tcp6 0 0 :::3000  :::* LISTEN 1024/docker-proxy
tcp6 0 0 :::3001  :::* LISTEN 1010/docker-proxy
tcp6 0 0 :::9595  :::* LISTEN 28504/docker-proxy
tcp6 0 0 :::444   :::* LISTEN 1605/docker-proxy
tcp6 0 0 :::5341  :::* LISTEN 28516/docker-proxy
tcp6 0 0 :::8000  :::* LISTEN 1570/docker-proxy
tcp6 0 0 :::8001  :::* LISTEN 981/docker-proxy
tcp6 0 0 :::4200  :::* LISTEN 994/docker-proxy
tcp6 0 0 :::12201 :::* LISTEN 28686/docker-proxy
tcp6 0 0 :::6379  :::* LISTEN 32595/docker-proxy
tcp6 0 0 :::2222  :::* LISTEN 1036/docker-proxy
```
Docker ps:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2ecfcc9e27c1 datalust/seq-input-gelf:latest "/run.sh" 46 minutes ago Up 9 seconds 0.0.0.0:12201->12201/tcp laradock_seq-input-gelf_1
82fab54a885f datalust/seq:latest "/run.sh" 6 hours ago Up 12 seconds 0.0.0.0:5341->5341/tcp, 0.0.0.0:9595->80/tcp seq
efa6294f46ec laradock_nginx "/docker-entrypoint.…" 3 days ago Up 2 hours 0.0.0.0:81->81/tcp, 0.0.0.0:85->80/tcp, 0.0.0.0:444->443/tcp laradock_nginx_1
2603eecd14ff centrifugo/centrifugo:latest "centrifugo --admin …" 5 weeks ago Up 2 hours 0.0.0.0:8000->8000/tcp centrifugo
972219fdfe72 laradock_php-fpm "docker-php-entrypoi…" 6 weeks ago Up 2 hours 9000/tcp laradock_php-fpm_1
00babde04deb laradock_workspace "/sbin/my_init" 6 weeks ago Up 2 hours 0.0.0.0:3000-3001->3000-3001/tcp, 0.0.0.0:4200->4200/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:2222->22/tcp, 0.0.0.0:8001->8000/tcp laradock_workspace_1
b38d17dfe6fd laradock_redis "docker-entrypoint.s…" 7 weeks ago Up 2 hours 0.0.0.0:6379->6379/tcp laradock_redis_1
5f427856464b docker:19.03-dind "dockerd-entrypoint.…" 7 weeks ago Up 2 hours 2375-2376/tcp
```
Container:
docker inspect 2ecfcc9e27c1
[
{
"Id": "2ecfcc9e27c1707a6505b94f85f10db3d9e50294b3d917a49193d4a13b1a9533",
"Created": "2020-10-24T00:04:41.109211Z",
"Path": "/run.sh",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 31665,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-10-24T01:18:56.113073771Z",
"FinishedAt": "2020-10-24T01:15:57.317327958Z"
},
"Image": "sha256:f38aa667762f7ec94470381cfc694f3a56ac12950d6a2a91a2125fb369e6f922",
"ResolvConfPath": "/var/lib/docker/containers/2ecfcc9e27c1707a6505b94f85f10db3d9e50294b3d917a49193d4a13b1a9533/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/2ecfcc9e27c1707a6505b94f85f10db3d9e50294b3d917a49193d4a13b1a9533/hostname",
"HostsPath": "/var/lib/docker/containers/2ecfcc9e27c1707a6505b94f85f10db3d9e50294b3d917a49193d4a13b1a9533/hosts",
"LogPath": "/var/lib/docker/containers/2ecfcc9e27c1707a6505b94f85f10db3d9e50294b3d917a49193d4a13b1a9533/2ecfcc9e27c1707a6505b94f85f10db3d9e50294b3d917a49193d4a13b1a9533-json.log",
"Name": "/laradock_seq-input-gelf_1",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": [],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "laradock_default",
"PortBindings": {
"12201/tcp": [
{
"HostIp": "",
"HostPort": "12201"
}
]
},
"RestartPolicy": {
"Name": "unless-stopped",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/5298058a795f83d9e86b1b8740617801f5c2bb2cdafb98af33aac792ba89e739-init/diff:/var/lib/docker/overlay2/ed3a80ad4efa9b8276672d59a21aa737733c7799084b4e134bc6c2efe3b6ef72/diff:/var/lib/docker/overlay2/2cc724d60dd8d915cbd81a44951988516f5c858db29dafe20ae8225ec727630a/diff:/var/lib/docker/overlay2/048682b30311cd1ded7e21f4b660edac34b4c2d7df0b8c38401f84138d8b3a83/diff:/var/lib/docker/overlay2/90919bd42ef5416ef38d320eb8e254a77f31c504e0dbee659dd50479768e4743/diff:/var/lib/docker/overlay2/a7fd0c10de414b09f22d07bca10e70c90081038a04abbf61eb391aa3211c41b7/diff:/var/lib/docker/overlay2/abee0aa1b60197ec3c7ed6531c848e5343d6a177c3cea31112b9be4f3f2e4f93/diff:/var/lib/docker/overlay2/ba32dc821992935c235bb0716296bc9f18b00adb3e444c412951e6d524de2f99/diff:/var/lib/docker/overlay2/b5149eec63b97c516a2dea575716d1a6d912efba4ab7ca1b1c99da89e2e89e48/diff:/var/lib/docker/overlay2/38e7570a730d82deda53c0ca31f02d78a1608bb9841f8aacb87273aad798af0f/diff",
"MergedDir": "/var/lib/docker/overlay2/5298058a795f83d9e86b1b8740617801f5c2bb2cdafb98af33aac792ba89e739/merged",
"UpperDir": "/var/lib/docker/overlay2/5298058a795f83d9e86b1b8740617801f5c2bb2cdafb98af33aac792ba89e739/diff",
"WorkDir": "/var/lib/docker/overlay2/5298058a795f83d9e86b1b8740617801f5c2bb2cdafb98af33aac792ba89e739/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "2ecfcc9e27c1",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"12201/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"SEQ_ADDRESS=http://seq:5341",
"GELF_ENABLE_DIAGNOSTICS=True",
"GELF_ADDRESS=tcp://0.0.0.0:12201",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"SEQ_API_KEY="
],
"Cmd": null,
"Image": "datalust/seq-input-gelf:latest",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/run.sh"
],
"OnBuild": null,
"Labels": {
"Description": "Seq",
"Vendor": "Datalust Pty Ltd",
"com.docker.compose.config-hash": "35e7de6b63a6cd3fb8304fcbda8b2072f1fe0b7d6ee92b40cda9d53fd6fe3f64",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "laradock",
"com.docker.compose.service": "seq-input-gelf",
"com.docker.compose.version": "1.17.1"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "4500d37476e5e34e9e2a170a08d76b127d137a7d84a682e44e0845828cd12b91",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"12201/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "12201"
}
]
},
"SandboxKey": "/var/run/docker/netns/4500d37476e5",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"laradock_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"2ecfcc9e27c1",
"seq-input-gelf"
],
"NetworkID": "c397e6ab39dd910f62c93415b1b967137789b7647c3e803e3a1f55d4b65461fc",
"EndpointID": "1bc6a7386665d037c02cbe814049cbff412b0cd756749ff6423c113e9a2ff4b2",
"Gateway": "172.20.0.1",
"IPAddress": "172.20.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:14:00:02",
"DriverOpts": null
}
}
}
}
]
This works and appears in Seq.
curl -XPOST "http://localhost:5341/api/events/raw?clef" -d "{'@t':'2018-06-07T03:44:57.8532799Z','@mt':'Hello, {User}','User':'alice'}"
This returns without error, but nothing appears in Seq (no STDOUT output, nothing when listening on the port, no error):
echo '{ "version": "1.1", "host": "example.org", "short_message": "A short message", "level": 1, "timestamp": 1557327843, "_some_info": "foo" }' | nc -w1 127.0.0.1 12201
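One thing worth checking with the TCP transport: GELF over TCP delimits messages with a null byte, while `echo | nc` terminates the line with `\n`, so the server may simply be buffering the bytes waiting for a frame terminator. A minimal sketch of sending a correctly framed message (host and port taken from the example above):

```python
import json
import socket

event = {
    "version": "1.1",
    "host": "example.org",
    "short_message": "A short message",
    "level": 1,
    "timestamp": 1557327843,
    "_some_info": "foo",
}

# GELF TCP frames are delimited by a null byte, not a newline.
payload = json.dumps(event).encode("utf-8") + b"\x00"

def send_gelf_tcp(payload: bytes, host: str = "127.0.0.1", port: int = 12201) -> None:
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)

# send_gelf_tcp(payload)  # uncomment when a GELF TCP listener is reachable
```

The shell equivalent is `printf '%s\0' '{ ... }' | nc -w1 127.0.0.1 12201`, which emits the trailing null byte that `echo` does not.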
Attempting to use the https://github.com/hedii/laravel-gelf-logger package also fails. The channel configuration in Laravel's config/logging:
'seq' => [
'driver' => 'custom',
'via' => \Hedii\LaravelGelfLogger\GelfLoggerFactory::class,
// This optional option determines the processors that should be
// pushed to the handler. This option is useful to modify a field
// in the log context (see NullStringProcessor), or to add extra
// data. Each processor must be a callable or an object with an
// __invoke method: see monolog documentation about processors.
// Default is an empty array.
'processors' => [
\Hedii\LaravelGelfLogger\Processors\NullStringProcessor::class,
// another processor...
],
// This optional option determines the minimum "level" a message
// must be in order to be logged by the channel. Default is 'debug'
'level' => 'debug',
// This optional option determines the channel name sent with the
// message in the 'facility' field. Default is equal to app.env
// configuration value
'name' => 'example-app',
// This optional option determines the system name sent with the
// message in the 'source' field. When forgotten or set to null,
// the current hostname is used.
'system_name' => 'laravel',
// This optional option determines if you want the UDP, TCP or HTTP
// transport for the gelf log messages. Default is UDP
'transport' => 'tcp',
// This optional option determines the host that will receive the
// gelf log messages. Default is 127.0.0.1
'host' => '127.0.0.1',
// This optional option determines the port on which the gelf
// receiver host is listening. Default is 12201
'port' => 12201,
// This optional option determines the path used for the HTTP
// transport. When forgotten or set to null, default path '/gelf'
// is used.
'path' => null,
// This optional option determines the maximum length per message
// field. When forgotten or set to null, the default value of
// \Monolog\Formatter\GelfMessageFormatter::DEFAULT_MAX_LENGTH is
// used (currently this value is 32766)
'max_length' => null,
// This optional option determines the prefix for 'context' fields
// from the Monolog record. Default is null (no context prefix)
'context_prefix' => null,
// This optional option determines the prefix for 'extra' fields
// from the Monolog record. Default is null (no extra prefix)
'extra_prefix' => null,
],
Logs from Seq (no errors):
seq | ────────────────────────────────────────
seq | Seq ♦ Machine data, for humans.
seq | ─────────── © 2020 Datalust Pty Ltd ────
seq |
seq | Running as server; press Ctrl+C to exit.
seq |
seq | [00:51:01 INF] Seq "2020.3.4761" running on OS "Linux 5.4.0-52-generic #57~18.04.1-Ubuntu SMP Thu Oct 15 14:04:49 UTC 2020"
seq | [00:51:01 INF] Seq detected 8222.748672 MB of RAM
seq | [00:51:03 INF] Seq listening on ["http://localhost/", "http://localhost:5341/"]
seq | [00:51:04 INF] Opening event store at "/data/Stream/stream.flare"
seq | [00:51:04 INF] Opening metastore "/data/Documents/documents.lmdb"
seq | [00:51:04 INF] Cache warm-up is required
seq | [00:51:04 INF] Initial memory cache warm-up "completed" in 49.7612 ms
seq | [00:51:04 INF] 1 segments warmed up ({"Schemata": 2, "Strings": 0})
seq | [00:51:04 INF] Storage subsystem available
seq | [00:52:04 INF] 1 more generation 2 garbage collection(s) occurred
seq | [00:56:04 INF] Metrics sampled
From within the container:
docker-compose exec seq-input-gelf bash
root@2ecfcc9e27c1:/# curl -X GET http://seq:5341
curl: (6) Could not resolve host: seq
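The resolution failure above suggests there is no service named `seq` on the `laradock_default` network that `seq-input-gelf` is attached to — the Seq container appears to run under a different Compose project. One way to make the hostname resolvable, sketched under the assumption that both stacks can join a shared user-defined network created with `docker network create shared`:

```yaml
# In the compose file that runs seq:
services:
  seq:
    image: datalust/seq:latest
    networks:
      - shared

# In the compose file that runs seq-input-gelf:
services:
  seq-input-gelf:
    image: datalust/seq-input-gelf:latest
    environment:
      # Use the port Seq listens on *inside* its container,
      # which may differ from the host-side port mapping.
      SEQ_ADDRESS: "http://seq:5341"
    networks:
      - shared

networks:
  shared:
    external: true
```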
Logs from seq-input-gelf:
seq-input-gelf_1 | {"@t":"2020-10-24T00:04:48.327483112Z","@l":"DEBUG","@mt":"Starting GELF server"}
seq-input-gelf_1 | {"@t":"2020-10-24T00:04:48.329388978Z","@l":"DEBUG","@mt":"Setting up for TCP"}
seq-input-gelf_1 | Failed to send an event batch
seq-input-gelf_1 | System.Net.Http.HttpRequestException: No such device or address ---> System.Net.Sockets.SocketException: No such device or address
seq-input-gelf_1 | at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
seq-input-gelf_1 | --- End of inner exception stack trace ---
seq-input-gelf_1 | at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Threading.Tasks.ValueTask`1.get_Result()
seq-input-gelf_1 | at System.Net.Http.HttpConnectionPool.CreateConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Threading.Tasks.ValueTask`1.get_Result()
seq-input-gelf_1 | at System.Net.Http.HttpConnectionPool.WaitForCreatedConnectionAsync(ValueTask`1 creationTask)
seq-input-gelf_1 | at System.Threading.Tasks.ValueTask`1.get_Result()
seq-input-gelf_1 | at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Net.Http.AuthenticationHelper.SendWithAuthAsync(HttpRequestMessage request, Uri authUri, ICredentials credentials, Boolean preAuthenticate, Boolean isProxyAuth, Boolean doRequestAuth, HttpConnectionPool pool, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)
seq-input-gelf_1 | at SeqCli.Ingestion.LogShipper.SendBatchAsync(SeqConnection connection, String apiKey, IReadOnlyCollection`1 batch, Boolean logSendFailures) in /home/appveyor/projects/seqcli/src/SeqCli/Ingestion/LogShipper.cs:line 155
seq-input-gelf_1 | at SeqCli.Ingestion.LogShipper.ShipEvents(SeqConnection connection, String apiKey, ILogEventReader reader, InvalidDataHandling invalidDataHandling, SendFailureHandling sendFailureHandling, Func`2 filter) in /home/appveyor/projects/seqcli/src/SeqCli/Ingestion/LogShipper.cs:line 55
seq-input-gelf_1 | {"@t":"2020-10-24T00:05:48.327737397Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":1},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":1},"server":{"process_err":0,"process_ok":1,"receive_err":0,"receive_ok":1,"tcp_conn_accept":1,"tcp_conn_close":1,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:06:48.328056289Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:07:48.328401615Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | Failed to send an event batch
seq-input-gelf_1 | System.Net.Http.HttpRequestException: No such device or address ---> System.Net.Sockets.SocketException: No such device or address
seq-input-gelf_1 | at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
seq-input-gelf_1 | --- End of inner exception stack trace ---
seq-input-gelf_1 | at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Threading.Tasks.ValueTask`1.get_Result()
seq-input-gelf_1 | at System.Net.Http.HttpConnectionPool.CreateConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Threading.Tasks.ValueTask`1.get_Result()
seq-input-gelf_1 | at System.Net.Http.HttpConnectionPool.WaitForCreatedConnectionAsync(ValueTask`1 creationTask)
seq-input-gelf_1 | at System.Threading.Tasks.ValueTask`1.get_Result()
seq-input-gelf_1 | at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Net.Http.AuthenticationHelper.SendWithAuthAsync(HttpRequestMessage request, Uri authUri, ICredentials credentials, Boolean preAuthenticate, Boolean isProxyAuth, Boolean doRequestAuth, HttpConnectionPool pool, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)
seq-input-gelf_1 | at SeqCli.Ingestion.LogShipper.SendBatchAsync(SeqConnection connection, String apiKey, IReadOnlyCollection`1 batch, Boolean logSendFailures) in /home/appveyor/projects/seqcli/src/SeqCli/Ingestion/LogShipper.cs:line 155
seq-input-gelf_1 | at SeqCli.Ingestion.LogShipper.ShipEvents(SeqConnection connection, String apiKey, ILogEventReader reader, InvalidDataHandling invalidDataHandling, SendFailureHandling sendFailureHandling, Func`2 filter) in /home/appveyor/projects/seqcli/src/SeqCli/Ingestion/LogShipper.cs:line 55
seq-input-gelf_1 | {"@t":"2020-10-24T00:08:48.328635883Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":1},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":1},"server":{"process_err":0,"process_ok":1,"receive_err":0,"receive_ok":1,"tcp_conn_accept":1,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:09:48.328914916Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:10:48.329239160Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":1,"tcp_conn_timeout":1,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:11:48.329485601Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:12:48.329734195Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:13:48.329960761Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:14:48.330128423Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:15:48.330304681Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:16:48.330611926Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:17:48.330802587Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | Failed to send an event batch
seq-input-gelf_1 | System.Net.Http.HttpRequestException: No such device or address ---> System.Net.Sockets.SocketException: No such device or address
seq-input-gelf_1 | at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
seq-input-gelf_1 | --- End of inner exception stack trace ---
seq-input-gelf_1 | at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Threading.Tasks.ValueTask`1.get_Result()
seq-input-gelf_1 | at System.Net.Http.HttpConnectionPool.CreateConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Threading.Tasks.ValueTask`1.get_Result()
seq-input-gelf_1 | at System.Net.Http.HttpConnectionPool.WaitForCreatedConnectionAsync(ValueTask`1 creationTask)
seq-input-gelf_1 | at System.Threading.Tasks.ValueTask`1.get_Result()
seq-input-gelf_1 | at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Net.Http.AuthenticationHelper.SendWithAuthAsync(HttpRequestMessage request, Uri authUri, ICredentials credentials, Boolean preAuthenticate, Boolean isProxyAuth, Boolean doRequestAuth, HttpConnectionPool pool, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
seq-input-gelf_1 | at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)
seq-input-gelf_1 | at SeqCli.Ingestion.LogShipper.SendBatchAsync(SeqConnection connection, String apiKey, IReadOnlyCollection`1 batch, Boolean logSendFailures) in /home/appveyor/projects/seqcli/src/SeqCli/Ingestion/LogShipper.cs:line 155
seq-input-gelf_1 | at SeqCli.Ingestion.LogShipper.ShipEvents(SeqConnection connection, String apiKey, ILogEventReader reader, InvalidDataHandling invalidDataHandling, SendFailureHandling sendFailureHandling, Func`2 filter) in /home/appveyor/projects/seqcli/src/SeqCli/Ingestion/LogShipper.cs:line 55
seq-input-gelf_1 | {"@t":"2020-10-24T00:18:48.331061506Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":1},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":1},"server":{"process_err":0,"process_ok":1,"receive_err":0,"receive_ok":1,"tcp_conn_accept":1,"tcp_conn_close":1,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:19:48.331239277Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:20:48.331532677Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:21:48.331778979Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:22:48.332102822Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:23:48.332449554Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:24:48.332754494Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:25:48.333062811Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:50:52.543461325Z","@l":"DEBUG","@mt":"Starting GELF server"}
seq-input-gelf_1 | {"@t":"2020-10-24T00:50:52.545627379Z","@l":"DEBUG","@mt":"Setting up for TCP"}
seq-input-gelf_1 | {"@t":"2020-10-24T00:51:52.543752073Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:52:52.543964897Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:53:52.544278253Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:54:52.544553289Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:55:52.544777399Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:56:52.545051947Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:57:52.545264216Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:58:52.545567218Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
seq-input-gelf_1 | {"@t":"2020-10-24T00:59:52.545875615Z","@l":"DEBUG","@mt":"Collected GELF server metrics","process":{"msg":0},"receive":{"chunk":0,"msg_chunked":0,"msg_incomplete_chunk_overflow":0,"msg_unchunked":0},"server":{"process_err":0,"process_ok":0,"receive_err":0,"receive_ok":0,"tcp_conn_accept":0,"tcp_conn_close":0,"tcp_conn_timeout":0,"tcp_msg_overflow":0}}
I currently have problems ingesting GELF events. I have a working Seq server (5.1.3004) running on Windows, and I have installed the seq.input.gelf app as explained on this page.
However, when trying to send an event via ncat like so:
echo -n '{ "version": "1.1", "host": "example.org", "short_message": "A short message", "level": 1, "timestamp": 1557327843, "_some_info": "foo" }' | ncat -w5 -u my.log.server 12201
nothing happens, even when sent from the same machine.
When sending a wrongly formatted message, e.g. by setting level to a string, at least an error message is logged:
echo -n '{ "version": "1.1", "host": "example.org", "short_message": "A short message", "level": "DEBUG", "timestamp": 1557327843, "_some_info": "foo" }' | ncat -w5 -u my.log.server 12201
{"@t":"2019-05-08T15:18:50.6925401Z","@mt":"GELF processing failed","@m":"GELF processing failed","@i":"07b8c06e","@l":"ERROR","@x":"invalid type: string \"DEBUG\", expected u8 at line 1 column 95","AppId":"hostedapp-68","AppInstanceId":"appinstance-70"}
Any hints as to where I went wrong? Thanks!
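For what it's worth, the error above points at the relevant constraint: the GELF `level` field is a numeric syslog severity (0–7), not a string. A small sketch of translating familiar names into the numbers GELF expects before sending:

```python
# Syslog severities used by the GELF "level" field (0 = most severe).
SYSLOG_LEVELS = {
    "emergency": 0,
    "alert": 1,
    "critical": 2,
    "error": 3,
    "warning": 4,
    "notice": 5,
    "informational": 6,
    "debug": 7,
}

def gelf_level(name: str) -> int:
    """Translate a level name into the numeric value GELF expects."""
    return SYSLOG_LEVELS[name.lower()]
```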
Currently, the app treats built-in GELF (e.g. host) and Docker (e.g. container_id) properties specially, nesting them under gelf and docker structured properties, respectively. This may not be suitable for all environments, particularly when regular structured event properties might be interpreted as being related to Docker when they're not.
Some kind of opt-in or opt-out should be considered here.
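A sketch of what opting out could look like, using a hypothetical GELF_NEST_FIELDS setting — the setting name, field lists, and grouping logic here are illustrative, not the app's actual implementation:

```python
import os

# Hypothetical setting; not an actual seq-input-gelf option.
NEST_FIELDS = os.environ.get("GELF_NEST_FIELDS", "True").lower() == "true"

# Illustrative field lists only.
GELF_BUILTINS = {"host", "facility"}
DOCKER_FIELDS = {"container_id", "container_name", "image_id", "image_name"}

def group_properties(event: dict) -> dict:
    """Nest well-known GELF/Docker fields under structured properties,
    or pass everything through untouched when nesting is disabled."""
    if not NEST_FIELDS:
        return dict(event)
    out, gelf, docker = {}, {}, {}
    for key, value in event.items():
        if key in GELF_BUILTINS:
            gelf[key] = value
        elif key in DOCKER_FIELDS:
            docker[key] = value
        else:
            out[key] = value
    if gelf:
        out["gelf"] = gelf
    if docker:
        out["docker"] = docker
    return out
```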
For the TCP transport, we should look at supporting TLS. As a starting point, there are TLS APIs for tokio, which is the async runtime this server uses.
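The shape of the change, sketched with Python's standard ssl module purely for illustration — the server itself is Rust and would use a tokio-compatible TLS library, and the certificate paths here are placeholders:

```python
import socket
import ssl

# Placeholder certificate/key paths; a real deployment would make these configurable.
CERT_FILE = "server.crt"
KEY_FILE = "server.key"

def make_tls_context() -> ssl.SSLContext:
    """Build a server-side TLS context; the accept loop wraps each TCP
    connection before handing it to the GELF frame reader."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # context.load_cert_chain(CERT_FILE, KEY_FILE)  # requires real files
    return context

def serve(context: ssl.SSLContext, host: str = "0.0.0.0", port: int = 12201) -> None:
    with socket.create_server((host, port)) as listener:
        with context.wrap_socket(listener, server_side=True) as tls_listener:
            conn, _addr = tls_listener.accept()  # TLS handshake happens here
            # ... read null-delimited GELF frames from conn ...
```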
Right now, we only check for expired message chunks while processing incoming chunks. That means unprocessed, expired chunks could sit around for a very long time if the server is mostly working with unchunked messages.
We could use a simple wrapping counter and clean up expired chunks after every 1000 or so unchunked messages received.
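A minimal sketch of that counter (the names are illustrative, and the real server would implement this in Rust):

```python
class ChunkSweepCounter:
    """Count unchunked messages and signal a sweep of expired chunks
    every `sweep_every` messages, wrapping so the counter never grows
    without bound."""

    def __init__(self, sweep_every: int = 1000):
        self.sweep_every = sweep_every
        self.count = 0

    def on_unchunked(self) -> bool:
        # Wrap within [0, sweep_every); a sweep fires each time we wrap.
        self.count = (self.count + 1) % self.sweep_every
        return self.count == 0
```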
A seq-input-gelf container that supports the ARM architecture would be great, for running within a Minikube environment on an M1 Mac.
Hi!
I am running Seq with gelf plugin in a kubernetes environment.
Logs are shipped by fluent-bit.
Most log messages appear correctly in Seq, but some messages cannot be parsed correctly. Sometimes I get the error "GELF processing failed" and in the details I can see "expected , or } at line 1 column 1392". So I guess fluent-bit is sending some incorrectly formatted data to sqelf.
The issue is that it's very hard to find out which log entry wasn't sent correctly, because it isn't included in the error message, so I'm not able to find out where the error comes from. Is there a way to see the message that can't be parsed, or do you have any other hint on how to debug this?
Best regards
Hi!
Apologies if this is the wrong forum, let me know if there's a better place to post this. I've set up my organization's instance of Seq to ingest Docker logs from an image of NATS Streaming, but every single event imported shows up as an error:
I'm trying to understand the reason for this. The [INF] tag indicates an information-level event, as opposed to [WRN] or [ERR]. What logic does this service use to determine the level of an event? Any help would be much appreciated.
Configurations for Seq and NATS are included below.
NATS Configuration:
version: '3'
services:
nats:
image: nats-streaming:latest
command: -m 8222 --cluster_id test-cluster --store SQL --sql_driver postgres --sql_source "${connection_string}"
container_name: Nats
ports:
- "4222:4222"
- "8222:8222"
network_mode: host
logging:
driver: "gelf"
options:
gelf-address: "udp://localhost:12201"
restart: always
Seq:
version: '2'
services:
seq:
image: datalust/seq:latest
container_name: Seq_Server
volumes:
- seq:/data
ports:
- "5341:80"
environment:
- ACCEPT_EULA=Y
mem_limit: 3g
memswap_limit: 3g
restart: always
seq-input-gelf:
image: datalust/seq-input-gelf:latest
ports:
- "12201:12201/udp"
network_mode: host
environment:
SEQ_ADDRESS: "http://localhost:5341"
restart: unless-stopped
depends_on:
- seq
volumes:
seq:
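As I understand it (worth verifying against the Docker documentation), Docker's gelf logging driver does not parse markers like [INF] out of the message text at all; it sets the GELF level from the stream each line was written to, roughly informational (6) for stdout and error (3) for stderr. NATS writes its log output to stderr, which would explain every event arriving as an error. That assumed mapping, as a sketch:

```python
# Assumed behaviour of Docker's gelf logging driver (verify against
# the Docker docs): the GELF level comes from the output stream,
# not from parsing the message text.
STREAM_TO_GELF_LEVEL = {
    "stdout": 6,  # informational
    "stderr": 3,  # error
}

def level_for(stream: str) -> int:
    return STREAM_TO_GELF_LEVEL[stream]
```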