
fluent-plugin-remote_syslog's People

Contributors

cosmo0920, daipom, derjohn, dlackty, fujimotos, joker1007, scalp42, srbhklkrn, tomykaira


fluent-plugin-remote_syslog's Issues

Output to remote rsyslog problem

Hi, I have configured td-agent with the remote_syslog plugin. I feed data into td-agent and, in parallel, store the output to a file on the td-agent instance. I can see changes in the file, but nothing arrives at the remote rsyslog output. The hosts file contains the ip/hostname mapping, and iptables is down on both instances. Please help me solve this problem. Thanks!

support for ${} values

Is there any possibility of implementing something like the ${} values in the elasticsearch output plugin? It would add more ways to configure this plugin.
I meant a record\[.+\] pattern, since there is tag_parts\[.+\] support already.

TLS with self-signed certificate

Hi, I am trying to use this plugin (via the fluentd Kubernetes daemonset Debian syslog Docker image, which as far as I can tell uses this plugin for output).

I need to use TLS encryption, and the syslog receiver uses a self-signed certificate. I therefore tried to disable certificate verification with the verify_mode parameter; however, I still get a verification error and no connection.

This is my output configuration:

  <label @OUTPUT>
    <match **>
      @type remote_syslog
      host "syslogserver"
      port 12555
      protocol tcp
      tls true
      verify_mode 0
      packet_size 65535
      <buffer>
        retry_max_interval 300
      </buffer>
      <format>
        @type "json"
      </format>
    </match>
  </label>

Name matching is done via /etc/hosts (through Kubernetes hostAliases) entry.

The self-signed certificate has "syslogserver" as both issuer and subject CN, but this should not matter since I am trying to disable certificate verification anyway.

The error message I get is:

2021-06-11 17:41:49 +0000 [warn]: #0 failed to flush the buffer. retry_time=0 next_retry_seconds=2021-06-11 17:41:50 +0000 chunk="5c48105438fdcf12caab52270b36345b" error_class=RuntimeError error="verification error"
  2021-06-11 17:41:49 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/remote_syslog_sender-1.2.1/lib/remote_syslog_sender/tcp_sender.rb:73:in `block in connect'
  2021-06-11 17:41:49 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/remote_syslog_sender-1.2.1/lib/remote_syslog_sender/tcp_sender.rb:52:in `synchronize'
  2021-06-11 17:41:49 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/remote_syslog_sender-1.2.1/lib/remote_syslog_sender/tcp_sender.rb:52:in `connect'
  2021-06-11 17:41:49 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/remote_syslog_sender-1.2.1/lib/remote_syslog_sender/tcp_sender.rb:38:in `initialize'
  2021-06-11 17:41:49 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-remote_syslog-1.0.0/lib/fluent/plugin/out_remote_syslog.rb:136:in `new'
  2021-06-11 17:41:49 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-remote_syslog-1.0.0/lib/fluent/plugin/out_remote_syslog.rb:136:in `create_sender'
  2021-06-11 17:41:49 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-remote_syslog-1.0.0/lib/fluent/plugin/out_remote_syslog.rb:91:in `write'
  2021-06-11 17:41:49 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.11.2/lib/fluent/plugin/output.rb:1133:in `try_flush'
  2021-06-11 17:41:49 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.11.2/lib/fluent/plugin/output.rb:1439:in `flush_thread_run'
  2021-06-11 17:41:49 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.11.2/lib/fluent/plugin/output.rb:461:in `block (2 levels) in start'
  2021-06-11 17:41:49 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.11.2/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'

I also tried other values for verify_mode after some searching:

  • none
  • OpenSSL::SSL::VERIFY_NONE

but the result - and the error message - is the same.

Can you advise me what the correct use of the parameter is?
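For reference (and with the caveat that whether the plugin actually forwards this parameter to OpenSSL is exactly the open question here), the verification modes OpenSSL itself understands are plain integer constants, which can be checked in Ruby:

```ruby
require "openssl"

# OpenSSL's verification-mode constants are integers; 0 should mean
# "do not verify the peer certificate" if the value reaches OpenSSL.
puts OpenSSL::SSL::VERIFY_NONE  # => 0
puts OpenSSL::SSL::VERIFY_PEER  # => 1
```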

TCP Remote Shipping not working

Hi,

I'm trying to make the tcp remote syslog shipment work, but it's just not shipping anything (nor logging anything to the td-agent log file).

As additional information: the remote_syslog settings live inside a copy @type, with an elasticsearch shipment configured side-by-side that works flawlessly.

I'm attaching my settings below; could you please advise?

  <match **>
    @type copy
    <store>
      @type "remote_syslog"
      host "10.250.51.5"
      hostname "syslog-test"
      port 514
      severity "debug"
      protocol tcp
      program "td-agent"
      packet_size 10240
      timeout 0
      keep_alive true
      <buffer>
        flush_mode interval
        flush_interval 5s
      </buffer>
      <format>
        @type "single_value"
        message_key "msg"
      </format>
    </store>
  </match>

Then, from inside the container (just to show that the syslog server is reachable):

root@cluster-logger-nqdpm:/# telnet 10.250.51.5 514
Trying 10.250.51.5...
Connected to 10.250.51.5.
Escape character is '^]'.

Then on the docker host (just to show that there are no open connections to the syslog server):

root@ip-10-247-4-56:~# netstat -pan | grep 10.250.51.5
root@ip-10-247-4-56:~#

Is there any way to debug this?

Thanks a lot! =)

Remote syslog not working

Hi @joker1007

I have this simple config:

syslog_docker_input.conf

<source>
  @type syslog
  tag syslog.docker.containers
  port 5140
  protocol_type udp
  <parse>
    message_format rfc3164
  </parse>
  source_hostname_key hostname
  source_address_key source_ip
  priority_key priority
  facility_key facility
  log_level info
</source>

To test, I'm using a simple container printing "hello world" in JSON and using the syslog driver with FluentD syslog input:

docker run --rm --name test --log-driver syslog --log-opt syslog-address=udp://127.0.0.1:5140 --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}" --log-opt syslog-format=rfc3164  alpine echo '{"hello": "world"}'

I enabled stdout output for syslog.docker.containers.** tags:

syslog_docker_output.conf

<match syslog.docker.containers.**>
  @type stdout
</match>

I can see the logs fine in fluentd logs:

2019-06-18 04:16:11.000000000 +0000 syslog.docker.containers.daemon.info: {"host":"default-ubuntu-1804","ident":"alpine/test/154dd9f55d5e","pid":"16783","message":"{\"hello\": \"world\"}","priority":"info","facility":"daemon","source_ip":"10.0.2.15","hostname":"default-ubuntu-1804.vagrantup.com"}

When I try to use remote_syslog, it doesn't appear the logs are being sent:

syslog_docker_output_papertrail.conf

<match syslog.docker.containers.**>
  @type remote_syslog
  host logs42.papertrailapp.com
  port 42000
  facility user
  severity notice
  program fluentd
  protocol udp
  tls false
  hostname default-ubuntu-1804
  log_level debug
  <format>
    @type single_value
    message_key message
  </format>
</match>

Any chance you have an idea? 🙇

Can't use tag placeholders

Hi @dlackty !

I have the following tag: marathon.myapp and I'm trying to tag it as myapp:

<match marathon.*>
  type remote_syslog
  host blabla.papertrailapp.com
  port 42000
  num_threads 2
  tag ${tag} # tried also tag_parts[2] or tag_parts.last
  hostname example.com
  output_include_time no
  output_include_tag no
  output_data_type attr:container_name,log
</match>

Unfortunately, this is not grabbing the 2nd part of the tag:

I'm also seeing:

2015-12-08 01:27:51 +0000 [warn]: parameter 'num_threads' in <match marathon.*>
xxxxx
</match> is not used.

Any idea on how to have fluentd interpolate tag_parts[x] correctly?

Thanks in advance for the help!

Possible to pass through the hostname and the tag fields to the remote syslog?

I have an architecture where I get parsed fields to fluentd. I then have a requirement to send these events to a remote syslog in addition to sending to ES. I tested with your plugin and it works well except that I was interested in sending the original hostname and the tag (program) to the remote syslog. Is that possible?

Tag Rewrites Don't Work

Hi there,

I have the fluentd docker logging driver writing to my fluentd instance running this plugin. When I use the plugin using the following configuration:

<match docker.**>
  type remote_syslog
  host <somehost>
  port xxxxx
  severity debug
  tag my.${tag}
</match>

What I would expect based on the fluent-mixin-rewrite documentation is that my tag would read my.docker.<containername>. Instead it is my.${tag} literally. I could be misconfiguring or misunderstanding this functionality... is that the case?

If not, from a glance at the docs from fluent-mixin-rewrite and your code I wonder if on this line doing the emit should be passing emit_tag instead of just tag?

Thanks in advance!

tcp rst problem

My log collection system like this:

fluent => haproxy ==> rsyslog

In my case, haproxy closes an idle connection 5 minutes after the last message: haproxy actively sends a FIN to fluentd, and fluentd enters CLOSE_WAIT. If fluentd sends a message at this point, haproxy immediately replies with RST, after which the TCP connection is closed. The message that was just sent is lost.

Message time sent through syslog_protocol is Time.now instead of original log timestamp

Hello,
We're using fluentd td-agent to get logs from linux servers (/var/log/secure) and send them to a remote destination using https://github.com/reproio/remote_syslog_sender and https://github.com/eric/syslog_protocol

We would like to keep the original log timestamp in place of the syslog message timestamp when sending the syslog message to the destination. However, the original log timestamp seems to be overwritten by Time.now, i.e. the time the packet is sent.

We're using TCP and syslog RFC 3164

This is an extract of our td-agent configuration:

    <source>
      @type tail
      path /var/log/secure
      pos_file /var/log/td-agent/buffer/secure.pos
      tag xxx.sys.yyy.secure
      format /^(?<time>[^ ]*\s*[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$/
      enable_watch_timer false
    </source>
  <match **.sys.**secure>
    @type remote_syslog
    @id soc
    <buffer>
      @type file
    </buffer>
    host xxxxx
    port 514
    protocol tcp
    packet_size 20480
    severity debug
  </match>

Example log file:

Mar 24 11:48:40 myhostname sshd[25533]: reprocess config line 126(...)

We have captured the network packet produced by the plugin: we can see that the syslog timestamp is equal to the time of packet sending (11:49:47 truncated at the second) instead of the original log timestamp (11:48:40)

(wireshark screenshots 1 and 2 omitted)

What we see:

  • hidden is
    • either the hostname that produced the log ("host: myhostname" in the message)
    • or the log aggregator hostname (displayed in "syslog hostname")
  • USER.DEBUG is the PRI (<15>)
  • the syslog timestamp (RFC 3164 format): it is the date of packet sending.

We would like to have the original log timestamp here, as parsed by the td-agent configuration "time" variable.

I believe that https://github.com/eric/syslog_protocol supports it: here it gets the timestamp from the message and uses Time.now only if the time is not found or the PRI is incorrect:

https://github.com/eric/syslog_protocol/blob/master/lib/syslog_protocol/parser.rb#L9

    if pri and (pri = pri.to_i).is_a? Integer and (0..191).include?(pri)
      packet.pri = pri
    else
      # If there isn't a valid PRI, treat the entire message as content
      packet.pri = 13
      packet.time = Time.now
      packet.hostname = origin || 'unknown'
      packet.content = original_msg
      return packet
    end
    time = parse_time(msg)
    if time
      packet.time = Time.parse(time)
    else
      packet.time = Time.now
    end
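For illustration (this is a sketch, not the plugin's actual code), formatting a previously parsed event time in RFC 3164 style is straightforward in Ruby, so passing the parsed event time instead of Time.now would preserve the original timestamp:

```ruby
require "time"

# Hypothetical sketch: format the parsed event time the way RFC 3164
# expects ("Mar 24 11:48:40") instead of stamping Time.now at send time.
original = Time.parse("2016-03-24 11:48:40")
puts original.strftime("%b %e %H:%M:%S")  # => "Mar 24 11:48:40"
```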

Thanks!

noticing the following error in my logs

2016-03-08 01:24:37 +0000 [info]: shutting down input type="tail" plugin_id="object:2af8b78e6540"
2016-03-08 01:24:37 +0000 [warn]: emit transaction failed: error_class=RuntimeError error="can't modify frozen String" tag="fluent.info"
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluent-plugin-remote_syslog-0.3.2/lib/fluent/plugin/out_remote_syslog.rb:39:in `force_encoding'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluent-plugin-remote_syslog-0.3.2/lib/fluent/plugin/out_remote_syslog.rb:39:in `block (2 levels) in emit'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluent-plugin-remote_syslog-0.3.2/lib/fluent/plugin/out_remote_syslog.rb:37:in `each_pair'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluent-plugin-remote_syslog-0.3.2/lib/fluent/plugin/out_remote_syslog.rb:37:in `block in emit'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/event.rb:54:in `call'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/event.rb:54:in `each'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluent-plugin-remote_syslog-0.3.2/lib/fluent/plugin/out_remote_syslog.rb:36:in `emit'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/output.rb:32:in `next'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/plugin/out_copy.rb:74:in `emit'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/event_router.rb:88:in `emit_stream'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/event_router.rb:79:in `emit'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/engine.rb:165:in `block in log_event_loop'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/engine.rb:163:in `each'
  2016-03-08 01:24:37 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/engine.rb:163:in `log_event_loop'
2016-03-08 01:24:37 +0000 [error]: failed to emit fluentd's log event tag="fluent.info" event={"type"=>"tail", "plugin_id"=>"object:2af8b78e6540", "message"=>"shutting down input type=\"tail\" plugin_id=\"object:2af8b78e6540\""} error_class=RuntimeError error=#<RuntimeError: can't modify frozen String>

It doesn't happen often, but I have seen it several times over the last few weeks.
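The trace points at force_encoding, which mutates its receiver and therefore raises on frozen strings (fluentd's internal log event records can contain frozen strings). A minimal sketch of the usual dup workaround, not necessarily the plugin's actual fix:

```ruby
# force_encoding mutates the string in place, so calling it on a frozen
# string raises; dup-ing first gives a mutable copy to re-tag.
frozen = "hello".freeze
safe = frozen.dup.force_encoding("ASCII-8BIT")
puts safe.encoding.name  # => "ASCII-8BIT"
```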

mTLS support

Would it be possible in theory to enable mTLS support for this plugin?

If a log message fails to send, is there any error message?

I use td-agent with this plugin to send logs to a remote syslog. My td-agent service log doesn't contain any error messages, but the remote syslog is missing some of the logs, and I don't know what's wrong. Does the absence of errors in td-agent's log mean the messages were sent successfully?

output format

I use remote_syslog to send my original msg to my remote syslog service, and the msg arrives, but a date, host, and "fluentd" are prepended to my original message. I use the json format in my match; what can I do to get rid of the date, host, and "fluentd"?
e.g. <13>Nov 26 20:54:41 XXX.COM fluentd: {"metaData":{XXX}}

parameter 'program' is not used.

Sorry, I have a new issue. Could you help?

2016-02-04 07:52:42 -0500 [warn]: parameter 'program' in <match **>
type remote_syslog
host ****
port ****
severity debug
program fluentd
is not used.

problem with ${tag_parts[]} ?

My docker-compose manifest looks like

version: '2'
services:

  fluentd:
    build: fluentd/
    container_name: fluentd
    hostname: "env-192.168.0.101"
    ports:
      - "24224:24224"
      - "24284:24284"
    volumes:
      - ./fluentd_logs:/fluentd/log:rw
      - ./fluentd/rsyslog.conf:/fluentd/etc/fluent.conf
    links:
      - rsyslog:rsyslog

  logger:
    image: widgetpl/logger:v0.1
    hostname: "logger"
    container_name: logger
    logging:
      driver: fluentd
      options:
        fluentd-address: "localhost:24224"
        tag: "docker.{{(.Hostname)}}"
    environment:
      SLEEP: 1
      ENV1: value1
      ENV2: value2
    links:
      - fluentd:fluentd

  rsyslog:
    #image: widgetpl/rsyslog:v0.1
    build: ../STACK-282/rsyslog/
    container_name: rsyslog
    volumes:
      - ./rsyslog/template.conf:/etc/rsyslog.d/docker.conf
      - ./rsyslog/logs:/var/log/logs:rw
    ports:
      - "1515:515"
      - "1514:514"

and I have tried two different fluentd configs.
First:

  <match docker.**>
    @type remote_syslog
    @log_level debug
    host rsyslog
    port 515
    severity debug
    facility local7
    tag ${tag}
    hostname ${hostname}
  </match>

and the second:

  <match docker.**>
    @type remote_syslog
    @log_level debug
    host rsyslog
    port 515
    severity debug
    facility local7
    tag ${tag_parts[0]}
    hostname ${tag_parts[1]}
  </match>

and I have this setup for rsyslog

template (name="DynFile" type="string" string="/var/log/logs/%HOSTNAME%/%PROGRAMNAME%.log")

local7.* ?DynFile

When I use the first fluentd config I get

hostname: env-192.168.0.101
source: env-192.168.0.101
fromhost: fluentd.stack261_default
fromhost-ip: 172.18.0.3
syslogtag: docker.michal-Latitude-E6540:
programname: docker.michal-Latitude-E6540

and the logs are in in /var/log/logs/env-192.168.0.101/docker.michal-Latitude-E6540.log

and with the second config

hostname: fluentd.stack261_default
source: fluentd.stack261_default
fromhost: fluentd.stack261_default
fromhost-ip: 172.18.0.3
syslogtag: ${tag_parts[1]}
programname: ${tag_parts

and the logs are in /var/log/logs/fluentd.stack261_default/${tag_parts.log

Redirect logs to a syslog server based on the value of a field

Hello,

Is it possible to forward logs to a dedicated remote syslog server based on the value of a field, please (placeholders seem to be supported)?

@type remote_syslog
host ${record["Site"]}
port 9514

Site will be an FQDN where the logs should be forwarded!

Thanks for your help

PS: judging from this, it should be possible, shouldn't it? #31
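For what it's worth, fluentd v1 placeholders for record fields generally require that the field be declared as a buffer chunk key; whether out_remote_syslog expands placeholders in host is exactly the open question here. A hedged sketch (port and field name taken from the report above):

```
<match **>
  @type remote_syslog
  # ${Site} is only expanded if "Site" is listed as a buffer chunk key
  host ${Site}
  port 9514
  <buffer Site>
  </buffer>
</match>
```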

Long messages are truncated on UDP

Hi,

I'm using the plugin for sending to remote syslog over UDP.

Long messages (beyond ~950 chars) are truncated.

Is it possible to split long messages when the transport is UDP?
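The plugin does not appear to advertise such a feature; as a purely client-side sketch (the ~950-character budget is just the figure observed in this report), a long payload could be split into pieces before emitting:

```ruby
# Sketch: split a long message into pieces that fit a conservative
# UDP datagram budget (assumption: ~950 chars per datagram is safe).
def split_message(msg, limit = 950)
  msg.chars.each_slice(limit).map(&:join)
end

parts = split_message("x" * 2000)
puts parts.length  # => 3
```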

UTF-8 vs ISO8859-1

Hi,

I'm trying to send an ISO-8859-1 encoded file using your plugin; it seems fluentd can't handle this encoding:

2016-08-19 12:14:44 -0300 [warn]: source sequence is illegal/malformed utf-8, ignored error_class=JSON::GeneratorError tag="logs" record="{"message"=>"19-08-2016 12:13:52 Atualizando o status para Em Prepara\xE7\xE3o"}

It should be: "para Em Preparação"

Is there any way to set the input encoding when using your plugin?

Thank you!
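Ruby itself can transcode Latin-1 to UTF-8, so one hedged option is to convert the input before it reaches the JSON formatter (a sketch, not a plugin parameter):

```ruby
# The bytes \xE7\xE3 from the warning above are "çã" in ISO-8859-1;
# transcoding them to UTF-8 yields a string JSON generation can handle.
latin1 = "Em Prepara\xE7\xE3o".force_encoding("ISO-8859-1")
utf8 = latin1.encode("UTF-8")
puts utf8  # => "Em Preparação"
```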

Output with an FQDN as the host value makes a DNS lookup for every event

Hello,
I suspect that this plugin does not cache DNS results and performs a lookup for every output event.

If used like this:

type remote_syslog
  host "some.cool.host.com"
  ...

It will make tons and tons of DNS lookups against that host's nameserver ("self-DDoS-ing").

Workaround: set an IP address as the value; it will not perform lookups:

type remote_syslog
  host "127.0.0.1"
  ....

How about an expire_dns_cache option like the fluentd "forward" output plugin has?

rgds,
j
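As an interim measure (an assumption on my part, not a documented plugin feature), the hostname could be resolved once at startup and the resulting IP handed to the plugin; in plain Ruby the one-time lookup looks like this ("localhost" stands in for some.cool.host.com):

```ruby
require "resolv"

# Resolve once and reuse the IP address, so per-event sends do not
# trigger a fresh DNS lookup.
ip = Resolv.getaddress("localhost")
puts ip
```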

Unknown output plugin 'remote_syslog'

Hi. Could you help?

2016-01-31 20:13:24 -0500 [info]: reading config file path="/etc/td-agent/td-agent.conf"
2016-01-31 20:13:24 -0500 [info]: starting fluentd-0.12.19
2016-01-31 20:13:25 -0500 [info]: gem 'fluent-mixin-config-placeholders' version '0.3.0'
2016-01-31 20:13:25 -0500 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2016-01-31 20:13:25 -0500 [info]: gem 'fluent-plugin-mongo' version '0.7.11'
2016-01-31 20:13:25 -0500 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.3'
2016-01-31 20:13:25 -0500 [info]: gem 'fluent-plugin-s3' version '0.6.4'
2016-01-31 20:13:25 -0500 [info]: gem 'fluent-plugin-scribe' version '0.10.14'
2016-01-31 20:13:25 -0500 [info]: gem 'fluent-plugin-td' version '0.10.28'
2016-01-31 20:13:25 -0500 [info]: gem 'fluent-plugin-td-monitoring' version '0.2.1'
2016-01-31 20:13:25 -0500 [info]: gem 'fluent-plugin-webhdfs' version '0.4.1'
2016-01-31 20:13:25 -0500 [info]: gem 'fluentd' version '0.12.19'
2016-01-31 20:13:25 -0500 [info]: adding match pattern="test.*" type="file"
2016-01-31 20:13:25 -0500 [info]: adding match pattern="test2.*" type="stdout"
2016-01-31 20:13:25 -0500 [info]: adding match pattern="test2.**" type="remote_syslog"
2016-01-31 20:13:25 -0500 [error]: config error file="/etc/td-agent/td-agent.conf" error="Unknown output plugin 'remote_syslog'. Run 'gem search -rd fluent-plugin' to find plugins"
2016-01-31 20:13:25 -0500 [info]: process finished code=256
2016-01-31 20:13:25 -0500 [warn]: process died within 1 second. exit.

Syslog messages with severity "warning" are considered bad chunks by fluentd and discarded

The syslog_protocol gem is a dependency of fluent-plugin-remote_syslog, which is used to output logs to a remote syslog.
Logs created with severity "warning" are discarded by fluentd as bad chunks with the message below:

2020-02-05 15:18:38 +0530 [warn]: #0 got unrecoverable error in primary and no secondary error_class=ArgumentError error="'warning' is not a designated severity"
2020-02-05 15:18:38 +0530 [warn]: #0 suppressed same stacktrace
2020-02-05 15:18:38 +0530 [warn]: #0 bad chunk is moved to /tmp/fluent/backup/worker0/object_3fe96317eb5c/59dd0f8011f1edd9afad11041ad628f3.log

"warning" is a valid syslog severity: https://en.wikipedia.org/wiki/Syslog#Severity_level
The source code of syslog_protocol still only accepts the deprecated keyword "warn".
Can you please suggest a way forward?

Thanks
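As a hedged interim workaround (hypothetical names, not a plugin feature), the full severity names could be normalized to the abbreviated keywords syslog_protocol accepts before records reach the plugin, e.g. in a record-modifying filter; the mapping itself is trivial Ruby:

```ruby
# Hypothetical alias table: map full syslog severity names onto the
# abbreviated keywords that syslog_protocol is assumed to accept.
SEVERITY_ALIASES = {
  "warning"   => "warn",
  "emergency" => "emerg",
  "critical"  => "crit",
}.freeze

def normalize_severity(severity)
  SEVERITY_ALIASES.fetch(severity, severity)
end

puts normalize_severity("warning")  # => "warn"
puts normalize_severity("info")     # => "info"
```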

Supported TLS version

Hi,

Does the fluent-plugin-remote_syslog plugin support TLS 1.3?

I see that the dependency gem remote_syslog_sender has an ssl_option parameter which accepts a TLS version (defaulting to TLS 1.2), but this parameter is not accepted by fluent-plugin-remote_syslog.

Can you please let us know whether TLS 1.3 is supported? If not, is there any plan to support it?

Thanks,
Mahesh

Issue while using single_value formatter plugin

Hello!

It seems that setting the @type to single_value in the format section causes fluentd to stop outputting logs entirely.
I turned up verbosity on td-agent, but I didn't find any log line referencing any kind of error (the output is similar with and without it; fluentd just actually delivers log lines once the single_value formatter is removed).

Working with ltsv (the default type) leads to different results (I can configure ltsv parameters without issues).

I am not sure whether this is a flaw in the current version, an incompatibility of the gem with my td-agent version, or a problem in fluentd itself.

I am running td-agent 1.3.3 on Ubuntu 14.04.

Remote_syslog is not working as expected

Hi All,

I am quite new to fluentd.

Currently I am using the following fluentd configuration, where I redirect my syslog messages to fluentd, which forwards them both to a remote syslog server via remote_syslog and to a local file. I can see the syslog messages successfully written to the local file, but not to the remote syslog server.

I also tried using a local syslog client to write logs to the remote syslog server, and that works fine. The issue is only observed when using fluentd. Any suggestions would be of great help.

<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  tag syslog
  <transport tcp>
  </transport>
  <parse>
    @type syslog
    with_priority true
    message_format rfc3164
  </parse>
</source>

<match syslog.**>
  @type file
  path /var/log/fluent/testlog/01
</match>

<match syslog.**>
  @type remote_syslog
  host 10.10.26.209
  port 514
  hostname "#{Socket.gethostname}"
  facility local0
  <buffer>
    @type file
    path /var/log/fluent/syslog_buffer
    flush_interval 10s
  </buffer>
</match>

Using the remote syslog plugin, is there a way to get the original syslog message - without the tag / hostname / timestamp ?

When using the remote syslog plugin, it injects a new timestamp, a new tag (fluentd by default) and the hostname fields.
My understanding is as follows:

  • the timestamp is for the time the event is forwarded by the Syslog Server to the Remote Syslog server
  • the tag is fluentd, by default
  • the hostname is the syslog server forwarding the events to this Remote Syslog Server

Is there a way to strip / transform the record at the Remote Syslog Server ?
So for instance, I'd like to remove the fluentd tag. I'd like the timestamp and host to match what is in the body of the message.
Please refer to the screenshot below. I'd like to get rid of the items in red and use the items in green instead.

My config is as follows:

##########
# INPUTS #
##########
# udp syslog
<source>
  @type syslog
  <transport udp>
  </transport>
  bind 0.0.0.0
  port 514
  tag syslog
  <parse>
    @type none
    message_format auto
    with_priority true
  </parse>
</source>

###########
# OUTPUTS #
###########
<match syslog**>
  @type copy
  <store>
    @type file
    path /var/log/td-agent/syslog
    compress gzip
  </store>
  <store>
     @type forward
     <server>
       host 192.168.0.2
       port 514
     </server>
  </store>
  <store>
     @type remote_syslog
     host 192.168.0.3
     port 514     
  </store>
</match>

The output as received by Kiwi Syslog is shown in the attached screenshot.

Any inputs / suggestions / recommendations are welcome.

Do I need to also configure a syslog client in order for remote_syslog to properly send data?

Hi

I have installed fluent-plugin-remote_syslog in order to send data to a remote rsyslog server through fluentd.

I have the following configuration to send logs from hostA to hostB, where a remote syslog server is configured to collect logs on port 514 (tcp or udp):

<match log.agent.**>
        @type remote_syslog
        host hostB.com
        port 514
        facility user
        severity notice
        program fluentd
        protocol tcp
        tls false
        hostname hostA.com
        log_level debug

        <buffer>
        </buffer>

        <format>
          @type single_value
          message_key msg
        </format>
</match>

After restarting the td-agent service I do not see any errors in the td-agent service log.

However, the data is not arriving at hostB, and I am trying to understand why.

My question is: do I need to also configure a syslog client on hostA in order for fluentd to properly send this data?

Thank You for any help

No failure and buffering of data happening when syslog server (deployed through kubernetes) becomes unreachable

Version details:

  • td-agent-4.3.1
  • Syslog plugin gems:
    fluent-plugin-remote_syslog (1.0.0)
    remote_syslog_sender (1.2.2)
    syslog_protocol (0.9.2)

Background:

  • Fluentd and rsyslog are deployed through helm and are running as pods in kubernetes.
  • There is a k8s service 'rsyslog' created for the rsyslog pod.
  • Fluentd is configured to read a file and send to the rsyslog server. File buffer is used in the match section for on-disk buffering.
    Config looks like this
<source>
  @type tail
  path "/tmp/test.log"
  pos_file /var/log/shivtest-retry-fluentd-container-log.pos
  tag "test"
  <parse>
   @type json
  </parse>
</source>
<match test>
  @type remote_syslog
  @log_level trace
  host rsyslog.shiv-syslog.svc.cluster.local  
  port 514
  protocol tcp
  timeout 2
  <buffer>
     @type file
     path /tmp/buffer
     flush_interval 10s
     flush_mode interval
     flush_thread_count 1
     retry_forever true
     chunk_limit_size 2MB
     total_limit_size 100m
  </buffer>
</match>

Issue:
Fluentd successfully connects to the configured syslog endpoint and keeps pushing the records as per the flush interval.
But when the k8s service of the syslog server goes down (i.e. when the syslog pod gets deleted or goes to 0/1), fluentd does not detect any connection failure to syslog.
It also keeps flushing all the chunks from the file buffer and retains nothing in the buffer, in spite of the destination being unreachable.
Please see that the syslog service is unreachable (as seen by other clients like logger and curl):

  $ logger "Hello world" -n rsyslog.shiv-syslog.svc.cluster.local -P 514  -T
    logger: failed to connect rsyslog.shiv-syslog.svc.cluster.local port 514
  $ curl telnet://rsyslog.shiv-syslog.svc.cluster.local:514
    curl: (7) Failed connect to rsyslog.shiv-syslog.svc.cluster.local:514; Connection refused

Fluentd logs: there are no errors in fluentd. New chunks keep getting created and keep getting cleared from the file buffer location. Trace level logging is enabled.

<struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag=nil, variables=nil, seq=0>
2023-03-21 13:25:45 +0000 [trace]: #0 done to commit a chunk chunk="5f768fa89b48e9499eecee4380ecf53f"
2023-03-21 13:25:45 +0000 [trace]: #0 writing events into buffer instance=1980 metadata_size=1
2023-03-21 13:25:45 +0000 [debug]: #0 Created new chunk chunk_id="5f768fb22636dafcbf2df4c4593736a5" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag=nil, variables=nil, seq=0>
2023-03-21 13:25:45 +0000 [trace]: #0 chunk /tmp/buffer/buffer.b5f768fb22636dafcbf2df4c4593736a5.log size_added: 22 new_size: 22
2023-03-21 13:25:46 +0000 [trace]: #0 enqueueing all chunks in buffer instance=1980
2023-03-21 13:25:47 +0000 [trace]: #0 enqueueing all chunks in buffer instance=1980
2023-03-21 13:25:47 +0000 [trace]: #0 writing events into buffer instance=1980 metadata_size=1
2023-03-21 13:25:47 +0000 [trace]: #0 chunk /tmp/buffer/buffer.b5f768fb22636dafcbf2df4c4593736a5.log size_added: 22 new_size: 44
2023-03-21 13:25:47 +0000 [trace]: #0 enqueueing all chunks in buffer instance=1980
2023-03-21 13:25:48 +0000 [trace]: #0 enqueueing all chunks in buffer instance=1980
2023-03-21 13:25:49 +0000 [trace]: #0 writing events into buffer instance=1980 metadata_size=1
2023-03-21 13:25:49 +0000 [trace]: #0 chunk /tmp/buffer/buffer.b5f768fb22636dafcbf2df4c4593736a5.log size_added: 22 new_size: 66

So why are the chunks getting flushed from the buffer when the destination is unreachable?

Observation:

  1. An interesting observation: when the syslog server is not running as a k8s pod but as a standalone service on linux (i.e. managed through systemctl), stopping the service (systemctl stop rsyslog) immediately produces the following error in the fluentd logs when it tries to flush the next chunk from its buffer to the syslog endpoint.
    As the flush fails due to connectivity, fluentd retains the chunk in the file buffer and keeps retrying the flush (as per the configuration).
2023-03-21 10:03:38 +0000 [warn]: #0 failed to flush the buffer. retry_times=0 next_retry_time=2023-03-21 10:03:39 +0000 chunk="5f766277ef08d8f6257ad093f0a07328" error_class=Errno::ECONNREFUSED error="Connection refused - connect(2) for \"rsyslog.shiv-syslog.svc.cluster.local\" port 514"
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/remote_syslog_sender-1.2.2/lib/remote_syslog_sender/tcp_sender.rb:56:in `initialize'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/remote_syslog_sender-1.2.2/lib/remote_syslog_sender/tcp_sender.rb:56:in `new'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/remote_syslog_sender-1.2.2/lib/remote_syslog_sender/tcp_sender.rb:56:in `block in connect'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/remote_syslog_sender-1.2.2/lib/remote_syslog_sender/tcp_sender.rb:52:in `synchronize'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/remote_syslog_sender-1.2.2/lib/remote_syslog_sender/tcp_sender.rb:52:in `connect'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/remote_syslog_sender-1.2.2/lib/remote_syslog_sender/tcp_sender.rb:129:in `rescue in send_msg'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/remote_syslog_sender-1.2.2/lib/remote_syslog_sender/tcp_sender.rb:108:in `send_msg'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/remote_syslog_sender-1.2.2/lib/remote_syslog_sender/sender.rb:49:in `block in transmit'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/remote_syslog_sender-1.2.2/lib/remote_syslog_sender/sender.rb:37:in `each'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/remote_syslog_sender-1.2.2/lib/remote_syslog_sender/sender.rb:37:in `transmit'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluent-plugin-remote_syslog-1.0.0/lib/fluent/plugin/out_remote_syslog.rb:105:in `block (2 levels) in write'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluent-plugin-remote_syslog-1.0.0/lib/fluent/plugin/out_remote_syslog.rb:104:in `each_line'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluent-plugin-remote_syslog-1.0.0/lib/fluent/plugin/out_remote_syslog.rb:104:in `block in write'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/plugin/buffer/file_chunk.rb:171:in `open'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluent-plugin-remote_syslog-1.0.0/lib/fluent/plugin/out_remote_syslog.rb:103:in `write'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/plugin/output.rb:1179:in `try_flush'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/plugin/output.rb:1500:in `flush_thread_run'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/plugin/output.rb:499:in `block (2 levels) in start'
  2023-03-21 10:03:38 +0000 [warn]: #0 /opt/td-agent/lib/ruby/gems/2.7.0/gems/fluentd-1.14.6/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2023-03-21 10:03:38 +0000 [trace]: #0 writing events into buffer instance=1980 metadata_size=1

Later, when the rsyslog service is restarted, fluentd reconnects and successfully pushes all buffered chunks to the destination without any loss of data. For example:
2023-03-21 10:23:24 +0000 [warn]: #0 retry succeeded. chunk_id="5f7664fef1ec6c112220332f2732de46"
  2. For syslog running on Kubernetes, fluentd detects connectivity issues with syslog when fluentd first starts, and retry/buffering works fine as expected.
    But once fluentd has successfully established a connection with syslog, and the syslog destination then becomes unreachable, fluentd fails to detect the connection errors.

Concern:
Why is the loss of connectivity not identified when the rsyslog server runs as a Kubernetes service?
Please let us know if there are any additional configs in the syslog plugin that could help us achieve retry/buffering properly in this case too.
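For anyone tuning this, here is a sketch of the buffer settings we would expect to govern this behaviour (parameter names from the standard Fluentd buffer section; host and paths are illustrative):

```
<match **>
  @type remote_syslog
  host rsyslog.example.com   # illustrative endpoint
  port 514
  protocol tcp
  <buffer>
    @type file
    path /tmp/buffer               # persistent chunks survive restarts
    flush_interval 5s
    retry_type exponential_backoff
    retry_forever true             # keep chunks until the endpoint is reachable again
  </buffer>
</match>
```

One possible explanation for the late detection: behind a Kubernetes Service VIP, an established TCP connection to a removed pod may not receive an immediate RST, so the failure may only surface after the kernel's TCP retransmission timeout rather than on the next write.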

Syslog output

Hello,

How can I use this plugin as a simple syslog forwarder?

Example: if the input format is syslog, I want to keep the same message on output. The plugin adds some fields that I don't need.

Thanks

PS 👍 Great job

Best regards
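For anyone with the same question: the extra fields come from the whole record being serialized. A minimal sketch, assuming the record has a `message` key and that your plugin version supports a `<format>` section (the host is illustrative), using the standard `single_value` formatter:

```
<match **>
  @type remote_syslog
  host syslog.example.com   # illustrative
  port 514
  <format>
    @type single_value      # emit only one field instead of the serialized record
    message_key message
  </format>
</match>
```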

Changing source string

I am encountering the following problem:

We are using Docker Swarm with the fluentd logging driver. Our logs are forwarded by the remote_syslog plugin to Graylog. Unfortunately, the source is always "stdout", independent of the origin host. I have tried using the "hostname" configuration, but this does not suffice, as we need to change the source string from "stdout" to "my-hostname" in order to programmatically extract the source. Any recommendations or thoughts on this?

Thank you
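One approach worth trying (a sketch, not a confirmed fix): set the plugin's `hostname` parameter explicitly using embedded Ruby, which Fluentd evaluates in double-quoted config values at load time:

```
<match **>
  @type remote_syslog
  host graylog.example.com          # illustrative
  port 514
  hostname "#{Socket.gethostname}"  # resolved once, when the config is loaded
</match>
```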

On the future development of this plugin

Hi @dlackty. First, thanks for developing such a great plugin!

We are @fluent-plugins-nursery team who try to maintain orphaned Fluentd plugins and keep them updated.

It seems that the development of this plugin has been stagnating for a while.
The last update 578380b is 4 years ago, and issues and pull requests are left untouched.

Since many users are still relying on this plugin, we want to adopt this plugin as well.

So the question is: is it possible to transfer the ownership of this project to fluent-plugins-nursery?
If you can, we'd continue developing this work as part of our program to sustain the ecosystem.

We are looking forward to hearing back from you!

Load Balancing

Hi,

Is there a way to load balance the destination syslog servers using this plugin?

Essentially I'm looking to send the output to two syslog collectors without duplicating the messages, in case of a service outage on one of the collectors.

Thanks in advance
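This plugin has no built-in load balancing, but Fluentd ships an `out_roundrobin` output that can distribute events across two stores; a sketch (hosts are illustrative):

```
<match **>
  @type roundrobin       # distributes events across the stores below
  <store>
    @type remote_syslog
    host collector-1.example.com
    port 514
  </store>
  <store>
    @type remote_syslog
    host collector-2.example.com
    port 514
  </store>
</match>
```

Note this distributes rather than fails over; for outage handling you would still rely on each store's own buffer/retry settings.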

Cannot reformat timestamp

I am trying to use this plugin but need the timestamp in another format (EST instead of UTC).

Output shows this:
Jun 10 20:49:45 fluentd-f5ggn fluentd: log details...

The timestamp shows in UTC but my target expects EST. I have tried to use parse/regex to extract this field, but it doesn't work. Any ideas on how I can convert this field?
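One thing worth checking (an assumption about the sender library, not verified for every version): the syslog header timestamp is typically rendered in the Fluentd process's local time zone, so setting the `TZ` environment variable on the daemon may be enough. A hypothetical DaemonSet fragment, since the hostname `fluentd-f5ggn` suggests Kubernetes:

```
# fragment of a fluentd DaemonSet pod spec (hypothetical)
env:
  - name: TZ
    value: America/New_York   # render timestamps in US Eastern time
```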

Timeout issue

Hello,

We are using fluent-plugin-remote_syslog to forward session data events of a remote access tool, routing events to dedicated syslog servers based on an IP field (IP plan). In other words, we have several sites that we can connect to using a remote access solution, and we want to forward session details to the syslog server of the site accessed.

If one of the sites is down, fluentd seems to be blocked and tries to connect to the site indefinitely, and nothing else is forwarded, despite a timeout parameter being set!?

Is this a bug :-(, and if it is, do you plan to fix it, please?

If a timeout occurs for a site, and the timeout works, what happens to the events of the unreachable site: are they lost, or still buffered and resent when the syslog server of the site is up again?

Thanks for your help.

supported format

Hello,

Which syslog format is supported: BSD, IETF, or both (and how do you select the format)?

Thanks for your help.

Support for IPv6

It'd be great if this plugin supported sending to an IPv6 remote syslog server.

support tcp protocol

I'm using the TCP protocol on rsyslog.
It would be better to support the TCP protocol.
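For reference, recent versions of the plugin accept a `protocol` parameter (the default is `udp`); a sketch with an illustrative host:

```
<match **>
  @type remote_syslog
  host rsyslog.example.com   # illustrative
  port 514
  protocol tcp
</match>
```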

Created config per readme but does not send to syslog server

Greetings, I have the following config but I am not able to see any incoming packets on port 514 on the server when I run tcpdump. Any info would be great.

<source>
  type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.*>
  type copy
  <store>
    type file
    path /var/log/fluent/myapp
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S
    compress gzip
    utc
  </store>
  <store>
    type remote_syslog
    host XX.XX.XX.XX
    port 514
    tag fluentd
  </store>

</match>
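A quick way to rule Fluentd in or out: hand-craft an RFC 3164 line on the Fluentd host and send it over raw UDP with `nc`. The send line is commented out here so the sketch only prints the message; substitute the real server IP for `XX.XX.XX.XX` before uncommenting:

```shell
# Build a minimal RFC 3164 syslog line: <PRI>TIMESTAMP HOST TAG: message
MSG="<13>$(date '+%b %d %H:%M:%S') $(hostname) test: hello"
echo "$MSG"
# echo "$MSG" | nc -u -w1 XX.XX.XX.XX 514   # uncomment to actually send
```

If tcpdump still shows nothing after sending this, the problem is the network or a firewall, not the Fluentd config.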

Removing the metadata sent with each log.

Hi,
I get some metadata sent with each log, for example:
<13>Aug 22 10:53:42 devbox fluentd: {"message":"Some log message"}
Here, <13>Aug 22 10:53:42 devbox fluentd: is redundant for me. How can I remove this from the transmitted log?
Best.
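For context, the <13> is the RFC 3164 priority value, and the timestamp/hostname/tag after it are the standard syslog header that most receivers parse and strip themselves; it cannot simply be dropped from the wire format. The priority arithmetic, as a quick illustration:

```python
# RFC 3164: PRI = facility * 8 + severity
FACILITY_USER = 1     # facility "user"
SEVERITY_NOTICE = 5   # severity "notice"

def pri(facility: int, severity: int) -> int:
    """Priority value rendered as <PRI> at the start of a syslog line."""
    return facility * 8 + severity

print(pri(FACILITY_USER, SEVERITY_NOTICE))  # → 13
```

So <13> here means facility "user", severity "notice".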
