
fluent-plugin-kubernetes_metadata_filter's Introduction

fluent-plugin-kubernetes_metadata_filter, a plugin for Fluentd


The Kubernetes metadata plugin filter enriches container log records with pod and namespace metadata.

This plugin derives basic metadata about the container that emitted a given log record from the source of the log record. Records from Kubernetes containers encode metadata about the container in the file name. The initial metadata derived from the source is used to look up additional metadata about the container's associated pod and namespace (e.g. UUIDs, labels, annotations) when the kubernetes_url is configured. If the plugin cannot authoritatively determine the namespace of the container emitting a log record, it will use an 'orphan' namespace ID in the metadata. This behavior supports multi-tenant systems that rely on the authenticity of the namespace for proper log isolation.

Requirements

fluent-plugin-kubernetes_metadata_filter | fluentd     | ruby
-----------------------------------------|-------------|-------
>= 2.10.0                                | >= v1.10.0  | >= 2.6
>= 2.5.0                                 | >= v1.10.0  | >= 2.5
>= 2.0.0                                 | >= v0.14.20 | >= 2.1
< 2.0.0                                  | >= v0.12.0  | >= 1.9

NOTE: For fluentd v0.12, use a 1.x.y release of this plugin. If you encounter a bug in a 1.x version, please send a patch to the v0.12 branch.

NOTE: This documentation is for fluent-plugin-kubernetes_metadata_filter 2.x or later. For 1.x documentation, please see the v0.12 branch.

Installation

gem install fluent-plugin-kubernetes_metadata_filter

Configuration

Configuration options for fluent.conf are:

  • kubernetes_url - URL of the API server. Set this to retrieve additional Kubernetes metadata for logs from the API server. If not specified, the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT will be used if both are present, which is typically true when running fluentd in a pod.
  • apiVersion - API version to use (default: v1)
  • ca_file - path to CA file for Kubernetes server certificate validation
  • verify_ssl - validate SSL certificates (default: true)
  • client_cert - path to a client cert file to authenticate to the API server
  • client_key - path to a client key file to authenticate to the API server
  • bearer_token_file - path to a file containing the bearer token to use for authentication
  • tag_to_kubernetes_name_regexp - the regular expression used to extract kubernetes metadata (pod name, container name, namespace) from the current fluentd tag. This must use named capture groups for container_name, pod_name, namespace, and either pod_uuid (/var/log/pods) or docker_id (/var/log/containers)
  • cache_size - size of the cache of Kubernetes metadata to reduce requests to the API server (default: 1000)
  • cache_ttl - TTL in seconds of each cached element. Set to negative value to disable TTL eviction (default: 3600 - 1 hour)
  • watch - set up a watch on pods on the API server for updates to metadata (default: true)
  • annotation_match - Array of regular expressions matching annotation field names. Matched annotations are added to a log record.
  • allow_orphans - When true, replace the namespace and namespace id with the values of orphaned_namespace_name and orphaned_namespace_id for records whose namespace cannot be determined (default: true)
  • orphaned_namespace_name - The namespace to associate with records where the namespace can not be determined (default: .orphaned)
  • orphaned_namespace_id - The namespace id to associate with records where the namespace can not be determined (default: orphaned)
  • lookup_from_k8s_field - If the field kubernetes is present, lookup the metadata from the given subfields such as kubernetes.namespace_name, kubernetes.pod_name, etc. This allows you to avoid having to pass in metadata to lookup in an explicitly formatted tag name or in an explicitly formatted CONTAINER_NAME value. For example, set kubernetes.namespace_name, kubernetes.pod_name, kubernetes.container_name, and docker.id in the record, and the filter will fill in the rest. (default: true)
  • ssl_partial_chain - if ca_file is for an intermediate CA, or otherwise we do not have the root CA and want to trust the intermediate CA certs we do have, set this to true - this corresponds to the openssl s_client -partial_chain flag and X509_V_FLAG_PARTIAL_CHAIN (default: false)
  • skip_labels - Skip all label fields from the metadata.
  • skip_pod_labels - Skip only pod label fields from the metadata.
  • skip_namespace_labels - Skip only namespace label fields from the metadata.
  • skip_container_metadata - Skip some of the container data of the metadata. The metadata will not contain the container_image and container_image_id fields.
  • skip_master_url - Skip the master_url field from the metadata.
  • skip_namespace_metadata - Skip the namespace_id field from the metadata. The fetch_namespace_metadata function will be skipped, making the plugin faster and reducing CPU consumption.
  • stats_interval - The interval to display cache stats (default: 30s). Set to 0 to disable stats collection and logging
  • watch_retry_interval - The time interval in seconds for retry backoffs when watch connections fail. (default: 10)
  • open_timeout - The time in seconds to wait for a connection to kubernetes service. (default: 3)
  • read_timeout - The time in seconds to wait for a read from kubernetes service. (default: 10)
  • include_ownerrefs_metadata - If set to true, it will include metadata (kind & name) in kubernetes.ownerrefs about the controller that owns the pod. (default: false)
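
A minimal filter block combining several of these options might look like the following. This is an illustrative sketch, not a prescribed configuration: the paths shown are the standard in-cluster service-account mount points, and the values should be adjusted for your environment.

```
<filter kubernetes.**>
  @type kubernetes_metadata
  kubernetes_url https://kubernetes.default.svc
  bearer_token_file /var/run/secrets/kubernetes.io/serviceaccount/token
  ca_file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  cache_size 1000
  cache_ttl 3600
  watch true
</filter>
```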

To read JSON-formatted log files with in_tail and wildcard filenames while also handling the CRI-O log format with the same config, you need the fluent-plugin-multi-format-parser plugin:

fluent-gem install fluent-plugin-multi-format-parser

The config block could look like this:

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file fluentd-docker.pos
  read_from_head true
  tag kubernetes.*
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key time
      time_type string
      time_format "%Y-%m-%dT%H:%M:%S.%NZ"
      keep_time_key false
    </pattern>
    <pattern>
      format regexp
      expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
      time_format '%Y-%m-%dT%H:%M:%S.%N%:z'
      keep_time_key false
    </pattern>
  </parse>
</source>

<filter kubernetes.var.log.containers.**.log>
  @type kubernetes_metadata
</filter>

<match **>
  @type stdout
</match>

Environment variables for Kubernetes

If the name of the Kubernetes node the plugin is running on is set as an environment variable with the name K8S_NODE_NAME, it will reduce cache misses and needless calls to the Kubernetes API.

In the Kubernetes container definition, this is easily accomplished by:

env:
- name: K8S_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName

Example input/output

Kubernetes creates symlinks to Docker log files in /var/log/containers/*.log. Docker logs in JSON format.

Assuming the following input is coming from a log file named /var/log/containers/fabric8-console-controller-98rqc_default_fabric8-console-container-df14e0d5ae4c07284fa636d739c8fc2e6b52bc344658de7d3f08c36a2e804115.log:

{
  "log": "2015/05/05 19:54:41 \n",
  "stream": "stderr",
  "time": "2015-05-05T19:54:41.240447294Z"
}

Then the output becomes:

{
  "log": "2015/05/05 19:54:41 \n",
  "stream": "stderr",
  "docker": {
    "id": "df14e0d5ae4c07284fa636d739c8fc2e6b52bc344658de7d3f08c36a2e804115"
  },
  "kubernetes": {
    "host": "jimmi-redhat.localnet",
    "pod_name":"fabric8-console-controller-98rqc",
    "pod_id": "c76927af-f563-11e4-b32d-54ee7527188d",
    "pod_ip": "172.17.0.8",
    "container_name": "fabric8-console-container",
    "namespace_name": "default",
    "namespace_id": "23437884-8e08-4d95-850b-e94378c9b2fd",
    "namespace_annotations": {
      "fabric8.io/git-commit": "5e1116f63df0bac2a80bdae2ebdc563577bbdf3c"
    },
    "namespace_labels": {
      "product_version": "v1.0.0"
    },
    "labels": {
      "component": "fabric8Console"
    }
  }
}

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Test it (GEM_HOME=vendor bundle install; GEM_HOME=vendor bundle exec rake test)
  5. Push to the branch (git push origin my-new-feature)
  6. Create new Pull Request

Copyright

Copyright (c) 2015 jimmidyson


fluent-plugin-kubernetes_metadata_filter's Issues

ERROR: Failed to build gem native extension.

I have installed gem and ruby-dev and I get the following error

Building native extensions. This could take a while...
ERROR: Error installing fluent-plugin-kubernetes_metadata_filter:
ERROR: Failed to build gem native extension.

current directory: /var/lib/gems/2.3.0/gems/msgpack-1.0.2/ext/msgpack
/usr/bin/ruby2.3 -r ./siteconf20161219-27460-lhpx8l.rb extconf.rb
checking for ruby/st.h... *** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.

Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/usr/bin/$(RUBY_BASE_NAME)2.3
/usr/lib/ruby/2.3.0/mkmf.rb:456:in `try_do': The compiler failed to generate an executable file. (RuntimeError)
You have to install development tools first.
	from /usr/lib/ruby/2.3.0/mkmf.rb:587:in `try_cpp'
	from /usr/lib/ruby/2.3.0/mkmf.rb:1091:in `block in have_header'
	from /usr/lib/ruby/2.3.0/mkmf.rb:942:in `block in checking_for'
	from /usr/lib/ruby/2.3.0/mkmf.rb:350:in `block (2 levels) in postpone'
	from /usr/lib/ruby/2.3.0/mkmf.rb:320:in `open'
	from /usr/lib/ruby/2.3.0/mkmf.rb:350:in `block in postpone'
	from /usr/lib/ruby/2.3.0/mkmf.rb:320:in `open'
	from /usr/lib/ruby/2.3.0/mkmf.rb:346:in `postpone'
	from /usr/lib/ruby/2.3.0/mkmf.rb:941:in `checking_for'
	from /usr/lib/ruby/2.3.0/mkmf.rb:1090:in `have_header'
	from extconf.rb:3:in `<main>'
To see why this extension failed to compile, please check the mkmf.log which can be found here:

/var/lib/gems/2.3.0/extensions/x86_64-linux/2.3.0/msgpack-1.0.2/mkmf.log

extconf failed, exit code 1

Gem files will remain installed in /var/lib/gems/2.3.0/gems/msgpack-1.0.2 for inspection.
Results logged to /var/lib/gems/2.3.0/extensions/x86_64-linux/2.3.0/msgpack-1.0.2/gem_make.out

plugin stops detecting changes in pods after some time

I'm running OpenShift 3.2; the plugin is deployed as part of aggregated logging.

Successful case:

  1. Deployed new fluentd pods with this plugin.
  2. Deployed another test pod with specific set of labels.
  3. Checked that test pod's labels are correctly reflected in kibana.
  4. Edit test pod: add one label
  5. Check that test pod's labels are correctly updated in kibana.

Failure:

  1. Deployed new fluentd pods with this plugin.
  2. Deployed another test pod with specific set of labels.
  3. Checked that test pod's labels are correctly reflected in kibana.
  4. Wait for 1-2 hours.
  5. Edit test pod: add one label
  6. Check that test pod's labels are correctly updated in kibana. (failure: labels are not updated).

What I think is happening: the watcher times out and stops receiving any notices from k8s.
Please let me know what you think.

cc @richm

Error for being unable to connect to k8s is convoluted

If the plugin is unable to connect to K8S, it doesn't print an error that is easy for users to interpret.

2016-03-03  03:32:39 +0000 [error]: unexpected error  error_class=Errno::ECONNREFUSED error=#<Errno::ECONNREFUSED:  Connection refused - connect(2)>
  2016-03-03 03:32:39 +0000 [error]: /usr/share/ruby/net/http.rb:878:in `initialize'
  2016-03-03 03:32:39 +0000 [error]: /usr/share/ruby/net/http.rb:878:in `open'
  2016-03-03 03:32:39 +0000 [error]: /usr/share/ruby/net/http.rb:878:in `block in connect'
  2016-03-03 03:32:39 +0000 [error]: /usr/share/ruby/timeout.rb:52:in `timeout'
  2016-03-03 03:32:39 +0000 [error]: /usr/share/ruby/net/http.rb:877:in `connect'
  2016-03-03 03:32:39 +0000 [error]: /usr/share/ruby/net/http.rb:1445:in `begin_transport'
  2016-03-03 03:32:39 +0000 [error]: /usr/share/ruby/net/http.rb:1402:in `transport_request'
  2016-03-03 03:32:39 +0000 [error]: /usr/share/ruby/net/http.rb:1376:in `request'
  2016-03-03 03:32:39 +0000 [error]: /opt/app-root/src/gems/kubeclient-0.4.0/lib/kubeclient/watch_stream.rb:20:in `each'
   2016-03-03 03:32:39 +0000 [error]:  /opt/app-root/src/gems/fluent-plugin-kubernetes_metadata_filter-0.16.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:230:in  `start_watch'
   2016-03-03 03:32:39 +0000 [error]:  /opt/app-root/src/gems/fluent-plugin-kubernetes_metadata_filter-0.16.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:140:in  `block in configure'
2016-03-03 03:32:47 +0000 [error]: fluentd main process died unexpectedly. restarting.

two "rake tests" are failing...both because the "labels" key/value pair has "labels" prefixed with ":"

/fluent-plugin-kubernetes_metadata_filter/test/plugin/test_filter_kubernetes_metadata.rb:307:in `block (2 levels) in <class:KubernetesMetadataFilterTest>'
<{:docker=>
  {:container_id=>
    "49095a2894da899d3b327c5fde1e056a81376cc9a8f8b09a195f2a92bceed459"},
 :kubernetes=>
  {:container_name=>"fabric8-console-container",
   :host=>"jimmi-redhat.localnet",
   :labels=>{"kubernetes_io/test"=>"somevalue"},
   :namespace_name=>"default",
   :pod_id=>"c76927af-f563-11e4-b32d-54ee7527188d",
   :pod_name=>"fabric8-console-controller-98rqc"}}> expected but was
<{:docker=>
  {:container_id=>
    "49095a2894da899d3b327c5fde1e056a81376cc9a8f8b09a195f2a92bceed459"},
 :kubernetes=>
  {:container_name=>"fabric8-console-container",
   :host=>"jimmi-redhat.localnet",
   :labels=>{:"kubernetes_io/test"=>"somevalue"},
   :namespace_name=>"default",
   :pod_id=>"c76927af-f563-11e4-b32d-54ee7527188d",
   :pod_name=>"fabric8-console-controller-98rqc"}}>

diff:
  {:docker=>
    {:container_id=>
      "49095a2894da899d3b327c5fde1e056a81376cc9a8f8b09a195f2a92bceed459"},
   :kubernetes=>
    {:container_name=>"fabric8-console-container",
     :host=>"jimmi-redhat.localnet",
?    :labels=>{:"kubernetes_io/test"=>"somevalue"},
     :namespace_name=>"default",
     :pod_id=>"c76927af-f563-11e4-b32d-54ee7527188d",
     :pod_name=>"fabric8-console-controller-98rqc"}}

The above output is from rake test and shows a diff of the expected vs. actual output. The "labels" key/value pair is incorrect, as the "kubernetes_io/test" key is prefixed with a colon ":".

Performance tuning

@repeatedly Thought it best to discuss this in a separate issue so we can stay focused.

This plugin can retrieve data from the Kubernetes API server to enrich records with namespace, pod name, container name, labels, etc. This requires making REST requests to the API server. The relevant REST calls are @client.get_namespace(namespace_name) (see http://kubernetes.io/docs/api-reference/v1/operations/#_read_the_specified_namespace) and @client.get_pod(pod_name, namespace_name) (see http://kubernetes.io/docs/api-reference/v1/operations/#_read_the_specified_pod).

You can see examples for both of those responses in the vcr cassettes.

Metadata is cached to remove the need to continually query the API server: once a new pod is found, the API server is queried & metadata cached for attaching to subsequent records. The cache size is configurable & entries are evicted once a pod or namespace is deleted.
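
The caching behavior described above can be sketched as follows. This is a simplified illustration only, not the plugin's actual implementation (which uses the lru_redux gem); the class and method names here are hypothetical.

```ruby
# Sketch of an LRU cache with TTL expiry, as used conceptually by the plugin:
# a bounded map from pod/namespace keys to metadata, evicting the
# least-recently-used entry when full and refetching after the TTL elapses.
class TtlCache
  Entry = Struct.new(:value, :expires_at)

  def initialize(max_size, ttl)
    @max_size = max_size
    @ttl = ttl
    @store = {} # Ruby hashes preserve insertion order, which we use as LRU order
  end

  # Return the cached value for key, or compute it with the block
  # (e.g. a call to the Kubernetes API server) on a miss or expiry.
  def fetch(key)
    entry = @store[key]
    if entry && Time.now < entry.expires_at
      @store.delete(key)   # move to most-recently-used position
      @store[key] = entry
      return entry.value
    end
    @store.delete(key)                        # drop any expired entry
    value = yield                             # expensive lookup happens here
    @store.shift if @store.size >= @max_size  # evict least-recently-used
    @store[key] = Entry.new(value, Time.now + @ttl)
    value
  end
end
```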

To complicate things slightly, some metadata from the API server is mutable: e.g. you can edit pod labels. To keep this in sync, the plugin configures watches for both namespaces & pods & updates the cached metadata when the metadata changes.

Is that enough info to get started testing the performance? Thanks for helping out with this!

Plugin fails to implement filter method

Error Message

2016-09-02 14:31:16 +0000 [info]: adding filter pattern="docker.var.lib.docker.containers.*.*.log" type="kubernetes_metadata"
/usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/plugin/filter.rb:88:in `has_filter_with_time?': BUG: Filter plugins MUST be implmented either `filter` or `filter_with_time` (NotImplementedError)
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/plugin/filter.rb:37:in `initialize'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/plugin_helper/inject.rb:82:in `initialize'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/compat/filter.rb:31:in `initialize'
    from /usr/local/bundle/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:99:in `initialize'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/plugin.rb:149:in `new'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/plugin.rb:149:in `new_impl'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/plugin.rb:108:in `new_filter'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/agent.rb:148:in `add_filter'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/agent.rb:67:in `block in configure'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/agent.rb:63:in `each'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/agent.rb:63:in `configure'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/root_agent.rb:86:in `configure'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/engine.rb:119:in `configure'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/engine.rb:93:in `run_configure'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/supervisor.rb:673:in `run_configure'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/supervisor.rb:435:in `block in run_worker'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/supervisor.rb:606:in `main_process'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/supervisor.rb:431:in `run_worker'
    from /usr/local/bundle/gems/fluentd-0.14.4/lib/fluent/command/fluentd.rb:271:in `<top (required)>'
    from /usr/local/lib/ruby/site_ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
    from /usr/local/lib/ruby/site_ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
    from /usr/local/bundle/gems/fluentd-0.14.4/bin/fluentd:5:in `<top (required)>'
    from /usr/local/bundle/bin/fluentd:22:in `load'
    from /usr/local/bundle/bin/fluentd:22:in `<main>'
2016-09-02 14:31:16 +0000 [info]: Worker 0 finished unexpectedly with status 1

Configuration

    <match fluent.**>
      @type null
    </match>

    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /tmp/docker-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      tag docker.*
      format json
      read_from_head true
    </source>

    <filter docker.var.lib.docker.containers.*.*.log>
      @type kubernetes_metadata
    </filter>

    <match docker.**>
      @type file
      path /tmp/fluent-docker.log
      time_slice_format %Y%m%d
      time_slice_wait 10m
      time_format %Y%m%dT%H%M%S%z
      #compress gzip
      utc
    </match>

Versions

root@fluentd-forwarder-1235g:/tmp# ruby -v
ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-linux]
root@fluentd-forwarder-1235g:/tmp# gem -v
2.6.6
root@fluentd-forwarder-1235g:/tmp# fluentd --version
fluentd 0.14.4
root@fluentd-forwarder-1235g:/tmp# gem list

*** LOCAL GEMS ***

activesupport (5.0.0.1)
addressable (2.4.0)
aws-sdk-core (2.5.8)
bigdecimal (default: 1.2.8)
bundler (1.12.5)
concurrent-ruby (1.0.2)
cool.io (1.4.5)
did_you_mean (1.0.0)
docker-api (1.31.0)
domain_name (0.5.20160826)
excon (0.52.0)
ffi (1.9.14)
fluent-mixin-config-placeholders (0.4.0)
fluent-plugin-cloudwatch-logs (0.3.3)
fluent-plugin-docker_metadata_filter (0.1.3)
fluent-plugin-kubernetes_metadata_filter (0.24.0)
fluent-plugin-record-reformer (0.8.2)
fluent-plugin-systemd (0.0.4)
fluentd (0.14.4)
http (0.9.8)
http-cookie (1.0.2)
http-form_data (1.0.1)
http_parser.rb (0.6.0)
i18n (0.7.0)
io-console (default: 0.4.5)
jmespath (1.3.1)
json (default: 1.8.3)
kubeclient (1.1.4)
lru_redux (1.1.0)
mime-types (3.1)
mime-types-data (3.2016.0521)
minitest (5.8.3)
msgpack (1.0.0)
net-telnet (0.1.1)
netrc (0.11.0)
power_assert (0.2.6)
psych (default: 2.0.17)
rake (10.4.2)
rdoc (default: 4.2.1)
recursive-open-struct (1.0.0)
rest-client (2.0.0)
rubygems-update (2.6.6)
serverengine (2.0.0)
sigdump (0.2.4)
strptime (0.1.8)
systemd-journal (1.2.2)
test-unit (3.1.5)
thread_safe (0.3.5)
tzinfo (1.2.2)
tzinfo-data (1.2016.6)
unf (0.1.4)
unf_ext (0.0.7.2)
uuidtools (2.1.5)
yajl-ruby (1.2.1)

Am I just using the wrong version of Ruby?

Failed to parse kubernetes.labels

Hi,

I have following error when trying to use plugin, seems that labels are not parsed :

My config :
2016-07-15 10:13:23 +0000 [info]: reading config file path="/etc/fluent/fluent.conf"
2016-07-15 10:13:23 +0000 [info]: starting fluentd-0.14.1
2016-07-15 10:13:23 +0000 [info]: spawn command to main: /usr/bin/ruby2.3 -Eascii-8bit:ascii-8bit /usr/local/bin/fluentd --under-supervisor
2016-07-15 10:13:25 +0000 [info]: reading config file path="/etc/fluent/fluent.conf"
2016-07-15 10:13:25 +0000 [info]: starting fluentd-0.14.1 without supervision
2016-07-15 10:13:25 +0000 [info]: gem 'fluent-plugin-docker_metadata_filter' version '0.1.3'
2016-07-15 10:13:25 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.5.0'
2016-07-15 10:13:25 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '0.23.0'
2016-07-15 10:13:25 +0000 [info]: gem 'fluentd' version '0.14.1'
2016-07-15 10:13:25 +0000 [warn]: 'type' is deprecated parameter name. use '@type' instead.
2016-07-15 10:13:25 +0000 [info]: adding filter pattern="kubernetes.var.lib.docker.containers.*.*.log" type="kubernetes_metadata"
2016-07-15 10:13:40 +0000 [info]: adding match pattern="**" type="elasticsearch"


[2016-07-15 09:11:38,359][DEBUG][action.bulk              ] [Meathook] [logstash-2016.07.14][0] failed to execute bulk item (index) index 
{
[logstash-2016.07.14][fluentd][AVXt01G8bCJdaa2_HQWW]
, source[{"log":"2016-07-14 13:00:23+0200 [-] \n"
,"stream":"stdout"
,"time":"2016-07-14T11:00:23.333149774Z"
,"docker":
    {
    "container_id":"10d226075434af463109e997da757b6371073b4d04cd5928878febb8e3add246"
    }
,"kubernetes":
    {
    "namespace_name":"default"
    ,"pod_id":"e388e50d-483b-11e6-a583-fa163e2ca80d"
    ,"pod_name":"app_utev2-4228063734-q256u"
    ,"container_name":"app_utev2"
    ,"labels":
        {
        "app":"app_utev2"
        ,"pod-template-hash":"4228063734"
        ,"track":"stable"
        }
    ,"host":"1xx.xxx.xxx.xxx"
    }
,"tag":"kubernetes.var.log.containers.app_utev2-4228063734-q256u_default_app_utev2-10d226075434af463109e997da757b6371073b4d04cd5928878febb8e3add246.log"
,"@timestamp":"2016-07-14T11:00:23.333149774Z"}]}

org.elasticsearch.index.mapper.MapperParsingException: failed to parse [kubernetes.labels]
        at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:409)
        at org.elasticsearch.index.mapper.object.ObjectMapper.serializeObject(ObjectMapper.java:554)
        at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:487)
        at org.elasticsearch.index.mapper.object.ObjectMapper.serializeObject(ObjectMapper.java:554)
        at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:487)
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:544)
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:493)
        at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:453)
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:432)
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:149)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:515)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:422)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: unknown property [app]
        at org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateFieldForString(StringFieldMapper.java:331)
        at org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateField(StringFieldMapper.java:277)
        at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:399)
        ... 14 more

Move separate concerns out to other plugins or other filters

It seems like this plugin is doing too much, handling multiple concerns that, while convenient, can cause duplicate work in the pipeline.

Could we consider having this plugin do one thing, which is to fetch Kubernetes metadata based on defined fields in the record and add the fetched data to the defined target fields?

Then we could move the JSON parsing of payloads, the renaming of fields to other namespaces, and any other work not related to this core function to other locations?

It would allow us to have one set of input fields to read from and one set of output fields to write to, and simplify the behavior. It would also allow callers to move this work around in the pipeline once they have taken care to populate the input fields, and let them decide what to do with the output fields.

Thoughts?

Add annotations

I have a usecase where I cannot put some specific information into a label because it contains "invalid" characters. So I'd like to put it into an annotation but this plugin does not add annotations to the record. Would be nice to have such support (behind a flag? only annotations with listed names or names that match a regex?).

Add docker image to metadata

Hi,

Would it be possible to add the docker image name:tag to the metadata? It is available from kube pod information and the container_name and container_id are already known from the log. This would allow to trace logs back to specific versions of pods/applications/containers in fast-moving environments, quite a valuable addition in my opinion.

I've looked through the code (I'm not a very experienced Ruby developer) and it looks to me like this could be added without too many issues, but I'm not familiar enough to take a stab at it without some feedback on where and how this would be best implemented.

[feature] Add cluster information in meta-data enrichment

Hi all,

I'd like to be able to see kubernetes.cluster-info (cluster kubernetes master URL would be sufficient) information in the extra fields added by this plugin. The reason is that in a multi-cluster setup, it is currently impossible to determine this information from the enriched data.

Also added a question in the upstream kubeclient repo: ManageIQ/kubeclient#245

UPDATE: I have created a pull request to add this feature:
#70

BR,
Bart

Working with in_forward source

Hi there!

As far as I can tell from this filter's code, it mainly assumes the source of the log messages is in_tail. The filter can then parse tags thanks to the /var/log/containers/*.log convention (or a customizable regexp) and extract some metadata from there, and the remaining metadata can be fetched by the k8s client and merged.

The question is: if we want to use in_forward as a source, which does not have a tag parameter, how can we filter and extract k8s metadata from this source? What would be the cleanest approach here? Would the metadata fetched by the Ruby Kubernetes client be enough, assuming we have provided the kubernetes_url parameter to the filter?
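
One possible approach, given the lookup_from_k8s_field option documented above: have the forwarding side place the fields it already knows into the record, and let the filter fill in the rest from the API server. A hedged config sketch, assuming incoming records carry kubernetes.namespace_name, kubernetes.pod_name, etc.:

```
<source>
  @type forward
  port 24224
</source>

<filter **>
  @type kubernetes_metadata
  kubernetes_url https://kubernetes.default.svc
  lookup_from_k8s_field true
</filter>
```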

Td-agent start failed.

Adding kubernetes_metadata causes td-agent to fail to start.
When starting td-agent, it throws the error "HTTP status code 400, Bad Request". What causes this error?

version info:
os rhel7.2
td-agent version 2.3.3
fluent-plugin-kubernetes_metadata_filter (0.24.0)
ruby version ruby 2.0.0p648 (2015-12-16) [x86_64-linux]

td-agent configuration:

<filter kubernetes.**>
  type kubernetes_metadata
  kubernetes_url https://10.253.1.240:8443
  verify_ssl false
  bearer_token_file /tmp/token
</filter>

error log:
2016-12-11 10:54:33 +0800 [info]: adding source type="kafka"
2016-12-11 10:54:33 +0800 [error]: unexpected error error="HTTP status code 400, Bad Request"
2016-12-11 10:54:33 +0800 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/kubeclient-1.2.0/lib/kubeclient/watch_stream.rb:20:in `each'
2016-12-11 10:54:33 +0800 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:362:in `start_watch'
2016-12-11 10:54:33 +0800 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:173:in `block in configure'
2016-12-11 10:54:33 +0800 [info]: process finished code=256
2016-12-11 10:54:33 +0800 [warn]: process died within 1 second. exit.

Doesn't work with K8s 1.6.x RBAC

I created a serviceAccount and the ES cluster worked fine,
but fluentd isn't running.
error="Exception encountered fetching metadata from Kubernetes API endpoint: 403 Forbidden"
What should I do with this error, please?
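
A 403 Forbidden under RBAC usually means the fluentd service account lacks read permissions on pods and namespaces. A sketch of a ClusterRole and binding that grants them follows; the names and the kube-system namespace are placeholders for your own setup, and on Kubernetes 1.6 the RBAC apiVersion would be rbac.authorization.k8s.io/v1beta1 rather than v1:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd-read
rules:
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd-read
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system
```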

Feature Request: docker.container_name in addition to docker.container_id for correlating logs

Hi, thanks for maintaining the great project 👍

I'm using dd-agent + Datadog to collect metrics from my pods and containers.
It annotates metrics with the container name, like `k8s_<deployment/daemonset name><...>`, which is the container name shown when we run `docker ps`, but not the container "id".

What makes it relevant to this project is the recent arrival of Datadog Log Management.
It allows annotating logs with arbitrary metadata - and I have been utilizing this kubernetes_metadata_filter in combination with fluent-plugin-datadog-log to annotate logs as similar as possible to how dd-agent annotates metrics.

Right now, the only important difference between the metadata annotated on metrics and on logs is the docker container's id vs. the docker container's name. I'd like to rely on the container name rather than the id because it looks a bit friendlier to human readers.

// Beware it isn't kubernetes.container_name which is specified via containers[].name in your pod spec.

Would it be ok to add the following features to support this use-case?

  • An option to include/exclude docker.container_id
    • When one relies on container name for correlation, container id is unnecessary
    • Perhaps include_docker_container_id which is true by default?
  • An option to include docker.container_name(NEW)
    • Perhaps include_docker_container_name which is false by default?

I can send a pull request if this feature request is acceptable.
Thanks!
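For illustration, a minimal Ruby sketch of how the docker section of a record could be built under the two proposed options. Note that include_docker_container_id and include_docker_container_name are names suggested in this request, not existing plugin settings:

```ruby
# Sketch only: the option names below are proposals from this feature request,
# not existing plugin configuration parameters.
def docker_metadata(container_id, container_name,
                    include_container_id: true, include_container_name: false)
  meta = {}
  meta['container_id'] = container_id if include_container_id
  meta['container_name'] = container_name if include_container_name
  meta
end
```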

Option to specify merge key (equivalent of parser.hash_value_field)

With merge_json_log true the keys are merged as top-level keys. This is not always the desirable output as (1) some keys can collide (docker, kubernetes) and (2) it is not ideal for indexing.

It would be nice if the plugin could implement the equivalent of the fluentd parser filter's hash_value_field so that the parsed values are stored in a specified field.

Current

input: {log: "{\"level\": 10, \"msg\": \"foo\"}"}

output:

{
  "log": "{...}",
  "level": 10,
  "msg": "foo",
  "stream": "stdout", 
  "docker": {...},
  "kubernetes": {...}
}

Suggested

hash_value_field parsed

input: {log: "{\"level\": 10, \"msg\": \"foo\"}"}

output:

{
  "log": "{...}",
  "parsed": {
    "level": 10,
    "msg": "foo"
  },
  "stream": "stdout", 
  "docker": {...},
  "kubernetes": {...}
}

Thoughts? I can probably do a PR sometime this week if it sounds good and you don't have the bandwidth.
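The suggested behavior can be sketched in a few lines of Ruby. This is not the plugin's implementation, just a minimal illustration of nesting the parsed keys under a configurable field instead of merging them at the top level:

```ruby
require 'json'

# Parse record['log'] and, if it is valid JSON, nest the parsed keys under
# hash_value_field (mirroring the fluentd parser filter option of that name)
# rather than merging them as top-level keys.
def merge_json_log(record, hash_value_field: 'parsed')
  parsed = JSON.parse(record['log']) rescue nil
  return record unless parsed.is_a?(Hash)
  record.merge(hash_value_field => parsed)
end
```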

Tag based upon namespace

Is there a way to have fluentd tag the logs based on the namespace? I need to route different applications (which are separated by namespace) to different destinations.
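One possible approach (not built into this plugin) is to re-tag records after this filter has added the metadata and then route on the new tag. A sketch, assuming fluent-plugin-rewrite-tag-filter 2.x syntax with record accessors:

```
<match kubernetes.**>
  @type rewrite_tag_filter
  <rule>
    key $.kubernetes.namespace_name
    pattern /^(.+)$/
    tag ns.$1
  </rule>
</match>

<match ns.my-app>
  # destination for logs from the my-app namespace
</match>
```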

Tag logs with the current pod information

While converting some legacy workflows to Kubernetes, we noticed that there is a need to tag the log feed with the current pod information.

For example one could have an application that is capable of logging through syslog, but have little control over the format of the logs. A simple solution would be to have a fluentd receiver in that same pod. The fluentd could have the kubernetes metadata filter installed just to pull information for that pod.

Elasticsearch 2.x does not allow `.` in field names

I tried using this plugin with ES 2.1 and get many errors like this:

[2016-01-28 18:23:58,459][DEBUG][action.bulk              ] [Gog] [logstash-2016.01.28][0] failed to execute bulk item (index) index {[logstash-2016.01.28][fluentd][AVKJekf4-Teb7njfaBNh], source[{"log":"2016/01/28 18:23:55 Client ip 10.244.65.1:52922 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null\n","stream":"stderr","time":"2016-01-28T18:23:55.459497426Z","docker":{"container_id":"c34fabafd21cf82441fd389cc71bf87a943be7ca7315ee6069307e7b95ab7e16"},"kubernetes":{"namespace_name":"kube-system","pod_id":"6a691072-c5de-11e5-831e-062fba24dd93","pod_name":"kube-dns-v9-v0kj9","container_name":"healthz","labels":{"k8s-app":"kube-dns","kubernetes.io/cluster-service":"true","version":"v9"},"host":"ip-10-2-63-0.us-west-2.compute.internal"},"tag":"kubernetes.var.log.containers.kube-dns-v9-v0kj9_kube-system_healthz-c34fabafd21cf82441fd389cc71bf87a943be7ca7315ee6069307e7b95ab7e16.log","@timestamp":"2016-01-28T18:23:55.459497426Z"}]}
MapperParsingException[Field name [kubernetes.io/cluster-service] cannot contain '.']
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:278)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:223)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:198)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:310)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:223)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:198)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:310)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:223)
        at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:140)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:121)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:391)
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$2.execute(MetaDataMappingService.java:386)
        at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:388)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

The relevant line seems to be: MapperParsingException[Field name [kubernetes.io/cluster-service] cannot contain '.']. It looks like the official kubernetes labels have the kubernetes.io prefix, which makes ES unhappy since the '.' character is disallowed in ES 2.0.

Perhaps we should scrub these label names, replacing '.' with '_'?
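The proposed scrubbing is small enough to sketch directly (this is an illustration of the idea, not the plugin's code):

```ruby
# Replace '.' in label keys with '_' so that Elasticsearch 2.x accepts the
# resulting field names. Values are left untouched.
def de_dot_label_keys(labels)
  labels.each_with_object({}) do |(key, value), scrubbed|
    scrubbed[key.tr('.', '_')] = value
  end
end
```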

Support Docker fluentd plugin

This plugin is great and it supports both JSON and journalctl approaches. I just learned there is an official fluentd Docker logging driver and think it would be great if this works with it. If it already does, let's update the readme; if not, let's discuss what needs to be done here.

error_class=NoMethodError error="undefined method `debug' for #<String:0x00000002970c28>" tag="kubernetes.var.log.containers..."

https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter/blob/master/lib/fluent/plugin/filter_kubernetes_metadata.rb#L394

log is overloaded here: it should refer to the Logger, not the log message. Please use a different variable name for the log message, e.g. message.

We also need a test for this case e.g. provide a message with a log field that has a JSON parsing error.
@jcantrill

Why do we cache pod metadata using the container name?

From

cache_key = "#{metadata['kubernetes']['namespace_name']}_#{metadata['kubernetes']['pod_name']}_#{metadata['kubernetes']['container_name']}"

we can see that the cache key used for the object being cached includes the container name.

Yet the get_metadata method that fetches the metadata object to be cached

def get_metadata(namespace_name, pod_name, container_name)

does not use the container name at all, though it does add it to the object returned.

This seems to imply that we'll have multiple versions of the same pod metadata cached under different keys. In turn, this can lead to unnecessary traffic to the API server.

Changing the code to fetch the metadata blob using a cache key built only from the namespace name and pod name would still allow us to add the container name to the returned object, while reducing API server traffic and duplicate cached data.

Further, if the traffic drops significantly, one could imagine making the connections to the API server timeout to avoid the need for a continuous connection just for logs.
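A minimal Ruby sketch of the suggested change (the fetcher lambda stands in for the plugin's actual API lookup, which is not reproduced here): key the cache on namespace and pod only, and merge the container name into a copy of the cached entry per record.

```ruby
# Cache pod metadata under "namespace_pod" only; the container name is merged
# into a copy of the cached entry, so multiple containers of the same pod
# share one cache entry and one API fetch.
def pod_metadata(cache, fetcher, namespace_name, pod_name, container_name)
  key = "#{namespace_name}_#{pod_name}"
  cached = (cache[key] ||= fetcher.call(namespace_name, pod_name))
  cached.merge('container_name' => container_name)
end
```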

host metadata

I'm not capturing the host metadata in my local testing environment. Do I need to enable something to get that information?

fluentd log is filled up with KubeClient messages

We are getting lots of fluentd log messages from Kubeclient::Common::WatchNotice like this.

#<Kubeclient::Common::WatchNotice type="MODIFIED", object={:kind=>"Namespace", :apiVersion=>"v1", :metadata=>{...}, :spec=>{...}, :status=>{:phase=>"Active"}}>

It looks like the message is coming from the "puts notice" line in start_namespace_watch (filter_kubernetes_metadata.rb).

    def start_namespace_watch
      resource_version = @client.get_namespaces.resourceVersion
      watcher          = @client.watch_namespaces(resource_version)
      watcher.each do |notice|
        puts notice
        ^^^^^^^^^^^
        case notice.type
          when 'MODIFIED'
            cache_key = notice.object['metadata']['name']
            cached    = @namespace_cache[cache_key]
            if cached
              @namespace_cache[cache_key] = parse_namespace_metadata(notice.object)
            end

Could you consider eliminating the "puts notice" line, or replacing it with a lower log level such as "log.debug(notice)"? Thanks.

Specify kubeconfig file for auth

Rather than breaking out the individual auth bits, it might be better to just specify a kubeconfig file. That's what is generated for the secrets for token-admin and token-kubelet.

Milliseconds are stripped from logs

It appears this plugin strips milliseconds from log entries, making logs appear in searches (e.g., via the kibana UI) in the wrong order.

Pods with dot in the pod_name don't get their metadata added

Unless I'm missing something, the fix in b7ca287 is incomplete.

The regex is now correct (manually verified) but for some unknown reason logs from pods named with a dot anywhere in the name still don't get collected.

I used grokdebug to check the regex, and it seems OK for my pod names.
Regex (default): var\.log\.containers\.(?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$

Example pod tags:

kubernetes.var.log.containers.heapster-v1.2.0-4088228293-11d9f_kube-system_heapster-nanny-7f3832b97b974a41e3ef94c828de5e564c2df0bd4a291156b9f9c05ef004832c.log
kubernetes.var.log.containers.ptlc.debug_default_alpine-e067228f5c2d8f061727bf0d09f3f70da8a5356afd18eb1b6f1b6897c0b0f126.log
kubernetes.var.log.containers.kube-apiserver-ip-10-50-45-119.eu-west-1.compute.internal_kube-system_kube-apiserver-c3e78cbaa087b722ae5d72cbbd9ed25a60e93febf9333b750da0b48e8300b502.log

This is impacting deployment schemes that include the hostname in the kubernetes master component pod names, or use version numbering in semver format like the heapster deployment.
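For what it's worth, the default regex quoted above does match one of the example tags when checked in plain Ruby, which supports the hunch that the records are being dropped later rather than failing at the regex:

```ruby
# The default tag regex from this issue, checked against one of the example
# tags with a dot in the pod name.
TAG_REGEX = /var\.log\.containers\.(?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace>[^_]+)_(?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$/

tag = 'kubernetes.var.log.containers.ptlc.debug_default_' \
      'alpine-e067228f5c2d8f061727bf0d09f3f70da8a5356afd18eb1b6f1b6897c0b0f126.log'
match = TAG_REGEX.match(tag)
```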

How to reproduce:
Deploy a test pod with a dot in the name:

apiVersion: v1
kind: Pod
metadata:
  name: fluentd.debug
  labels:
    app: debug
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - "while true; do date; sleep 5;done"
    image: "alpine:3.4"
    imagePullPolicy: IfNotPresent
    name: alpine

Observe the missing metadata tags for this pod by querying your logs for entries without the metadata (e.g. NOT _exists_: kubernetes for kibana/elasticsearch)

My hunch is that the metadata is silently dropped here https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter/blob/master/lib/fluent/plugin/filter_kubernetes_metadata.rb#L76

emit transaction failed: error_class=NoMethodError error="undefined method `strip\' for #<Symbol:0x000000033f370c>

I apologise in advance if I'm posting this to the wrong repo. I recently started using Kubernetes and keep seeing the following error in my logs:

{ log: '2016-07-28 14:47:55 +0000 [warn]: emit transaction failed: error_class=NoMethodError error="undefined method `strip\' for #<Symbol:0x000000033f370c>" tag="kubernetes.var.log.containers.blockai-ui-v6-web-7a4hq_blockai-ui_blockai-ui-web-39b0262bb5ba550fa6e7d824d337ff3b3cdb77d4c0a50b4221405cdae2be0475.log"\n',
  stream: 'stdout',
  docker: { container_id: '70edf98b8595202c6b564010fe66e8348168cdedeccbf0fd9a94decb0caf39d1' },
  kubernetes:
   { namespace_name: 'kube-system',
     pod_id: '4202b066-52eb-11e6-a5e2-068755380eff',
     pod_name: 'fluentd-elasticsearch-ip-172-20-0-130.us-west-1.compute.internal',
     container_name: 'fluentd-elasticsearch',
     labels: { 'k8s-app': 'fluentd-logging' },
     host: 'ip-172-20-0-130.us-west-1.compute.internal' } }

Is it possibly an issue with this plugin? Thanks!

Possibly related: kubernetes/kubernetes#29640

Plugin sometimes causes the fluentd service to fail to start.

Here is the error log.
ERROR MESSAGE

Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [info]: following tail of /var/log/messages
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [error]: unexpected error error_class=KubeException error=#<KubeException: HTTP status code 400, Bad Request>
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [error]: /usr/share/gems/gems/kubeclient-1.2.0/lib/kubeclient/watch_stream.rb:20:in `each'
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [error]: /usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:401:in `start_namespace_watch'
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [error]: /usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:176:in `block in configure'
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [info]: shutting down fluentd
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [info]: shutting down input type="debug_agent" plugin_id="debug_agent_input"
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [info]: shutting down input type="tail" plugin_id="object:b2bbb4"
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [info]: shutting down input type="monitor_agent" plugin_id="monitor_agent_input"
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [info]: shutting down input type="forward" plugin_id="forward_input"
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [info]: shutting down input type="tail" plugin_id="object:9126fc"
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [info]: shutting down input type="http" plugin_id="http_input"
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [error]: unexpected error error="HTTP status code 400, Bad Request"
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [error]: /usr/share/gems/gems/kubeclient-1.2.0/lib/kubeclient/watch_stream.rb:20:in `each'
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [error]: /usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:362:in `start_watch'
Oct 21 11:37:39 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [error]: /usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:173:in `block in configure'
Oct 21 11:37:40 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [info]: process finished code=256
Oct 21 11:37:40 shift-node2 fluentd: 2016-10-21 11:37:39 +0800 [warn]: process died within 1 second. exit.

Configuration

<source>
  @type tail
  @label @INGRESS
  path /var/log/containers/*.log
  pos_file /var/log/fluent/es-containers.log.pos
  tag kubernetes.*
  time_format %Y-%m-%dT%H:%M:%S
  format json
  keep_time_key true
  read_from_head true
  exclude_path []
</source>

<filter kubernetes.**>
  type kubernetes_metadata
  kubernetes_url "https://shift-master.origin.com:8443"
  bearer_token_file /root/token
  ca_file /root/ca
  include_namespace_id true
  use_journal "false"
</filter>

<match kubernetes.**>
  @type file
  path /var/log/fluent/container
</match>

Version

# ruby -v
ruby 2.0.0p598 (2014-11-13) [x86_64-linux]
# fluentd --version
fluentd 0.12.20
# gem list 

*** LOCAL GEMS ***

activesupport (4.2.4)
addressable (2.3.6)
bigdecimal (1.2.0)
cool.io (1.2.4)
dalli (2.7.4)
docker-api (1.22.4)
domain_name (0.5.20160615)
excon (0.39.6)
fluent-plugin-docker_metadata_filter (0.1.1)
fluent-plugin-flatten-hash (0.2.0)
fluent-plugin-kubernetes_metadata_filter (0.24.0)
fluent-plugin-rewrite-tag-filter (1.5.5)
fluentd (0.12.20)
http (0.9.8)
http-cookie (1.0.2)
http-form_data (1.0.1)
http_parser.rb (0.6.0)
i18n (0.7.0)
io-console (0.4.2)
json (1.7.7)
kubeclient (1.2.0)
lru_redux (1.1.0)
mime-types (1.19)
minitest (4.7.0)
msgpack (0.5.11)
multi_json (1.10.1)
netrc (0.11.0)
psych (2.0.0)
rack (1.5.2)
rdoc (4.0.0)
recursive-open-struct (1.0.0)
rest-client (2.0.0)
sigdump (0.2.2)
string-scrub (0.0.5)
thread_safe (0.3.4)
tzinfo (1.2.2)
tzinfo-data (1.2014.10)
unf (0.1.4)
unf_ext (0.0.7.2)
yajl-ruby (1.2.1)

It sometimes starts correctly. Next is the log with warnings.

Start Log

Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [info]: following tail of /var/log/containers/django-example-8-sg3rz_test_django-example-1ac621022dc79d21325422419b08ed3170e2ab6b39572937740732ad32607b5d.log
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [warn]: emit transaction failed: error_class=KubeException error="HTTP status code 400, Bad Request" tag="kubernetes.var.log.containers.django-example-8-sg3rz_test_django-example-1ac621022dc79d21325422419b08ed3170e2ab6b39572937740732ad32607b5d.log"
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [warn]: /usr/share/gems/gems/kubeclient-1.2.0/lib/kubeclient/watch_stream.rb:20:in `each'
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [warn]: /usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:401:in `start_namespace_watch'
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [warn]: /usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:176:in `block in configure'
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [info]: following tail of /var/log/containers/database-1-vcik8_jenkins-pipeline_POD-78f5d08067573d3d044d46ad8f1d511dffe1c8cce88a635d566d5a6773dae576.log
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [info]: following tail of /var/log/containers/database-1-vcik8_jenkins-pipeline_ruby-helloworld-database-bcfd728a3e286e57e510f535ce757774c78cc5e57e5f7e613384e9b25292f12f.log
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [info]: following tail of /var/log/containers/app-a-7-rrvok_demo_POD-12409da468506b1da2b8c4eb2f13fd32f0f0609efec76f43f96b20fe9ac2df53.log
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [info]: following tail of /var/log/containers/docker-registry-5-si9su_default_registry-8c7f35efec9d1e73250e1ac6e0e07b1c462b31f790a3670de32e90cbabac7d3e.log
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [warn]: emit transaction failed: error_class=KubeException error="HTTP status code 400, Bad Request" tag="kubernetes.var.log.containers.docker-registry-5-si9su_default_registry-8c7f35efec9d1e73250e1ac6e0e07b1c462b31f790a3670de32e90cbabac7d3e.log"
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [warn]: /usr/share/gems/gems/kubeclient-1.2.0/lib/kubeclient/watch_stream.rb:20:in `each'
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [warn]: /usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:362:in `start_watch'
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [warn]: /usr/share/gems/gems/fluent-plugin-kubernetes_metadata_filter-0.24.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:173:in `block in configure'
Oct 21 11:54:21 shift-node2 fluentd: 2016-10-21 11:54:21 +0800 [info]: following tail of /var/log/containers/registry-console-2-wh6bb_default_POD-c280bca35a20bd21ec7ca03ed023c30c1d42207d13fe1458e34222077fe9011a.log

Why does it sometimes fail to start?

Restricting API calls to a single namespace

Consider a DaemonSet in namespace friday whose spec.serviceAccountName is set to fluentd.
The ServiceAccount is named fluentd and lives in the same namespace.

$ kubectl get sa -n friday
NAME      SECRETS   AGE
default   1         4d
fluentd   1         2h

A Role named fluentd is created in the same namespace:

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"

A RoleBinding is also created in the same namespace:

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: fluentd
subjects:
- kind: ServiceAccount
  name: fluentd
  apiGroup: ""
roleRef:
  kind: Role
  name: fluentd
  apiGroup: ""

We can confirm the RBAC setup is working as expected:

$ kubectl auth can-i list pods --as system:serviceaccount:friday:fluentd --namespace friday
yes

The current namespace is fed to the fluentd pods via a POD_NAMESPACE environment variable:

      containers:
      - name: fluentd-es
        image: gcr.io/google-containers/fluentd-elasticsearch:v2.0.1
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

This allows the fluentd <source> to look for log files from the current namespace:

    <source>
      type tail
      path "/var/log/containers/*_#{ENV['POD_NAMESPACE']}_*.log"
      pos_file "/var/log/es-containers.#{ENV['POD_NAMESPACE']}.log.pos"
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      tag kubernetes.*
      format json
      read_from_head true
    </source>

The problem is that the kubernetes_metadata plugin always queries pods at the cluster level, which fails due to the limited permissions of the service account:

2017-10-24 17:49:06 +0000 [info]: reading config file path="/etc/fluent/fluent.conf"
2017-10-24 17:49:06 +0000 [info]: starting fluentd-0.12.39 without supervision
2017-10-24 17:49:06 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.9.5'
2017-10-24 17:49:06 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '0.27.0'
2017-10-24 17:49:06 +0000 [info]: gem 'fluent-plugin-prometheus' version '0.3.0'
2017-10-24 17:49:06 +0000 [info]: gem 'fluent-plugin-systemd' version '0.0.8'
2017-10-24 17:49:06 +0000 [info]: gem 'fluentd' version '0.12.39'
2017-10-24 17:49:06 +0000 [info]: adding match pattern="fluent.**" type="null"
2017-10-24 17:49:06 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2017-10-24 17:49:07 +0000 [info]: adding match pattern="**" type="elasticsearch"
2017-10-24 17:49:07 +0000 [error]: config error file="/etc/fluent/fluent.conf" error="Exception encountered fetching metadata from Kubernetes API endpoint: 403 Forbidden (User \"system:serviceaccount:friday:fluentd\" cannot list pods at the cluster scope.)"

The plugin should allow limiting its watches to one or more namespaces. This is a use case for shared clusters where tenants are given separate namespaces and want to deploy fluentd logging in the scope of their own namespace.

Invalid container_id when container of the pod was restarted

A few days ago I faced a problem with skydns. Its healthcheck resulted in the kube2sky container restarting. Twice. When I analyzed the logs in Kibana I was confused because the logs showed that kube2sky was restarted, yet the container id stayed the same, until I noticed the log tag, which showed that the actual container id had changed. Here is the screenshot.

The original container ID was fa206692f400f634c7b1546d88a4a110a975782ef28e2f141ce928534b002273, then after a restart it changed to 88dcb977a7fae920ce86725da86295545bed317950048c3a302dfec04fda5c2b, and after the next restart it changed to 165f3844607afd71f2477eed37b2cf50d74931869c2fb7259eafb9bc744e7275
screenshot_20161004_171358

I use fluent-plugin-kubernetes_metadata_filter 0.24.0 which is proposed by kubernetes repo.

How to match logs from specific containers?

Hi there!

This plugin is helping us a lot with our logging :) Thanks! Now, I would like to filter out logs from some specific containers, but I haven't found how. I'm completely new to fluentd and I don't fully understand the td-agent.conf syntax, so sorry if this is a silly question :D

I would like to add something like this on top of my td-agent.conf:

<match kubernetes.container_name=="my_super_spamy_deployment">
  @type null
</match>

But of course that's invalid syntax. How can we achieve something like that?
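One way to do this (assuming fluentd v1 config syntax and the built-in grep filter with record accessors) is to drop matching records after this plugin has added the metadata:

```
<filter kubernetes.**>
  @type grep
  <exclude>
    key $.kubernetes.container_name
    pattern /^my_super_spamy_deployment$/
  </exclude>
</filter>
```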

v0.22 or v0.23 causing nil->String conversions

Something broke between fabric8io/docker-fluentd-kubernetes v1.13 and v1.14 for me. The only changes are the Kubernetes metadata filter, which jumped from 0.21 to 0.22 and then 0.23.

These are the only two PRs merged in those releases:
1e82acf
beb0b85

While trying to figure out the root cause, I noticed the stack traces:

2016-06-16 21:49:49 +0000 [warn]: [Fluent::ElasticsearchOutputDynamic] failed to flush the buffer. plugin_id="object:1a23180" retry_time=0 next_retry=2016-06-16 21:49:50 +0000 chunk="5356c373da2132319e73f720af223749" error_class=TypeError error="no implicit conversion of nil into String"
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.5.0/lib/fluent/plugin/out_elasticsearch_dynamic.rb:130:in `eval'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.5.0/lib/fluent/plugin/out_elasticsearch_dynamic.rb:130:in `block in write'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluentd-0.14.0/lib/fluent/event.rb:194:in `each'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluentd-0.14.0/lib/fluent/event.rb:194:in `block in each'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluentd-0.14.0/lib/fluent/plugin/buffer/memory_chunk.rb:90:in `open'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluentd-0.14.0/lib/fluent/plugin/buffer/memory_chunk.rb:90:in `open'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluentd-0.14.0/lib/fluent/event.rb:193:in `each'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-1.5.0/lib/fluent/plugin/out_elasticsearch_dynamic.rb:115:in `write'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluentd-0.14.0/lib/fluent/compat/output.rb:118:in `write'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluentd-0.14.0/lib/fluent/plugin/output.rb:778:in `try_flush'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluentd-0.14.0/lib/fluent/plugin/output.rb:953:in `flush_thread_run'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluentd-0.14.0/lib/fluent/plugin/output.rb:360:in `block (2 levels) in start'
  2016-06-16 21:49:49 +0000 [warn]: /opt/rh/rh-ruby22/root/usr/local/share/gems/gems/fluentd-0.14.0/lib/fluent/plugin_helper/thread.rb:66:in `block in thread_create'

Given that it's happening in elasticsearch_dynamic, perhaps nobody has noticed it yet because they don't use a setup like mine:

  <match **>
    type elasticsearch_dynamic
[...]
    logstash_format true
    logstash_prefix "logstash-jenkins-${record[\'kubernetes\'][\'namespace_name\']}"
[...]

Could it be that there are log entries now where record['kubernetes']['namespace_name'] is not defined? That makes me suspect the systemd change. I'll try to debug some more, but my Ruby is non-existent.

cc @richm

Not able to install

Probably entirely my fault, but I can't seem to install this plug-in.

$ sudo gem install fluent-plugin-kubernetes_metadata_filter
Building native extensions.  This could take a while...
ERROR:  Error installing fluent-plugin-kubernetes_metadata_filter:
    ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.9.1 extconf.rb
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- mkmf (LoadError)
    from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
    from extconf.rb:1:in `<main>'


Gem files will remain installed in /var/lib/gems/1.9.1/gems/msgpack-0.5.11 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/msgpack-0.5.11/ext/msgpack/gem_make.out

Need access to *namespace* annotations

#31 gets pod annotations, but I would like access to namespace annotations. How can I get this information?
$ kubectl describe namespace NAMESPACE
shows the namespace annotations: e.g.:
kubectl describe namespace default
Name: default
Labels: myLabel=abc
Annotations: myAnnotation=xyz
Status: Active

No resource quota.

No resource limits.

[NOTE: kubectl annotate namespace NAMESPACE key=value <<--- this adds the annotations...]

ERROR: Failed to build gem native extension.

Hello everybody,
I am getting the following error when I try the installation

gem install fluent-plugin-kubernetes_metadata_filter
Fetching: msgpack-1.0.2.gem (100%)
Building native extensions. This could take a while...
ERROR: Error installing fluent-plugin-kubernetes_metadata_filter:
ERROR: Failed to build gem native extension.

current directory: /var/lib/gems/2.3.0/gems/msgpack-1.0.2/ext/msgpack

/usr/bin/ruby2.3 -r ./siteconf20161219-10470-1ybqy2t.rb extconf.rb
mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

extconf failed, exit code 1

Gem files will remain installed in /var/lib/gems/2.3.0/gems/msgpack-1.0.2 for inspection.
Results logged to /var/lib/gems/2.3.0/extensions/x86_64-linux/2.3.0/msgpack-1.0.2/gem_make.out

Do you have any idea on the reasons why this is happening?

Thank you so much.

dump_stats breaks when include_namespace_metadata is false

dump_stats breaks when include_namespace_metadata is false (default).

  2017-11-08 22:45:23 +0000 [warn]: emit transaction failed: error_class=NoMethodError error="undefined method `count' for nil:NilClass" tag="kubernetes.var.log.containers.kubernetes-dashboard-gxb85_kube-system_kubernetes-dashboard-60d8443ca9eb166b9033dec43ad59d273a3691854b124f5bfe96be679212b033.log"
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-kubernetes_metadata_filter-0.31.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:140:in `dump_stats'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-kubernetes_metadata_filter-0.31.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:337:in `filter_stream_from_files'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/event_router.rb:152:in `block in emit'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/event_router.rb:151:in `each'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/event_router.rb:151:in `emit'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/event_router.rb:90:in `emit_stream'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/plugin/in_tail.rb:311:in `receive_lines'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/plugin/in_tail.rb:429:in `wrap_receive_lines'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/plugin/in_tail.rb:626:in `on_notify'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/plugin/in_tail.rb:455:in `on_notify'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/plugin/in_tail.rb:542:in `on_timer'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/cool.io-1.5.1/lib/cool.io/loop.rb:88:in `run_once'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/cool.io-1.5.1/lib/cool.io/loop.rb:88:in `run'
  2017-11-08 22:45:23 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/plugin/in_tail.rb:295:in `run'

Since @namespace_cache is initialized only when include_namespace_metadata is enabled, dump_stats needs to handle that case conditionally.
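A nil-safe sketch of the idea (an illustration, not the plugin's actual dump_stats): only count the namespace cache when it exists.

```ruby
# Only report namespace cache stats when the cache exists; it is created only
# when include_namespace_metadata is enabled.
def dump_stats(pod_cache, namespace_cache)
  stats = { 'pod_cache_size' => pod_cache.count }
  stats['namespace_cache_size'] = namespace_cache.count if namespace_cache
  stats
end
```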

plugin doesn't work if the logs are not in /var/log/containers

We deploy fluentd as a container and mount the host machine's /var/log/containers into the fluentd container at /containerlogs. This location doesn't match the plugin's regex, so the plugin doesn't work. If we have a custom path for container logs, is there a way to make it work?
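One workaround, assuming a DaemonSet-style deployment, is to mount the host directory at the same path inside the container so the plugin's file-name regex still matches; a sketch of the relevant pod spec fragment (volume names are illustrative):

```yaml
# Illustrative: keep the in-container path identical to the host path
# so /var/log/containers/<pod>_<namespace>_<container>-<id>.log
# still matches the plugin's expected layout.
volumeMounts:
  - name: varlogcontainers
    mountPath: /var/log/containers
    readOnly: true
volumes:
  - name: varlogcontainers
    hostPath:
      path: /var/log/containers
```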

Warning about failing to expand kubernetes label.

Does anyone know why the following warnings are filling up my logs?

[warn]: failed to expand %Q[#{kubernetes["labels"]}] error_class=NameError error="undefined local variable or method `kubernetes'"

Kubernetes version: v1.5

I'm using a fluentd daemonset to push logs to AWS CloudWatch Logs.
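For reference, a hedged sketch of how a nested record field is usually referenced from a record_transformer filter with enable_ruby: the bare `kubernetes` in the warning above is not a defined Ruby variable, so the field has to be read through `record` (field and filter names here are illustrative):

```
<filter kubernetes.**>
  @type record_transformer
  enable_ruby true
  <record>
    # Access nested fields through the record hash, not a bare variable
    k8s_labels ${record["kubernetes"]["labels"]}
  </record>
</filter>
```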

annotations disappear?

Hi,

I'm using minikube to test this plugin. For some reason annotations are present for the first few log records, then they disappear. I've been pulling my hair out trying to figure out why.

It seems the metadata is stored into the cache with the annotations, but when it is retrieved from the cache by key, the annotations are missing while all the other data is there.

Any ideas what's going on?
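One thing worth checking: the plugin only retains annotations whose keys match its annotation_match setting, so a configuration along these lines (the regex is illustrative) keeps all of them:

```
<filter kubernetes.**>
  @type kubernetes_metadata
  # Retain every annotation; without a match, annotation keys are dropped
  annotation_match [ ".*" ]
</filter>
```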

exclude namespace kube-system to send logs to ElasticSearch

Is there a way to have fluentd exclude the kube-system namespace so that its logs are not sent to Elasticsearch and don't appear in Kibana?

I'm trying to add a rule to td-agent.conf so that it stops sending logs from the kube-system namespace to ES, leaving only logs from the other namespaces in Kibana.

Thanks in advance.
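A common approach, sketched here with fluentd's built-in grep filter; the field path assumes this plugin has already added the kubernetes metadata, and the `$.` record-accessor syntax assumes fluentd v1.x:

```
<filter kubernetes.**>
  @type grep
  <exclude>
    # Drop any record whose namespace is kube-system before it reaches
    # the Elasticsearch output
    key $.kubernetes.namespace_name
    pattern /^kube-system$/
  </exclude>
</filter>
```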

How to create a rule to filter logs by k8s labels?

I.e. I have k8s labels and I'd like to apply some filter on logs which relate to these labels:

    "kubernetes": {
      "namespace_name": "default",
      "pod_id": "397f9943-83de-11e6-9679-525400d2e1e2",
      "pod_name": "app-primary-3029120254-rb5u3",
      "container_name": "my-primary-app",
      "labels": {
        "app": "app",
        "app": "primary",
        "pod-template-hash": "3029120254"
      },
      "host": "ubuntu3"

And I'd like to remove ANSI colors from these logs:

<label %what_should_i_put_here?% >
    type color_stripper
    strip_fields log
</label>
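One way to select records by a Kubernetes label is the built-in grep filter rather than a `<label>` section; the match pattern and field path below are illustrative, and the `$.` record-accessor syntax assumes fluentd v1.x:

```
<filter kubernetes.**>
  @type grep
  <regexp>
    # Keep only records from pods carrying the label app=app;
    # downstream filters (e.g. color_stripper) then see only those
    key $.kubernetes.labels.app
    pattern /^app$/
  </regexp>
</filter>
```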

Plugin should return string keys not symbols

The plugin returns records like this:

{:docker=>{:container_id=>"46ac488dad3d826d86c1b095c8ea4b0904ee8c8c95e06e5f55571b99e91ee00f"},
 :kubernetes=>{:namespace_name=>"test", ...}}

This is not permitted in fluentd: http://docs.fluentd.org/articles/plugin-development#record-format:
"Fluentd plugins assume the record is a JSON so the key should be the String, not Symbol. If you emit a symbol keyed record, it may cause a problem."

router.emit(tag, time, {'foo' => 'bar'})  # OK!
router.emit(tag, time, {:foo => 'bar'})   # NG!

This causes problems trying to pass the data through filters. Specifically, record_transformer cannot reference the keys, either to use them to build other fields in the record, or to remove_keys.
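The fix amounts to emitting string-keyed hashes; a minimal illustration of the conversion (the helper name is hypothetical, not part of the plugin):

```ruby
# Hypothetical helper: recursively convert symbol keys to strings so
# emitted records satisfy fluentd's string-key requirement.
def stringify_keys(obj)
  case obj
  when Hash
    obj.each_with_object({}) { |(k, v), out| out[k.to_s] = stringify_keys(v) }
  when Array
    obj.map { |e| stringify_keys(e) }
  else
    obj
  end
end

record = { docker: { container_id: "46ac488d" }, kubernetes: { namespace_name: "test" } }
stringify_keys(record)
# => {"docker"=>{"container_id"=>"46ac488d"}, "kubernetes"=>{"namespace_name"=>"test"}}
```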

prerequisites to use this plugin

Hi,

Are there any prerequisites for using this plugin? Do I have to run td-agent inside the Kubernetes cluster? I want to run td-agent outside of Kubernetes and without specifying the Kubernetes API, since I don't need the additional metadata or tags.
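Running inside the cluster is not required: per the plugin's docs, if kubernetes_url is unset and the KUBERNETES_SERVICE_HOST/PORT environment variables are absent, no API lookups happen and only the metadata derivable from the log file names is added. A minimal sketch:

```
<filter kubernetes.**>
  @type kubernetes_metadata
  # No kubernetes_url set and no in-cluster env vars: the plugin adds
  # only file-name-derived metadata (pod name, namespace, container name)
</filter>
```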
