
mulog's Introduction

μ/log



μ/log (Pronounced: /mjuːlog/) is a micro-logging library that logs events and data, not words!

From the Greek letter μ, mu (Pronunciation: /mjuː/)
The twelfth letter of the Greek alphabet (Μ, μ), often used as a prefix for micro-, which is 10⁻⁶ in the International System of Units (SI). The lowercase letter "u" is often substituted for "μ" when the Greek character is not typographically available.

(source: https://en.wikipedia.org/wiki/Mu_(letter))

Features

Here are some features and key design decisions that make μ/log special:

  • Effortlessly logs events as data points.
  • No need to construct strings that then need to be deconstructed later.
  • Fast, extremely fast: under 300 nanoseconds per log entry.
  • Memory bound; no unbounded use of memory.
  • All the processing and rendering happens asynchronously.
  • Ability to add contextual logging.
  • Adding publishers won't affect logging performance.
  • Extremely easy to create stateful publishers for new systems.
  • Wide range of publishers available (see the available list).
  • Event logs are useful, but not as important as process flow (therefore it is preferable to drop events rather than crash the process).
  • Because it is cheap to log events, you can freely log plenty.
  • And events are just data, so you can process, enrich, filter, aggregate and visualise the data with your own tools.

Motivation

It is not the intention of µ/log to be a logging system in the sense of Log4j et al. In any significant project I have worked on in the last 15 years, logging text messages produced a large amount of strings which were hard to make sense of, and thus mostly ignored. µ/log's idea is to replace the "3 Pillars of Observability" with a more fundamental concept: "the event". Event-based data is easy to index, search, augment, aggregate and visualise, and can therefore easily replace traditional logs, metrics and traces.

Existing logging libraries are based on a design from the '80s and early '90s. Most systems at the time were developed on standalone servers, where logging messages to the console or to a file was the predominant practice. Logging mostly provided debugging information and introspection into system behaviour.

Most modern systems are distributed across virtualized machines that live in the cloud. These machines can disappear at any time. In this context, logging to the local file system isn't useful, as logs are easily lost when virtual machines are destroyed. It is therefore common practice to use log collectors and centralized log processors. The ELK stack has been predominant in this space for years, but there is a multitude of other commercial and open-source products.

Most of these systems have to deal with unstructured data represented as formatted strings in files. The process of extracting information out of these strings is tedious, error prone and definitely not fun. But why did we encode this information as strings in the first place? Only because existing log frameworks, redesigned over various decades, still follow the same structure as when systems lived on a single server.

I believe we need to break free of these anachronistic designs and use event loggers, not message loggers: loggers designed for dynamic distributed systems living in the cloud and using centralized log aggregators. Here is μ/log, designed for this very purpose.

Watch my talk on μ/log at the London Clojurians Meetup:

μ/log and the next 100 logging systems


Usage

In order to use the library, add the dependency to your project:

;; Leiningen project
[com.brunobonacci/mulog "0.9.0"]

;; deps.edn format
{:deps {com.brunobonacci/mulog {:mvn/version "0.9.0"}}}


Then require the namespace:

(ns your-ns
  (:require [com.brunobonacci.mulog :as μ]))

;; or for the more ASCII traditionalists
(ns your-ns
  (:require [com.brunobonacci.mulog :as u]))

Check the online documentation

Then instrument your code with the log you deem useful. The general structure is

(μ/log event-name, key1 value1, key2 value2, ... keyN valueN)

You can add as many key-value pairs as you deem useful to express the event in your system.

For example:

;; good to use namespaced keywords for the event-name
(μ/log ::hello :to "New World!")

However, you will NOT be able to see any events until you add a publisher, which will take your events and send them to a distributed logging system or to your local console (if you are developing).

(μ/start-publisher! {:type :console})

At this point you should be able to see the previous event in your REPL terminal and it will look as follows:

{:mulog/trace-id #mulog/flake "4VTBeu2scrIEMle9us8StnmvRrj9ThWP", :mulog/timestamp 1587500402972, :mulog/event-name :your-ns/hello, :mulog/namespace "your-ns", :to "New World!"}

Here are some examples of events you might want to log:

;; The general form is
(μ/log ::event-name, :key1 "value1", :key2 :value2, :keyN "valueN")

;; examples
(μ/log ::system-started :version "0.1.0" :init-time 32)

(μ/log ::user-logged :user-id "1234567" :remote-ip "1.2.3.4" :auth-method :password-login)

(μ/log ::http-request :path "/orders", :method :post, :remote-ip "1.2.3.4", :http-status 201, :request-time 129)

(def x (RuntimeException. "Boom!"))
(μ/log ::invalid-request :exception x, :user-id "123456789", :items-requested 47)

(μ/log ::position-updated :poi "1234567" :location {:lat 51.4978128, :lng -0.1767122} )

All of the above are examples of events you might want to track, collect and aggregate in a specialized time-series database.

Use of context

Adding events which are rich in attributes and dimensions is extremely useful; however, it is not easy to have all the attributes and dimensions at your disposal everywhere in the code. To get around this problem, μ/log supports the use of context.

There are two levels of context, a global level and a local one.

The global context allows you to define properties and values which will be added to all the events logged afterwards.

For example:

(μ/log ::system-started :init-time 32)
;; {:mulog/timestamp 1572709206048, :mulog/event-name :your-ns/system-started, :mulog/namespace "your-ns", :init-time 32}

;; set global context
(μ/set-global-context! {:app-name "mulog-demo", :version "0.1.0", :env "local"})

(μ/log ::system-started :init-time 32)
;; {:mulog/event-name :your-ns/system-started,
;;  :mulog/timestamp  1587501375129,
;;  :mulog/trace-id   #mulog/flake "4VTCYUcCs5KRbiRibgulnns3l6ZW_yxk",
;;  :mulog/namespace  "your-ns",
;;  :app-name         "mulog-demo",
;;  :env              "local",
;;  :init-time        32,
;;  :version          "0.1.0"}

Typically, you will set the global context once, in your main function at the start of your application, with properties which are valid for all events emitted by the process. Use set-global-context! to set the values, or update-global-context! with an update function to change some of the values, as sketched below. Examples of properties you should consider adding to the global context are app-name, version, environment, process-id, host-ip, os-type, jvm-version, etc.
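
For example, here is a minimal sketch of updating the global context after it has been set (assuming update-global-context! applies the given function to the current context, as swap! does; the :host-ip value is illustrative):

;; enrich the global context after startup
(μ/update-global-context! assoc :host-ip "10.0.0.12")

;; or derive a new value from an existing one
(μ/update-global-context! update :version str "-SNAPSHOT")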

The second type of context is the (thread) local context. It can be used to inject information about the current processing, and all the events within the scope of the context will inherit the properties and their values.

For example, the following event will contain all the properties of the global context, all the properties of the local context and all the inline properties:

(μ/with-context {:order "abc123"}
  (μ/log ::item-processed :item-id "sku-123" :qt 2))

;; {:mulog/event-name :your-ns/item-processed,
;;  :mulog/timestamp  1587501473472,
;;  :mulog/trace-id   #mulog/flake "4VTCdCz6T_TTM9bS5LCwqMG0FhvSybkN",
;;  :mulog/namespace  "your-ns",
;;  :app-name         "mulog-demo",
;;  :env              "local",
;;  :item-id          "sku-123",
;;  :order            "abc123",
;;  :qt               2,
;;  :version          "0.1.0"}

The local context can be nested:

(μ/with-context {:transaction-id "tx-098765"}
  (μ/with-context {:order "abc123"}
    (μ/log ::item-processed :item-id "sku-123" :qt 2)))

;; {:mulog/event-name :your-ns/item-processed,
;;  :mulog/timestamp  1587501492168,
;;  :mulog/trace-id   #mulog/flake "4VTCeIc_FNzCjegzQ0cMSLI09RqqC2FR",
;;  :mulog/namespace  "your-ns",
;;  :app-name         "mulog-demo",
;;  :env              "local",
;;  :item-id          "sku-123",
;;  :order            "abc123",
;;  :qt               2,
;;  :transaction-id   "tx-098765",
;;  :version          "0.1.0"}

Local context works across function boundaries:

(defn process-item [sku quantity]
    ;; ... do something
    (μ/log ::item-processed :item-id sku :qt quantity)
    ;; ... do something
    )

(μ/with-context {:order "abc123"}
    (process-item "sku-123" 2))

;; {:mulog/event-name :your-ns/item-processed,
;;  :mulog/timestamp  1587501555926,
;;  :mulog/trace-id   #mulog/flake "4VTCi08XrCWQLrR8vS2nP8sG1zDTGuY_",
;;  :mulog/namespace  "your-ns",
;;  :app-name         "mulog-demo",
;;  :env              "local",
;;  :item-id          "sku-123",
;;  :order            "abc123",
;;  :qt               2,
;;  :version          "0.1.0"}

Best practices

Here are some best practices to follow when logging events:

  • Use namespaced keywords or qualified strings for the event-name.
  • Log plain values, not opaque objects; objects will be turned into strings, which diminishes their value.
  • Do not log mutable values: since rendering is done asynchronously, you could end up logging a different state. If values are mutable, capture the current state (deref) and log that (see the sketch after this list).
  • Avoid logging deeply nested maps; they are hard to query.
  • Log timestamps with millisecond precision.
  • Use the global context to enrich events with the application name (:app-name), version (:version), environment (:env), host, OS pid, and other useful information so that it is always possible to determine the source of the event. See example here.
  • If you have to log an error/exception, put the exception object under an :exception key. For example:
    (try
      (something)
      (catch Exception x
        (μ/log ::actionX :exception x :status :failed)))
    It will be easier to search for all the errors in Elasticsearch just by checking for the presence of the exception key (Elasticsearch query example: exception:*)
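
To illustrate the advice about mutable values, here is a minimal sketch (the session-stats atom is hypothetical):

;; hypothetical atom holding mutable state
(def session-stats (atom {:requests 0}))

;; capture the current state with deref (@) so that the asynchronous
;; rendering cannot observe a later state of the atom
(μ/log ::stats-snapshot :stats @session-stats)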

μ/trace

since v0.2.0


μ/trace (Pronounced: /mjuːtrace/) is a micro distributed tracing library with the focus on tracking data with custom attributes.

μ/trace is a subsystem of μ/log and it relies heavily on it. While the objective of μ/log is to record and publish an event which happens at a single point in time, the objective of μ/trace is to record and publish an event that spans over a short period of time and, potentially, across multiple systems.

μ/trace can be used within a single system, where it will provide accurate data around the instrumented operations of that system. μ/trace can also be used in a distributed setup, in conjunction with other distributed tracers such as Zipkin, and participate in distributed traces.

μ/trace data points are not confined to distributed tracers: the data can also be used and interpreted in Elasticsearch, in real-time streaming systems which use Apache Kafka, etc.

Assume that you have a complex operation for which you want to track the rate, the outcome and the latency, together with contextual information about the call.

One example of such calls is the call to an external service or database to retrieve the current product availability.

Here is an example of such a call:

;; example call to external service
(defn product-availability [product-id]
  (http/get availability-service {:product-id product-id}))

We want to track how long this operation takes and, if it fails, the reason for the failure. With μ/trace we can instrument the request as follows:

;; same require as mulog
;; (require '[com.brunobonacci.mulog :as μ])

;; wrap the call to the `product-availability` function with μ/trace
(μ/trace ::availability
  []
  (product-availability product-id))

μ/trace will start a timer before calling (product-availability product-id) and, when the execution completes, it will log an event using μ/log. To the caller it will be like calling (product-availability product-id) directly, as the caller receives the evaluation result of the body. However, μ/log will publish the following event:

;; {:mulog/event-name :your-ns/availability,
;;  :mulog/timestamp 1587504242983,
;;  :mulog/trace-id #mulog/flake "4VTF9QBbnef57vxVy-b4uKzh7dG7r7y4",
;;  :mulog/root-trace #mulog/flake "4VTF9QBbnef57vxVy-b4uKzh7dG7r7y4",
;;  :mulog/duration 254402837,
;;  :mulog/namespace "your-ns",
;;  :mulog/outcome :ok,
;;  :app-name "mulog-demo",
;;  :env "local",
;;  :version "0.1.0"}

There are a few things to notice here:

  • Firstly, it inherited the global context which we set for μ/log (:app-name, :version and :env).
  • Next, we have the same keys which are available in μ/log events, such as :mulog/event-name, :mulog/timestamp, :mulog/namespace and :mulog/trace-id.
  • In addition to :mulog/trace-id, which identifies this particular trace event, there are two more IDs: one called :mulog/root-trace and a second called :mulog/parent-trace. The latter is missing here because this trace doesn't have a parent μ/trace block. The :mulog/root-trace is the ID of the originating trace, which could come from another system. In this example :mulog/root-trace is the same as :mulog/trace-id because this trace is the first one (and the only one) in the stack.
  • Next, we have :mulog/duration, which is the duration of the evaluation of the body (the product-availability call) expressed in nanoseconds.
  • Whether the call succeeded or failed is captured in :mulog/outcome, which can be :ok or :error. The latter is set when an exception is raised; in that case an additional :exception property is added with the actual exception. In case of errors, the exception is rethrown to the caller for further handling.

In the above example we are missing some contextual information. For example, we know that someone is enquiring about product availability, but we don't know which product. This information is available at the point of call, and it would be nice to see it in the trace as well. That's easily done.

Like μ/log events, we can add key/value pairs to the trace as well:

(μ/trace ::availability
  [:product-id product-id]
  (product-availability product-id))

Note that within the square brackets we have added the info we need. But we can go one step further. Let's assume that we also have the order-id and the user-id of whoever is enquiring about the availability in the local context; then we would get the following trace event:

(def product-id "2345-23-545")
(def order-id   "34896-34556")
(def user-id    "709-6567567")

(μ/with-context {:order order-id, :user user-id}
  (μ/trace ::availability
    [:product-id product-id]
    (product-availability product-id)))

;; {:mulog/event-name :your-ns/availability,
;;  :mulog/timestamp 1587506497789,
;;  :mulog/trace-id #mulog/flake "4VTHCez0rr3TpaBmUQrTb2DZaYmaWFkH",
;;  :mulog/root-trace #mulog/flake "4VTHCez0rr3TpaBmUQrTb2DZaYmaWFkH",
;;  :mulog/duration 280510026,
;;  :mulog/namespace "your-ns",
;;  :mulog/outcome :ok,
;;  :app-name "mulog-demo",
;;  :env "local",
;;  :order "34896-34556",
;;  :product-id "2345-23-545",
;;  :user "709-6567567",
;;  :version "0.1.0"}

One important difference between with-context and the μ/trace pairs is that with-context will propagate that information to all nested calls, while the μ/trace pairs will only be added to that specific trace event and not to the nested ones, as sketched below.
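
Here is a minimal sketch of this difference (fetch-rates is a hypothetical function):

;; :user (from with-context) appears on BOTH trace events;
;; :step (a μ/trace pair) appears ONLY on the ::checkout event
(μ/with-context {:user "u-42"}
  (μ/trace ::checkout
    [:step :payment]
    (μ/trace ::fetch-rates
      []
      (fetch-rates))))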

If we had the following set of nested calls:

(process-order)
└── (availability)
    ├── (warehouse-availability)
    ├── (shopping-carts)
    └── (availability-estimator)

Here process-order checks the availability of each product; to check the availability of each product you need to verify what is available in the warehouse, as well as how many items are locked in in-flight shopping carts, and provide this information to an estimator. You would end up with a trace which looks like the following:

nested traces

Publishers

Publishers allow you to send the events to external systems where they can be stored, indexed, transformed or visualised.

Most publishers are in separate modules to reduce the risk of dependency clashes. Please see the specific publisher documentation for the name of the module to add to your dependencies.

Modules can be started as follows:

(def pub (μ/start-publisher! {:type :console :pretty? true}))

The map contains the configuration which is specific to the publisher.

It returns a function with no arguments which, when called, stops the publisher and flushes the records currently present in the buffer. Finally, if the publisher implements java.io.Closeable, its close method will be called to release/close external resources.
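
For example, a minimal sketch of a start/stop cycle:

;; start the publisher and keep the returned stop function
(def stop-console (μ/start-publisher! {:type :console}))

(μ/log ::app-started :version "0.1.0")

;; on shutdown: stop the publisher and flush the buffered events
(stop-console)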

Here is the list of all available publishers:

Additional topics

Contributions

I consider the core pretty much feature complete, therefore I won't accept changes to the core module. However, there is loads of work to be done on supporting libraries and publishers for various systems. Here your help is welcome: you can have a look at the list of open issues marked as help wanted.

PRs are welcome ;-)

To contribute:

  • pick an issue you would like to work on
  • drop a message in the issue so that I know someone else is working on it
  • follow the guidelines in the ticket
  • if in doubt, just ask!

Need help?

If you have questions or need help, please open an issue or post your question in the GitHub Discussions board.

Alternatively, you can post a question in the #mulog channel of the Clojurians Slack team.

Related projects

Here are some other open-source projects related to μ/log:

  • slf4j-mulog - an SLF4J backend for μ/log.

    It enables you to send the traditional logs from your existing projects via μ/log and leverage all of μ/log's capabilities to filter/transform/enrich events before publishing them.

Articles

License

Copyright © 2019-2021 Bruno Bonacci - Distributed under the Apache License v2.0

mulog's People

Contributors

ak-coram, anonimitoraf, arnaudbos, aviflax, bn-darindouglass-zz, brandonstubbs, brunobonacci, dosbol, drewr, emlyn, evg-tso, ghoseb, jeroenvandijk, nivekuil, ozimos, piotr-yuxuan, practicalli-johnny, ricardosllm, rogererens, sathyavijayan, the-alchemist, thomascothran, vojkog


mulog's Issues

Flushing logs

Is there a way to ensure any currently buffered logs are sent to the publishers?
It seems to me that any buffered logs that have not yet been sent are lost when the application quits, is that correct?
Would it be possible to add something like (μ/flush) that could be called on shutdown to ensure everything has been sent?
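
A possible workaround, sketched below, leans on the fact that the function returned by start-publisher! flushes the buffered records when called (see the Publishers section above):

;; sketch: flush on shutdown by stopping the publisher in a JVM shutdown hook
(def stop-publisher (μ/start-publisher! {:type :console}))

(.addShutdownHook (Runtime/getRuntime)
                  (Thread. ^Runnable stop-publisher))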

μ/log publisher for AWS CloudWatch Metrics.

A μ/log publisher for AWS CloudWatch Metrics.

We already have a publisher for CloudWatch Logs; however, it is currently
not possible to use CloudWatch Logs Insights to set alarms and use
this information for automation. For example, a use case would be to
autoscale the number of instances based on the number of items to
process (an internal metric published via μ/log).

Preferred Approach:

  • Evaluate Cognitect aws-api vs Amazonica vs Java SDK

Guidelines to write a publisher:

add config for expanding mbean attributes into individual logs

We are currently sending our logs to Kibana/Elasticsearch. Elasticsearch has a limit of 1000 fields per index. We found a good chunk of these fields are being taken up by mbean :attributes.

Currently we're altering mbeans-sample to expand each mbean's :attributes map into multiple logs, each with an :attribute field, which should allow all of our mbean logs to take up a finite number of fields.

This feels like a good config option for the mbeans sampler.

Thoughts?

Edit:

One possible solution is to make transform act on the result of mbeans-sample instead of each of the samples returned individually. Off the top of my head, this publisher is the only one that behaves this way.
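
A rough sketch of the expansion described above (the sample shape with an :attributes map is assumed from this description, not taken from the mulog source):

;; hypothetical helper: turn each sample carrying an :attributes map into
;; many samples, each carrying a single :attribute entry
(defn expand-attributes [samples]
  (mapcat (fn [{:keys [attributes] :as sample}]
            (for [[k v] attributes]
              (-> sample
                  (dissoc :attributes)
                  (assoc :attribute {k v}))))
          samples))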

Order of key/value pairs in logs

Hi @BrunoBonacci, thank you for the library! 🙇

I know it doesn't matter when logs are consumed by machines, but I found it very helpful to have user-defined key/value pairs displayed BEFORE the :mulog-namespaced ones, especially when publishing to STDOUT or to a file.

What do you think? I believe I could even prepare a PR for this if that's OK.

👋

:mulog/publisher-error NullPointerException

mulog version 0.8.2

We're seeing periodic publisher errors.
All 3 peaks are caused by publisher-error messages. Sometimes events start emitting again, sometimes a restart is required.

using the following config:

[{:type :console}
 {:type :jvm-metrics}
 {:type        :elasticsearch
  :data-stream "MY-STREAM"
  :url         "https://my.elk.com"
  :http-opts   {:basic-auth #secret :elastic-auth}}]

Using Elasticsearch 8.3.0, but we saw similar issues on OpenSearch 1.3.1.

The first two error events:

  "_source": {
    "publisher_type.k": "elasticsearch",
    "mulog.origin.k": "mulog/core",
    "exception.x": ...,
    "mulog.action.k": "publish",
    "publisher_id.s": "4jqjYZMngOpiJBZBUhXxFL3DrbcnEDj0",
    "mulog.namespace.s": "clojure.core",
    "mulog.event_name.k": "mulog/publisher-error",
    "@timestamp": "2022-07-03T06:27:18.684Z",
    "mulog.trace_id": "4jsxlnBRuPVFLmaFzFJboSVsaA1h7Gb8",
    .....
  }

Exception from elastic publisher

clojure.lang.ExceptionInfo: Elasticsearch Bulk API reported errors {:errors ()}
	at com.brunobonacci.mulog.publishers.elasticsearch$post_records.invokeStatic(elasticsearch.clj:125)
	at com.brunobonacci.mulog.publishers.elasticsearch$post_records.invoke(elasticsearch.clj:107)
	at com.brunobonacci.mulog.publishers.elasticsearch.ElasticsearchPublisher.publish(elasticsearch.clj:217)
	at com.brunobonacci.mulog.core$start_publisher_BANG_$publish_attempt__17357.invoke(core.clj:194)
	at clojure.core$binding_conveyor_fn$fn__5823.invoke(core.clj:2050)
	at clojure.lang.AFn.applyToHelper(AFn.java:154)
	at clojure.lang.RestFn.applyTo(RestFn.java:132)
	at clojure.lang.Agent$Action.doRun(Agent.java:114)
	at clojure.lang.Agent$Action.run(Agent.java:163)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
  "_source": {
    "publisher_type.k": "console",
    "mulog.origin.k": "mulog/core",
    "exception.x": "...",
    "mulog.action.k": "publish",
    "publisher_id.s": "4jqjYWXafN6FexqLUzA9-kQ83h_4RAze",
    "mulog.namespace.s": "clojure.core",
    "mulog.event_name.k": "mulog/publisher-error",
    "@timestamp": "2022-07-03T06:27:18.778Z",
    "mulog.trace_id": "4jsxlnXpmHzuVRNmjtz4NRRpwhcTU9L5"
  }

Exception from console publisher

java.lang.NullPointerException: Cannot invoke "Object.getClass()" because "x" is null
	at clojure.lang.Numbers.ops(Numbers.java:1095)
	at clojure.lang.Numbers.gte(Numbers.java:265)
	at clojure.lang.Numbers.gte(Numbers.java:3991)
	at com.brunobonacci.mulog.publishers.elasticsearch$post_records$fn__2136.invoke(elasticsearch.clj:126)
	at clojure.core$filter$fn__5962.invoke(core.clj:2834)
	at clojure.lang.LazySeq.sval(LazySeq.java:42)
	at clojure.lang.LazySeq.seq(LazySeq.java:51)
	at clojure.lang.RT.seq(RT.java:535)
	at clojure.core$seq__5467.invokeStatic(core.clj:139)
	at clojure.core$print_sequential.invokeStatic(core_print.clj:53)
	at clojure.core$fn__7391.invokeStatic(core_print.clj:174)
	at clojure.core$fn__7391.invoke(core_print.clj:174)
	at clojure.lang.MultiFn.invoke(MultiFn.java:234)
	at clojure.core$pr_on.invokeStatic(core.clj:3675)
	at clojure.core$pr_on.invoke(core.clj:3669)
	at clojure.core$print_prefix_map$fn__7414.invoke(core_print.clj:233)
	at clojure.core$print_sequential.invokeStatic(core_print.clj:66)
	at clojure.core$print_prefix_map.invokeStatic(core_print.clj:229)
	at clojure.core$print_map.invokeStatic(core_print.clj:238)
	at clojure.core$fn__7443.invokeStatic(core_print.clj:266)
	at clojure.core$fn__7443.invoke(core_print.clj:263)
	at clojure.lang.MultiFn.invoke(MultiFn.java:234)
	at clojure.core$print_throwable.invokeStatic(core_print.clj:524)
	at clojure.core$fn__7566.invokeStatic(core_print.clj:543)
	at clojure.core$fn__7566.invoke(core_print.clj:543)
	at clojure.lang.MultiFn.invoke(MultiFn.java:234)
	at clojure.core$pr_on.invokeStatic(core.clj:3675)
	at clojure.core$pr_on.invoke(core.clj:3669)
	at clojure.core$print_prefix_map$fn__7414.invoke(core_print.clj:233)
	at clojure.core$print_sequential.invokeStatic(core_print.clj:66)
	at clojure.core$print_prefix_map.invokeStatic(core_print.clj:229)
	at clojure.core$print_map.invokeStatic(core_print.clj:238)
	at clojure.core$fn__7443.invokeStatic(core_print.clj:266)
	at clojure.core$fn__7443.invoke(core_print.clj:263)
	at clojure.lang.MultiFn.invoke(MultiFn.java:234)
	at clojure.core$pr_on.invokeStatic(core.clj:3675)
	at clojure.core$pr.invokeStatic(core.clj:3678)
	at clojure.core$pr.invoke(core.clj:3678)
	at clojure.lang.AFn.applyToHelper(AFn.java:154)
	at clojure.lang.RestFn.applyTo(RestFn.java:132)
	at clojure.core$apply.invokeStatic(core.clj:667)
	at clojure.core$pr_str.invokeStatic(core.clj:4760)
	at clojure.core$pr_str.doInvoke(core.clj:4760)
	at clojure.lang.RestFn.invoke(RestFn.java:408)
	at com.brunobonacci.mulog.utils$edn_str.invokeStatic(utils.clj:79)
	at com.brunobonacci.mulog.utils$edn_str.doInvoke(utils.clj:68)
	at clojure.lang.RestFn.invoke(RestFn.java:410)
	at com.brunobonacci.mulog.publisher.ConsolePublisher.publish(publisher.clj:58)
	at com.brunobonacci.mulog.core$start_publisher_BANG_$publish_attempt__17357.invoke(core.clj:194)
	at clojure.core$binding_conveyor_fn$fn__5823.invoke(core.clj:2050)
	at clojure.lang.AFn.applyToHelper(AFn.java:154)
	at clojure.lang.RestFn.applyTo(RestFn.java:132)
	at clojure.lang.Agent$Action.doRun(Agent.java:114)
	at clojure.lang.Agent$Action.run(Agent.java:163)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)

I'm surprised that messages from the console publisher end up in Elastic.

Not really sure how to proceed with debugging this. Any input would be very much appreciated.

JSON-file publisher?

It would be useful to be able to write logs to a file in json-lines format (one JSON object per line), as this can be directly loaded into a Spark Dataset (and probably has other uses too). I couldn't find a built-in way to do this; is that the case, or did I miss something?

I wrote a simple publisher to do it (based on simple-file-publisher, and using charred for the JSON conversion). Would it be worth opening a PR to add it? If so, where would be the best place to put it? Inside mulog-json?
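
For reference, a rough sketch of such a publisher, assuming μ/log's PPublisher protocol (agent-buffer, publish-delay, publish) and its ring-buffer helpers, and using clojure.data.json instead of charred; serialization of flakes/exceptions and error handling are omitted:

(ns my.jsonl-publisher
  (:require [com.brunobonacci.mulog.publisher]
            [com.brunobonacci.mulog.buffer :as rb]
            [clojure.data.json :as json])
  (:import [java.io Writer]))

(deftype JsonLinesPublisher [^Writer writer]
  com.brunobonacci.mulog.publisher.PPublisher
  ;; ring buffer holding the events not yet published
  (agent-buffer [_] (rb/agent-buffer 10000))
  ;; how often (in millis) publish is called
  (publish-delay [_] 1000)
  ;; write one JSON object per line, then discard the published items
  (publish [_ buffer]
    (doseq [item (map second (rb/items buffer))]
      (.write writer (str (json/write-str item) "\n")))
    (.flush writer)
    (rb/clear buffer)))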

Elasticsearch returns HTTP 200 even when some items have failed.

Malformed records are rejected by the ELS bulk API, but the HTTP status is 200.

see discussion #78

sample response from ELS:

{:cached nil,
 :request-time 7,
 :repeatable? false,
 :protocol-version {:name "HTTP", :major 1, :minor 1},
 :streaming? true,
 :http-client
 #object[org.apache.http.impl.client.InternalHttpClient 0x12783998 "org.apache.http.impl.client.InternalHttpClient@12783998"],
 :chunked? false,
 :reason-phrase "OK",
 :headers
 {"Warning"
  "299 Elasticsearch-7.13.4-c5f60e894ca0c61cdbae4f5a686d9f08bcefc942 \"Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.13/security-minimal-setup.html to enable security.\"",
  "content-type" "application/json; charset=UTF-8",
  "content-length" "268"},
 :orig-content-encoding "gzip",
 :status 200,
 :length 268,
 :body
 {
  "took": 0,
  "errors": true,
  "items": [
    {
      "index": {
        "_index": "mulog-2021.07.23",
        "_type": "_doc",
        "_id": "4dGYOCKQKZHXx3reYDSDyubtuy9keDKG",
        "status": 400,
        "error": {
          "type": "mapper_parsing_exception",
          "reason": "failed to parse",
          "caused_by": {
            "type": "illegal_argument_exception",
            "reason": "object field starting or ending with a [.] makes object resolution ambiguous: [.i]" } } } } ]} }

els publisher - fails and posts `mulog/publisher-error` when there are no records returned by the transform fn.

Steps to reproduce the error

Start the ELS publisher with a transform function that returns no items. You should see mulog/publisher-error whenever the periodic flush occurs.

Exception

clojure.lang.ExceptionInfo: clj-http: status 400 {:cached nil, :request-time 18, :repeatable? false, :protocol-version {:name "HTTP", :major 1, :minor 1}, :streaming? true, :http-client #object[org.apache.http.impl.client.InternalHttpClient 0x450a0279 "org.apache.http.impl.client.InternalHttpClient@450a0279"], :chunked? false, :type :clj-http.client/unexceptional-status, :reason-phrase "Bad Request", :headers {"Date" "Wed, 12 Aug 2020 17:46:16 GMT", "Content-Type" "application/json; charset=UTF-8", "Content-Length" "163", "Connection" "close", "Access-Control-Allow-Origin" "*"}, :orig-content-encoding nil, :status 400, :length 163, :body "{\"error\":{\"root_cause\":[{\"type\":\"parse_exception\",\"reason\":\"request body is required\"}],\"type\":\"parse_exception\",\"reason\":\"request body is required\"},\"status\":400}", :trace-redirects []}
	at slingshot.support$stack_trace.invoke(support.clj:201)
	at clj_http.client$exceptions_response.invokeStatic(client.clj:245)
	at clj_http.client$exceptions_response.invoke(client.clj:236)
	at clj_http.client$wrap_exceptions$fn__4979.invoke(client.clj:254)
	at clj_http.client$wrap_accept$fn__5181.invoke(client.clj:737)
	at clj_http.client$wrap_accept_encoding$fn__5188.invoke(client.clj:759)
	at clj_http.client$wrap_content_type$fn__5175.invoke(client.clj:720)
	at clj_http.client$wrap_form_params$fn__5274.invoke(client.clj:961)
	at clj_http.client$wrap_nested_params$fn__5295.invoke(client.clj:995)
	at clj_http.client$wrap_flatten_nested_params$fn__5304.invoke(client.clj:1019)
	at clj_http.client$wrap_method$fn__5242.invoke(client.clj:895)
	at clj_http.cookies$wrap_cookies$fn__2869.invoke(cookies.clj:131)
	at clj_http.links$wrap_links$fn__3994.invoke(links.clj:63)
	at clj_http.client$wrap_unknown_host$fn__5312.invoke(client.clj:1048)
	at clj_http.client$request_STAR_.invokeStatic(client.clj:1176)
	at clj_http.client$request_STAR_.invoke(client.clj:1169)
	at clj_http.client$post.invokeStatic(client.clj:1194)
	at clj_http.client$post.doInvoke(client.clj:1190)
	at clojure.lang.RestFn.invoke(RestFn.java:423)
	at com.brunobonacci.mulog.publishers.elasticsearch$post_records.invokeStatic(elasticsearch.clj:93)
	at com.brunobonacci.mulog.publishers.elasticsearch$post_records.invoke(elasticsearch.clj:91)
	at com.brunobonacci.mulog.publishers.elasticsearch.ElasticsearchPublisher.publish(elasticsearch.clj:177)
	at com.brunobonacci.mulog.core$start_publisher_BANG_$publish_attempt__7997.invoke(core.clj:190)
	at clojure.core$binding_conveyor_fn$fn__5739.invoke(core.clj:2033)
	at clojure.lang.AFn.applyToHelper(AFn.java:154)
	at clojure.lang.RestFn.applyTo(RestFn.java:132)
	at clojure.lang.Agent$Action.doRun(Agent.java:114)
	at clojure.lang.Agent$Action.run(Agent.java:163)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:830)

Full Record

{
  "_index": "mulog-2020.08",
  "_type": "_doc",
  "_id": "4XeMEacrExnbi8OrgMhC4ebNE6BAylOx",
  "_version": 1,
  "_score": null,
  "_source": {
    "publisher_type.k": "elasticsearch",
    "publisher_id.s": "4XcyPSU7i_bTXco0sbZQXRVTtq6i7kyy",
    "service.k": "dialog-manager",
    "mulog.trace_id": "4XeMEacrExnbi8OrgMhC4ebNE6BAylOx",
    "@timestamp": "2020-08-12T18:01:52.182Z",
    "mulog.event_name.k": "mulog/publisher-error",
    "mulog.action.k": "publish",
    "mulog.namespace.s": "clojure.core",
    "env.s": "stg",
    "pid.s": "3ddfb9c8cd23",
    "exception.x": "clojure.lang.ExceptionInfo: clj-http: status 400 {:cached nil, :request-time 12, :repeatable? false, :protocol-version {:name \"HTTP\", :major 1, :minor 1}, :streaming? true, :http-client #object[org.apache.http.impl.client.InternalHttpClient 0x3dcb35ad \"org.apache.http.impl.client.InternalHttpClient@3dcb35ad\"], :chunked? false, :type :clj-http.client/unexceptional-status, :reason-phrase \"Bad Request\", :headers {\"Date\" \"Wed, 12 Aug 2020 18:01:52 GMT\", \"Content-Type\" \"application/json; charset=UTF-8\", \"Content-Length\" \"163\", \"Connection\" \"close\", \"Access-Control-Allow-Origin\" \"*\"}, :orig-content-encoding nil, :status 400, :length 163, :body \"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"parse_exception\\\",\\\"reason\\\":\\\"request body is required\\\"}],\\\"type\\\":\\\"parse_exception\\\",\\\"reason\\\":\\\"request body is required\\\"},\\\"status\\\":400}\", :trace-redirects []}\n\tat slingshot.support$stack_trace.invoke(support.clj:201)\n\tat clj_http.client$exceptions_response.invokeStatic(client.clj:245)\n\tat clj_http.client$exceptions_response.invoke(client.clj:236)\n\tat clj_http.client$wrap_exceptions$fn__3272.invoke(client.clj:254)\n\tat clj_http.client$wrap_accept$fn__3474.invoke(client.clj:737)\n\tat clj_http.client$wrap_accept_encoding$fn__3481.invoke(client.clj:759)\n\tat clj_http.client$wrap_content_type$fn__3468.invoke(client.clj:720)\n\tat clj_http.client$wrap_form_params$fn__3567.invoke(client.clj:961)\n\tat clj_http.client$wrap_nested_params$fn__3588.invoke(client.clj:995)\n\tat clj_http.client$wrap_flatten_nested_params$fn__3597.invoke(client.clj:1019)\n\tat clj_http.client$wrap_method$fn__3535.invoke(client.clj:895)\n\tat clj_http.cookies$wrap_cookies$fn__2452.invoke(cookies.clj:131)\n\tat clj_http.links$wrap_links$fn__2870.invoke(links.clj:63)\n\tat clj_http.client$wrap_unknown_host$fn__3605.invoke(client.clj:1048)\n\tat clj_http.client$request_STAR_.invokeStatic(client.clj:1176)\n\tat clj_http.client$request_STAR_.invoke(client.clj:1169)\n\tat clj_http.client$post.invokeStatic(client.clj:1194)\n\tat clj_http.client$post.doInvoke(client.clj:1190)\n\tat clojure.lang.RestFn.invoke(RestFn.java:423)\n\tat com.brunobonacci.mulog.publishers.elasticsearch$post_records.invokeStatic(elasticsearch.clj:93)\n\tat com.brunobonacci.mulog.publishers.elasticsearch$post_records.invoke(elasticsearch.clj:91)\n\tat com.brunobonacci.mulog.publishers.elasticsearch.ElasticsearchPublisher.publish(elasticsearch.clj:177)\n\tat com.brunobonacci.mulog.core$start_publisher_BANG_$publish_attempt__8774.invoke(core.clj:190)\n\tat clojure.core$binding_conveyor_fn$fn__5739.invoke(core.clj:2033)\n\tat clojure.lang.AFn.applyToHelper(AFn.java:154)\n\tat clojure.lang.RestFn.applyTo(RestFn.java:132)\n\tat clojure.lang.Agent$Action.doRun(Agent.java:114)\n\tat clojure.lang.Agent$Action.run(Agent.java:163)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:830)\n",
    "mulog.origin.k": "mulog/core"
  },
  "fields": {
    "@timestamp": [
      "2020-08-12T18:01:52.182Z"
    ]
  },
  "highlight": {
    "mulog.event_name.k.keyword": [
      "@kibana-highlighted-field@mulog/publisher-error@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1597255312182
  ]
}

#32

Incompatible with cheshire?

When launching a project with the following dependencies, an exception is raised.

a test repo:

https://github.com/falberto/ulog-test.git

 :dependencies [[org.clojure/clojure "1.10.1"]
                 [clj-http "3.11.0"]
                 [cheshire "5.10.0"]
                 [com.brunobonacci/mulog "0.6.4"]
                 [com.brunobonacci/mulog-zipkin "0.6.4"]]

#error {
 :cause com.fasterxml.jackson.core.util.JacksonFeature
 :via
 [{:type clojure.lang.Compiler$CompilerException
   :message Syntax error compiling at (test_ulog/core.clj:7:1).
   :data #:clojure.error{:phase :compile-syntax-check, :line 7, :column 1, :source test_ulog/core.clj}
   :at [clojure.lang.Compiler load Compiler.java 7648]}
  {:type clojure.lang.ExceptionInfo
   :message Unable to load appropriate publisher. Please ensure you have the following dependency [com.brunobonacci/mulog-zipkin "x.y.z"] in your project.clj
   :data {:config {:type :zipkin, :url http://localhost:9411/}}
   :at [com.brunobonacci.mulog.publisher$loading_error invokeStatic publisher.clj 156]}
  {:type clojure.lang.Compiler$CompilerException
   :message Syntax error macroexpanding at (core.clj:142:3).
   :data #:clojure.error{:phase :execution, :line 142, :column 3, :source core.clj}
   :at [clojure.lang.Compiler$InvokeExpr eval Compiler.java 3707]}
  {:type java.lang.NoClassDefFoundError
   :message com/fasterxml/jackson/core/util/JacksonFeature
   :at [com.fasterxml.jackson.databind.ObjectMapper <init> ObjectMapper.java 655]}
  {:type java.lang.ClassNotFoundException
   :message com.fasterxml.jackson.core.util.JacksonFeature
   :at [jdk.internal.loader.BuiltinClassLoader loadClass BuiltinClassLoader.java 581]}]
 :trace
 [[jdk.internal.loader.BuiltinClassLoader loadClass BuiltinClassLoader.java 581]
  [jdk.internal.loader.ClassLoaders$AppClassLoader loadClass ClassLoaders.java 178]
  [java.lang.ClassLoader loadClass ClassLoader.java 522]
  [com.fasterxml.jackson.databind.ObjectMapper <init> ObjectMapper.java 655]
  [com.fasterxml.jackson.databind.ObjectMapper <init> ObjectMapper.java 558]
  [jsonista.core$object_mapper invokeStatic core.clj 101]
  [jsonista.core$object_mapper invoke core.clj 101]
  [clojure.lang.AFn applyToHelper AFn.java 154]
  [clojure.lang.AFn applyTo AFn.java 144]
  [clojure.lang.Compiler$InvokeExpr eval Compiler.java 3702]
  [clojure.lang.Compiler$DefExpr eval Compiler.java 457]
  [clojure.lang.Compiler eval Compiler.java 7182]
  [clojure.lang.Compiler load Compiler.java 7636]
  [clojure.lang.RT loadResourceScript RT.java 381]
  [clojure.lang.RT loadResourceScript RT.java 372]
  [clojure.lang.RT load RT.java 459]
  [clojure.lang.RT load RT.java 424]
  [clojure.core$load$fn__6839 invoke core.clj 6126]
  [clojure.core$load invokeStatic core.clj 6125]
  [clojure.core$load doInvoke core.clj 6109]
  [clojure.lang.RestFn invoke RestFn.java 408]
  [clojure.core$load_one invokeStatic core.clj 5908]
  [clojure.core$load_one invoke core.clj 5903]
  [clojure.core$load_lib$fn__6780 invoke core.clj 5948]
  [clojure.core$load_lib invokeStatic core.clj 5947]
  [clojure.core$load_lib doInvoke core.clj 5928]
  [clojure.lang.RestFn applyTo RestFn.java 142]
  [clojure.core$apply invokeStatic core.clj 667]
  [clojure.core$load_libs invokeStatic core.clj 5985]
  [clojure.core$load_libs doInvoke core.clj 5969]
  [clojure.lang.RestFn applyTo RestFn.java 137]
  [clojure.core$apply invokeStatic core.clj 667]
  [clojure.core$require invokeStatic core.clj 6007]
  [clojure.core$require doInvoke core.clj 6007]
  [clojure.lang.RestFn invoke RestFn.java 421]
  [com.brunobonacci.mulog.common.json$eval796$loading__6721__auto____797 invoke json.clj 1]
  [com.brunobonacci.mulog.common.json$eval796 invokeStatic json.clj 1]
  [com.brunobonacci.mulog.common.json$eval796 invoke json.clj 1]
  [clojure.lang.Compiler eval Compiler.java 7177]
  [clojure.lang.Compiler eval Compiler.java 7166]
  [clojure.lang.Compiler load Compiler.java 7636]
  [clojure.lang.RT loadResourceScript RT.java 381]
  [clojure.lang.RT loadResourceScript RT.java 372]
  [clojure.lang.RT load RT.java 459]
  [clojure.lang.RT load RT.java 424]
  [clojure.core$load$fn__6839 invoke core.clj 6126]
  [clojure.core$load invokeStatic core.clj 6125]
  [clojure.core$load doInvoke core.clj 6109]
  [clojure.lang.RestFn invoke RestFn.java 408]
  [clojure.core$load_one invokeStatic core.clj 5908]
  [clojure.core$load_one invoke core.clj 5903]
  [clojure.core$load_lib$fn__6780 invoke core.clj 5948]
  [clojure.core$load_lib invokeStatic core.clj 5947]
  [clojure.core$load_lib doInvoke core.clj 5928]
  [clojure.lang.RestFn applyTo RestFn.java 142]
  [clojure.core$apply invokeStatic core.clj 667]
  [clojure.core$load_libs invokeStatic core.clj 5985]
  [clojure.core$load_libs doInvoke core.clj 5969]
  [clojure.lang.RestFn applyTo RestFn.java 137]
  [clojure.core$apply invokeStatic core.clj 667]
  [clojure.core$require invokeStatic core.clj 6007]
  [clojure.core$require doInvoke core.clj 6007]
  [clojure.lang.RestFn invoke RestFn.java 619]
  [com.brunobonacci.mulog.publishers.zipkin$eval790$loading__6721__auto____791 invoke zipkin.clj 1]
  [com.brunobonacci.mulog.publishers.zipkin$eval790 invokeStatic zipkin.clj 1]
  [com.brunobonacci.mulog.publishers.zipkin$eval790 invoke zipkin.clj 1]
  [clojure.lang.Compiler eval Compiler.java 7177]
  [clojure.lang.Compiler eval Compiler.java 7166]
  [clojure.lang.Compiler load Compiler.java 7636]
  [clojure.lang.RT loadResourceScript RT.java 381]
  [clojure.lang.RT loadResourceScript RT.java 372]
  [clojure.lang.RT load RT.java 459]
  [clojure.lang.RT load RT.java 424]
  [clojure.core$load$fn__6839 invoke core.clj 6126]
  [clojure.core$load invokeStatic core.clj 6125]
  [clojure.core$load doInvoke core.clj 6109]
  [clojure.lang.RestFn invoke RestFn.java 408]
  [clojure.core$load_one invokeStatic core.clj 5908]
  [clojure.core$load_one invoke core.clj 5903]
  [clojure.core$load_lib$fn__6780 invoke core.clj 5948]
  [clojure.core$load_lib invokeStatic core.clj 5947]
  [clojure.core$load_lib doInvoke core.clj 5928]
  [clojure.lang.RestFn applyTo RestFn.java 142]
  [clojure.core$apply invokeStatic core.clj 667]
  [clojure.core$load_libs invokeStatic core.clj 5985]
  [clojure.core$load_libs doInvoke core.clj 5969]
  [clojure.lang.RestFn applyTo RestFn.java 137]
  [clojure.core$apply invokeStatic core.clj 667]
  [clojure.core$require invokeStatic core.clj 6007]
  [clojure.core$require doInvoke core.clj 6007]
  [clojure.lang.RestFn invoke RestFn.java 408]
  [com.brunobonacci.mulog.publisher$load_function_from_name invokeStatic publisher.clj 143]
  [com.brunobonacci.mulog.publisher$load_function_from_name invoke publisher.clj 126]
  [com.brunobonacci.mulog.publisher$load_function_from_name invokeStatic publisher.clj 136]
  [com.brunobonacci.mulog.publisher$load_function_from_name invoke publisher.clj 126]
  [com.brunobonacci.mulog.publisher$load_dynamic_publisher$fn__570 invoke publisher.clj 182]
  [com.brunobonacci.mulog.publisher$load_dynamic_publisher invokeStatic publisher.clj 181]
  [com.brunobonacci.mulog.publisher$load_dynamic_publisher invoke publisher.clj 178]
  [com.brunobonacci.mulog.publisher$eval625$fn__626 invoke publisher.clj 285]
  [clojure.lang.MultiFn invoke MultiFn.java 229]
  [com.brunobonacci.mulog.core$start_publisher_BANG_ invokeStatic core.clj 176]
  [com.brunobonacci.mulog.core$start_publisher_BANG_ invoke core.clj 174]
  [clojure.core$partial$fn__5839 invoke core.clj 2624]
  [clojure.core$map$fn__5866 invoke core.clj 2753]
  [clojure.lang.LazySeq sval LazySeq.java 42]
  [clojure.lang.LazySeq seq LazySeq.java 51]
  [clojure.lang.RT seq RT.java 535]
  [clojure.core$seq__5402 invokeStatic core.clj 137]
  [clojure.core$dorun invokeStatic core.clj 3133]
  [clojure.core$doall invokeStatic core.clj 3148]
  [clojure.core$doall invoke core.clj 3148]
  [com.brunobonacci.mulog$start_publisher_BANG_ invokeStatic mulog.clj 145]
  [com.brunobonacci.mulog$start_publisher_BANG_ invoke mulog.clj 87]
  [com.brunobonacci.mulog$start_publisher_BANG_ invokeStatic mulog.clj 139]
  [com.brunobonacci.mulog$start_publisher_BANG_ invoke mulog.clj 87]
  [test_ulog.core$eval786 invokeStatic core.clj 7]
  [test_ulog.core$eval786 invoke core.clj 7]
  [clojure.lang.Compiler eval Compiler.java 7177]
  [clojure.lang.Compiler load Compiler.java 7636]
  [clojure.lang.RT loadResourceScript RT.java 381]
  [clojure.lang.RT loadResourceScript RT.java 372]
  [clojure.lang.RT load RT.java 459]
  [clojure.lang.RT load RT.java 424]
  [clojure.core$load$fn__6839 invoke core.clj 6126]
  [clojure.core$load invokeStatic core.clj 6125]
  [clojure.core$load doInvoke core.clj 6109]
  [clojure.lang.RestFn invoke RestFn.java 408]
  [clojure.core$load_one invokeStatic core.clj 5908]
  [clojure.core$load_one invoke core.clj 5903]
  [clojure.core$load_lib$fn__6780 invoke core.clj 5948]
  [clojure.core$load_lib invokeStatic core.clj 5947]
  [clojure.core$load_lib doInvoke core.clj 5928]
  [clojure.lang.RestFn applyTo RestFn.java 142]
  [clojure.core$apply invokeStatic core.clj 667]
  [clojure.core$load_libs invokeStatic core.clj 5985]
  [clojure.core$load_libs doInvoke core.clj 5969]
  [clojure.lang.RestFn applyTo RestFn.java 137]
  [clojure.core$apply invokeStatic core.clj 667]
  [clojure.core$require invokeStatic core.clj 6007]
  [clojure.core$require doInvoke core.clj 6007]
  [clojure.lang.RestFn invoke RestFn.java 408]
  [user$eval5 invokeStatic form-init8022119484354986898.clj 1]
  [user$eval5 invoke form-init8022119484354986898.clj 1]
  [clojure.lang.Compiler eval Compiler.java 7177]
  [clojure.lang.Compiler eval Compiler.java 7166]
  [clojure.lang.Compiler eval Compiler.java 7166]
  [clojure.lang.Compiler load Compiler.java 7636]
  [clojure.lang.Compiler loadFile Compiler.java 7574]
  [clojure.main$load_script invokeStatic main.clj 475]
  [clojure.main$init_opt invokeStatic main.clj 477]
  [clojure.main$init_opt invoke main.clj 477]
  [clojure.main$initialize invokeStatic main.clj 508]
  [clojure.main$null_opt invokeStatic main.clj 542]
  [clojure.main$null_opt invoke main.clj 539]
  [clojure.main$main invokeStatic main.clj 664]
  [clojure.main$main doInvoke main.clj 616]
  [clojure.lang.RestFn applyTo RestFn.java 137]
  [clojure.lang.Var applyTo Var.java 705]
  [clojure.main main main.java 40]]}

Vulnerabilities

Hello, thanks for this project, which looks great. I know of companies which rely heavily on it to monitor heavy production loads 🙂. With lein nvd check (using nvd-clojure and dependency-check 5.3.2), I noticed that some subsystems have known vulnerabilities:

  • mulog-adv-console, mulog-elasticsearch, mulog-jvm-metrics, mulog-prometheus, mulog-kafka, mulog-filesystem-metrics, mulog-slack, mulog-zipkin: 2 vulnerabilities detected. Severity: HIGH
    • log4j-1.2.17.jar: CVE-2019-17571, CVE-2020-9488
  • mulog-kinesis, mulog-cloudwatch: 4 vulnerabilities detected. Severity: HIGH
    • jetty-util-9.4.24.v20191120.jar: CVE-2020-27218, CVE-2020-27216
    • log4j-1.2.17.jar: CVE-2019-17571, CVE-2020-9488

µ/log publisher for AWS CloudWatch Logs

A μ/log publisher for AWS CloudWatch Logs.

Preferred Approach:

  • Evaluate Cognitect aws-api vs Amazonica vs Java SDK

Guidelines to write a publisher:

Question about the ring buffer

Out of curiosity, I'm wondering why a ring buffer is used instead of one of the standard concurrency data structures, such as a BlockingQueue.

u/log as the single logging library?

Hi Bruno,

thanks for this inspiring take on logging and observability in general!

I've read the readme, the internals and a couple of discussions in the issues, as well as searched for SLF4J, before creating this issue.

u/log is a new take on logging, so there is naturally a gap between what the current JVM ecosystem uses and what you created. Are you typically bridging this gap? I don't see an SLF4J adapter anywhere in the repo.

To talk about a more concrete example - the services at my work are configured to log into files and filebeat is crawling them and publishing the log lines to logstash. There's a couple services that actually log JSON lines and simplify the process of giving these lines structure, but the pattern remains. If I am already logging structurally I can skip the hops and write to elasticsearch directly through u/log. This is simple to do since everything is ready in this repo. However what about the other libraries' logs? I'd like to keep them since e.g. errors from connection pools are important to see. But I'd like to drop the filebeat->logstash->elasticsearch pipeline and simplify the whole thing if I am to use u/log instead of having 2 ways to publish the logs.

So the question is - do you typically bridge these 2 worlds or do you keep them separate? Is there a SLF4J adapter that sends SLF4J logs as u/log logs? A simple converter like

{:level "ERROR" :message "crash"}

can be a reasonable start. I've read your rationale from #17 but I'd expect bridging some of the information over in some way would still be better than nothing. As I noted though you might already have a workflow and an explanation of that workflow might be beneficial to new users, even in the main readme I guess.

Thanks again and waiting on your input.

µ/log publisher for Dropwizard Metrics

A µ/log publisher for Dropwizard Metrics.

Preferred approach:

  • Use java client (not clj-metrics)
  • Apply a Counter/Rate to every event using the <event_name>_rate as metric name.
  • Apply a Timer to every event with a :mulog/duration, using the <event_name>_time as metric name
  • Generate a metric for every numerical attribute using <event_name>_<attribute> as a Gauge

Guidelines to write a publisher:

µ/log publisher for InfluxDB

A µ/log publisher for InfluxDB.

Preferred Approach:

Guidelines to write a publisher:

µ/log document detailing use with AWS Athena

It would be nice to have a document which illustrates how to use µ/log with AWS Athena.

The idea is to use it as follows:

µ/log -> Kinesis -> Firehose -> S3 -> AWS Glue -> Athena and optionally -> QuickSight

The doc should show a step-by-step example of how to set up all the necessary components with scripts and screenshots.
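
The first hop of the pipeline could be started as follows (a sketch; the stream name is illustrative and the :stream-name key is assumed from the mulog-kinesis module):

;; μ/log -> Kinesis
(μ/start-publisher!
  {:type :kinesis
   :stream-name "mulog-events"})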

ClojureScript support?

Hi there 👋

Does it make sense for μ/log to be used from ClojureScript, and if so, is this something you'd be interested in supporting?

I've been looking at alternatives to Amplitude and Sentry for our web app, and it feels like what we'd mostly need is something like μ/log + elasticsearch. To some extent that could be achieved via a backend that's running μ/log, but that begs the question of why not run μ/log directly in ClojureScript.

Do you have thoughts on this?

Cheers,
Filipe

µ/log publisher for Slack

A μ/log publisher for Slack.

Preferred Approach:

  • Incoming Webhooks seem to be the simplest solution.
  • Provide a good description on how to set it up with Slack.
  • Suggest other ways

Guidelines to write a publisher:

how does u/trace work and how to use it to trace events

I sort of intuitively understand what u/trace is doing, but it would be great to get a walkthrough of what is happening with the road disruption example and how it's used with ES to do stuff.

  • i.e. how would one go about tracing a series of calls?

Correct capitalization for Elasticsearch

This library came up internally at Elastic today. It looks amazing and we'll be sure to try it out.

Would it be possible to rename all the uses of ElasticSearch to Elasticsearch? The latter is the correct capitalization.

Thanks for making this @BrunoBonacci!

`:data-stream` shows up as index in Elasticsearch 8

Using this elasticsearch publisher config:

 {:type        :elasticsearch
  :data-stream "clj-planner"
  :url         "https://my-url"
  :http-opts   {:basic-auth ["user" "pwd"]}}

The stream shows up as an index:

As the title says, I'm using Elasticsearch v8 on Elastic Cloud. Not sure if that is something you're supporting or not.

sample project for mulog with elk

Hi Bruno,

Would it be too much to ask for a sample app using mulog and the ELK stack - preferably on docker-compose - with 3 or 4 event types? I'd like to see how it's typically configured within a project.

Chris

Datadog exporter

Hello :-) Thanks once more for this awesome project. Would you consider adding a datadog exporter? Cheers!

performance

I saw in d7f6a36 that you have performance issues with with-context, which is just binding and calling merge. It immediately reminded me of Metosin's observations and utility functions like fast-assoc and fast-merge. These and others have been gathered into clj-fast; maybe it's worth a look. There's also a link to structural, another library to keep in mind when performance is critical.

clojure.spec - core functions

Hi,

I accidentally passed the wrong arguments to u/trace, which caused problems with mulog and publishing. No error was emitted when executing the function, which made me miss it.

example
(u/trace [] ::eee 123))

This could have been detected quickly by having function specs for the core functions in mulog. I am happy to help you add this if you think it's a good idea 👍
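
A rough sketch of what such a spec could look like (not part of mulog; the argument shapes are inferred from the README examples):

(require '[clojure.spec.alpha :as s])

;; event-name first, then a vector of key/value pairs, then the body
(s/fdef com.brunobonacci.mulog/trace
  :args (s/cat :event-name keyword?
               :pairs      (s/and vector? #(even? (count %)))
               :body       (s/+ any?)))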

Divide by zero error in JVM metrics sampler

I got a divide by zero error from the JVM metrics sampler when running low on memory.
Here's a stack trace (from version 0.5.0):

 [[clojure.lang.Numbers divide "Numbers.java" 188]
  [clojure.lang.Numbers divide "Numbers.java" 3901]
  [com.brunobonacci.mulog.publishers.jvm_metrics$capture_memory_pools$iter__263__267$fn__268$fn__269 invoke "jvm_metrics.clj" 144]
  [com.brunobonacci.mulog.publishers.jvm_metrics$capture_memory_pools$iter__263__267$fn__268 invoke "jvm_metrics.clj" 139]
  [clojure.lang.LazySeq sval "LazySeq.java" 42]
  [clojure.lang.LazySeq seq "LazySeq.java" 51]
  [clojure.lang.RT seq "RT.java" 535]
  [clojure.core$seq__5402 invokeStatic "core.clj" 137]
  [clojure.core.protocols$seq_reduce invokeStatic "protocols.clj" 24]
  [clojure.core.protocols$fn__8146 invokeStatic "protocols.clj" 75]
  [clojure.core.protocols$fn__8146 invoke "protocols.clj" 75]
  [clojure.core.protocols$fn__8088$G__8083__8101 invoke "protocols.clj" 13]
  [clojure.core$reduce invokeStatic "core.clj" 6828]
  [clojure.core$into invokeStatic "core.clj" 6895]
  [clojure.core$into invoke "core.clj" 6887]
  [com.brunobonacci.mulog.publishers.jvm_metrics$capture_memory_pools invokeStatic "jvm_metrics.clj" 138]
  [com.brunobonacci.mulog.publishers.jvm_metrics$capture_memory_pools invoke "jvm_metrics.clj" 137]
  [com.brunobonacci.mulog.publishers.jvm_metrics$jvm_sample_memory invokeStatic "jvm_metrics.clj" 257]
  [com.brunobonacci.mulog.publishers.jvm_metrics$jvm_sample_memory invoke "jvm_metrics.clj" 249]
  [com.brunobonacci.mulog.publishers.jvm_metrics$jvm_sample_memory invokeStatic "jvm_metrics.clj" 252]
  [com.brunobonacci.mulog.publishers.jvm_metrics$jvm_sample_memory invoke "jvm_metrics.clj" 249]
  [com.brunobonacci.mulog.publishers.jvm_metrics$jvm_sample invokeStatic "jvm_metrics.clj" 310]
  [com.brunobonacci.mulog.publishers.jvm_metrics$jvm_sample invoke "jvm_metrics.clj" 286]
  [com.brunobonacci.mulog.publishers.jvm_metrics.JvmMetricsPublisher publish "jvm_metrics.clj" 336]
  [com.brunobonacci.mulog.core$start_publisher_BANG_$publish_attempt__6733 invoke "core.clj" 190]
  [clojure.core$binding_conveyor_fn$fn__5754 invoke "core.clj" 2033]
  [clojure.lang.AFn applyToHelper "AFn.java" 154]
  [clojure.lang.RestFn applyTo "RestFn.java" 132]
  [clojure.lang.Agent$Action doRun "Agent.java" 114]
  [clojure.lang.Agent$Action run "Agent.java" 163]
  [java.util.concurrent.ThreadPoolExecutor runWorker "ThreadPoolExecutor.java" 1149]
  [java.util.concurrent.ThreadPoolExecutor$Worker run "ThreadPoolExecutor.java" 624]
  [java.lang.Thread run "Thread.java" 748]]},

It might be an idea to check for a zero denominator in capture-memory-pools and handle it gracefully.
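
A minimal sketch of such a guard (a hypothetical helper, not the actual capture-memory-pools code):

;; avoid the divide-by-zero when a memory pool reports max = 0
(defn usage-ratio [used-bytes max-bytes]
  (when (pos? max-bytes)
    (double (/ used-bytes max-bytes))))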

Indentation-aligned log lines

I inadvertently discovered that mulog's structured logging lends itself to map alignment, which makes the log events, in my eyes, a lot more readable:

{:mulog/event-name :app.server/api,
 :mulog/timestamp "Tue Apr 06 14:18:11 PDT 2021",
 :mulog/duration 0.741753,
 :mulog/namespace "app.server",
 :mulog/outcome :ok,
 :req/user #uuid "0dc03bed-b65d-4bef-89f3-37635ea8ff8f"}

{:mulog/event-name :app.server/api,
 :mulog/timestamp  "Tue Apr 06 14:18:20 PDT 2021",
 :mulog/duration   0.630531,
 :mulog/namespace  "app.server",
 :mulog/outcome    :ok,
 :req/user         #uuid "0dc03bed-b65d-4bef-89f3-37635ea8ff8f"}

Currently this is done with aggressive-indent-mode enabled in the REPL buffer. Maybe it makes sense to add this to mulog directly, fitting in with pretty logging as part of #45?
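
For what it's worth, the zprint library can already produce this kind of justified output (a sketch, assuming zprint as an extra dependency rather than anything built into mulog):

(require '[zprint.core :as zp])

;; {:map {:justify? true}} aligns the map values in a single column
(zp/zprint {:mulog/event-name :app.server/api
            :mulog/duration   0.630531
            :mulog/outcome    :ok}
           {:map {:justify? true}})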

DEPRECATION WARNING: on `:filesystem-metrics` sampler config option

Notice type:             Deprecation warning
Component:               μ/log sampler :filesystem-metrics
Version affected:        0.8.0+
Warning removed version: 0.10.0

A config change is required if you use custom transformations via :transform with the :filesystem-metrics sampler.

Description

To unify the custom-transform behaviour across all samplers and publishers, and to avoid confusion, the following change in behaviour has been adopted:

  • All samplers will accept a custom transform function called :transform-samples, which takes a sequence of samples and returns an updated sequence of samples:
transform-samples -> sample-seq -> sample-seq

The :transform-samples function will be executed on the samples before they are recorded; this is an important difference from the publishers' custom :transform, which is applied to events (not samples) that have already been recorded and are about to be published.

Changes from v0.7.1

You need to update your publisher configuration ONLY if you were using a custom transform:

;; BEFORE
(μ/start-publisher! 
  {:type :filesystem-metrics
   :transform (partial filter #(> (:total-bytes %) 1e9))})

;; AFTER
(μ/start-publisher! 
  {:type :filesystem-metrics
   ;; KEY CHANGED
   :transform-samples (partial filter #(> (:total-bytes %) 1e9))})

Only the config key has changed from :transform to :transform-samples; the behaviour stays the same.

A warning message appears only if the old :transform key is used.
The warning will be removed and the old key will be ignored from version v0.10.0 onward.

jackson issues + swap to `clojure.data.json`

We ran into some Jackson issues when using mulog 0.7.1. I have applied the fix from #58.

However, given:

  1. how disruptive Jackson is known to be in the Java ecosystem; and
  2. the significant speed improvements of clojure.data.json

I think it's worthwhile thinking about updating mulog-json to use the pure-Clojure clojure.data.json implementation.
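
As a rough sketch of what the swap could look like (hypothetical; the real mulog-json encoder would also need to handle custom types like #mulog/flake):

(require '[clojure.data.json :as json])

;; hypothetical replacement encoder based on clojure.data.json;
;; :value-fn falls back to str for values JSON cannot represent natively
(defn to-json [event]
  (json/write-str
    event
    :value-fn (fn [_key v]
                (if (or (coll? v) (number? v) (string? v)
                        (keyword? v) (boolean? v) (nil? v))
                  v
                  (str v)))))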

@BrunoBonacci I would be willing to work on this issue if you agree this is worthwhile.

µ/log publisher for files (Advanced File publisher)

A μ/log publisher for Files (Advanced File publisher).

Support:

  • formatting/templating
  • file rotation on size/time
  • output formats: text, json, edn, binary, compressed.
  • dispatch-by type
  • file format suitable for parallel processing

Guidelines to write a publisher:

Cascading errors in elasticsearch publisher when using multi publisher

Continuation of issue originally reported as #91

Repro:

  1. Start a multi publisher
(u/start-publisher!
 {:type :multi
  :publishers [{:type        :elasticsearch
                :data-stream "my-stream"
                :url         "https://my.elk"
                :http-opts   opts}
               {:type        :console}]})
  2. Post a bad record
(u/log :my/event :val2 [1 2 3])
(u/log :my/event :val2 ["some" "text"]) ;; mulog/publisher-error: failed to parse field [val2.a] of type [long]

Expected: 1 publisher error for the bad record
Actual: a cascade of errors which eventually brings mulog down.

{:publisher-type :elasticsearch, :mulog/namespace "clojure.core", :publisher-id "4jzFz5OKnOsC00M4TcskJccIJs_C7ItD", :mulog/action :publish, :mulog/timestamp 1657273030686, :exception #error {  :cause "Elasticsearch Bulk API reported errors"  :data {:errors ({:create {:_index "logs-test", :_id "4jzG-WmGzvfDvJt7SB6slj3raDjtjLzS", :status 400, :error {:type "mapper_parsing_exception", :reason "failed to parse field [val2.a] of type [long] in document with id '4jzG-WmGzvfDvJt7SB6slj3raDjtjLzS'. Preview of field's value: 'some'", :caused_by {:type "illegal_argument_exception", :reason "For input string: \"some\""}}}} {:create {:_index "logs-test", :_id "4jzG-ZFNXwV0VofVY3DqlNNKl5JSrdcv", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG-ZFNXwV0VofVY3DqlNNKl5JSrdcv]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG-r4U-r2-q8CI_5-Jx24fQzafEA1y", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG-r4U-r2-q8CI_5-Jx24fQzafEA1y]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG08Y2pWosLNJgJQy1ORsjQVGfrlpp", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG08Y2pWosLNJgJQy1ORsjQVGfrlpp]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG0R8kxONgyOPYQZyU2AXs2VouFSX1", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG0R8kxONgyOPYQZyU2AXs2VouFSX1]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG0Rj621FKEVVs4Q_EHZ-IXM9YSt7W", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG0Rj621FKEVVs4Q_EHZ-IXM9YSt7W]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG0aHtixv-koZmyVRcErXPe6KHQ2Om", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG0aHtixv-koZmyVRcErXPe6KHQ2Om]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG0inyJ4hBzCXTmu-4xBnlo8ZMUVIT", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG0inyJ4hBzCXTmu-4xBnlo8ZMUVIT]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG10cEQEcPjbVZyA_oUIMRn9WPzzXe", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG10cEQEcPjbVZyA_oUIMRn9WPzzXe]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG1JEzyHKYzd0tFpbe3q6H8zGZLEfX", :_version 1, :result "created", :_shards {:total 2, :successful 2, :failed 0}, :_seq_no 149, :_primary_term 1, :status 201}})}  :via  [{:type clojure.lang.ExceptionInfo    :message "Elasticsearch Bulk API reported errors"    :data {:errors ({:create {:_index "logs-test", :_id 
"4jzG-WmGzvfDvJt7SB6slj3raDjtjLzS", :status 400, :error {:type "mapper_parsing_exception", :reason "failed to parse field [val2.a] of type [long] in document with id '4jzG-WmGzvfDvJt7SB6slj3raDjtjLzS'. Preview of field's value: 'some'", :caused_by {:type "illegal_argument_exception", :reason "For input string: \"some\""}}}} {:create {:_index "logs-test", :_id "4jzG-ZFNXwV0VofVY3DqlNNKl5JSrdcv", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG-ZFNXwV0VofVY3DqlNNKl5JSrdcv]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG-r4U-r2-q8CI_5-Jx24fQzafEA1y", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG-r4U-r2-q8CI_5-Jx24fQzafEA1y]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG08Y2pWosLNJgJQy1ORsjQVGfrlpp", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG08Y2pWosLNJgJQy1ORsjQVGfrlpp]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG0R8kxONgyOPYQZyU2AXs2VouFSX1", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG0R8kxONgyOPYQZyU2AXs2VouFSX1]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG0Rj621FKEVVs4Q_EHZ-IXM9YSt7W", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG0Rj621FKEVVs4Q_EHZ-IXM9YSt7W]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG0aHtixv-koZmyVRcErXPe6KHQ2Om", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG0aHtixv-koZmyVRcErXPe6KHQ2Om]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG0inyJ4hBzCXTmu-4xBnlo8ZMUVIT", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG0inyJ4hBzCXTmu-4xBnlo8ZMUVIT]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG10cEQEcPjbVZyA_oUIMRn9WPzzXe", :status 409, :error {:type "version_conflict_engine_exception", :reason "[4jzG10cEQEcPjbVZyA_oUIMRn9WPzzXe]: version conflict, document already exists (current version [1])", :index_uuid "nq3iCwP_QCOWuCBTdWS6zA", :shard "0", :index "logs-test"}}} {:create {:_index "logs-test", :_id "4jzG1JEzyHKYzd0tFpbe3q6H8zGZLEfX", :_version 1, :result "created", :_shards {:total 2, :successful 2, :failed 0}, :_seq_no 149, :_primary_term 1, :status 201}})}    :at [com.brunobonacci.mulog.publishers.elasticsearch$post_records invokeStatic "elasticsearch.clj" 126]}]  :trace  [[com.brunobonacci.mulog.publishers.elasticsearch$post_records invokeStatic "elasticsearch.clj" 126]   [com.brunobonacci.mulog.publishers.elasticsearch$post_records invoke "elasticsearch.clj" 107]   [com.brunobonacci.mulog.publishers.elasticsearch.ElasticsearchPublisher publish "elasticsearch.clj" 218]   
[com.brunobonacci.mulog.core$start_publisher_BANG_$publish_attempt__2755 invoke "core.clj" 194]   [clojure.core$binding_conveyor_fn$fn__5823 invoke "core.clj" 2050]   [clojure.lang.AFn applyToHelper "AFn.java" 154]   [clojure.lang.RestFn applyTo "RestFn.java" 132]   [clojure.lang.Agent$Action doRun "Agent.java" 114]   [clojure.lang.Agent$Action run "Agent.java" 163]   [java.util.concurrent.ThreadPoolExecutor runWorker "ThreadPoolExecutor.java" 1136]   [java.util.concurrent.ThreadPoolExecutor$Worker run "ThreadPoolExecutor.java" 635]   [java.lang.Thread run "Thread.java" 833]]}, :mulog/origin :mulog/core, :mulog/trace-id #mulog/flake "4jzG1arG65y-1p9rPckU9BCjA2eW0oyg", :mulog/event-name :mulog/publisher-error}

`mulog/trace` captures incorrect namespace

While integrating mulog we found that, when using mulog/trace, the value of :mulog/namespace isn't the namespace where the trace/log originated.

Here's a minimal example:

(ns repl
  (:require [com.brunobonacci.mulog :as log]))

(defn log-something []
  (log/trace ::here [] (Thread/sleep 100)))

(ns other-ns
  (:require [com.brunobonacci.mulog :as log]))

(defn start-and-log []
  (let [pub (log/start-publisher! {:type :console :pretty? true})]
    (repl/log-something)
    pub))

Running clj -Scp $(lein classpath) -i repl.clj -e "(do (require 'other-ns) (other-ns/start-and-log) (Thread/sleep 1000))" returns the following:

{:mulog/event-name :repl/here,
 :mulog/timestamp 1600456838488,
 :mulog/trace-id #mulog/flake "4YMr-xWd4709HLRatGWaM4ZUjltmTpb7",
 :mulog/root-trace #mulog/flake "4YMr-xWd4709HLRatGWaM4ZUjltmTpb7",
 :mulog/duration 110060556,
 :mulog/namespace "user",
 :mulog/outcome :ok}

timestamp accuracy

Hello,

thank you for this great project! I have a brief question: Is it possible to configure higher timestamp accuracy to maintain logging order?

Thank you in advance
Thomas

incorrect namespace in `mulog/log`

Seems to be the same macro expansion issue as #35

using the following:

(ns myns.test
  (:require [com.brunobonacci.mulog :as u]
            [com.brunobonacci.mulog.core :as core]))

(defmacro log [event-name & pairs]
  `(core/log* core/*default-logger* ~event-name (list :mulog/namespace ~(str *ns*) ~@pairs)))


(defn say-hello [_]
  (u/start-publisher!
   {:type :console :pretty? true})

  (u/log ::hello)

  (log ::hello)

  (Thread/sleep 1000))

deps.edn

{:deps {org.clojure/clojure {:mvn/version "1.10.3"}
        com.brunobonacci/mulog {:mvn/version "0.8.1"}}
 :aliases
 {:test-logging
  {:exec-fn myns.test/say-hello}}}

I get the following:

$ clj -X:test-logging
{:mulog/event-name :myns.test/hello,
 :mulog/timestamp 1647551467446,
 :mulog/trace-id #mulog/flake "4hp6K8t6WJfjOz-0uZe-HQBZHNDYYSAQ",
 :mulog/namespace "user"}

{:mulog/event-name :myns.test/hello,
 :mulog/timestamp 1647551467446,
 :mulog/trace-id #mulog/flake "4hp6K8tE09T_P8HzrNQgRtAmcXHS4OuH",
 :mulog/namespace "myns.test"}

The namespace is "user" when the ns is not captured via the unquote (~(str *ns*)) at macro-expansion time.

Thoughts on leveled logging in ***μ/log***?

I'm experimenting with your library in a very small side project which, by the way, doesn't even require any of μ/log's properties. Performance, efficiency and even structured logging are not things I need, but I like the ideas implemented here and I wanted to give it a try.

Something I would need, on the other hand, is leveled logging such as info, error, debug, etc.

You explicitly state in the Motivation section that μ/log is designed for event logging as opposed to message logging. At first glance, I don't see why providing various log levels with helpers would break that promise.

So, instead of implementing an ad-hoc layer in my project, I've tried to provide a mechanism for leveled logging inside μ/log that leverages local contexts and transformation functions for filtering.
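
A minimal sketch of the mechanism (hypothetical helpers, not part of μ/log's API):

(require '[com.brunobonacci.mulog :as u])

;; hypothetical: numeric ordering of levels
(def level-value {:debug 0, :info 1, :warn 2, :error 3})

;; hypothetical: log an event tagged with a :level pair,
;; e.g. (log-at :debug ::cache-miss :key "user-42")
(defmacro log-at [level event-name & pairs]
  `(u/log ~event-name :level ~level ~@pairs))

;; a console publisher whose :transform drops everything below :info
(u/start-publisher!
  {:type :console
   :transform (partial filter #(>= (level-value (:level %) 0)
                                   (level-value :info)))})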

I'd be happy to contribute it. Let's discuss the principles and the design in PR #16, if you deem it useful.

µ/log publisher for Prometheus

A µ/log publisher for Prometheus. Both approaches should be available: scraping and via the push gateway.

Preferred approach:

Guidelines to write a publisher:

Async trace support

Hi,
The library supports tracing synchronous code blocks using u/trace.
However, I don't see explicit support for async code blocks (functions that return a core.async channel, a CompletionStage, etc.).
It would be possible to manually measure the latency and directly call com.brunobonacci.mulog.core/log-trace, but some gaps exist with this approach:

  • com.brunobonacci.mulog.core/local-context is a thread-local var; the execution thread and the callback thread could differ.
  • The :capture field assumes the function is synchronous.

Maybe splitting com.brunobonacci.mulog/trace into (at least) two functions could help with this kind of tracing (while also keeping the synchronous com.brunobonacci.mulog/trace function):

  • u/start-trace - would accept the same parameters as u/trace, except :capture, and would return a context object (containing :event-name, :tid, :ptid, :t0, :ts and :pairs).
  • u/commit-trace - would accept the above context and call com.brunobonacci.mulog.core/log-trace.
  • u/update-trace - would accept the above context and additional key-value :pairs from the async result, and would return the updated context.

An example user function could be:

(require '[clojure.core.async :as async])
(require '[com.brunobonacci.mulog :as u])

(defn process-async []
  (let [promise-chan (async/promise-chan)]
    (async/go
      ; Some async I/O operation
      (async/<! (async/timeout 500))
      (async/>!! promise-chan :success))
    promise-chan))

(u/trace ::process-async 
         {:capture (fn [x] {:type (type x)})}
         (process-async))
=>
#object[clojure.core.async.impl.channels.ManyToManyChannel
        0x3d81eb99
        "clojure.core.async.impl.channels.ManyToManyChannel@3d81eb99"]
{:mulog/duration 87346, :mulog/namespace "trace-tester.core", :mulog/outcome :ok, :type clojure.core.async.impl.channels.ManyToManyChannel, :mulog/parent-trace nil, :mulog/root-trace #mulog/flake "4cKUX92Slc_PH_g9ozZFmiT8DOuI3RvV", :mulog/timestamp 1622808281153, :mulog/trace-id #mulog/flake "4cKUX92Slc_PH_g9ozZFmiT8DOuI3RvV", :mulog/event-name :trace-tester.core/process-async}
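
With the proposed split, the same operation could be traced end-to-end; a hypothetical usage (none of these functions exist in μ/log yet):

;; hypothetical: open the trace, commit it from the callback thread
(let [ctx (u/start-trace ::process-async)]
  (async/take! (process-async)
               (fn [result]
                 (-> ctx
                     (u/update-trace :result result)
                     (u/commit-trace)))))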

Expose JVM metrics with Prometheus inline exporter

Hello, thanks for this nice project! I've stumbled upon one exception and one unexpected behaviour while trying to expose JVM metrics to a Prometheus scraper:

(ns piotr-yuxuan.mulog-expose-kafka-publisher
  (:require [com.brunobonacci.mulog :as mulog]
            [com.brunobonacci.mulog.publishers.prometheus :as prom]))

(def pub
  (prom/prometheus-publisher {:type :prometheus}))

(def px
  (mulog/start-publisher! {:type :inline :publisher pub}))

(prom/registry pub)
;; Execution error (ClassCastException) at com.brunobonacci.mulog.publishers.prometheus.PrometheusPublisher/registry (prometheus.clj:226).
;; class io.prometheus.client.CollectorRegistry cannot be cast to class clojure.lang.IFn (io.prometheus.client.CollectorRegistry and clojure.lang.IFn are in unnamed module of loader 'app')

(prom/write-str pub)
;; => ""

(mulog/log "event-name" :a "a" :b "b")

(prom/write-str pub)
;; => "# HELP piotr_yuxuan_mulog_expose_kafka_publisher_event_name Counter of piotr-yuxuan.mulog-expose-kafka-publisher/\"event-name\" events.
;;     # TYPE piotr_yuxuan_mulog_expose_kafka_publisher_event_name counter
;;     piotr_yuxuan_mulog_expose_kafka_publisher_event_name{a=\"a\",b=\"b\",} 1.0
;;     "

(def jvm-pub
  (mulog/start-publisher! {:type :jvm-metrics
                           ;; the interval in millis between two samples (default: 60s)
                           :sampling-interval 100
                           :jvm-metrics {:memory true
                                         :gc true
                                         :threads true
                                         :jvm-attrs true}}))

(prom/write-str pub)
;; => "# HELP clojure_core_mulog_jvm_metrics_sampled Counter of clojure.core/:mulog/jvm-metrics-sampled events.
;;     # TYPE clojure_core_mulog_jvm_metrics_sampled counter
;;     clojure_core_mulog_jvm_metrics_sampled 113.0
;;     # HELP piotr_yuxuan_mulog_expose_kafka_publisher_event_name Counter of piotr-yuxuan.mulog-expose-kafka-publisher/\"event-name\" events.
;;     # TYPE piotr_yuxuan_mulog_expose_kafka_publisher_event_name counter
;;     piotr_yuxuan_mulog_expose_kafka_publisher_event_name{a=\"a\",b=\"b\",} 1.0
;;     "

So far the counter clojure_core_mulog_jvm_metrics_sampled increases, but no further metrics are exposed. I think this snippet is pretty close to the docs, so I'm a bit puzzled.

On my machine this behaviour appeared consistent across versions 0.5.0, 0.6.0, 0.7.0, and 0.7.1.

µ/log publisher for Terminal (Advanced Console publisher)

A μ/log publisher for Console (Advanced Console publisher).

Support:

  • event templating/formatting
  • ANSI coloring

Guidelines to write a publisher:

µ/log publisher for Kinesis

A μ/log publisher for Kinesis.

Preferred Approach:

  • Evaluate Cognitect aws-api vs Amazonica vs the Java SDK (fewest dependencies)

Guidelines to write a publisher:

provide the option to print JSON to the console

Apps in Kube tend to print their logs to stdout, which allows Kube to handle logging in a centralized way. Many log-aggregation services can query these logs and ingest them into their systems. If these logs are in JSON format, the services (e.g. sumologic) automatically decode them and make their fields available for consumption in their app.

It would be an improvement if mulog provided a way to print JSON logs to the console. Adding a :format :edn|:json config (or similar) to the ConsolePublisher would be one way to provide this in common code without requiring a new, custom publisher.
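
For example (a hypothetical config key, not currently supported by the console publisher):

(require '[com.brunobonacci.mulog :as u])

;; hypothetical: select the console output format
(u/start-publisher! {:type :console :format :json})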

µ/log sampler for JVM parameters

A function which captures (samples) some key JVM metrics from five groups [:memory :files :gc :threads :attributes], similarly to Dropwizard Metrics JVM.

The function should take a config and return a nested map with the sampled values.
The config is a map of flags (true/false) indicating whether a particular metric group should be collected.

(jvm-sample {:memory {:heap true, :buffers true}
             :gc     {:collections true, :duration true}
             ...})
;;=> {:memory {:used_heap 3487563
;;             :max_heap  345364567
;;             ...}
;;    :gc     {:collections 45
;;             :duration    345323
;;             ...}}

The function should be fast (a few milliseconds) and should not throw exceptions.
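
A minimal sketch of the :memory group via the standard JMX beans (the key names above are illustrative):

(import '(java.lang.management ManagementFactory))

;; samples heap usage from the standard MemoryMXBean
(defn sample-memory []
  (let [heap (.getHeapMemoryUsage (ManagementFactory/getMemoryMXBean))]
    {:memory {:used_heap (.getUsed heap)
              :max_heap  (.getMax heap)}}))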

synchronous publisher?

Hi Bruno,

I've been reading about event sourcing. I was wondering whether it would make sense to have a synchronous publisher, so that events always get stored in a DB?

Another, partly related thought: would a "Logger" abstraction make sense? Could one want or need to log different events with different subscribers? Right now what you have is basically one global logger. It reminds me of OpenTracing's GlobalTracer, though there one can still create other Tracers.

DEPRECATION WARNING: on `:mbean` sampler config option

Notice type:             Deprecation warning
Component:               μ/log sampler :mbean
Version affected:        0.8.0+
Warning removed version: 0.10.0

A config change is required if you use custom transformations via :transform with the :mbean sampler.

Description

To unify the custom-transform behaviour across all samplers and publishers, and to avoid confusion, the following change in behaviour has been adopted:

  • All samplers will accept a custom transform function called :transform-samples, which takes a sequence of samples and returns an updated sequence of samples:
transform-samples -> sample-seq -> sample-seq

The :transform-samples function will be executed on the samples before they are recorded; this is an important difference from the publishers' custom :transform, which is applied to events (not samples) that have already been recorded and are about to be published.

Changes from v0.7.1

You need to update your publisher configuration ONLY if you were using a custom transform:

;; BEFORE
(μ/start-publisher!
  {:type :mbean
   :mbeans-patterns ["java.lang:type=Memory" "java.nio:*"]
   :transform walk/stringify-keys})

;; AFTER
(μ/start-publisher!
  {:type :mbean
   :mbeans-patterns ["java.lang:type=Memory" "java.nio:*"]
   :transform-samples (partial map walk/stringify-keys)})

Only the config key has changed from :transform to :transform-samples.
Previously the :transform function was applied to a single sample, which broke the general transformation functions; now :transform-samples applies to a sequence of samples.

A warning message appears only if the old :transform key is used.
The warning will be removed and the old key will be ignored from version v0.10.0 onward.

See discussion on #72 for more info.

Integration test with testcontainers for Elasticsearch

An integration test to verify the indexing configuration with various Elasticsearch versions, using https://github.com/javahippie/clj-test-containers

The test should:

  • start Elasticsearch for a given version
  • initialize a publisher to the ELS container
  • send one or more records
  • verify that the records have been indexed properly (direct call to ELS)
  • shut down the container

The version should be parametric, such that the test can be repeated for a number of different versions, e.g. 6.7.0, 7.4.1, 7.8.0, 7.10.0, etc.
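
For instance, a rough harness sketch, assuming the clj-test-containers API (tc/create, tc/start!, tc/stop!) and an official Elasticsearch image:

(require '[clj-test-containers.core :as tc])

;; hypothetical test harness: starts ELS at the given version, runs
;; test-fn with the container's URL, and always stops the container
(defn with-elasticsearch [version test-fn]
  (let [container (-> (tc/create
                        {:image-name    (str "docker.elastic.co/elasticsearch/elasticsearch:" version)
                         :exposed-ports [9200]
                         :env-vars      {"discovery.type" "single-node"}})
                      (tc/start!))
        url       (str "http://localhost:" (get (:mapped-ports container) 9200))]
    (try
      (test-fn url)
      (finally
        (tc/stop! container)))))

;; run-indexing-test is the hypothetical test body described above
(doseq [version ["6.7.0" "7.4.1" "7.8.0" "7.10.0"]]
  (with-elasticsearch version run-indexing-test))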
