
akka-http-metrics's Introduction

Hi there! I'm Michel 👋

I'm a passionate developer with a wide range of interests, from bit-level protocol implementation to high-level system design.

I love using modern technologies, writing expressive code, and solving hairy problems.

Open-source is deeply meaningful to me, and I'm committed to giving back to the community whenever possible.

akka-http-metrics's People

Contributors

aaabramov, aleksandr-vin, fraer, jsimek, kpritam, manuzhang, moonkev, pdezwart, rustedbones, scala-steward, xperimental


akka-http-metrics's Issues

Specify label for pathSingleSlash route

Hello, with akka-http-metrics-prometheus v1.2.0,
when I use the pathSingleSlash directive to match incoming requests to the root (/), for example:

pathSingleSlash {
  get {
    complete("welcome !")
  }
} ~ path("version") {
  get {
    complete("1.0.0")
  }
}

then when I hit the root:
curl -X GET http://localhost:8080

the counter associated with this root endpoint is reported with an unlabelled path:

# TYPE akka_http_responses_total counter
akka_http_responses_total{method="GET",path="unlabelled",status="2xx",} 1.0

Is there a way to specify a label for the root path?
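As a possible workaround (a sketch, not an official API), the PathLabel response attribute that the library's own rawPathPrefixLabeled directive uses (shown in a later issue below) could be set manually on the root route, assuming HttpMetrics.PathLabel is accessible from user code:

```scala
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import fr.davit.akka.http.metrics.core.HttpMetrics

// Hypothetical sketch: attach the PathLabel attribute ourselves, mirroring
// what rawPathPrefixLabeled does for non-root paths, so the root endpoint
// is reported as "/" instead of "unlabelled".
val labeledRoot: Route =
  pathSingleSlash {
    get {
      mapResponse(_.addAttribute(HttpMetrics.PathLabel, "/")) {
        complete("welcome !")
      }
    }
  }
```

This relies on the library reading the attribute when it records the response, which the rawPathPrefixLabeled source quoted later in this page suggests, but it is untested here.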

Path label should reflect matched path only

Raised from #40

When developing akka-http-metrics, I made the wrong assumption that successful responses are only created once the path is fully matched. This is not the case.
There are 2 possible solutions:

  • Have a custom RequestContext that can tell, on complete, how much of the path was matched
  • Have the path label as opt-in

In both cases this is not related to the HTTP method. I'm closing this issue and will reference it in a new dedicated one.

Custom dimensions

Hi,

I was wondering how we can introduce additional metrics, such as counts per response code. Can we extend the existing counters/timers/gauges to introduce new ones as needed?

Thanks

Implicit use of HttpMetricsRoute seems to no longer work

With the following code under Scala 2.12:

import fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsRoute._
...
Http().bindAndHandle(route.recordMetrics(registry), "localhost", 8080)

even with the scalac option -language:implicitConversions, I get the compile error:

... value recordMetrics is not a member of akka.http.scaladsl.server.Route
[error]     val server = Http().bindAndHandle(rootRoute.recordMetrics(metricsRegistry, settings), interface, port)

I'm using "fr.davit" %% "akka-http-metrics-prometheus" % "0.6.0"

However, this does work:

import fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsRoute
...
val server = Http().bindAndHandle(HttpMetricsRoute(rootRoute).recordMetrics(metricsRegistry, settings), interface, port)

I feel it would be a good idea to update the example to use this explicit code, which works in more situations.

5xx and 4xx responses go to unlabelled when handled with Akka's `ExecutionDirectives.handleExceptions`

All 2xx responses are recorded properly under their respective endpoint labels; however, some 5xx and 4xx responses go to unlabelled.

The registry is created as follows:

protected def createPrometheusRegistry(
    metricsNamespace: String = "core_server"
  ): PrometheusRegistry =
    synchronized {
      assume(!metricsNamespace.endsWith("_"))
      val prometheusCollector = CollectorRegistry.defaultRegistry
      val prometheusSettings = PrometheusSettings
        .default
        .withNamespace(metricsNamespace)
        .withIncludeMethodDimension(true)
        .withIncludePathDimension(true)
        .withIncludeStatusDimension(true)
        .withDurationConfig(
          Buckets(0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 0.875, 1, 1.75, 2.5, 5,
            7.5, 10, 15, 20, 30)
        )
        .withReceivedBytesConfig {
          val buckets =
            Range(0, 1000, 100) ++ Range(1000, 10000, 1000) ++ Range(10000, 100000, 10000)
          Buckets(buckets.map(_.toDouble).toList)
        }
        .withSentBytesConfig {
          val buckets =
            Range(0, 1000, 100) ++ Range(1000, 10000, 1000) ++ Range(10000, 100000, 10000)
          Buckets(buckets.map(_.toDouble).toList)
        }

      PrometheusRegistry(prometheusCollector, prometheusSettings)
    }

And then we have the main route for that prometheusRegistry:

pathLabeled("metrics") {
  import fr.davit.akka.http.metrics.prometheus.marshalling.PrometheusMarshallers._
  metrics(prometheusRegistry)
}

So metrics are available on /metrics on the same Akka server that serves regular API calls. API calls (/api/v1/...) are routed through the load balancer to end-users, so end-users can call paths that start with /api/v1, while /metrics is available only intra-cluster for Prometheus to scrape.

We use pathPrefixLabeled and pathPrefix as shown in the snippets below.

When a route responds with 400 from its own code, say /api/v1/chat/ticket, the 4xx is properly recorded under the label /api/v1/chat/ticket. However, when /api/v1/chat/ticket throws an exception that is caught by Akka's ExecutionDirectives.handleExceptions, and handleExceptions responds with 400, the response is recorded as unlabelled.

The exception-handling wrapper encloses nearly all directives, as follows:

handleRejections(rejectionHandler) {
      handleExceptions(exceptionHandler) {
        encodeResponse {
          cors(corsSettings) {
            pathPrefixLabeled("api" / Segment) {

The code below is from this library's HttpMetricsDirectives. I can see that the response is patched with a PathLabel, but I assume this attribute is never added when an exception is thrown inside the path, because in that case the inner directive never produces a response.

private def rawPathPrefixLabeled[L](pm: PathMatcher[L], label: Option[String]): Directive[L] = {
    implicit val LIsTuple: Tuple[L] = pm.ev
    extractRequestContext.flatMap { ctx =>
      val pathCandidate = ctx.unmatchedPath.toString
      pm(ctx.unmatchedPath) match {
        case Matched(rest, values) =>
          tprovide(values) & mapRequestContext(_ withUnmatchedPath rest) & mapResponse { response =>
            val suffix = response.attribute(HttpMetrics.PathLabel).getOrElse("")
            val pathLabel = label match {
              case Some(l) => "/" + l + suffix // pm matches additional slash prefix
              case None    => pathCandidate.substring(0, pathCandidate.length - rest.charCount) + suffix
            }
            response.addAttribute(HttpMetrics.PathLabel, pathLabel)
          }
        case Unmatched =>
          reject
      }
    }
  }

Is it possible to label these responses originating from the exception handler?

If not, my idea is to extend pathPrefixLabeled and pathPrefix to include the handleExceptions(exceptionHandler) part themselves, so that the response originates from the proper route (/api/v1/chat/ticket).
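A minimal sketch of that idea (hypothetical helper name; assumes only the standard Akka HTTP directive combinators and the pathPrefixLabeled directive used elsewhere in these issues):

```scala
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.{Directive0, ExceptionHandler}
import fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsDirectives._

// Hypothetical combinator: apply the exception handler *inside* the labeled
// prefix, so that responses produced by the handler are emitted below
// pathPrefixLabeled and can still receive its PathLabel attribute.
def labeledPrefixWithExceptions(segment: String, handler: ExceptionHandler): Directive0 =
  pathPrefixLabeled(segment) & handleExceptions(handler)
```

Routes wrapped as `labeledPrefixWithExceptions("ticket", exceptionHandler) { ... }` would then keep their label even for handler-generated 4xx/5xx responses, assuming the label attribute is attached by the outer directive's mapResponse as the quoted source suggests.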

Support individual metrics per route

Given that the .recordMetrics API returns a Flow rather than a Route, it's not possible to instantiate separate collectors per API; it can only be done on the parent route.

This can be an important requirement when supporting multiple APIs in a single web server (e.g. ingesting streaming content and serving static resources).

Supporting individual metrics per route could be done either automatically (e.g. for Prometheus, add a label with an inferred route name / path) or manually, by instantiating the akka-http-metrics flow per Route.

Custom counter names

Hi!

Is it possible to customize the names of the counters somehow? I've looked at the code, but it doesn't look like it.

If I want to use this lib in several different microservices (or similar), I would want to know which microservice the counters belong to. Being able to specify a counter-name prefix in the settings would be useful.
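For the Prometheus backend, the withNamespace setting (used in other issues on this page) already provides a per-service prefix; a minimal sketch, with "my_service" as a placeholder name:

```scala
import fr.davit.akka.http.metrics.prometheus.{PrometheusRegistry, PrometheusSettings}
import io.prometheus.client.CollectorRegistry

// Prefix every metric name with the service name, so each microservice's
// counters are distinguishable when scraped into a shared Prometheus.
val settings = PrometheusSettings.default.withNamespace("my_service")
val registry = PrometheusRegistry(CollectorRegistry.defaultRegistry, settings)
```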

Thanks and regards,
Daniel

Unable to get started with akka-http-metrics for Scala Akka HTTP and Prometheus

I have an Akka HTTP application written in Scala and would like to integrate akka-http-metrics to expose API usage metrics in Prometheus format.

I started by adding the dependency:

<dependency>
      <groupId>fr.davit</groupId>
      <artifactId>akka-http-metrics-prometheus_2.12</artifactId>
      <version>1.4.1</version>
    </dependency>

I have run into multiple issues getting started.

and then I see that your docs mention

Record metrics from your akka server by importing the implicits from HttpMetricsRoute

Hence, I imported the class.
There are two errors:

  1. My IDE is not able to resolve the class HttpMetricsRoute.
  2. The mvn build fails:
[ERROR] error: missing or invalid dependency detected while loading class file 'HttpMetrics.class'.
[INFO] Could not access type ClassicActorSystemProvider in package akka.actor,
[INFO] because it (or its dependencies) are missing. Check your build definition for
[INFO] missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the problematic classpath.)
[INFO] A full rebuild may help if 'HttpMetrics.class' was compiled against an incompatible version of akka.actor.
[WARNING] three warnings found
[ERROR] one error found
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  12.044 s
[INFO] Finished at: 2020-12-16T09:43:43+05:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile (scala-compile) on project provisioner-server-rest: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1) -> [Help 1]

My existing route is

 Http().bindAndHandle(
      service.route, conf.getString(PROVISIONER_LISTEN_INTERFACE), conf.getInt(PROVISIONER_PORT)
    )

In the docs, you mention using newMeteredServerAt(). I get compilation errors with:

val registry: HttpMetricsRegistry = ... // concrete registry implementation

    Http().newMeteredServerAt(conf.getString(PROVISIONER_LISTEN_INTERFACE), conf.getInt(PROVISIONER_PORT), registry).bindFlow(service.route)
How do I instantiate the metrics registry? My existing code has no such object or class.
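A registry for the Prometheus backend can be instantiated the way other issues on this page do it (a sketch; assumes the Prometheus simpleclient is on the classpath via the akka-http-metrics-prometheus dependency):

```scala
import fr.davit.akka.http.metrics.core.HttpMetricsRegistry
import fr.davit.akka.http.metrics.prometheus.{PrometheusRegistry, PrometheusSettings}
import io.prometheus.client.CollectorRegistry

// Concrete registry implementation backed by the default Prometheus collector,
// suitable as the `registry` argument to newMeteredServerAt(...).
val registry: HttpMetricsRegistry =
  PrometheusRegistry(CollectorRegistry.defaultRegistry, PrometheusSettings.default)
```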

Appreciate your support. Is it possible to do a Zoom meeting for a quick resolution?

Rejected requests can create metrics dimension with unmatched path

When the path dimension is enabled, every rejected request creates a metric with its own path as the dimension value.

The path dimension is then unbounded, which does not follow the metrics guidelines.

Rejected requests should not generate metrics containing the unmatched path in their label.

Not all metrics are exposed for Prometheus

Project using:

Akka 2.6.6
Akka HTTP 10.1.12

    "fr.davit" %% "akka-http-metrics-prometheus" % "1.1.1"

Creating registry as:

  protected val prometheusCollector = CollectorRegistry
    .defaultRegistry
  protected val prometheusSettings = PrometheusSettings
    .default
    .withNamespace(metricsNamespace)
    .withIncludeMethodDimension(true)
    .withIncludePathDimension(true)
    .withIncludeStatusDimension(true)
    .withDefineError(_.status.isFailure)
  protected val prometheusRegistry =
    PrometheusRegistry(prometheusCollector, prometheusSettings)

Exposing as:

      pathLabeled("metrics") {
        import fr.davit.akka.http.metrics.prometheus.marshalling.PrometheusMarshallers._
        metrics(prometheusRegistry)
      } ~

But only the following metrics are exposed; moreover, server_connections_total should show at least 1:

# HELP server_connections_active Active TCP connections
# TYPE server_connections_active gauge
server_connections_active 0.0
# HELP server_connections_total Total TCP connections
# TYPE server_connections_total counter
server_connections_total 0.0

JVM metrics?

Hi,

I was wondering if there is a way to include JVM metrics such as heap size, garbage collections, etc.
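akka-http-metrics itself records only HTTP and server metrics, but because the Prometheus backend shares a CollectorRegistry, the standard JVM collectors can be registered alongside it. A sketch, assuming io.prometheus:simpleclient_hotspot is added as a dependency:

```scala
import io.prometheus.client.hotspot.DefaultExports

// Registers the standard JVM collectors (memory, GC, threads, class loading)
// with CollectorRegistry.defaultRegistry — the same registry the
// PrometheusRegistry examples on this page use — so they are exposed on the
// same /metrics endpoint.
DefaultExports.initialize()
```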

tx.,

Can't see HTTP status group & response time (duration) in Graphite

Hi :)

I used the following two dependencies

    "fr.davit"               %% "akka-http-metrics-core" % "0.5.0",
    "fr.davit"               %% "akka-http-metrics-datadog" % "0.5.0",

I have the following setting

val registry = DatadogRegistry(JimiStatsDClient.apply)
val settings = HttpMetricsSettings.default.withIncludeStatusDimension(true).withIncludePathDimension(true)

Http().bindAndHandle(routes.recordMetrics(registry, settings), httpInterface, httpPort)

I can see the expected metrics folder in my Graphite.

However, I am not seeing HTTP status counts or response times (duration).

Please let me know if I have missed something.

Thank you,
Sean

No akka_http_responses_size_bytes_bucket for streamed endpoints

Hello, with akka-http-metrics-prometheus v1.3.0

I noticed that when an endpoint responds with streamed data, for example:

pathLabeled("testStream", "testStream") {
  get {
     complete(Source(List("a","b","c")))
  }
} 

there is no akka_http_responses_size_bytes_bucket metric for the label testStream; however, for non-streamed endpoints it is computed correctly.

Here is an example of /metrics output after 1 call to /testStream and 1 call to /metrics (only method and path dimensions are enabled):

# HELP akka_http_responses_duration_seconds HTTP response duration
# TYPE akka_http_responses_duration_seconds histogram
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.005",} 0.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.01",} 0.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.025",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.05",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.075",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.1",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.25",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="0.75",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="1.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="2.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="5.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="7.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="10.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/testStream",le="+Inf",} 1.0
akka_http_responses_duration_seconds_count{method="GET",path="/testStream",} 1.0
akka_http_responses_duration_seconds_sum{method="GET",path="/testStream",} 0.024
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.005",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.01",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.025",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.05",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.075",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.1",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.25",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="0.75",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="1.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="2.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="5.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="7.5",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="10.0",} 1.0
akka_http_responses_duration_seconds_bucket{method="GET",path="/metrics",le="+Inf",} 1.0
akka_http_responses_duration_seconds_count{method="GET",path="/metrics",} 1.0
akka_http_responses_duration_seconds_sum{method="GET",path="/metrics",} 0.0
# HELP akka_http_requests_active Active HTTP requests
# TYPE akka_http_requests_active gauge
akka_http_requests_active 0.0
# HELP akka_http_requests_size_bytes HTTP request size
# TYPE akka_http_requests_size_bytes histogram
akka_http_requests_size_bytes_bucket{le="0.0",} 2.0
akka_http_requests_size_bytes_bucket{le="100.0",} 2.0
akka_http_requests_size_bytes_bucket{le="200.0",} 2.0
akka_http_requests_size_bytes_bucket{le="300.0",} 2.0
akka_http_requests_size_bytes_bucket{le="400.0",} 2.0
akka_http_requests_size_bytes_bucket{le="500.0",} 2.0
akka_http_requests_size_bytes_bucket{le="600.0",} 2.0
akka_http_requests_size_bytes_bucket{le="700.0",} 2.0
akka_http_requests_size_bytes_bucket{le="800.0",} 2.0
akka_http_requests_size_bytes_bucket{le="900.0",} 2.0
akka_http_requests_size_bytes_bucket{le="1000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="2000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="3000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="4000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="5000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="6000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="7000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="8000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="9000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="10000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="20000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="30000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="40000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="50000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="60000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="70000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="80000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="90000.0",} 2.0
akka_http_requests_size_bytes_bucket{le="+Inf",} 2.0
akka_http_requests_size_bytes_count 2.0
akka_http_requests_size_bytes_sum 0.0
# HELP akka_http_requests_total Total HTTP requests
# TYPE akka_http_requests_total counter
akka_http_requests_total 2.0
# HELP akka_http_responses_size_bytes HTTP response size
# TYPE akka_http_responses_size_bytes histogram
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="0.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="100.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="200.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="300.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="400.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="500.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="600.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="700.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="800.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="900.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="1000.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="2000.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="3000.0",} 0.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="4000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="5000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="6000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="7000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="8000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="9000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="10000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="20000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="30000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="40000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="50000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="60000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="70000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="80000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="90000.0",} 1.0
akka_http_responses_size_bytes_bucket{method="GET",path="/metrics",le="+Inf",} 1.0
akka_http_responses_size_bytes_count{method="GET",path="/metrics",} 1.0
akka_http_responses_size_bytes_sum{method="GET",path="/metrics",} 3827.0
# HELP akka_http_responses_total HTTP responses
# TYPE akka_http_responses_total counter
akka_http_responses_total{method="GET",path="/testStream",} 1.0
akka_http_responses_total{method="GET",path="/metrics",} 1.0

Hiding the metrics route from scraping

Is it possible to avoid metrics collection for the metrics endpoint?

Although HttpMetricsDirectives provides a dedicated method for the metrics route:

def metrics[T <: HttpMetricsRegistry: ToEntityMarshaller](registry: T): StandardRoute = complete(registry)

metrics are still being collected with an unlabelled path.

My routes:

val routes: Route =
    handleExceptions(appExceptionHandler) {
      concat(
        path("metrics") {
          metrics(registry)(PrometheusMarshallers.marshaller)
        },
        pathPrefixLabeled("prefix1") {
          concat(
            pathPrefixLabeled("prefix1_1") {
              ???
            },
            pathPrefixLabeled("prefix1_2") {
              ???
            }
          )
        }
      )
    }

And then

val prometheus = CollectorRegistry.defaultRegistry
val settings =
  PrometheusSettings
    .default
    .withIncludeMethodDimension(true)
    .withIncludePathDimension(true)
    .withIncludeStatusDimension(true)
    .withDefineError(_.status.isFailure)

val registry  = PrometheusRegistry(prometheus, settings)

val futureBinding =
    Http()
      .newMeteredServerAt("0.0.0.0", 9000, registry)
      .bindFlow(HttpMetrics.metricsRouteToFlow(routes))

bug: prometheus label issues in 1.7.0

Hi there

We were using path, status and method dimensions in our app

Something like:

PrometheusSettings.default
  .withDurationConfig(Buckets(buckets: _*))
  .withDefineError(_.status.isFailure)
  .withIncludeMethodDimension(true)
  .withIncludePathDimension(true)
  .withIncludeStatusDimension(true)
  .withServerDimensions(
    Dimension("env", env),
    Dimension("stack", stack),
    Dimension("version", version)
  )

but it seems that the label values are being swapped, e.g.

akka_http_responses_duration_seconds_bucket{env="prd",method="vacation",path="d6bfafe",stack="GET",status="/health/liveness",version="2xx",le="1.5",} 56.0

while in 1.6.0 it worked flawlessly:

akka_http_responses_duration_seconds_bucket{env="prd",method="GET",path="/health/liveness",stack="vacation",status="2xx",version="d6bfafe",le="1.5",} 56.0

If I find some spare time, I will try to open a PR to fix the issue :)

Getting error when using bindFlow: `Error in stage [fr.davit.akka.http.metrics.core.MeterStage$$anon$1-MeterStage]: No value present`

Hello,
I'm getting the following stack trace when trying to use v1.5.1 in the following manner:

object Metrics {
      import akka.util.Timeout
      import fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsDirectives.metrics
      import fr.davit.akka.http.metrics.prometheus.marshalling.PrometheusMarshallers._
      import fr.davit.akka.http.metrics.prometheus.{Buckets, PrometheusRegistry, PrometheusSettings, Quantiles}
      import io.prometheus.client.CollectorRegistry

      class ClusterMetricsRouter(implicit context : ActorRefFactory, timeout : Timeout) {
        import ClusterMetricsRouter._

        implicit val _ = Implicits.system
        implicit val executionContext = context.dispatcher

        val route : Route = {
          metrics(ClusterMetricsRouter.registry)
        }

        val internal_route : Route = {
          metrics(ClusterMetricsRouter.internal_registry)
        }
      }
      object ClusterMetricsRouter {
        private val settings: PrometheusSettings = PrometheusSettings
          .default
          .withIncludePathDimension(true)
          .withIncludeMethodDimension(true)
          .withIncludeStatusDimension(true)
          .withDurationConfig(Buckets(1, 2, 3, 5, 8, 13, 21, 34))
          .withReceivedBytesConfig(Quantiles(0.5, 0.75, 0.9, 0.95, 0.99))
          .withSentBytesConfig(PrometheusSettings.DefaultQuantiles)
          .withDefineError(_.status.isFailure)

        private val collector: CollectorRegistry = CollectorRegistry.defaultRegistry

        val registry: PrometheusRegistry = PrometheusRegistry(collector, settings)
    }
}

...

def routeWithClose(route : Future[akka.Done] => Route) : Flow[HttpRequest, HttpResponse, Any] = {
      import akka.http.scaladsl.server
      Flows.lazyFlow { () =>
        val p = Promise[akka.Done]()
        server.Route.toFlow(route(p.future)).watchTermination() { case (mat, done) =>
          p.completeWith(done)
          mat
        }
      }
    }

 ...

Http()
          .newMeteredServerAt(
            "0.0.0.0",
            8443,
            Routes.Metrics.ClusterMetricsRouter.registry
          )
          .enableHttps(ConnectionContext.httpsServer(ssl))
          .bindFlow(routeWithClose(done => Routes.route(done)))

I'm using an older release because of #184, since I need .bindFlow above.

However, this throws the following stack trace:

2021-11-01T21:20:37.641+00:00 | ERROR | default-akka.actor.default-dispatcher-12 | akka.actor.RepointableActorRef | Error in stage [fr.davit.akka.http.metrics.core.MeterStage$$anon$1-MeterStage]: No value present |
"" | java.util.NoSuchElementException: No value present
        at java.base/java.util.Optional.get(Optional.java:141)
        at fr.davit.akka.http.metrics.core.MeterStage$$anon$1$$anon$3.onPush(MeterStage.scala:76)
        at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:542)
        at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:496)
        at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:390)
        at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:650)
        at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:521)
        at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:625)
        at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:800)
        at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$shortCircuitBatch(ActorGraphInterpreter.scala:787)
        at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:819)
        at akka.actor.Actor.aroundReceive(Actor.scala:537)
        at akka.actor.Actor.aroundReceive$(Actor.scala:535)
        at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:716)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
        at akka.actor.ActorCell.invoke(ActorCell.scala:548)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
        at akka.dispatch.Mailbox.run(Mailbox.scala:231)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)

Is there some setup that I'm missing? Equivalent code works elsewhere when not using .bindFlow.

I'm hoping this is a user error and that you can point me to something I'm doing wrong in my code.

Bug 1.4.0: java.lang.IllegalArgumentException: requirement failed: Responses with this status code must have an empty entity

Upon upgrading to 1.4.0 with Scala Steward, 6 out of our 504 tests automatically fail due to:

java.lang.IllegalArgumentException: requirement failed: Responses with this status code must have an empty entity
	at scala.Predef$.require(Predef.scala:281) ~[scala-library.jar:?]
	at akka.http.scaladsl.model.HttpResponse.<init>(HttpMessage.scala:515) ~[akka-http-core_2.12-10.2.2.jar:10.2.2]
	at akka.http.scaladsl.model.HttpResponse.copyImpl(HttpMessage.scala:565) ~[akka-http-core_2.12-10.2.2.jar:10.2.2]
	at akka.http.scaladsl.model.HttpResponse.transformEntityDataBytes(HttpMessage.scala:549) ~[akka-http-core_2.12-10.2.2.jar:10.2.2]
	at fr.davit.akka.http.metrics.core.HttpMetricsRegistry.onResponse(HttpMetricsRegistry.scala:146) ~[akka-http-metrics-core_2.12-1.4.0.jar:1.4.0]
	at fr.davit.akka.http.metrics.core.MeterStage$$anon$1$$anon$3.onPush(MeterStage.scala:78) ~[akka-http-metrics-core_2.12-1.4.0.jar:1.4.0]
	at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:541) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:423) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:625) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:502) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:600) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:769) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:784) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.actor.Actor.aroundReceive(Actor.scala:537) ~[akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.actor.Actor.aroundReceive$(Actor.scala:535) ~[akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:691) ~[akka-stream_2.12-2.6.10.jar:2.6.10]
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:577) [akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.actor.ActorCell.invoke(ActorCell.scala:547) [akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) [akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.dispatch.Mailbox.run(Mailbox.scala:231) [akka-actor_2.12-2.6.10.jar:2.6.10]
	at akka.dispatch.Mailbox.exec(Mailbox.scala:243) [akka-actor_2.12-2.6.10.jar:2.6.10]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183) [?:?]

We create socket connections as:

pathLabeled("channel") {
  get {
    handleWebSocketMessages(
      fConversationFactory()
        .socketForChat(chatID, kind)
    )
  }
} ~

Above that we have a pathPrefixLabeled and so on.

More detail on how to plot relevant Duration Config on Prometheus

This is more of a question than an issue.

I am currently using the prometheus package, and I have configured the duration config exactly like the example you've provided.

val settings = PrometheusSettings
      .default
      .withDurationConfig(Buckets(1, 2, 3, 5, 8, 13, 21, 34))
      .withReceivedBytesConfig(Quantiles(0.5, 0.75, 0.9, 0.95, 0.99))
      .withSentBytesConfig(PrometheusSettings.DefaultQuantiles)

On Prometheus, I am setting up the following graph, which uses the same query for the 99th, 95th, 90th and 50th percentiles.
histogram_quantile(0.99, avg(rate(akka_http_responses_duration_seconds_bucket[5m])) by (le))


The response times displayed in the graph are known to be inaccurate, because I have set an overall request timeout of 200 ms.
Sorry for my lack of knowledge here, but a little more explanation around how to configure the duration config would really help me debug and resolve the issue on my end.
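One likely explanation, assuming the bucket boundaries are interpreted in seconds: the configured buckets (1 to 34) all sit far above a 200 ms timeout, so every observation lands in the first bucket and histogram_quantile can only interpolate inside it, producing misleading values. A sketch with sub-second buckets (the exact boundaries are an assumption; pick them around your expected latencies):

```scala
import fr.davit.akka.http.metrics.prometheus.{Buckets, PrometheusSettings, Quantiles}

// Bucket boundaries are in seconds; with a 200 ms timeout, sub-second
// buckets give histogram_quantile meaningful ranges to interpolate in.
val settings = PrometheusSettings.default
  .withDurationConfig(Buckets(0.005, 0.01, 0.025, 0.05, 0.1, 0.2, 0.5, 1))
  .withReceivedBytesConfig(Quantiles(0.5, 0.75, 0.9, 0.95, 0.99))
  .withSentBytesConfig(PrometheusSettings.DefaultQuantiles)
```

With buckets that actually bracket the observed latencies, the same histogram_quantile query should produce sensible percentiles.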

Request/response headers as metric dimensions

It would be nice to be able to configure some headers to be picked up as dimensions for a namespace, for example to track user-agent usage or responses from APIs marked with deprecation.

Is there any alternative suggestion for this, currently?

Error in stage [fr.davit.akka.http.metrics.core.MeterStage$$anon$1-MeterStage]: No value present

Akka 2.6.13 with Akka HTTP 10.2.4 and Scala 2.12.12.

Error:

2021-03-27 14:56:38.315 ERROR [t-dispatcher-18] r$$anonfun$receive$1: Error in stage [fr.davit.akka.http.metrics.core.MeterStage$$anon$1-MeterStage]: No value present
java.util.NoSuchElementException: No value present
	at java.util.Optional.get(Optional.java:148) ~[?:?]
	at fr.davit.akka.http.metrics.core.MeterStage$$anon$1$$anon$3.onPush(MeterStage.scala:76) ~[akka-http-metrics-core_2.12-1.5.1.jar:1.5.1]
	at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:541) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:423) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:625) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:502) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:600) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:773) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:788) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.actor.Actor.aroundReceive(Actor.scala:537) ~[akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.actor.Actor.aroundReceive$(Actor.scala:535) ~[akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:691) ~[akka-stream_2.12-2.6.13.jar:2.6.13]
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:577) [akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.actor.ActorCell.invoke(ActorCell.scala:547) [akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) [akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.dispatch.Mailbox.run(Mailbox.scala:231) [akka-actor_2.12-2.6.13.jar:2.6.13]
	at akka.dispatch.Mailbox.exec(Mailbox.scala:243) [akka-actor_2.12-2.6.13.jar:2.6.13]
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) [?:?]
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020) [?:?]
	at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) [?:?]
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) [?:?]
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183) [?:?]

Debug path with some data: (debugger screenshots omitted)

Application code: (screenshot omitted)

All non-WS paths work properly.
This is how the application binds:

val bindingFuture =
            portBind.prometheusRegistry
              .map(Http().newMeteredServerAt(portBind.host, port, _))
              .getOrElse(Http().newServerAt(portBind.host, port))
              .bindFlow(
                if (portBind.secureConfiguration.isDefined) redirectHandler
                else portBind.handler
              )

Provide support for HTTP/2

As of today, Akka HTTP only allows using HTTP/2 via the bindAndHandleAsync method. Since .recordMetrics returns a flow, I can't use this library with HTTP/2, which only accepts an HttpRequest ⇒ Future[HttpResponse] handler.

Getting exception when StatusDimension or PathDimension is true (prometheus)

scala = 2.13
akka-http-metrics-prometheus = 0.6.0
akka-stream = 2.5.25
akka-http = 10.1.9

val settings: HttpMetricsSettings = HttpMetricsSettings
      .default
      .withIncludeStatusDimension(true)
      .withIncludePathDimension(true)
val registry: PrometheusRegistry = PrometheusRegistry(settings)

val bindingFuture = Http().bindAndHandle(route.recordMetrics(registry), "localhost", 8080)

after any request:

akka.http.impl.util.One2OneBidiFlow$OutputTruncationException: Inner flow was completed without producing result elements for 1 outstanding elements
 at akka.http.impl.util.One2OneBidiFlow$OutputTruncationException$.apply(One2OneBidiFlow.scala:22)
 at akka.http.impl.util.One2OneBidiFlow$OutputTruncationException$.apply(One2OneBidiFlow.scala:22)
 at akka.http.impl.util.One2OneBidiFlow$One2OneBidi$$anon$1$$anon$4.onUpstreamFinish(One2OneBidiFlow.scala:97)
 at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:506)
 at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:376)
 at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:606)
 at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:485)
 at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:581)
 at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:749)
 at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:764)
 at akka.actor.Actor.aroundReceive(Actor.scala:539)
 at akka.actor.Actor.aroundReceive$(Actor.scala:537)
 at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:671)
 at akka.actor.ActorCell.receiveMessage(ActorCell.scala:612)
 at akka.actor.ActorCell.invoke(ActorCell.scala:581)
 at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
 at akka.dispatch.Mailbox.run(Mailbox.scala:229)
 at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
 at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
 at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
 at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

If I remove

.withIncludeStatusDimension(true)
.withIncludePathDimension(true)

then everything works fine.

StatusGroup / Path for requests?

Since a rejected response can't have a labeled path, there is no way to know exactly which request was rejected. Can we add StatusGroup and Path dimensions to requests?

NullPointerException when running with recordMetrics

I am trying to use the akka-http-metrics-prometheus registry

Scala Version: 2.12.11
Akka Http Version: 10.1.12
Akka Http Metrics Prometheus Version: 1.1.0

My Code looks like this

object Registry {

  def apply() = {

    val settings = PrometheusSettings
      .default
      .withDurationConfig(Buckets(1, 2, 3, 5, 8, 13, 21, 34))
      .withReceivedBytesConfig(Quantiles(0.5, 0.75, 0.9, 0.95, 0.99))
      .withSentBytesConfig(PrometheusSettings.DefaultQuantiles)

    val prometheus = new CollectorRegistry()

    PrometheusRegistry(settings = settings)

  }

}
Http().bindAndHandle(routes.recordMetrics(Registry()), appConfig.server.interface,  appConfig.server.port)

When I try to call any route, I get a NullPointerException.

Could not materialize handling flow for IncomingConnection(/127.0.0.1:8080,/127.0.0.1:64662,Flow(FlowShape(IncomingTCP.in(1710177258),GraphStages$Detacher.out(817845304))))
java.lang.NullPointerException
	at scala.concurrent.impl.Promise$DefaultPromise.onComplete(Promise.scala:307)
	at fr.davit.akka.http.metrics.core.HttpMetricsRegistry.onConnection(HttpMetricsRegistry.scala:126)
	at fr.davit.akka.http.metrics.core.scaladsl.server.HttpMetricsRoute.$anonfun$recordMetrics$1(HttpMetricsRoute.scala:65)
	at akka.stream.impl.Compose.apply(TraversalBuilder.scala:169)
	at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:529)
	at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:449)
	at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:441)
	at akka.stream.scaladsl.RunnableGraph.run(Flow.scala:703)
	at akka.http.scaladsl.HttpExt.$anonfun$bindAndHandle$1(Http.scala:252)
	at akka.stream.impl.fusing.MapAsyncUnordered$$anon$31.onPush(Ops.scala:1401)
	at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:541)
	at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:495)
	at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:390)
	at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:624)
	at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:501)
	at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:599)
	at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:768)
	at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:783)
	at akka.actor.Actor.aroundReceive(Actor.scala:534)
	at akka.actor.Actor.aroundReceive$(Actor.scala:532)
	at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:690)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:573)
	at akka.actor.ActorCell.invoke(ActorCell.scala:543)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:269)
	at akka.dispatch.Mailbox.run(Mailbox.scala:230)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:242)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

I am pretty sure that I have made some sort of mistake during initialisation, hence the issue.
I would really appreciate it if you could point me in the right direction.

Thanks

Add directive to provide custom dimensions

We provide a GraphQL endpoint and need to publish the operation name as a dimension on the metrics. Currently we use a workaround that adds it as a suffix to the path dimension:

def labeled(name: String): Directive0 = mapResponse { response =>
    response.addAttribute(HttpMetrics.PathLabel, s"($name)")
}

pathLabeled("graphql") {
      post {
        entity(as[GraphQLRequest]) { query =>
          labeled(query.operationName.getOrElse("unknown")) {
            ...

It would be nice to have it as a separate dimension. We could also publish the Referer header (after mapping it to a finite cardinality).
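Until a dedicated dimension exists, the Referer idea can reuse the same PathLabel workaround shown in this issue; this is only a sketch, and the mapping of referers to a finite label set is an assumption:

```scala
import akka.http.scaladsl.server.Directive0
import akka.http.scaladsl.server.Directives._
import fr.davit.akka.http.metrics.core.HttpMetrics

// Map the Referer header to a small, finite set of labels before
// appending it to the path dimension, to keep cardinality bounded.
val refererLabel: Directive0 =
  optionalHeaderValueByName("Referer").flatMap { referer =>
    val label = referer match {
      case Some(r) if r.contains("example.com") => "internal" // example mapping, an assumption
      case Some(_)                              => "external"
      case None                                 => "none"
    }
    mapResponse(_.addAttribute(HttpMetrics.PathLabel, s"($label)"))
  }
```

The same caveat as the operation-name workaround applies: the label ends up appended to the path dimension rather than being a dimension of its own.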

Prometheus duration conversion is off by x10

Hi,

I set up akka-http-metrics in my akka-http project and I get the following metrics for request duration:

akka_http_requests_duration_seconds{quantile="0.75",} 30.29
akka_http_requests_duration_seconds{quantile="0.95",} 30.29
akka_http_requests_duration_seconds{quantile="0.98",} 30.29
akka_http_requests_duration_seconds{quantile="0.99",} 30.29
akka_http_requests_duration_seconds{quantile="0.999",} 30.29
akka_http_requests_duration_seconds_count 6.0
akka_http_requests_duration_seconds_sum 139.39999999999998

This is local testing, so it takes forever :).
But the response time is never more than 3 seconds, yet it shows 30 seconds here. Does it somehow mean 3 seconds?

For other requests I see similar things: when a request lasts 6 ms, it shows 0.06 seconds (it should be 0.006). Both readings being exactly 10x too large suggests a factor-of-10 error in the time-unit conversion.

Adding http method as dimension

Would it be possible to add the HTTP method as a dimension next to the path? Currently I've made separate "pathLabeled" entries specifying whether it's a GET, POST, etc. But it would be better to have each path endpoint combined, with the method as its own dimension.
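For what it's worth, recent versions of the Prometheus settings appear to expose a method dimension toggle; treat the exact builder name below as an assumption and check the settings API of your version:

```scala
import fr.davit.akka.http.metrics.prometheus.PrometheusSettings

// Adds a `method` label alongside `path` on the response metrics,
// so a single pathLabeled entry covers GET, POST, etc.
val settings = PrometheusSettings.default
  .withIncludeMethodDimension(true)
  .withIncludePathDimension(true)
```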

Exposing datadog metrics

Hi,
I don't see any example of exposing Datadog metrics. Is this functionality supported?

thanks

1.7.0 not found in repos

[error] (update) sbt.librarymanagement.ResolveException: Error downloading fr.davit:akka-http-metrics-prometheus_2.13:1.7.0
--
11-Apr-2022 12:24:41 | [error]   Not found
11-Apr-2022 12:24:41 | [error]   Not found
11-Apr-2022 12:24:41 | [error]   not found: /home/java/.ivy2/localfr.davit/akka-http-metrics-prometheus_2.13/1.7.0/ivys/ivy.xml
11-Apr-2022 12:24:41 | [error]   not found: https://repo1.maven.org/maven2/fr/davit/akka-http-metrics-prometheus_2.13/1.7.0/akka-http-metrics-prometheus_2.13-1.7.0.pom
11-Apr-2022 12:24:41 | [error]   not found: https://oss.sonatype.org/content/repositories/public/fr/davit/akka-http-metrics-prometheus_2.13/1.7.0/akka-http-metrics-prometheus_2.13-1.7.0.pom

metric for "active_connections" ?

First of all, thank you so much! This tool is so convenient.

Coming to my question: I see that there is a metric for active requests, but is it possible to get a metric for active connections as well?

I have been trying to get hold of the active connection count from an akka-http application, but so far I have no idea where to get it.

Problem with big requests (>~100kb)

Hi!

After updating to akka-http 10.2.2 and akka-http-metrics 1.4.1, I'm having problems with "big" requests (roughly > 100 kB) resulting in a 500 Internal Server Error. I have a really hard time tracking down the cause, but my conclusion is that it at least seems to be related to akka-http-metrics. In my experience, the problem arises whenever I add the following import in my Main file, where I set up the server.

import fr.davit.akka.http.metrics.core.HttpMetrics._

I've tried to debug, but it seems like the request never even enters the route, so I suspect that something crashes early in the request handling, resulting in an automatic 500 being returned.

I know this is not much to go on, and it's possible this is not a problem with akka-http-metrics at all, but I've been banging my head against the wall with this problem for days now and am curious whether anyone else is seeing the same thing.

I'm using the following versions:

akka-http: 10.2.2
akka: 2.5.32
akka-http-metrics: 1.4.1

Add a method bindFlow(handler: Flow[I,O,Mat]) to HttpMetricsServerBuilder

When I plug in akka-http-metrics version 1.6.0, there is no way to pass a flow to the bindFlow method. The case class HttpMetricsServerBuilder only has the method below for bindFlow:

def bindFlow(route: Route): Future[ServerBinding]

The previous version, 1.5.1, had support for handlerFlow: Flow[HttpRequest, HttpResponse, _].

We use the library rocks.heikoseeberger.accessus.Accessus, whose .withTimestampedAccessLog(...) method returns a Flow[HttpRequest, HttpResponse, M]:

Http().newServerAt(host, port).bindFlow(routes.withTimestampedAccessLog(...))

After adding the akka-http-metrics library, which we need to track latency for akka-http routes, the code below no longer compiles because of the constraint on the last line:

import rocks.heikoseeberger.accessus.Accessus._
import fr.davit.akka.http.metrics.core.HttpMetrics._

val server = Http()
  .newMeteredServerAt(settings.server.host, settings.server.port, MetricsController.registry)

server.bindFlow(route.withTimestampedAccessLog(...)) // does not compile: bindFlow expects a Route

Possible Enhancement: Make metric names configurable

Hi @RustedBones - would you be open to a PR that makes the metric names configurable? It could possibly be added to each individual backend's settings. For example, for the Prometheus backend's active-requests gauge, it could be accessed from the Prometheus settings as follows:

  override lazy val active: Gauge = io.prometheus.client.Gauge
    .build()
    .namespace(settings.namespace)
    .name(settings.activeRequestsMetricName) // Could also have a separate MetricsName case class, etc.

Where activeRequestsMetricName would default to "requests.active" (as it is now).

If you are open to this, I will happily do for all backends. Happy to do using another implementation suggestion as well. We actually have a couple of different use cases where this would be quite beneficial.
