
prometheus_flask_exporter's Introduction

Prometheus Flask exporter


This library provides HTTP request metrics to export into Prometheus. It can also track method invocations using convenient functions.

Installing

Install using pip:

pip install prometheus-flask-exporter

or paste it into requirements.txt:

# newest version
prometheus-flask-exporter

# or with specific version number
prometheus-flask-exporter==0.23.1

and then install dependencies from requirements.txt file as usual:

pip install -r requirements.txt

Usage

from flask import Flask, request
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app)

# static information as metric
metrics.info('app_info', 'Application info', version='1.0.3')

@app.route('/')
def main():
    pass  # requests tracked by default

@app.route('/skip')
@metrics.do_not_track()
def skip():
    pass  # default metrics are not collected

@app.route('/<item_type>')
@metrics.do_not_track()
@metrics.counter('invocation_by_type', 'Number of invocations by type',
         labels={'item_type': lambda: request.view_args['item_type']})
def by_type(item_type):
    pass  # only the counter is collected, not the default metrics

@app.route('/long-running')
@metrics.gauge('in_progress', 'Long running requests in progress')
def long_running():
    pass

@app.route('/status/<int:status>')
@metrics.do_not_track()
@metrics.summary('requests_by_status', 'Request latencies by status',
                 labels={'status': lambda r: r.status_code})
@metrics.histogram('requests_by_status_and_path', 'Request latencies by status and path',
                   labels={'status': lambda r: r.status_code, 'path': lambda: request.path})
def echo_status(status):
    return 'Status: %s' % status, status

Default metrics

The following metrics are exported by default (unless export_defaults is set to False).

  • flask_http_request_duration_seconds (Histogram) Labels: method, path and status. Flask HTTP request duration in seconds for all Flask requests.
  • flask_http_request_total (Counter) Labels: method and status. Total number of HTTP requests for all Flask requests.
  • flask_http_request_exceptions_total (Counter) Labels: method and status. Total number of uncaught exceptions when serving Flask requests.
  • flask_exporter_info (Gauge) Information about the Prometheus Flask exporter itself (e.g. version).

The prefix for the default metrics can be controlled by the defaults_prefix parameter. If you don't want to use any prefix, pass in the prometheus_flask_exporter.NO_PREFIX value. The buckets of the default request latency histogram can be changed with the buckets parameter, and if a summary is more appropriate for your use case, use the default_latency_as_histogram=False parameter.
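For illustration, a minimal sketch combining these options (the bucket values are arbitrary examples):

from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics, NO_PREFIX

app = Flask(__name__)

metrics = PrometheusMetrics(
    app,
    defaults_prefix=NO_PREFIX,      # or e.g. 'myapp' for a custom prefix
    buckets=(0.1, 0.3, 1.2, 5.0),   # arbitrary example latency buckets
    # default_latency_as_histogram=False,  # use a Summary for latency instead
)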

To register your own default metrics that will track all registered Flask view functions, use the register_default function.

app = Flask(__name__)
metrics = PrometheusMetrics(app)

@app.route('/simple')
def simple_get():
    pass
    
metrics.register_default(
    metrics.counter(
        'by_path_counter', 'Request count by request paths',
        labels={'path': lambda: request.path}
    )
)

Note: register your default metrics after all routes have been set up. Also note that Gauge metrics registered as defaults will track the /metrics endpoint, and this can't be disabled at the moment.

If you want to apply the same metric to multiple (but not all) endpoints, create its wrapper first, then add it to each function.

app = Flask(__name__)
metrics = PrometheusMetrics(app)

by_path_counter = metrics.counter(
    'by_path_counter', 'Request count by request paths',
    labels={'path': lambda: request.path}
)

@app.route('/simple')
@by_path_counter
def simple_get():
    pass
    
@app.route('/plain')
@by_path_counter
def plain():
    pass
    
@app.route('/not/tracked/by/path')
def not_tracked_by_path():
    pass

You can avoid recording metrics on individual endpoints by decorating them with @metrics.do_not_track(), or by using the excluded_paths argument when creating the PrometheusMetrics instance; it takes a regular expression (either a single string or a list of them), and matching paths will be excluded. These exclusions apply to both built-in and user-defined default metrics, unless you set the exclude_user_defaults argument to False. If a function is inherited or otherwise gets metrics collected that you don't want, you can use @metrics.exclude_all_metrics() to exclude both default and non-default metrics from it.
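A short sketch of these options (the paths below are made-up examples):

app = Flask(__name__)
metrics = PrometheusMetrics(
    app,
    excluded_paths=['^/healthz$', '^/static/'],  # a single regex string also works
    # exclude_user_defaults=False,  # keep user-defined defaults on excluded paths
)

@app.route('/internal')
@metrics.exclude_all_metrics()
def internal():
    pass  # neither default nor custom metrics are collected here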

Configuration

By default, the metrics are exposed on the same Flask application on the /metrics endpoint, using the core Prometheus registry. If this doesn't suit your needs, set the path argument to None and/or the export_defaults argument to False, and change the registry argument if needed.
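For example, a sketch that disables the built-in /metrics endpoint and the default metrics, and uses a dedicated registry:

from flask import Flask
from prometheus_client import CollectorRegistry
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
registry = CollectorRegistry()
metrics = PrometheusMetrics(app, path=None, export_defaults=False, registry=registry)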

The group_by constructor argument controls what the default request duration metric is grouped by: the endpoint (view function) instead of the URI path (the default). This parameter also accepts a function to extract the value from the request, or the name of a property of the request object. Examples:

PrometheusMetrics(app, group_by='path')         # the default
PrometheusMetrics(app, group_by='endpoint')     # by endpoint
PrometheusMetrics(app, group_by='url_rule')     # by URL rule

def custom_rule(req):  # the Flask request object
    """ The name of the function becomes the label name. """
    return '%s::%s' % (req.method, req.path)

PrometheusMetrics(app, group_by=custom_rule)    # by a function

# Error: this is not supported:
PrometheusMetrics(app, group_by=lambda r: r.path)

The group_by_endpoint argument is deprecated since 0.4.0; please use the new group_by argument instead.

The register_endpoint method allows exposing the metrics endpoint on a specific path. It also allows passing in a Flask application to register it on, but defaults to the main one if not defined.

Similarly, the start_http_server method allows exposing the endpoint on an independent Flask application on a selected HTTP port. It also supports overriding the endpoint's path and the HTTP listen address.
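A small sketch of both options (the path and port are arbitrary examples):

app = Flask(__name__)
metrics = PrometheusMetrics(app, path=None)

# expose the metrics on a specific path of the main application ...
metrics.register_endpoint('/internal/metrics')

# ... or on a separate HTTP server on its own port
# metrics.start_http_server(9100)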

You can also set default labels to add to every request managed by a PrometheusMetrics instance, using the default_labels argument. This needs to be a dictionary, where each key becomes a metric label name and each value the label value. These can be constant values or dynamic functions; see the Labels section below.
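For example, a sketch with one constant and one dynamic default label (the label names are illustrative):

from flask import request

metrics = PrometheusMetrics(
    app,
    default_labels={
        'app_name': 'my-service',       # constant value
        'host': lambda: request.host,   # evaluated per request, in the request context
    },
)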

The static_labels argument is deprecated since 0.15.0; please use the new default_labels argument instead.

If you use another framework on top of Flask (Connexion, perhaps), you might return responses from your endpoints that Flask can't deal with by default. In that case, you might need to pass in a response_converter that takes the returned object and converts it into a Flask-friendly response. See ConnexionPrometheusMetrics for an example.

Labels

When defining labels for metrics on functions, the following values are supported in the dictionary:

  • A simple static value
  • A no-argument callable
  • A single argument callable that will receive the Flask response as the argument

Label values are evaluated within the request context.
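A short sketch showing all three forms on one custom metric (the metric and label names here are made up):

from flask import request

@app.route('/ping')
@metrics.counter(
    'ping_invocations', 'Number of /ping invocations',
    labels={
        'app': 'demo',                      # static value
        'path': lambda: request.path,       # no-argument callable
        'status': lambda r: r.status_code,  # single-argument callable, receives the response
    })
def ping():
    return 'pong'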

Initial metric values

For more info see: https://github.com/prometheus/client_python#labels

Metrics without any labels will get an initial value. Metrics that only have static-value labels will also have an initial value (except when they are created with the initial_value_when_only_static_labels=False option). Metrics that have one or more callable-value labels will not have an initial value.
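For example (a sketch assuming the flag is passed when the metric is created):

# only static-value labels here, so it would normally get an initial sample;
# the flag below suppresses that until the first request is observed
by_app_counter = metrics.counter(
    'by_app_counter', 'Request count by application',
    labels={'app': 'demo'},
    initial_value_when_only_static_labels=False,
)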

Application information

The PrometheusMetrics.info(..) method provides a way to expose information as a Gauge metric, for example the application version.

The metric is returned from the method to allow changing its value from the default 1:

metrics = PrometheusMetrics(app)
info = metrics.info('dynamic_info', 'Something dynamic')
...
info.set(42.1)

Examples

See some simple examples visualized on a Grafana dashboard by running the demo in the examples/sample-signals folder.

Example dashboard

App Factory Pattern

This library also supports the Flask app factory pattern. Use the init_app method to attach the library to one or more application objects. Note that, to use this mode, you'll need to create the metrics instance with the for_app_factory() class method, or pass None for the app in the constructor.

metrics = PrometheusMetrics.for_app_factory()
# then later:
metrics.init_app(app)

Securing the metrics endpoint

If you wish to have authentication (or any other special handling) on the metrics endpoint, you can use the metrics_decorator argument when creating the PrometheusMetrics instance. For example, to integrate with Flask-HTTPAuth, use it as shown in the example below.

app = Flask(__name__)
auth = HTTPBasicAuth()
metrics = PrometheusMetrics(app, metrics_decorator=auth.login_required)

# ... other authentication setup like @auth.verify_password below

See a full example in the examples/flask-httpauth folder.

Custom metrics endpoint

You can also take full control of the metrics endpoint by generating its contents and managing how it is exposed yourself.

app = Flask(__name__)
# path=None to avoid registering a /metrics endpoint on the same Flask app
metrics = PrometheusMetrics(app, path=None)

# later ... generate the response (and its content type) to expose to Prometheus
response_data, content_type = metrics.generate_metrics()
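
For example, you could expose that content from a route of your own (the path below is just an example):

@app.route('/internal/metrics')
def custom_metrics():
    response_data, content_type = metrics.generate_metrics()
    return response_data, 200, {'Content-Type': content_type}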

See the related conversation in issue #135.

Debug mode

Please note that changes being live-reloaded when running the Flask app with debug=True are not going to be reflected in the metrics. See #4 for more details.

Alternatively, since version 0.5.1, if you set the DEBUG_METRICS environment variable, you will get metrics for the latest reloaded code. These will be exported on the main Flask app. Serving the metrics on a different port will most probably not work - e.g. PrometheusMetrics.start_http_server(..) is not expected to work.

WSGI

Getting accurate metrics for WSGI apps might require a bit more setup. See a working sample app in the examples folder, and also the prometheus_flask_exporter#5 issue.

Multiprocess applications

For multiprocess applications (WSGI or otherwise), you can find some helper classes in the prometheus_flask_exporter.multiprocess module. These provide convenience wrappers for exposing metrics in an environment where multiple copies of the application will run on a single host.

# an extension targeted at Gunicorn deployments
from prometheus_flask_exporter.multiprocess import GunicornPrometheusMetrics

app = Flask(__name__)
metrics = GunicornPrometheusMetrics(app)

# then in the Gunicorn config file:
from prometheus_flask_exporter.multiprocess import GunicornPrometheusMetrics

def when_ready(server):
    GunicornPrometheusMetrics.start_http_server_when_ready(8080)

def child_exit(server, worker):
    GunicornPrometheusMetrics.mark_process_dead_on_child_exit(worker.pid)

Also see the GunicornInternalPrometheusMetrics class if you want to have the metrics HTTP endpoint exposed internally, on the same Flask application.

# an extension targeted at Gunicorn deployments with an internal metrics endpoint
from prometheus_flask_exporter.multiprocess import GunicornInternalPrometheusMetrics

app = Flask(__name__)
metrics = GunicornInternalPrometheusMetrics(app)

# then in the Gunicorn config file:
from prometheus_flask_exporter.multiprocess import GunicornInternalPrometheusMetrics

def child_exit(server, worker):
    GunicornInternalPrometheusMetrics.mark_process_dead_on_child_exit(worker.pid)

There's a small wrapper available for Gunicorn and uWSGI; for everything else, you can extend the prometheus_flask_exporter.multiprocess.MultiprocessPrometheusMetrics class and implement at least the should_start_http_server method.

from prometheus_flask_exporter.multiprocess import MultiprocessPrometheusMetrics

class MyMultiprocessMetrics(MultiprocessPrometheusMetrics):
    def should_start_http_server(self):
        return this_worker() == primary_worker()

This should return True on one process only, and the underlying Prometheus client library will collect the metrics for all the forked children or siblings.

An additional Flask extension for apps with processes=N and threaded=False exists with the MultiprocessInternalPrometheusMetrics class.

from flask import Flask
from prometheus_flask_exporter.multiprocess import MultiprocessInternalPrometheusMetrics

app = Flask(__name__)
metrics = MultiprocessInternalPrometheusMetrics(app)

...

if __name__ == '__main__':
    app.run('0.0.0.0', 4000, processes=5, threaded=False)

Note: this needs the PROMETHEUS_MULTIPROC_DIR environment variable to point to a valid, writable directory.

You'll also have to call the metrics.start_http_server() function explicitly somewhere, and should_start_http_server takes care of only starting it once. The examples folder has some working examples of this.
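A minimal sketch, assuming the MyMultiprocessMetrics subclass from the snippet above and an arbitrary port:

app = Flask(__name__)
metrics = MyMultiprocessMetrics(app)

# must be called explicitly; should_start_http_server() ensures
# that only one process actually starts the server
metrics.start_http_server(9200)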

Please also note that the Prometheus client library does not collect process-level metrics, like memory, CPU and Python GC stats, when multiprocessing is enabled. See the prometheus_flask_exporter#18 issue for more context and details.

A final caveat is that the metrics HTTP server will listen on any path on the given HTTP port, not only /metrics, and changing this is not implemented at the moment.

uWSGI lazy-apps

When uWSGI is configured to run with lazy-apps, exposing the metrics endpoint on a separate HTTP server (and port) does not work yet. A workaround is to register the endpoint on the main Flask application.

app = Flask(__name__)
metrics = UWsgiPrometheusMetrics(app)
metrics.register_endpoint('/metrics')
# instead of metrics.start_http_server(port)

See #31 for context, and please let me know if you know a better way!

Connexion integration

The Connexion library has some support to automatically deal with certain response types, for example dataclasses, which a plain Flask application would not accept. To ease the integration, you can use ConnexionPrometheusMetrics in place of PrometheusMetrics that has the response_converter set appropriately to be able to deal with whatever Connexion supports for Flask integrations.

import connexion
from prometheus_flask_exporter import ConnexionPrometheusMetrics

app = connexion.App(__name__)
metrics = ConnexionPrometheusMetrics(app)

See a working sample app in the examples folder, and also the prometheus_flask_exporter#61 issue.

There's a caveat with this integration: any endpoints that do not return JSON responses need to be decorated with @metrics.content_type('...'), as the integration would otherwise force their content type to application/json.

metrics = ConnexionPrometheusMetrics(app)

@metrics.content_type('text/plain')
def plain_response():
    return 'plain text'

See the prometheus_flask_exporter#64 issue for more details.

Flask-RESTful integration

The Flask-RESTful library has some custom response handling logic, which can be helpful in some cases. For example, returning None would fail on plain Flask, but it works on Flask-RESTful. To ease the integration, you can use RESTfulPrometheusMetrics in place of PrometheusMetrics that sets the response_converter to use the Flask-RESTful API response utilities.

from flask import Flask
from flask_restful import Api
from prometheus_flask_exporter import RESTfulPrometheusMetrics

app = Flask(__name__)
restful_api = Api(app)
metrics = RESTfulPrometheusMetrics(app, restful_api)

See a working sample app in the examples folder, and also the prometheus_flask_exporter#62 issue.

License

MIT

prometheus_flask_exporter's People

Contributors

apellini, atheriel, boarik, corinnekelly, damian0o, denist-huma, elephantum, haoxins, harnash, jeteon, juanjbrown, kabooboo, kojiromike, larrycai, nottrobin, punkeel, qbiqing, rycus86, schelv, snyk-bot, stefanbrand, tirkarthi, yrro


prometheus_flask_exporter's Issues

control metrics

Hi.
Is it possible to use the library to generate metrics within methods and not just as annotations?

Thanks!!

Need help with a simple use case using Counter

Hi,

I'm very new to Flask and Python in general.
I would like to expose metrics of a simple app I created.

The thing is, I don't understand how to increment the value of a Counter, for example.

@app.route('/newaliment', methods=['GET', 'POST'])
@login_required
@metrics.counter('aliments', 'number_of_aliments_added')
def newaliment():
    form = AlimentsForm()
    if form.validate_on_submit():
        aliment = Aliment(aliment_name = form.aliment_name.data, description =form.description.data, author=current_user )
        db.session.add(aliment)
        db.session.commit()
        flash('Congratulations, you added a new aliment !')
        return redirect(url_for('newaliment'))
    return render_template('newAliment.html', title='NewAliment', form=form)

I initialize my counter, but how can I increment it by 1 when an aliment is added to my database?

Thanks for your help and excuse my stupid question.

AttributeError: 'Request' object has no attribute 'prom_start_time'

before_request from export_defaults() is not guaranteed to run if some other before_request hook raises an exception, for example for authentication.

Therefore, when after_request is triggered, request.prom_start_time might not be set.

A suggested fix would be to skip the histogram section in after_request if the request has no prom_start_time attribute.
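A rough sketch of that suggestion, with simplified hooks standing in for the library's actual ones:

import time

from flask import Flask, request

app = Flask(__name__)

@app.before_request
def start_timer():
    # may never run if an earlier before_request hook raises
    request.prom_start_time = time.time()

@app.after_request
def observe_duration(response):
    if not hasattr(request, 'prom_start_time'):
        return response  # the timer was never set, skip the histogram update
    # ... observe time.time() - request.prom_start_time here ...
    return response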

uwsgi app stall gathering /metrics

Hello 👋,
I have a timeout on the /metrics endpoint. I followed the uwsgi example:

cat app/__init__.py
...
app = Flask(__appname__)
metrics = UWsgiPrometheusMetrics(app)
metrics.register_endpoint('/metrics')
...
cat app/api/__init.py
...
app.logger.info('Starting the app...')
metrics.start_http_server(port)
metrics.info('app_info', 'Application info', version=__version__)
...

The app starts correctly. One of my resources is correctly responding, ex:

[pid: 64365|app: 0|req: 4/5] 127.0.0.1 () {24 vars in 257 bytes} [Wed Oct 23 09:47:48 2019] GET /test => generated 4 bytes in 0 msecs (HTTP/1.1 200) 2 headers in 78 bytes (2 switches on core 0)

but the /metrics endpoint is not; the HTTP session times out with:

curl: (56) Recv failure: Connection reset by peer

Any idea?

uwsgi config:

[uwsgi]
http-socket = :5000
; wsgi-file = wsgi.py
module = wsgi:app
; socket = /var/run/app
; chdir = /usr/local/bin/app
; chown-socket = www-data:www-data
chmod-socket = 660
callable = app
master = true
processes = 2
die-on-term = true
; logto = /var/log/uwsgi/app.log
venv = /tmp/venv
env = prometheus_multiproc_dir=/tmp/flask_microservice_boilerplate/metrics

the prometheus_multiproc_dir dir is correctly filled with:

ls /tmp/flask_microservice_boilerplate/metrics
counter_45330.db  counter_57640.db  gauge_all_45304.db  gauge_all_50896.db  gauge_all_57640.db  gauge_all_64364.db  histogram_50896.db  histogram_64364.db
counter_50895.db  counter_60502.db  gauge_all_45330.db  gauge_all_56034.db  gauge_all_60498.db  gauge_all_64365.db  histogram_57639.db  histogram_64365.db
counter_50896.db  counter_64364.db  gauge_all_50891.db  gauge_all_57636.db  gauge_all_60502.db  histogram_45330.db  histogram_57640.db
counter_57639.db  counter_64365.db  gauge_all_50895.db  gauge_all_57639.db  gauge_all_64361.db  histogram_50895.db  histogram_60502.db

Add flag to disable exporting python_* metrics

Thank you for this great library! Thanks to it I was able to add metrics to my app in no time. :)

But I would like to not export metrics that I am not going to use, so please add a flag to stop exporting these:

gdubicki@mac ~ $ curl  http://localhost:5000/metrics
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 7789.0
python_gc_objects_collected_total{generation="1"} 1763.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 112.0
python_gc_collections_total{generation="1"} 10.0
python_gc_collections_total{generation="2"} 0.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="7",patchlevel="2",version="3.7.2"} 1.0

They are exported even if I set export_defaults=False.

need help on usage of the tool

I have a flask app running at port 5006.

I have included the below information:

from prometheus_flask_exporter import PrometheusMetrics
metrics = PrometheusMetrics(app)
@app.route('/skip')
@metrics.do_not_track()
def skip():
    pass  # default metrics are not collected

@app.route('/<item_type>')
@metrics.do_not_track()
@metrics.counter('invocation_by_type', 'Number of invocations by type',
         labels={'item_type': lambda: request.view_args['type']})
def by_type(item_type):
    pass  # only the counter is collected, not the default metrics

@app.route('/long-running')
@metrics.gauge('in_progress', 'Long running requests in progress')
def long_running():
    pass

if __name__ == '__main__':
    app.debug = False
    app.run(host="0.0.0.0", port=5006)
@app.route('/status/<int:status>')
@metrics.do_not_track()
@metrics.summary('requests_by_status', 'Request latencies by status',
                 labels={'status': lambda r: r.status_code})
@metrics.histogram('requests_by_status_and_path', 'Request latencies by status and path',
                   labels={'status': lambda r: r.status_code, 'path': lambda: request.path})
def echo_status(status):
    return 'Status: %s' % status, status

I also have Prometheus running on a docker container on port 9090. I have Grafana pulling these events on a dashboard listening to events on http://localhost:9090.

I don't see the activity on Flask propagating to Prometheus and Grafana. Is there something I am missing in my setup?

Examples of Prometheus queries

There should be an example of useful queries with the default data gathered.
Coming to this out of the blue, it's a bit hard to figure out what I should inspect. For example, is this the right approach?

avg(
    (
        flask_http_request_duration_seconds_sum{status='200'} / 
        flask_http_request_duration_seconds_count{status='200'}
    )
) by (endpoint)

App factory project structure with GunicornInternalPrometheusMetrics()

I have an app that I want to run replicas of in a K8s cluster. For this I am using the GunicornInternalPrometheusMetrics class like so inside __init__.py:

from flask import Flask
from prometheus_flask_exporter.multiprocess import GunicornInternalPrometheusMetrics

def create_app():
     app = Flask(__name__)
     metrics = GunicornInternalPrometheusMetrics(app)

In the app file I define the app object like so:

from app import create_app

app = create_app(api_config or "default")

However, this does not make the metrics decorator available for my modules (which have the routes).

Therefore, I tried to make it in the app factory way:

from flask import Flask
from prometheus_flask_exporter.multiprocess import GunicornInternalPrometheusMetrics

metrics = GunicornInternalPrometheusMetrics(app=None)

def create_app():
     app = Flask(__name__)
     metrics.init_app(app)

However, this causes Flask to complain that I am working outside of the application context.

What is the preferred way to do this so that the decorator (ie. something like @metrics.gauge('in_progress', 'Long running requests in progress')) is possible?

"Duplicated timeseries in CollectorRegistry" when testing

Hi,

Thanks for this awesome module!

When testing my flask application, I instantiate a new instance of my application for every test using fixtures. For the first test, everything goes right, but on subsequent tests, I get a "Duplicated timeseries" error. My understanding is that PrometheusMetrics uses the same registry for every application. Is this the expected behavior?

Here's a minimal snippet that reproduces this behaviour.

from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

app1 = Flask(__name__)
metrics1 = PrometheusMetrics(app1)
# some test

app2 = Flask(__name__)
metrics2 = PrometheusMetrics(app2)
# some other test

The output:

    metrics2 = PrometheusMetrics(app2)
/usr/local/lib/python3.6/site-packages/prometheus_flask_exporter/__init__.py:115: in __init__
    self.init_app(app)
/usr/local/lib/python3.6/site-packages/prometheus_flask_exporter/__init__.py:137: in init_app
    self._defaults_prefix, app
/usr/local/lib/python3.6/site-packages/prometheus_flask_exporter/__init__.py:253: in export_defaults
    **buckets_as_kwargs
/usr/local/lib/python3.6/site-packages/prometheus_client/metrics.py:494: in __init__
    labelvalues=labelvalues,
/usr/local/lib/python3.6/site-packages/prometheus_client/metrics.py:103: in __init__
    registry.register(self)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <prometheus_client.registry.CollectorRegistry object at 0x7f425acb0d68>, collector = <prometheus_client.metrics.Histogram object at 0x7f424a45c3c8>

    def register(self, collector):
        '''Add a collector to the registry.'''
        with self._lock:
            names = self._get_names(collector)
            duplicates = set(self._names_to_collectors).intersection(names)
            if duplicates:
                raise ValueError(
                    'Duplicated timeseries in CollectorRegistry: {0}'.format(
>                       duplicates))
E               ValueError: Duplicated timeseries in CollectorRegistry: {'flask_http_request_duration_seconds_bucket', 'flask_http_request_duration_seconds_count', 'flask_http_request_duration_seconds_sum', 'flask_http_request_duration_seconds_created'}

I was able to get rid of the error by instantiating a new CollectorRegistry at every app instantiation.

from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics
from prometheus_client import CollectorRegistry

app1 = Flask(__name__)
metrics1 = PrometheusMetrics(app1, registry=CollectorRegistry())

app2 = Flask(__name__)
metrics2 = PrometheusMetrics(app2, registry=CollectorRegistry())

Thanks!

Unable to have common metric key with labels to separate type over different routes

Setup;

  1. Two (or more) API handling methods with their own routes
  2. metric registered with common name auditls_api
  3. use of type: label to separate values in prometheus later
@app.route("/api/v1/dns/<target>", methods=["GET"])
@metrics.counter('auditls_api', 'API Calls for DNS information',
         labels={'type': 'dns', 'target': lambda: request.view_args['target']})
def get_dns(target):
  #do athing

@app.route("/api/v1/http/<target>", methods=["GET"])
@metrics.counter('auditls_api', 'API Calls for HTTP response information',
         labels={'type': 'http', 'target': lambda: request.view_args['target']})
def get_http_status(target):
   # do different thing

Expected

Metrics are collected for each API route, and exposed to Prometheus such that I might do queries like sum by (type)(rate(auditls_api_total[5m])) so I can view the relative distribution over all the types of API call being handled.

At the moment, it seems the proposed implementation is to have auditls_api_by_dns_total and auditls_api_by_http as separate metrics, but I can't then group those back together.

Actual

Error: 2020-06-24T12:25:00.277947666Z ValueError: Duplicated timeseries in CollectorRegistry: {'auditls_api_created', 'auditls_api_total'}


Do you have any guidance on what I should be doing here?

uwsgi config "lazy-apps: yes" "should_start_http_server" return False

"UWsgiPrometheusMetrics().should_start_http_server()" returns False when I set uwsgi's startup parameter "lazy-apps:yes".

How can I use it when setting "lazy-apps:yes" ?
uwsgi config:

uwsgi:
  wsgi: wsgi:app
  http-socket: 0.0.0.0:11010
  processes: 4
  threads: 16
  master: yes
  ignore-write-errors: yes
  ignore-sigpipe: yes
  die-on-term: yes
  wsgi-disable-file-wrapper: yes
  max-requests: 65535
  max-requests-delta: 1024
  log-prefix: uWSGI
  log-date: yes
  log-slow: 10000
  disable-logging: yes
  need-app: true
  reload-mercy: 1
#  lazy-apps: yes

code:

    from prometheus_flask_exporter.multiprocess import UWsgiPrometheusMetrics
    metrics = UWsgiPrometheusMetrics(app)
    metrics.start_http_server(9200, )

Adding default label

Hi,

I'm kind of new to Prometheus, so my usage might not be the best approach.

I have multiple Flask applications monitored within the same Prometheus. I want to be able to filter using an application_name label.

I want to add an application_name label to the default exports. What is the best way to approach this?

I tried the following:

metrics = GunicornPrometheusMetrics(app, export_defaults=False)
metrics.info('app_info', 'Application info', version=app_info.get('version', '0.0.0'), app_name=app_name)
...
...
@app.route("/")
@metrics.do_not_track()
@metrics.summary('requests_by_status', 'Request latencies by status',
                 labels={'status': lambda r: r.status_code, 'app_name': app_name})
@metrics.histogram('requests_by_status_and_path', 'Request latencies by status and path',
                   labels={'status': lambda r: r.status_code, 'path': lambda: request.path, 'app_name': app_name})
@metrics.counter('invocation_by_type', 'Number of invocations by type',
                 labels={'item_type': lambda: request.view_args['type'], 'app_name': app_name})
def index():
...

The same decorators are replicated on each route.

But I get the following error:

ValueError: Duplicated timeseries in CollectorRegistry: {'requests_by_status_count', 'requests_by_status_sum', 'requests_by_status', 'requests_by_status_created'}

Any help is appreciated!

Flask endpoint names may differ from function name

Thanks for fixing #22! It fixes the simple case but I also realised something else.

The problem with the following code is that Flask endpoint names can be different from the function name; consider @app.route('/hello', endpoint='my_hello').

endpoint_name = request.endpoint or ''
if request.blueprint and '.' in endpoint_name:
    endpoint_name = endpoint_name.rsplit('.', 1)[1]
if endpoint_name == f.__name__:
    # we are in a request handler method
    response_for_metric = make_response(response)

I think there is a nice solution though! See Flask.view_functions. This snippet corresponds to the quoted lines above:

        if request.endpoint is not None:
            view_func = current_app.view_functions[request.endpoint]

            # There may be decorators 'above' us, but before the function is
            # registered with Flask
            while view_func != func:
                try:
                    view_func = view_func.__wrapped__
                except AttributeError:
                    break

            if view_func == func:
                # we are in a request handler method
                response_for_metric = make_response(response)

Note that func is the function itself:

@functools.wraps(f)
def func(*args, **kwargs):

Prometheus Flask exporter with __main__

I want to use Prometheus Flask exporter with __main__.

This works fine by running env FLASK_APP=app.py flask run --port=80 --host='0.0.0.0':

from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app)

app.debug = True

@app.route("/", methods=['GET'])
def index():
    return "hello world"

But I want to use my app in __main__, running python app.py.

from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app=None, path='/metrics')

app.debug = True

@app.route("/", methods=['GET'])
def index():
    return "hello world"

if __name__ == '__main__':
    metrics.init_app(app)
    app.run(host='0.0.0.0', port=80)

Here I get 400 on /metrics.

I have no clue how to init metrics correctly.

thx for helping
klml

PS: I already asked this on stackoverflow.com, but got no answer.

Add support for grouping by url_rule

In PR#3 the following flag was added: group_by_endpoint. This is a great addition since it allows you to group metrics by route rather than by absolute path.

However, this will log out metrics grouped on the route name, e.g. api.users_user whereas it might be better to group it on the rule, e.g. /users/<int:user_id> (request.url_rule)

Maybe group_by_endpoint can be extended to support either url_rule specifically or maybe make this more general by exposing duration_group directly?

wsgi daemon mode

I run my stateless flask apps with mod_wsgi/apache using daemon mode like:

WSGIDaemonProcess foo-services python-home=/opt/my_org/foo-services/_env processes=8 threads=48 maximum-requests=10000 display-name=%{GROUP}
WSGIApplicationGroup %{GLOBAL}
WSGISocketPrefix /var/run/wsgi


Alias /image-services "/opt/my_org/foo-services/wsgi.py"
<Location "/for-services">
SetHandler wsgi-script
Options +ExecCGI
FileETag None
ExpiresActive On
ExpiresDefault "access plus 1 year"
WSGIProcessGroup image-services
</Location>

Which means when a request gets to the service it could be hitting 1 of 8 daemon processes each of which have their own memory in isolation of the others. Does the metrics endpoint store the prometheus data in a way that is shared across these daemons?

I can create some tests to verify if that's the case or not, just curious if the answer is already known.

Thanks,
Thatcher

corrupted buffer

This seems to be a problem in the low-level library; should this library handle this exception?

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/prometheus_flask_exporter/__init__.py", line 567, in func
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/prometheus_flask_exporter/__init__.py", line 635, in func
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/prometheus_flask_exporter/__init__.py", line 219, in prometheus_metrics
    return generate_latest(registry), 200, headers
  File "/usr/local/lib/python3.8/site-packages/prometheus_client/exposition.py", line 106, in generate_latest
    for metric in registry.collect():
  File "/usr/local/lib/python3.8/site-packages/prometheus_client/registry.py", line 82, in collect
    for metric in collector.collect():
  File "/usr/local/lib/python3.8/site-packages/prometheus_client/multiprocess.py", line 149, in collect
    return self.merge(files, accumulate=True)
  File "/usr/local/lib/python3.8/site-packages/prometheus_client/multiprocess.py", line 41, in merge
    metrics = MultiProcessCollector._read_metrics(files)
  File "/usr/local/lib/python3.8/site-packages/prometheus_client/multiprocess.py", line 61, in _read_metrics
    file_values = MmapedDict.read_all_values_from_file(f)
  File "/usr/local/lib/python3.8/site-packages/prometheus_client/mmap_dict.py", line 87, in read_all_values_from_file
    used = _unpack_integer(data, 0)[0]
struct.error: unpack_from requires a buffer of at least 4 bytes for unpacking 4 bytes at offset 0 (actual buffer size is 0)

This may happen because we use Docker to run our Flask app; since writing files to disk involves a lot of disk I/O,
we map the folder into tmpfs:

  environment: 
    prometheus_multiproc_dir: /tmp
  tmpfs:
    - /tmp

Usage with flaskRESTFUL

How can metrics be logged and customized when the API is constructed using the Flask-RESTful library, where instead of routes you have:

class Coordinates(Resource):
    def __init__(self):
        self.parser = reqparse.RequestParser()
        self.parser.add_argument('radius', type=float, required=False, help="Radius", location='args', default=1)

    def post(self):
        args = self.parser.parse_args()

HTTP Errors do not seem to be tracked

Hi there,

I noticed that non-global metrics do not seem to handle HTTP errors raised by Flask. For instance, for this very dummy handler:


@app.route('/')
@metrics.summary('http_index_requests_by_status',
                 'Request latencies by status',
                 labels={'status': lambda r: r.status_code})
@metrics.histogram('http_index_requests_by_status_and_path', 
                   'Index requests latencies by status and path',
                   labels={
                       'status': lambda r: r.status_code,
                       'path': lambda: request.path})
def index():
    if not state["running"]:
        return abort(500)
    return 'Hello World!'

The http_index_requests_by_status_and_path metric never reports the 500 whereas the global flask_http_request_duration_seconds_count does. I wonder if the code handles the exception that is triggered by calling `abort(500)`.

metrics.do_not_track() also blocks metrics.summary()/...

I have a blueprint endpoint with a variable rule:

bp = Blueprint("rdap", __name__)

@bp.route("/domain/<string:domain>")
def query(domain: str) -> Tuple[Dict, int]:
    ...

Since that gives me an overwhelming cardinality, I wanted to drop the domain from the metric. There doesn't seem to be a good way, so I wanted to try a custom metric and took this straight from the examples:

@bp.route("/domain/<string:domain>")
@metrics.do_not_track()
@metrics.summary('requests_by_status', 'Request latencies by status',
                 labels={'status': lambda r: r.status_code})
def query(domain: str) -> Tuple[Dict, int]:
    ...

Unfortunately, that doesn't work: the metric gets created, but not computed.

If I remove the do_not_track(), it starts computing. This looks like a bug to me, since do_not_track() should only disable default metrics. Or am I missing something? Is this because I'm using blueprints and/or delayed app creation?

Help Needed for metric type summary

I have two questions

  1. Is there a way to store average latency time based on the histogram flask_http_request_duration_seconds_sum and flask_http_request_duration_seconds_count as another metric?

  2. What is the best way to pass r.status_code if you do not use @app.route to instantiate? I am using Blueprint and Flask-RESTful to register and instantiate separately, as in:

Following your example on creating a summary

@app.route('/status/<int:status>')
@metrics.summary('requests_by_status', 'Request latencies by status', labels={'status': lambda r: r.status_code})
def echo_status(status):
    return 'Status: %s' % status, status
from flask import Blueprint
from flask_restful import  Resource

api_v1 = Blueprint('api_v1', __name__, url_prefix='/api/v1')
rest_api_v1 = Api(api_v1, errors=errors)


def add_resource(resource, route, endpoint=None, defaults=None):
    """Add a resource for both trailing slash and no trailing slash to prevent redirects."""
    slashless = route.rstrip('/')
    endpoint = endpoint or resource.__name__.lower()
    defaults = defaults or {}

    # resources without slashes
    rest_api_v1.add_resource(resource,
                             slashless,
                             endpoint=endpoint + '__slashless',
                             defaults=defaults)

class Test(Resource):
    status = 200

    @staticmethod
    @metrics.summary('dummy_test', 'Test Request latencies by status',
                     labels={'dummy_test': lambda r: ???})
    def get():
        try:
            time.sleep(random.randint(0, 5))
            if random.choice([True, False]):
                return 'OK', 200
            else:
                return 'Not OK', 400
        except Exception as e:
            logger.error('%r' % e)

add_resource(Test, '/test', endpoint='test')

Restrict access to /metrics endpoint

Hi,
for security reasons, I would like to restrict access to the /metrics endpoint to a certain IP address (the external Prometheus server).
What would be the best way to implement this?
Thanks!


Python 3.6, Flask 1.0.2, prometheus-flask-exporter 0.5.1

_total suffix getting added to metric.counter

metrics.register_default(
    metrics.counter(
        'by_path_counter', 'Request count by request paths',
        labels={'path': lambda: request.path}
    )
)

When I try doing this, the metric comes out as by_path_counter_total; I don't understand why that is happening.

adding account info into labels

setup:

  1. using make_app() factory pattern

question:

  1. I want to add a label containing account_id :
def make_app():
   metrics._static_labels = {"account_id": current_user.id}
   metrics.init_app(app)
   return app

but obviously I get current_user: None, because metrics.init_app(app) and make_app() are called before the app gets the account info.

Any idea how to set a label containing account_id in make_app()?

I have tried using lambda: request.account.id, but it doesn't give me the result; rather, it gave a function object:

  1. I have tried using lambda: request.account.id -> returns <function make_app.<locals>.<lambda> at 0x7f28b6d4b730>
  2. tried PrometheusMetrics(app, group_by=custom_rule) -> but this is about adding another default metric, not a label.

WSGI Example with dedicated port

Hi,

First off, great work with prometheus_flask_exporter; it makes instrumentation easier to get into.

I am struggling to get the exporter exposed on a dedicated port (say TCP/9100) when using gunicorn or uwsgi. Do you have any example of such a setup?

Here is an example with the dev server and metrics on a dedicated port:

from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app, path=None)


@app.route('/')
def index():
    return 'Hello world'


if __name__ == '__main__':
    metrics.start_http_server(9100)
    app.run(debug=False, port=5000)

Doesn't seem to capture metrics for failed (500) requests?

Hi, thanks for sharing this, it seems very close to what I need. My use case is to collect metrics for all requests by status code, for monitoring and alerting. So it's very important that 500s be included in the metrics.

After enabling prometheus_flask_exporter, I get metrics at /metrics immediately (yay!) but if I cause an unhandled exception, a 500 is returned to the client, but no metrics are captured. Is this true for others, or just me?

http://flask.pocoo.org/docs/1.0/api/ suggests that after_request may not be called in this case, and it looks like that's the hook that the exporter is relying on.

Setting path for GunicornPrometheusMetrics

I am wondering whether there would be any negative side-effects to setting GunicornPrometheusMetrics().path to not be None.

I prefer to serve the metrics inside the main Flask app, rather than on its own HTTP server.

Is there any reason that setting path was explicitly disabled in the GunicornPrometheusMetrics constructor?

http.HTTPStatus not cast to int in status field

Hi

I have the following code that uses the http module for readability purposes.

import http

from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

app = Flask("app")
metrics = PrometheusMetrics(app)

@app.route("/")
def index_json_no_content():
    return {}, http.HTTPStatus.NO_CONTENT

@app.route("/json")
def json_route():
    return {"foo": "bar"}

if __name__ == "__main__":
    app.run()

Flask perfectly understands http.HTTPStatus. It is an enum.IntEnum, so it is cast to int implicitly.
See the curl request / response below:

> GET / HTTP/1.1
> Host: localhost:5000
> User-Agent: curl/7.58.0
> Accept: */*

< HTTP/1.0 204 NO CONTENT
< Content-Type: application/json
< Server: Werkzeug/1.0.1 Python/3.7.4
< Date: Fri, 05 Jun 2020 14:01:02 GMT

However, the generated metrics do not cast response.status_code to int.
See:

So the produced metrics mix int and str status codes.
See:

flask_http_request_duration_seconds_bucket{le="0.05",method="GET",path="/",status="HTTPStatus.NO_CONTENT"} 2.0
flask_http_request_duration_seconds_created{method="GET",path="/json",status="200"} 1.5913664307018697e+09

Is it possible to cast status_code to int, so Flask users can use the http module (or any class implementing __int__)?
Or is there a better way to let users use the http module and keep consistent results?

GunicornPrometheusMetrics Doesn't Export Python Metrics

When using prometheus_flask_exporter with gunicorn, it seems that the built-in metrics (python_gc, memory, cpu, etc.) that are usually exported when using this with the internal Flask server do not get pushed to /metrics.

Support using a prefix other than "flask"

I've noticed that all the metrics have the prefix "flask" hardcoded for them. In my environment, I'd like to use the library on multiple Flask-based applications. It would be nice to be able to assign each one a unique prefix. Is this something that would be welcome as an addition to the library, or is the official way to deal with this to relabel metrics on ingest?

Using `start_http_server` results into undefined behaviour

When starting a Flask app in debug mode (werkzeug server) on port, let's say, 5000, and using PrometheusMetrics.start_http_server on another port, let's say 9000, the Prometheus endpoint is served properly on port 9000, but also on port 5000, resulting in responses that come randomly from the original or the Prometheus Flask app.

I'm not sure if that's a limitation of werkzeug - however, when running the original Flask app with another WSGI container, this behaviour disappears.

So, why not just use start_http_server of prometheus_client under the hood? Is there a particular reason to serve metrics via Flask?

Compatibility with Counter.count_exceptions()

I've been using this package with a great degree of success on one of my projects. It's made my life a lot easier! There is one piece of functionality present in the prometheus_client package that I haven't been able to replicate with this package, however: Exception counting with Counter.count_exceptions(). Is it possible to replicate that functionality within this package, and if so, is there a recommended way of doing so? I've made several attempts to do so myself, but after spending a bit of time on it decided it may be better to reach out here.

Very much appreciate any info you might have on this!

Share `metrics` object across modules

Hello, my API is spread across several Blueprints and namespaces while the metrics object gets instantiated in my main app.py. Is there a built-in way to share the object across modules, maybe something like current_app in Flask?

app.py

import api.internal
import api.external

app = Flask(__name__)
metrics = GunicornPrometheusMetrics(app)

I have seen the app factory example, but there everything is set up in its own app_setup.py. But I cannot import app.py in my APIs as this would introduce circular dependencies.

Am I missing something (as a beginner) or should I just create a "singleton" method somewhere?

metrics = None

def get_current_metrics(app):
    global metrics

    if metrics is None and app:
        metrics = GunicornPrometheusMetrics(app)
        
    return metrics

Thanks in advance

I cannot exclude some metrics using the excluded_paths flag

We have developed this application and we rely on this library for the metrics. I'm trying to exclude some metrics from the total count. I'm not able to exclude the endpoints described in [1] from the metrics using excluded_paths.

I tried using:

excluded_paths=["/swagger-ui", "/api/v1/swagger-ui"]

but I had no success. Anyone had a similar issue?

Thanks for the help

References:
[1]

flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="0.005",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="0.01",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="0.025",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="0.05",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="0.075",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="0.1",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="0.25",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="0.5",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="0.75",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="1.0",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="2.5",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="5.0",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="7.5",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="10.0",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_swagger_ui_index",le="+Inf",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_count{endpoint="/api/v1./api/v1_swagger_ui_index",method="GET",status="200"} 5.0
flask_http_request_duration_seconds_sum{endpoint="/api/v1./api/v1_swagger_ui_index",method="GET",status="200"} 0.005000014789402485
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="0.005",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="0.01",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="0.025",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="0.05",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="0.075",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="0.1",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="0.25",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="0.5",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="0.75",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="1.0",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="2.5",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="5.0",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="7.5",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="10.0",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_bucket{endpoint="/api/v1./api/v1_openapi_json",le="+Inf",method="GET",status="200"} 1.0
flask_http_request_duration_seconds_count{endpoint="/api/v1./api/v1_openapi_json",method="GET",status="200"} 1.0

Love the tool, but one big problem, disk usage

I filled up my drive with gauges and counters with nothing other than the default configuration. It's for a very busy website and we blew through 10GB of space in one day on every node. Perhaps I have configured something incorrectly?

Metrics per view function name instead of URL path?

A new set of metrics for each path makes for a very noisy export, in our case. There could be literally thousands of paths.

Would it be possible to support tracking by the name of the function handling the request, rather than the path? Similar to what django-prometheus does.

So that the code:

metrics = prometheus_flask_exporter.PrometheusMetrics(app)

@route('/<page_name>')
def process_page(page_name):
    # ...

Instead of outputting:

flask_http_request_duration_seconds_bucket{le="0.005",method="GET",path="/page1",status="200"} 0.0
flask_http_request_duration_seconds_bucket{le="+Inf",method="GET",path="/page1",status="200"} 1.0
...
flask_http_request_duration_seconds_bucket{le="0.005",method="GET",path="/page2",status="200"} 0.0
flask_http_request_duration_seconds_bucket{le="+Inf",method="GET",path="/page2",status="200"} 1.0

Would output:

flask_http_request_duration_seconds_bucket{le="0.005",method="GET",function="process_page",status="200"} 0.0
flask_http_request_duration_seconds_bucket{le="+Inf",method="GET",function="process_page",status="200"} 2.0

Exclude swagger requests

I just started this on a clean Flask app and my metrics are already polluted by many Swagger metrics. How can I exclude endpoints which are not exposed by me, but as middleware through Flask?


Improvement: prometheus_multiproc_dir

Could this variable not be read from the application config? You could pass this value to any child process to avoid having it set within the environment. Would be nice for tests etc.

label callables aren't given response objects for blueprint endpoints

In a blueprint, request.endpoint is <blueprint name>.<endpoint>, e.g. auth.login

This line doesn't work:

if request.endpoint == f.__name__:

Example:

from flask import Flask, Blueprint
from prometheus_flask_exporter import PrometheusMetrics

metrics = PrometheusMetrics(app=None)

bp = Blueprint('myblueprint', __name__)

@bp.route('/hello')
@metrics.summary('requests_by_status', 'Request latencies by status',
                 labels={'status': lambda r: r.status_code})
def hello():
    return 'Hello!'

app = Flask(__name__)
app.register_blueprint(bp)
metrics.init_app(app)

app.run()

Exceptions shouldn't bypass Flask's error handling

When an exception occurs in the view function, prometheus_flask_exporter turns it into a custom response, bypassing all of the Flask app's error handlers:

except Exception as ex:
    response = make_response('Exception: %s' % ex, 500)

It gets returned from the function on this line:

You could instead call Flask.handle_user_exception. You can see where that's usually called here. It should also be called for HTTPException.

P.S. For non-error responses, make_response is called but isn't returned. This could be expensive (serializing a response, for example), so it should be returned rather than letting Flask call it again:

response_for_metric = make_response(response)

Option to add new metrics or new labels

Hello,

Is there an option to add a new metric or a new label to the existing metrics?
For example, I want to add a label with the requester IP to the default metrics,

or add a new metric that contains the location data of the requester IP.

I tried to use group_by, but it overwrites the path label, and I can't add more than one label.

Thanks,
Idan

Gauge with callable label_value using deprecated function

This is running on Python 3.7 + Flask, on a Mac.

Abbreviated (and anonymized) setup is:

@app.route('URL_RULE', methods=['POST'])
@metrics.gauge('name',
               'description',
               labels={'group_id': lambda: request.view_args.get('group_id')},
               multiprocess_mode='livesum')
def bulk_post(group_id: str):
    pass

I get the following warning:

src/api/app.py:642: in <module>
    multiprocess_mode='livesum')
../../../../.local/share/virtualenvs/venv/lib/python3.7/site-packages/prometheus_flask_exporter/__init__.py:369: in gauge
    before=lambda metric: metric.inc()
../../../../.local/share/virtualenvs/venv/lib/python3.7/site-packages/prometheus_flask_exporter/__init__.py:426: in _track
    ) if labels else tuple()
../../../../.local/share/virtualenvs/venv/lib/python3.7/site-packages/prometheus_flask_exporter/__init__.py:425: in <genexpr>
    for key, call in labels.items()
../../../../.local/share/virtualenvs/venv/lib/python3.7/site-packages/prometheus_flask_exporter/__init__.py:418: in label_value
    if inspect.getargspec(f).args:
/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/inspect.py:1078: in getargspec
    DeprecationWarning, stacklevel=2)
E   DeprecationWarning: inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec()

make_response does not work with pydantic

Setup:

The API I'm working with uses Connexion and Pydantic as return wrapper for type safety.

Problem:

Connexion can handle pydantic classes and is able to parse them into a proper response. That is, a normal response could be a single pydantic dataclass object, even without any status code. (https://pydantic-docs.helpmanual.io/usage/dataclasses/)
As soon as I use a decorator on such an endpoint, the response is 500.

It would be really appreciated if this behavior could be deactivated somehow.

response = make_response(response)

Wiki: Need to add parameter in labels

What is the recommended way of adding a parameter to metrics?

I mean, currently we have:
invocation_by_type_created{item_type="hey"} 1.5603457069620097e+09
which can be achieved by:

@metrics.counter('invocation_by_type', 'Number of invocations by type',
         labels={'item_type': "hey"})
def echo_status(status):
    return 'Status: %s' % status, status

Now, we need to have some timedelta included in the output metrics,
something like:
invocation_by_type_created{item_type="hey", timedelta=52} 1.5603457069620097e+09

which is of course the function execution time.

usage of flask_exporter_info

Hi, I would like to make use of this metric to count the active workers under gunicorn.

I use the app pattern, but found that when gunicorn re-spawns a new worker, the pid of the old one is still being exported. Could you give me some idea how to achieve the counting, or how to remove the old pid from the metrics?
