
pygelf's People

Contributors

bmerry, chilledornaments, coretl, dschweisguth, eorochena, et304383, etiennepelletier, fladi, gjermund66, gvsheva, ivolzhevakv, jsargiot, keeprocking, kepeder, sleuth56


pygelf's Issues

TLS support?

Hi there

I want to push GELF over the Internet and would like to use graylog-server's TLS support on its GELF port, but I can't find a client that supports TLS (besides their Java graylog-collector).

Would it be possible to add TLS support to pygelf?
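
For context, a rough sketch of what TLS-enabled usage could look like (GelfTlsHandler and its validate flag are referenced elsewhere in this tracker; the exact constructor arguments here are assumptions):

    import logging

    from pygelf import GelfTlsHandler

    logger = logging.getLogger('app')
    # validate=False would skip server-certificate validation (assumption)
    logger.addHandler(GelfTlsHandler(host='graylog.example.com', port=12201, validate=False))
    logger.info('hello gelf over TLS')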

exc_info is false

Hello.
The error appears at this point: https://github.com/keeprocking/pygelf/blob/master/pygelf/gelf.py#L40

Traceback (most recent call last):
  File "/usr/lib/python3.5/logging/handlers.py", line 621, in emit
    s = self.makePickle(record)
  File "/usr/local/lib/python3.5/dist-packages/pygelf/handlers.py", line 86, in makePickle
    return self.convert_record_to_gelf(record)
  File "/usr/local/lib/python3.5/dist-packages/pygelf/handlers.py", line 34, in convert_record_to_gelf
    gelf.make(record, self.domain, self.debug, self.version, self.additional_fields, self.include_extra_fields),
  File "/usr/local/lib/python3.5/dist-packages/pygelf/gelf.py", line 41, in make
    gelf['full_message'] = '\n'.join(traceback.format_exception(*record.exc_info))
TypeError: format_exception() argument after * must be an iterable, not bool

In the documentation:

If exc_info does not evaluate as false, it causes exception information to be added to the logging message. If an exception tuple (in the format returned by sys.exc_info()) or an exception instance is provided, it is used; otherwise, sys.exc_info() is called to get the exception information.

Maybe add a check?
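
A hedged sketch of such a guard (the helper name is made up; this is not the maintainers' actual fix):

    import traceback

    def full_message_from(record):
        # hypothetical helper: unpack exc_info only when it is a real
        # (type, value, traceback) tuple, never a bare True/False flag
        if record.exc_info and not isinstance(record.exc_info, bool):
            return '\n'.join(traceback.format_exception(*record.exc_info))
        return None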

Add error handling

Currently GelfHttpHandler and GelfHttpsHandler do not use handleError, which goes against the Python documentation's guidance on writing custom handlers.

Solution:

def emit(self, record):
    try:
        ...  # existing emit logic here
    except Exception:
        self.handleError(record)

Add support for using with a QueueHandler/QueueListener

If Python 3's QueueHandler/QueueListener are used, then record.exc_info is None, but record.exc_text contains the traceback. Reading the source of the logging module, it appears that exc_text can safely be assumed to contain the exception text when it is not None, so it can be used to populate Graylog's full_message.
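
A hedged sketch of that fallback (the helper name is made up; this is not the actual change in #18):

    import traceback

    def full_message_from(record):
        # hypothetical helper: prefer a live exc_info tuple, otherwise fall back
        # to the pre-formatted traceback text that survives the queue round-trip
        if record.exc_info and not isinstance(record.exc_info, bool):
            return '\n'.join(traceback.format_exception(*record.exc_info))
        if record.exc_text:
            return record.exc_text
        return None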

#18 contains an attempt to fix this

Inclusion of additional_env_fields fails

The feature introduced in #47 seems to fail.

Python version: 3.8
OS: Windows 10
pygelf version: 0.4.1

Example of log handler used:

    gray_handler = GelfTcpHandler(
        host='1.2.3.4',
        port='12201',
        include_extra_fields=True,
        debug=True,
        static_fields={'api': 'my-api', 'env': 'test'},
        additional_env_fields={'env': 'FLASK_ENV'}
    )

Error message:

--- Logging error ---
Traceback (most recent call last):
  File "C:\bin\Python\Python38\lib\logging\handlers.py", line 631, in emit
    s = self.makePickle(record)
  File "...\lib\site-packages\pygelf\handlers.py", line 62, in makePickle
    return self.convert_record_to_gelf(record) + b'\x00'
  File "...\lib\site-packages\pygelf\handlers.py", line 42, in convert_record_to_gelf
    gelf.make(record, self.domain, self.debug, self.version, self.additional_fields, self.additional_env_fields, self.include_extra_fields),
  File "...\lib\site-packages\pygelf\gelf.py", line 60, in make
    for name, env in additional_env_fields:
ValueError: too many values to unpack (expected 2)
Call stack:

Problematic code:

for name, env in additional_env_fields:

Since additional_env_fields is a dict, it must be iterated over its items, i.e.:

for name, env in additional_env_fields.items():

GelfTcpHandler/GelfUdpHandler don't work under CentOS 6

Hi there

I guess pygelf is written for Python 2.7. Under Python 2.6, those classes don't work; they generate the following error:

Traceback (most recent call last):
  File "/usr/local/bin/filelogs-to-gelf2", line 76, in <module>
    handle = GelfTcpHandler(gelf_server, 12201,0,0)
  File "/usr/lib/python2.6/site-packages/pygelf/tcp.py", line 19, in __init__
    super(GelfTcpHandler, self).__init__(host, port)
TypeError: super() argument 1 must be type, not classobj

The fix is simple: the class just needs to inherit from object explicitly, i.e.

class GelfTcpHandler(SocketHandler, object):

Apparently (this is what Google told me) this works out of the box under Python 2.7, whereas under 2.6 object has to be declared explicitly?

Anyway, making that change makes pygelf work on CentOS 6 (and, I guess, any other Python 2.6 system).

DNS lookup happening on every log message (UDP)

When using GelfUdpHandler with a hostname (rather than an IP address) as the destination, the name is resolved via DNS on every log message, which is a big performance concern. This behaviour is inherited from Python's DatagramHandler (see https://bugs.python.org/issue47149), but pygelf could presumably work around it by resolving the hostname itself at startup, as in the sketch below.
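
A hedged sketch of such a workaround on the caller's side (the subclass name is made up; this is not pygelf's code):

    import socket

    from pygelf import GelfUdpHandler

    class ResolvedGelfUdpHandler(GelfUdpHandler):
        """Hypothetical wrapper: resolve the destination once, then always send to the IP."""

        def __init__(self, host, port, **kwargs):
            # gethostbyname() runs a single time here instead of on every datagram
            super().__init__(socket.gethostbyname(host), port, **kwargs)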

No error if server port is closed

I am using pygelf to send logs to Graylog.

This is my code:

self.gelfhandler = GelfTcpHandler(host=str(host), port=int(port),include_extra_fields=True)

and then I am sending the log entry (already converted to GELF):

self.gelfhandler.send(json.dumps(message))

but I am not getting any error if the Graylog port is not open or suddenly closes in between.

Make SKIP_LIST customizable

Currently, there is no way to specify which fields from the log record I want to skip and which ones I want to keep.
This would be a nice improvement.

Support RFC 5424 timestamp

Would you be open to supporting a timestamp format as specified by RFC 5424 (section 6.2.3), for readability? I know that this is not specified by GELF 1.1, so I would suggest making it configurable via a timestamp_format='rfc-3339' parameter.

Related standards: ISO 8601, RFC 3339.

Ref: severb/graypy#99
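
For illustration, converting a log record's epoch timestamp (what GELF 1.1 specifies) to an RFC 3339 string could look roughly like this (timestamp_format above is only a proposed parameter, not an existing option):

    from datetime import datetime, timezone

    record_created = 1700000000  # e.g. logging.LogRecord.created (seconds since the epoch)
    print(datetime.fromtimestamp(record_created, tz=timezone.utc).isoformat())
    # -> 2023-11-14T22:13:20+00:00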

Add more flexibility for configuring additional fields

Hi, I just started using this package recently, and I found a possible change that would make its configuration more flexible.

Currently I configure the GELF handler in Django's settings.py like this:

handlers = {
    'level': 'INFO',
    'class': 'pygelf.GelfUdpHandler',
    'host': GRAYLOG_ADDR,
    'port': GRAYLOG_PORT,
    '_extra': 'myExtra',
    '_one_more': 'myOneMore'
}

But I don't want my handlers to be a giant dictionary with a bunch of messed-up fields, so I want to pull these things out into a separate extra_fields_dict and pass it in handlers like below:

extra_fields_dict = {
    '_extra': 'myExtra',
    '_one_more': 'myOneMore'
}
handlers = {
    'level': 'INFO',
    'class': 'pygelf.GelfUdpHandler',
    'host': GRAYLOG_ADDR,
    'port': GRAYLOG_PORT,
    'additional_fields': extra_fields_dict
}

I assume this would not be a difficult change, so I just wonder if you could add this flexibility to the configuration.

Can't connect with TLS

As a test, I'm not including any certs and have validate set to False, without any luck. Switching to a TCP handler does work, though. Correct me if I'm wrong, but wouldn't a TLS handler act the same as a TCP handler if validate is set to False?

Set full_message

Maybe describe in the README how to set full_message, if it's possible? Or make it possible if it's not?
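
Based on the exc_info handling visible in the tracebacks elsewhere in this tracker, one way full_message appears to get populated is by logging an exception; a small, hedged example (not taken from the README):

    import logging

    logger = logging.getLogger('app')

    try:
        1 / 0
    except ZeroDivisionError:
        # the formatted traceback should end up in the GELF full_message field
        logger.exception('division failed')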

GelfKafkaHandler

Hi guys,

What I needed is a way for logs created in my Python code (e.g. logger.info("whatever")) to be converted into the GELF format and then sent via a Kafka bus to Graylog (GELF Kafka input). I found this module and it provides nearly exactly what I needed, except for a "GelfKafkaHandler".

I was thinking of an additional handler and was glad to see that it was very easy to implement. I wrote a few extra lines of code and tested them on a local setup, and it works!

So now, please let me know what you think! Is this a sensible feature?

Looking forward to your feedback.
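
For discussion, a hedged sketch of what such a handler could look like (this is not the contributor's actual code; the kafka-python KafkaProducer API is an assumption, and the GELF dict is built by hand rather than through pygelf's internals):

    import json
    import logging
    import socket

    from kafka import KafkaProducer  # assumption: kafka-python is installed

    class GelfKafkaHandler(logging.Handler):
        """Hypothetical handler: publish GELF-formatted records to a Kafka topic."""

        def __init__(self, bootstrap_servers, topic):
            super().__init__()
            self.topic = topic
            self.producer = KafkaProducer(bootstrap_servers=bootstrap_servers)

        def emit(self, record):
            try:
                message = {
                    'version': '1.1',
                    'host': socket.gethostname(),
                    'short_message': record.getMessage(),
                    'timestamp': record.created,
                }
                self.producer.send(self.topic, json.dumps(message).encode('utf-8'))
            except Exception:
                self.handleError(record)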

every GELF record triggers getfqdn?

Hi there

I was debugging an unrelated issue and noticed tonnes of DNS lookups for the hostname I was running pygelf on (a permanent process, converting Squid logs into GELF in real time). It looks to me like pygelf is doing a DNS lookup for $HOSTNAME for every GELF record it generates. That's a nasty overhead if you're generating streams of 100+ GELF records per second.

I don't know for sure, but could it be the "'source': socket.getfqdn()" in gelf.py that's doing that?

Shouldn't that be replaced by resolving it once into a variable at startup and then just reusing that variable afterwards? I know the hostname could be put into /etc/hosts, but strace showed the DNS lookups occurring before /etc/hosts was opened, so I don't think that would even help; and even then it's an overhead that could be skipped if the lookup only happened once.
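
A hedged sketch of the suggested change (a module-level constant; not pygelf's actual code):

    import socket

    _SOURCE = socket.getfqdn()  # resolved once, at import time

    def source_for_record():
        # hypothetical helper: every GELF record reuses the cached value instead
        # of triggering a fresh DNS lookup per message
        return _SOURCE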

TCP/TLS streams do not work with compression enabled

After some debugging time, I think I just came to the following conclusion: compress=True cannot work with TCP and TLS handlers, due to the GELF message structure.

In particular, I was experiencing lots of message loss in simple (non-load) test scenarios using the TCP and TLS handlers. Delivery was fine, however, when switching to UDP or when disabling compression. I think this is due to GELF using \x00 as a message separator, and zlib compression introducing such bytes as artifacts of the deflation.

To prevent such subtle losses in the future, I think GelfTcpHandler and GelfTlsHandler should be changed to ignore the compress parameter and force it to False for all stream transports.
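
A quick way to see why this can happen: deflate output is unrestricted binary, so the \x00 byte that GELF TCP framing uses as a delimiter can appear inside the compressed payload (the payload below is arbitrary; the count varies by input):

    import zlib

    payload = b''.join(
        ('{"version": "1.1", "short_message": "event %d"}' % i).encode()
        for i in range(200)
    )
    compressed = zlib.compress(payload)
    # any non-zero count means the framed, compressed message would be cut short
    print(len(compressed), compressed.count(b'\x00'))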

Option to use LevelName instead of LevelNumber

I noticed the log level is logged as a number, but the README seems to promise the level name.

What do you think about an option to send the name instead of the level number? It's less ambiguous, IMO.
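
For reference, both values are already present on the stdlib log record; only the option to choose between them would be new (and its name is not decided here):

    import logging

    record = logging.LogRecord('app', logging.WARNING, __file__, 1, 'msg', None, None)
    print(record.levelno, record.levelname)  # -> 30 WARNING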

source Field Value -- Feature Request

It seems that socket.getfqdn() (handlers.py) falls back to assigning the first FQDN it finds in /etc/hosts. So when a message comes into Graylog, the source is localhost.localdomain (for people who haven't edited /etc/hosts manually). Calling socket.gethostname() instead returns the system's hostname as you'd expect to see it, so you know where the messages are actually coming from.
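
For comparison (the exact output depends on the machine's resolver configuration and /etc/hosts):

    import socket

    # what the current code uses; may fall back to the first matching name in
    # /etc/hosts, e.g. 'localhost.localdomain' on an unedited system
    print(socket.getfqdn())

    # what this request proposes; returns the configured system hostname
    print(socket.gethostname())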

I'm happy to create a PR for the change.

If it'd be a breaking change (I don't see how it would be), maybe a hint in the documentation would save some time and Googling.

Very cool tool, by the way! Saved me a ton of headaches :)

The Usage section in the README has a problem

In the last line of that sample:

logging.info('hello gelf')

You should use the logger you defined to log the message, not the logging module itself. So I request fixing this typo to:

logger.info('hello gelf')

just in case someone else runs into the same problem.

KeyError when logging.basicConfig is initialized with an object instead of an int

Sorry, newbie here. I get the following:

--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib/python3.6/logging/handlers.py", line 633, in emit
    s = self.makePickle(record)
  File "/usr/local/lib/python3.6/dist-packages/pygelf/handlers.py", line 58, in makePickle
    return self.convert_record_to_gelf(record) + b'\x00'
  File "/usr/local/lib/python3.6/dist-packages/pygelf/handlers.py", line 38, in convert_record_to_gelf
    gelf.make(record, self.domain, self.debug, self.version, self.additional_fields, self.include_extra_fields),
  File "/usr/local/lib/python3.6/dist-packages/pygelf/gelf.py", line 37, in make
    'level': LEVELS[record.levelno],
KeyError: 5

Which seems to point at this:

LEVELS = {

The enumeration-style LEVELS dict doesn't handle the case where a level 5 record is emitted. I don't know the specifics here, but is this possibly a bug?

I would clone the repo and propose a PR, but I'm still new to GitHub and would prefer to avoid the cognitive burden while I'm working on this task.
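
For anyone picking this up, a hedged reproduction of how a levelno of 5 can reach the handler (the TRACE name is just an example):

    import logging

    logging.addLevelName(5, 'TRACE')             # a custom level below DEBUG
    logging.basicConfig(level=5)
    logging.getLogger().log(5, 'trace message')  # record.levelno == 5, which is not
                                                 # a key in pygelf's LEVELS mapping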
