t0xa / gelfj
Graylog Extended Log Format (GELF) implementation in Java and log4j appender without any dependencies.
Home Page: https://github.com/t0xa/gelfj/wiki
License: Other
Is there any chance of supporting log4j2 anytime soon?
In the pom.xml the project targets JDK 1.5, while GelfMessage.java uses methods (Arrays.copyOf(), Arrays.copyOfRange()) that were only introduced in JDK 1.6.
I see in the log4j.xml example you can set the threshold, I added this parameter to my log4j.properties file but it doesn't seem to be honoring that. Is this parameter supported?
When using a positional parameter string like the following:
{%1$s}
the GelfHandler throws an uncaught exception.
It would probably be a good idea to make this behaviour configurable (to disable MessageFormat), to change the ordering of the formatters, or to catch the exception. Other loggers we use do not show this behaviour.
Is there any workaround for this?
I added a reproducer under e5a4730.
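For illustration, a hypothetical wrapper (not gelfj API) shows the failure mode and the catch-based workaround suggested above: `MessageFormat` rejects `{%1$s}` because `%1$s` is not a valid argument index, so the safe fallback is to return the raw pattern.

```java
import java.text.MessageFormat;

// Hypothetical helper, not part of gelfj: demonstrates that
// MessageFormat.format throws IllegalArgumentException on patterns like
// "{%1$s}", and that catching it keeps the logging path alive.
public class SafeFormat {
    public static String format(String pattern, Object... args) {
        try {
            return MessageFormat.format(pattern, args);
        } catch (IllegalArgumentException e) {
            // Pattern is not a valid MessageFormat template; log it verbatim.
            return pattern;
        }
    }
}
```

A handler using a guard like this would emit the literal `{%1$s}` instead of propagating the exception.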
It'd be very helpful if each of the releases was tagged so they can be easily checked out of source control and worked with.
Since GitHub removed the ability to host arbitrary files for download, I guess the gelfj JARs need a new home. It'd be good to get this info on the wiki and in the README.
Hello,
I'm a beginner and have trouble with the implementation of graylog2.
My AM-Q doesn't have a pom.xml, so I will use a jar file. I downloaded gelfj 1.1.7 (http://mvnrepository.com/artifact/org.graylog2/gelfj/1.1.7) and added it to my classpath. Then I configured my graylog2 appender and changed log4j.rootLogger.
But it doesn't work. Did I forget something? What about a graylog handler?
Thank you.
Regards
lnix1988
The old version of log4j in the pom.xml is causing problems when sending messages to graylog. Can you update the log4j dependency to version 1.2.17?
In the pom.xml, the RabbitMQ client dependency can and, I believe, should be optional:
<dependency>
<groupId>com.rabbitmq</groupId>
<artifactId>amqp-client</artifactId>
<version>3.0.4</version>
<optional>true</optional>
</dependency>
Note the <optional>true</optional>.
As long as GelfAMQPSender is not instantiated (which is the case if you do not use AMQP), I believe everything will be fine (i.e. no missing-class exceptions).
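A class-presence check like the following (a hypothetical guard, not code from gelfj) illustrates why the optional dependency is safe: the RabbitMQ classes are only loaded when something actually references them, so callers that never touch the AMQP sender never trigger a `NoClassDefFoundError`.

```java
// Hypothetical guard, not gelfj code: probe for the RabbitMQ client on the
// classpath before instantiating anything AMQP-related.
public class AmqpAvailability {
    public static boolean amqpClientPresent() {
        try {
            // ConnectionFactory is the RabbitMQ client's entry point.
            Class.forName("com.rabbitmq.client.ConnectionFactory");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}
```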
The very useful option "additionalFields" is not documented on the project start page (https://github.com/t0xa/gelfj)
Hey,
Getting ready to try this out. Talking with joemiller on #graylog2, gelfj requires Log4j 1.2.16. Here's his quick fix for backwards compatibility, which could possibly be determined programmatically and incorporated into gelfj:
+++ b/src/main/java/org/graylog2/log/GelfAppender.java
@@ -64,7 +64,9 @@ public class GelfAppender extends AppenderSkeleton {
    @Override
    protected void append(LoggingEvent event) {
        Level level = event.getLevel();
-       long timeStamp = event.getTimeStamp();
+       // long timeStamp = event.getTimeStamp();
+       long timeStamp = System.currentTimeMillis()/1000;
Hi, I am trying to figure out how to add context to a logged GELF message (I'm a noob in Java), with no luck so far.
I.e. I want to log a message "User Logged In" with additional fields (ip, browser, someInfo) which would be translated to ctxt_ fields on graylog. Is it possible to do this in Java?
I'm getting null pointer exceptions that appear to originate from the graylog appender. Looking into the code, it appears the exception could be thrown if the bytesList array contains null entries. I have some log messages that dump an entire HTTP response for debugging, so perhaps the chunking logic doesn't properly handle large messages?
java.lang.NullPointerException
at sun.nio.ch.IOUtil.write(IOUtil.java:147)
at sun.nio.ch.DatagramChannelImpl.write0(DatagramChannelImpl.java:439)
at sun.nio.ch.DatagramChannelImpl.write(DatagramChannelImpl.java:456)
at java.nio.channels.DatagramChannel.write(DatagramChannel.java:435)
at org.graylog2.GelfSender.sendDatagrams(GelfSender.java:41)
at org.graylog2.GelfSender.sendMessage(GelfSender.java:36)
at org.graylog2.log.GelfAppender.append(GelfAppender.java:139)
at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
at org.apache.log4j.Category.callAppenders(Category.java:206)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:281)
When a chunked message is sent to the graylog2 server, the server will never be able to decode it:
2012-11-01 11:18:45,742 DEBUG: org.graylog2.messagehandlers.gelf.ChunkedGELFClientHandler - Got GELF message chunk: GELFClientChunk:
Hash: bbb1b89a6c657373 Sequence: 0/4 Arrival: 1351768725 Data size: 4802
As you can see, the data size is larger than the max chunk size; this is because of the automatic concatenation of the ByteBuffers in channel.write().
I will submit a pull request resolving the issue, making the requests in graylog2 look like:
2012-11-01 11:21:38,017 DEBUG: org.graylog2.messagehandlers.gelf.GELFClientHandlerThread - Received message is chunked. Handling now.
2012-11-01 11:21:38,017 DEBUG: org.graylog2.messagehandlers.gelf.ChunkedGELFClientHandler - Got GELF message chunk: GELFClientChunk:
Hash: bbb459916c657373 Sequence: 0/4 Arrival: 1351768898 Data size: 1420
2012-11-01 11:21:38,018 DEBUG: org.graylog2.messagehandlers.gelf.GELFClientHandlerThread - Received message is chunked. Handling now.
2012-11-01 11:21:38,018 DEBUG: org.graylog2.messagehandlers.gelf.ChunkedGELFClientHandler - Got GELF message chunk: GELFClientChunk:
Hash: bbb459916c657373 Sequence: 1/4 Arrival: 1351768898 Data size: 1420
2012-11-01 11:21:38,019 DEBUG: org.graylog2.messagehandlers.gelf.GELFClientHandlerThread - Received message is chunked. Handling now.
2012-11-01 11:21:38,019 DEBUG: org.graylog2.messagehandlers.gelf.ChunkedGELFClientHandler - Got GELF message chunk: GELFClientChunk:
Hash: bbb459916c657373 Sequence: 2/4 Arrival: 1351768898 Data size: 1420
2012-11-01 11:21:38,019 DEBUG: org.graylog2.messagehandlers.gelf.GELFClientHandlerThread - Received message is chunked. Handling now.
2012-11-01 11:21:38,020 DEBUG: org.graylog2.messagehandlers.gelf.ChunkedGELFClientHandler - Got GELF message chunk: GELFClientChunk:
Hash: bbb459916c657373 Sequence: 3/4 Arrival: 1351768898 Data size: 504
The log4j appender has a little bug: when the graylog host is specified with a protocol prefix (tcp: or udp:), the host is lost.
It's a case of the wrong substring; it should be substring(4), not substring(0, 4), in these two places:
if (graylogHost.startsWith("tcp:")) {
String tcpGraylogHost = graylogHost.substring(0, 4);
gelfSender = new GelfTCPSender(tcpGraylogHost, graylogPort);
} else if (graylogHost.startsWith("udp:")) {
String udpGraylogHost = graylogHost.substring(0, 4);
gelfSender = new GelfUDPSender(udpGraylogHost, graylogPort);
} else {
gelfSender = new GelfUDPSender(graylogHost, graylogPort);
}
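A minimal, self-contained sketch of the corrected prefix handling (a hypothetical helper, not gelfj's actual class): `substring(4)` drops the 4-character scheme prefix, whereas `substring(0, 4)` keeps only the prefix and discards the host.

```java
// Hypothetical helper illustrating the fix: strip the scheme prefix with
// substring(4) instead of keeping it with substring(0, 4).
public class HostParse {
    public static String stripScheme(String graylogHost) {
        if (graylogHost.startsWith("tcp:") || graylogHost.startsWith("udp:")) {
            return graylogHost.substring(4); // drops "tcp:"/"udp:", keeps the host
        }
        return graylogHost;
    }
}
```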
Tried to compile the latest version (as of today) with "mvn package":
(NOTE: "mvn test" results in a success!)
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building gelfj 1.0.1
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ gelfj ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory C:\Java\gelfj-master\src\main\resources
[INFO]
[INFO] --- maven-compiler-plugin:2.0.2:compile (default-compile) @ gelfj ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ gelfj ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:2.0.2:testCompile (default-testCompile) @ gelfj ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-surefire-plugin:2.10:test (default-test) @ gelfj ---
[INFO] Surefire report directory: C:\Java\gelfj-master\target\surefire-reports
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running org.graylog2.GelfMessageTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.118 sec
Running org.graylog2.log.GelfAppenderTest
Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 21.088 sec <<< FAILURE!
Running org.graylog2.logging.GelfHandlerTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.025 sec
Results :
Failed tests: handleNDC(org.graylog2.log.GelfAppenderTest): expected:<Foobar[]> but was:<Foobar[ Foobar]>
Tests run: 17, Failures: 1, Errors: 0, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 22.809s
[INFO] Finished at: Thu Apr 04 12:21:47 BST 2013
[INFO] Final Memory: 6M/120M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.10:test (default-test) on project gelfj: There are test failures.
[ERROR]
[ERROR] Please refer to C:\Java\gelfj-master\target\surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
When I log a JSON message to GelfAppender it does not understand JSON. I therefore get a single 'message' field in Graylog with a JSON string.
What I really wanted is for the GelfAppender to 'understand' a JSON message and populate the additional fields for me. There doesn't seem to be a way to do this, although please correct me if there is?
If not, then I think an easy way to do this is to extend GelfAppender into a GelfJsonAppender and set the additional fields in the append method. If you agree, I am happy to do a PR.
Regards, James
JBoss AS 7 only sets Handler properties if the Handler has corresponding setters; see custom-handler (https://docs.jboss.org/author/display/AS71/Logging+Configuration).
JBoss/WildFly does invoke setters multiple times, so multiple 'additionalField' keys are processed correctly.
In case the Graylog server wasn't available and the log4j property file then changes (with "watch"), the close method will fail with an NPE. Here's the fix:
public class MyGelfAppender extends GelfAppender {
@Override
public void close() {
GelfSender x = this.getGelfSender();
if (x != null) {
x.close();
}
}
}
We are seeing duplicate log entries with the same millisecond timestamp but have verified that the application is only outputting one entry to the local log files.
here is our log4j.properties file
# Set the default level to WARN
# Each log appender can override this setting, but this keeps imported libraries from spewing forth junk
# this is the version of the file from /projects/logging
log4j.debug=TRUE
log4j.rootLogger=WARN, LFS, I, D, graylog2
# Log clearwater code at debug
log4j.logger.com.ca=DEBUG, graylog2
log4j.logger.com.clearwateranalytics=DEBUG, graylog2
# Set the request package to log at INFO to save logging, this includes the RequestMethod
log4j.logger.com.ca.wsclient.request=INFO, graylog2
# INFO Logger
log4j.appender.I=org.apache.log4j.RollingFileAppender
log4j.appender.I.layout=org.apache.log4j.PatternLayout
log4j.appender.I.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSSS} %p %t %c %m%n
log4j.appender.I.Threshold=INFO
log4j.appender.I.MaxFileSize=50MB
log4j.appender.I.File=/var/log/tomcat6/donner-ws.info
# how many 50MB files to keep
log4j.appender.I.MaxBackupIndex=5
# DEBUG Logger
log4j.appender.D=org.apache.log4j.RollingFileAppender
log4j.appender.D.layout=org.apache.log4j.PatternLayout
log4j.appender.D.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSSS} %p %t %c %m%n
log4j.appender.D.Threshold=DEBUG
log4j.appender.D.MaxFileSize=50MB
log4j.appender.D.File=/var/log/tomcat6/donner-ws.debug
# how many 50MB files to keep
log4j.appender.D.MaxBackupIndex=2
# Define the graylog2 destination
log4j.appender.graylog2=org.graylog2.log.GelfAppender
log4j.appender.graylog2.graylogHost=${grayloghost}
log4j.appender.graylog2.facility=donner-ws
log4j.appender.graylog2.layout=org.apache.log4j.PatternLayout
log4j.appender.graylog2.extractStacktrace=true
log4j.appender.graylog2.addExtendedInformation=true
log4j.appender.graylog2.Threshold=INFO
log4j.appender.graylog2.additionalFields={'application': 'donner-ws'}
Not sure if this issue has been seen before or if we have a bad configuration but any help would be appreciated.
While developing gelfino (a tiny GELF server) I used your GELF client as a bench to test message sending. The sequence and count fields are wrong (I get 52 and 53 as values); I think it's related to the way ints are converted to bytes (gelf4r seems to do this right).
I've used both gelf4r and gelf4j and they work fine.
Thanks
Ronen
Hi.
Can you add support for closing the connection after a period of inactivity?
Here is a simple example of what I want (class GelfTCPSender):
try {
// reconnect if necessary or 5 min timeout occurred
if (socket == null || os == null || lastSendTime + 5 * 60 * 1000 < System.currentTimeMillis()) {
if (socket != null){
socket.close();
}
socket = new Socket(host, port);
os = socket.getOutputStream();
}
os.write(message.toTCPBuffer().array());
lastSendTime = System.currentTimeMillis();
return true;
} catch (IOException e) {
// if an error occurs, signal failure
socket = null;
return false;
}
I have some problems with firewall and keep-alive sessions.
The current AMQP sender does not initiate the channels for confirms correctly, which causes connections to be instantiated on every logging request.
The channel will be null every time, because an exception will be thrown if you try to publish to the channel without stating that you want confirms.
From GelfAMQPSender:
try {
// establish the connection the first time
if (channel == null) {
connection = factory.newConnection();
channel = connection.createChannel();
//MISSING channel.confirmSelect();
}
// SNIP property setting
channel.basicPublish(exchangeName, routingKey, properties, message.toAMQPBuffer().array());
channel.waitForConfirms();
return true;
} catch (Exception e) {
channel = null;
tries++;
}
(I tested this with RabbitMQ 3.2.3. Perhaps other versions don't require calling confirmSelect(); otherwise I'm not sure how it could have worked.)
Also, two features need to be implemented for this appender to be production ready:
1. An option to disable waitForConfirms. RabbitMQ and AMQP are already rather reliable, and confirms just slow it down. In fact, one fix to the above could be to simply not use confirms.
2. An AsynchGelfSender that uses Java concurrent queues and could be used by the TCP and UDP appenders. The Log4j documentation warns that generating caller location information is extremely slow and should be avoided unless execution speed is not an issue.
It is common to log exceptions without accompanying message like this:
try {
throw new IOException();
} catch (Exception ex) {
LOG.error(null, ex);
throw new RuntimeException(ex);
}
but the message is not processed by GELFJ. It is rejected by GelfMessage.isValid due to the empty shortMessage field, and while there is a test that seems to cover this pattern, it does not reveal the problem due to overridden behaviour in the test sender.
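One possible guard (an assumption about a fix, not existing gelfj code) is to fall back to the throwable's `toString()` when the message is null or blank, so the event still carries a non-empty short message and passes validation:

```java
// Hedged sketch: substitute a non-empty short message for LOG.error(null, ex)
// style calls, so an isValid-style check does not silently drop the event.
public class MessageFallback {
    public static String shortMessageFor(String message, Throwable t) {
        if (message != null && !message.trim().isEmpty()) {
            return message;
        }
        // No usable message: fall back to the exception's own description.
        return t != null ? t.toString() : "(no message)";
    }
}
```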
Hi, I am using the gelfj module in my application's log4j.xml to send application logs to logstash, which in turn are indexed in Elasticsearch.
I have observed that at times when my application starts, I see the error below in catalina.out. If I restart my application, GELF initializes and I don't see the error. This happens randomly. Could you help me figure out what the issue could be?
We are using gelf-1.1.7.jar and sending logs to remote logstash server.
Error:
log4j:ERROR Could not send GELF message
There are no sources in here...
http://repo1.maven.org/maven2/org/graylog2/gelfj/1.1.14/
Could you please publish sources to maven central which will make it much easier to use when developing?
Thanks, James
Your regular jar...
http://repo1.maven.org/maven2/org/graylog2/gelfj/1.1.14/gelfj-1.1.14.jar
...seems to also contain dependencies just like this one does...
http://repo1.maven.org/maven2/org/graylog2/gelfj/1.1.14/gelfj-1.1.14-jar-with-dependencies.jar
Could you please ensure that the regular jar does not contain dependencies? This will make it easier to work with maven and allow us to specify different log4j versions at runtime.
Thanks, James
GelfMessageFactory assumes that GelfAppender is used directly and not wrapped in an AsyncAppender. The MDC access breaks if used with AsyncAppender. The fix is pretty simple:
Replace
// Get MDC and add a GELF field for each key/value pair
Map<String, Object> mdc = MDC.getContext();
with:
// Get MDC and add a GELF field for each key/value pair
Map<String, Object> mdc = event.getProperties();
If there is an AsyncAppender, Log4J will have populated the mdcCopy within the LoggingEvent already. If there is no AsyncAppender, Log4J will populate the mdcCopy when getProperties() is invoked, and return the MDC.
It'd be very helpful if there was a changelog with a high-level overview of the changes and any notable upgrade steps for each release. Changelogs are very helpful in knowing when an upgrade is necessary and guarding against any potential mishaps.
It would be awesome to push a release version to clojars; at the moment https://clojars.org/org.graylog2/gelfj only has a SNAPSHOT version.
Our release policy forbids including SNAPSHOT dependencies.
I'm testing the library in a busy environment where we have around 10 JVMs on a single vmware VM and around 30 VMs total. This generates about 8k messages per second.
The last 4 bytes of the message ID of a GELF chunk are formed from the last 4 bytes of the host name, but in a standard environment many hosts are in the same domain, so for all of them the last 4 bytes would be identical, for example ".com", ".foo", etc. In my environment that is 300 JVMs setting the same last four bytes for their chunks. I would prefer to avoid using the "originHost" parameter to work around the issue.
The first four bytes are formed from currentTimeMillis, and there are many cases where two different threads get the same time when generating the message ID. Bottom line: there is quite a big chance of generating the same message ID for two different messages, which will make the graylog server drop the message(s).
The duplicate chunk metric for the last 12 hours on my graylog server is 2141.
I was thinking of introducing a more random way of generating those message IDs. Does anyone have ideas on a better way (than the current one) to do it?
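One possible scheme (an assumption, not the library's current code) is to derive the whole 8-byte chunk message ID from a random long, so neither the hostname suffix nor millisecond clock collisions matter:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of a collision-resistant message ID: 8 random bytes per message.
// ThreadLocalRandom avoids contention between logging threads; use
// SecureRandom instead if predictability is a concern.
public class ChunkId {
    public static byte[] randomMessageId() {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.putLong(ThreadLocalRandom.current().nextLong());
        return buf.array();
    }
}
```

With 64 random bits, the chance that two of the ~8k messages per second share an ID within the chunk-reassembly window is negligible compared to the time+hostname scheme.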
## used dependency and config log4j.xml with
<configuration status="OFF" packages="org.graylog2.log4j2">
  <appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
    </Console>
    <appender name="GrayLog2" class="org.graylog2.log.GelfAppender">
      <param name="graylogHost" value="192.168.248.128"/>
      <param name="graylogPort" value="12201"/>
      <param name="extractStacktrace" value="true"/>
      <param name="addExtendedInformation" value="true"/>
      <param name="facility" value="yourfacility"/>
      <param name="Threshold" value="WARN"/>
      <param name="additionalFields" value="{'environment': 'PROD', 'application': yourapplicationname'}"/>
    </appender>
    <GELF name="gelfAppender" server="192.168.248.128" port="12201" hostName="appserver01.example.com" additionalFields="foo=bar"/>
  </appenders>
  <loggers>
    <root level="INFO">
      <appender-ref ref="GrayLog2"/>
      <appender-ref ref="gelfAppender"/>
      <AppenderRef ref="Console" />
    </root>
  </loggers>
</configuration>
## now when I run the maven project I see these errors and I can't log to the graylog2 server.
2016-06-23 05:21:55,658 ERROR Error processing element appender: CLASS_NOT_FOUND
2016-06-23 05:21:56,410 ERROR Unable to locate appender GrayLog2 for logger
2016-06-23 05:21:56,411 ERROR Unable to locate appender gelfAppender for logger
In the documentation for "How to use GELFJ" it says:
"Grab latest JAR from the downloads section and drop it into your classpath. You will also need com.googlecode.json-simple, json-simple and log4j."
Forgive my ignorance, but what is the difference between "com.googlecode.json-simple" and "json-simple"? When I search for "com.googlecode.json-simple" they look like they're the same thing. If they're different, where do I find them?
Thanks!
I'm trying to use Gelfj to send custom data into Graylog2. I'm using the "addField()" method for this. According to the GELF protocol, an additional field can be numeric, but there is only one addField() method and it takes two strings. Should there be additional methods that take numbers as inputs and send numeric (non-string) values to the DB?
I'm assuming I can use the setAdditonalFields() method, passing it a map full of various objects, to pass in different data types? I'll test that theory.
If I'm using the GelfHandler, all logs are reported as "INFO", because they don't have the proper mapping to Syslog levels.
I made a "quick fix" in 1741c9b (in my forked repo), but my problems with that:
Please comment if you have some time...
Regards,
Zoltan
I tried to send just one simple message to the graylog2 server.
I got gelfj-1.1.14.jar and json-simple-1.1.1.jar and put them on the classpath.
Then tried:
GelfMessage message = new GelfMessage("Short message", "Long message", new Date(), "1");
message.setHost("origin-host");
GelfUDPSender gelfSender = new GelfUDPSender(host,port);
GelfSenderResult gelfSenderResult = gelfSender.sendMessage(message);
if (!GelfSenderResult.OK.equals(gelfSenderResult)) {
reportError("Error during sending GELF message. Error code: " + gelfSenderResult.getCode() + ".", gelfSenderResult.getException(), ErrorManager.WRITE_FAILURE);
}
GelfSenderResult.OK.equals(gelfSenderResult) is true, but I cannot see my message on the graylog server. Is this enough to start with before trying to use the GELFAppender?
Due to the changes from commit e43ac6a gelfj (or more precisely GelfSender) won't forward messages to the configured Graylog2 server if the originHost property is unset or empty.
The fallback (getting the local host's hostname) only kicks in when GelfMessage#toJson is being executed. Otherwise GelfMessage#isValid will return false due to GelfMessage.host being empty.
Hey t0xa,
I've noticed that the 'host' field is set to 127.0.1.1 (Ubuntu 10.04, see http://stackoverflow.com/questions/2381316/java-inetaddress-getlocalhost-returns-127-0-0-1-how-to-get-real-ip).
My ideas are: 1) allow the hostname to be set explicitly in the appender; 2) allow the hostname to be set via the 'host' JSON value of the additional fields; 3) have some boolean flag to flip between InetAddress.getLocalHost().getHostAddress() and InetAddress.getLocalHost().getHostName() (although it looks like getHostName() can do a lookup, which shouldn't be done for every message instance). I think #1 probably makes the most sense.
You can see how gelf4j does it (but it's nothing special): https://github.com/pstehlik/gelf4j/blob/master/src/main/groovy/com/pstehlik/groovy/gelf4j/appender/Gelf4JAppender.groovy
Let me know if you want me to get a working solution + pull request.
Thanks,
-nick
GZIPOutputStream doesn't always return good results if you don't call stream.finish() before calling stream.close(). Weird, buggy, but true.
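The claim can be exercised with a minimal compression helper (a sketch, not gelfj code) that calls `finish()` explicitly before `close()`, so the gzip trailer is guaranteed to be written before the byte array is read:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Sketch: compress a byte array, finishing the stream explicitly.
public class Gzip {
    public static byte[] gzip(byte[] data) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            GZIPOutputStream stream = new GZIPOutputStream(bos);
            stream.write(data);
            stream.finish(); // flush compressed data and write the trailer
            stream.close();
            return bos.toByteArray();
        } catch (IOException e) {
            // ByteArrayOutputStream cannot fail; wrap for a simpler signature.
            throw new RuntimeException(e);
        }
    }
}
```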
The setAdditonalFields() method stores the caller's original map in this.additonalFields, and the addField() method (and others) modifies that map later. This can lead to concurrency problems and breaks if an immutable map is passed as a parameter.
public void setAdditonalFields(Map<String, Object> additonalFields) {
this.additonalFields = additonalFields;
}
public GelfMessage addField(String key, String value) {
getAdditonalFields().put(key, value);
return this;
}
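A defensive-copy variant (a hedged sketch, with the method name's spelling normalized; not the library's actual code) avoids both problems by never retaining the caller's map:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: copy the caller's entries into an internally owned map, so later
// addField() calls cannot fail on an immutable argument or race with the caller.
public class Fields {
    private final Map<String, Object> additionalFields = new HashMap<>();

    public void setAdditionalFields(Map<String, Object> fields) {
        additionalFields.clear();
        additionalFields.putAll(fields); // copy instead of storing the reference
    }

    public Fields addField(String key, Object value) {
        additionalFields.put(key, value);
        return this;
    }

    public Map<String, Object> getAdditionalFields() {
        return additionalFields;
    }
}
```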
It looks like both GelfUDPSender and GelfTCPSender cache the hostname of the graylog server for the duration of the process, which makes it unnecessarily hard to fail over / replace the server without restarting all the clients so they can resolve the hostname again.
Please correct me if I've read the source code wrong.
Is there any reason why this cannot be done on every reconnect in the GelfTCPSender, and periodically (e.g. every 10 min by default, configurable via a new setting) for the GelfUDPSender?
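A re-resolution step like the following (an assumption about a possible fix, not current gelfj code) would look up the configured hostname afresh on each reconnect instead of caching the address for the sender's lifetime:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch: resolve the host on every call. Note that the JVM itself caches
// lookups according to the networkaddress.cache.ttl security property, so
// full fail-over also depends on that setting.
public class Resolver {
    public static InetAddress resolve(String host) {
        try {
            return InetAddress.getByName(host); // fresh lookup each call
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException("cannot resolve " + host, e);
        }
    }
}
```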
Hi there,
We're using the gelf appender and the facility is always set to gelf-java. I'm having a look through the source code at the moment and I can see that gelf-java is the default set on a GelfMessage. My guess is that the configured value is not being read correctly from the log4j file and therefore not being set.
I'm trying to write a unit test that would prove this either way, but I can't see the code that converts a log4j file into a GelfAppender. If you can help me out I will try to write a failing test.
Thanks!
During its initialization process, WebSphere Application Server initializes JUL multiple times. A bug in WebSphere causes the reinitialization process to leave closed handlers attached to loggers. (If I remember correctly from my previous studies, WebSphere's LogManager does try to remove the handler, but it does so incorrectly.) The specification allows having closed handlers, so getting IBM to fix this will be hard. If GelfHandler implemented close() according to the specification, the problem would not exist.
The current GelfHandler.close closes the connection (sender) and sets it to null. This is not a valid implementation, because the publish method has logic that reopens the connection when it is set to null.
The solution is to add a boolean flag that marks the connection as closed.
Handler javadoc:
public abstract void close()
throws SecurityException
Close the Handler and free all associated resources.
The close method will perform a flush and then close the Handler. After close has been called this Handler should no longer be used. Method calls may either be silently ignored or may throw runtime exceptions.
There are three constructors in the GelfMessage class. In two of them, the Long timestamp is divided by 1000L, but in the other (the middle one, shorter of the two with the timestamp passed in as a Long), the timestamp is not divided by 1000L. Is there a reason for this that I'm missing?
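For reference, GELF expects the timestamp in seconds since the epoch (with an optional decimal part), so a millisecond epoch value should be divided by 1000 in every constructor. A minimal sketch of the consistent conversion:

```java
// Sketch of the conversion the reporter expects in all three constructors:
// milliseconds since the epoch -> GELF seconds, keeping sub-second precision.
public class Ts {
    public static double toGelfTimestamp(long epochMillis) {
        return epochMillis / 1000.0; // divide by a double to avoid truncation
    }
}
```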
Is it possible that you tag the current dev code as 1.1.5, to have an available version via maven repositories?
Hey,
I'm looking for a way to specify some additional fields via the log4j.properties. In my specific use-case, I'd like to add an environment (dev/staging/production) field, as well as an application (ThisService,ThatService) field. Wondering if you had any thoughts on a general solution. Maybe there could be an 'additionalFields' property whose setter could take query-string like values ('environment=dev&application=myApp'). I could modify the appender for my needs, but this feels like a pretty common use-case. Let me know if you have any ideas. Thanks,
-nick
When data contains non-Latin symbols (possibly non-UTF strings), it will be broken on the server side, because the server uses UTF-8.
You need to specify an encoding when string data is converted to bytes.
One such place is GelfMessage#gzipMessage:
try {
GZIPOutputStream stream = new GZIPOutputStream(bos);
stream.write(message.getBytes("UTF-8"));
...
Patch text:
--- gelfj/src/main/java/org/graylog2/GelfMessage.java (revision 12b0e90)
+++ gelfj/src/main/java/org/graylog2/GelfMessage.java (revision )
@@ -124,7 +124,7 @@
try {
GZIPOutputStream stream = new GZIPOutputStream(bos);
- stream.write(message.getBytes());
+ stream.write(message.getBytes("UTF-8"));
stream.finish();
stream.close();
byte[] zipped = bos.toByteArray();
Instructions on using github to provide a maven repository for your project here:
http://cemerick.com/2010/08/24/hosting-maven-repos-on-github/
Hello,
I'm running the latest code (as of 31 January 2013), and I've noticed that our hosts using this appender will stop sending logs to graylog2 intermittently. Once this occurs, only a restart of the service fixes the problem. This is particularly bad for our production hosts, where a restart is simply not possible most times. I've not been able to track down the root cause, but I will comment on this issue if I get any more details.