
semantic-logging's Introduction

⚠️ This project is no longer under development. A possible alternative, depending on use case, is Microsoft.Diagnostics.EventFlow. ⚠️

What is Semantic Logging?

Semantic Logging (formerly known as the Semantic Logging Application Block, or SLAB) is designed by the patterns & practices team to help .NET developers move from an unstructured logging approach to a strongly-typed (semantic) one, making it easier to consume logging information, especially when there is a large volume of log data to be analyzed. When used out-of-process, Semantic Logging uses Event Tracing for Windows (ETW), a fast, lightweight, strongly typed, extensible logging system built into the Windows operating system.

Semantic Logging enables you to use the EventSource class and semantic log messages in your applications without moving away from the log formats you are familiar with (such as database, text file, Azure table storage). Also, you do not need to commit to how you consume events when developing business logic; you have a unified application-specific API for logging and then you can decide later whether you want those events to go to ETW or alternative destinations.
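To make the strongly-typed approach concrete, here is a minimal sketch of an application-specific event source; the class name, event IDs, and messages are illustrative, not taken from the official samples:

```csharp
using System.Diagnostics.Tracing;

// Each log message is a method with typed parameters
// instead of a free-form string.
[EventSource(Name = "MyCompany-MyApp")]
public sealed class MyAppEventSource : EventSource
{
    public static readonly MyAppEventSource Log = new MyAppEventSource();

    [Event(1, Level = EventLevel.Informational, Message = "Order {0} submitted by {1}")]
    public void OrderSubmitted(int orderId, string userName)
    {
        if (IsEnabled()) WriteEvent(1, orderId, userName);
    }

    [Event(2, Level = EventLevel.Error, Message = "Payment failed for order {0}")]
    public void PaymentFailed(int orderId)
    {
        if (IsEnabled()) WriteEvent(2, orderId);
    }
}
```

Callers then write `MyAppEventSource.Log.OrderSubmitted(42, "alice")`, and a listener (in-process) or the out-of-process service decides where the event goes.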

How do I use Semantic Logging?

Official releases are available via NuGet. You can also head to msdn.com for additional information, documentation, videos, and hands-on labs.

Building

To build the solution, run msbuild.exe from the project’s build folder. You'll need to use the Visual Studio Developer Command Prompt. Some of the unit tests require a SQL database.

How do I contribute?

Please see CONTRIBUTING.md for more details.

Release notes

Release notes for each release are available.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

semantic-logging's People

Contributors

aboobolaky, aelij, atoakley, bennage, dragon119, expecho, federicoboerr, fsimonazzi, gmelnik, jdom, mattjohnsonpint, mekod4, noocyte, ohads-msft, paulirwin, randylevy, tadams1138, tanaka-takayoshi, trentmswanson, vermeeca



semantic-logging's Issues

Logstash Error: no such file to load -- azure

Hi,

  1. I'm trying to use the Logstash plugin to read from an Azure table. When I try to run Logstash using the plugin I get the following error:

Couldn't find any input plugin named 'azurewadtable'. Are you sure this is correct? Trying to load the azurewadtable input plugin resulted in this error: no such file to load -- azure

What did I miss?

  2. I want to use this plugin to read tables other than WAD, possibly the Azure Storage Analytics table. Is there a plugin for that as well, or should I make the changes myself?

Thanks

.NET 4.6 support

Hi everyone,

We are currently discussing this topic. Stay tuned here; we will share more information soon.

Processing a SLAB-created file holding event data

(I copied this issue from the Codeplex site; no one had responded to it there.)

The documentation describes both in-process and out-of-process mechanisms for processing events as they occur, in real time. I was planning to update my application that now writes (rather unstructured) text messages to a file for examination later (via redirected Console output) so that it also writes the same events using SLAB.

I was then planning to write an application that reads the resulting event file(s) and does different things with the different kinds of events. (That should be much easier than reading the text file and parsing the different barely-structured message formats that now exist in the pure text file.) This would be entirely asynchronous to the creation of the event files.

For example, some things that are encountered could be "data problems" with the plan of sending an email to the users who entered bad data. Other things (like a failure to access the database or getting errors from a 3rd-party API) could be "processing problems" about which administrators should be notified. Performance data (how long different things take) could also be recorded, to build a profile of application performance to check for possible issues as data size increases.

I have not seen anything that shows me how I can take a file produced by SLAB and treat it as an "event stream" for processing later. (I could surely do this if I built my own way of serializing structured copies of the event data and used it instead of SLAB.) Is that possible?

Thank you for any information.

SqlDatabaseSink stopped writing to Traces table


I added a FlatFileSink in parallel to make sure the listener is not the issue, which writes fine to a file.
I also have a Unit Test for the SqlDatabaseSink which works fine, but when subscribing from the main project via Global.asax only the FlatFileSink works.

Screenshots were attached to the original issue (not reproduced here).

Minimalist ELK setup from the doc keeps failing

Here is the error:

9/15/2015 6:15:58 PM - [Configuration] Installing ElasticSearch.Exception calling "Invoke" with "1" argument(s): "Exec:
Failed to run ssh -o ConnectTimeout=360 -o
StrictHostKeychecking=no -o UserKnownHostsFile=NUL -i id_rsa -p $SshPort $SshString $UpdateCommand"
At C:\LogProject\semantic-logging-elk\semantic-logging-elk\ELK\modules\deployment.psm1:50 char:3

  + $setupFunction.Invoke($VMName)
  + CategoryInfo : NotSpecified: (:) [], MethodInvocationException
  + FullyQualifiedErrorId : RuntimeException

ObservableEventListener hangs on Dispose

I'm using the following code to initialize and subscribe to an ObservableEventListener.

var listener = new ObservableEventListener();
listener.EnableEvents(
    SampleEventSource.Log,
    EventLevel.LogAlways,
    Keywords.All);

listener
    .FlushOnTrigger(entry => entry.Schema.Keywords == SampleEventSource.Keywords.Exceptions, bufferSize: 2)
    .Subscribe(new DebugObserver());

SampleEventSource.Log.ApiTiming("test", "should_work", 1);
SampleEventSource.Log.Exception(typeof(ArgumentException).FullName, "Wrong argument");
listener.Dispose();

public static IObservable<T> FlushOnTrigger<T>(this IObservable<T> stream, Func<T, bool> shouldFlush, int bufferSize)
{
    return stream
        .Buffer(() => stream.SkipWhile(i => !shouldFlush(i)))
        .SelectMany(l => l.Reverse().Take(bufferSize + 1).Reverse());
}

It hangs on listener.Dispose().

With a brief investigation, I've noticed that it hangs in the OnCompleted method of EventEntrySubject.cs:

public void OnCompleted()
{
    var currentObservers = this.TakeObserversAndFreeze();

    if (currentObservers != null)
    {
        Parallel.ForEach(currentObservers, observer => observer.OnCompleted());
    }
}

If the parallel loop is replaced with a non-parallel statement, everything works well.
A screenshot of the observers collection before loop execution was attached (not reproduced here).

Please advise.

Roadmap?

Hi guys, what are the plans for vNext of the semantic logging block? Do you plan to support more out-of-the-box sinks, like EventHub or DocumentDB? Can you share some plans for the future? There isn't a lot of activity around the semantic logging block when it comes to future plans. Or is there another place I should look for this kind of information?

ETW manifests not updated with new Event details

When using the out-of-proc service, event details are cached in the ETW manifest only the first time the event is seen. Changes to the event's properties, Level, Message, etc. will not propagate to the manifest, so the old properties are used even though the new properties are compiled into the EventSource assembly.

This is sure to cause everyone who uses the out-of-proc service hours of frustration. This has been noted by others:

Issue 57 was closed due to lack of activity but should have been filed as a bug IMHO. At the very least, every SLAB document dealing with the out-of-proc service should spell this out explicitly. The only hint at the caching behavior of the manifests I've found comes from here:
Collecting events from multiple processes

The trace event service in the Out-of-Process Host automatically manages updates to event source manifests, and it will always use the highest version. Therefore, you should add version information to your event source classes, and ensure that changes are only additive, in order to avoid affecting the logging behavior of the other applications.

It's pretty easy to miss that especially since it's in the section regarding multiple processes and the issue crops up during development of a single process solution. The solution to this issue appears to be versioning. Under Versioning your EventSource Class we see:

If you do need to modify your EventSource class, you should restrict your changes to adding methods to support new log messages, and adding overloads of existing methods (that would have a new event ID). You should not delete or change the signature of existing methods.

Again it is talking about having multiple instances of the EventSource in different processes and there is no mention of actual versioning.

I've found that by applying the EventAttribute.Version to my Event (presumably works for EventSource as well), the manifest is updated as the version is incremented. This should be stated somewhere in the documentation.
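Based on the reporter's description, the fix looks roughly like the following sketch (the source name and event are taken from the example earlier in this issue; the exact Version values are illustrative):

```csharp
using System.Diagnostics.Tracing;

public sealed class SystemEventsSource : EventSource
{
    public static readonly SystemEventsSource Log = new SystemEventsSource();

    // Original declaration, cached in the manifest the first time it was seen:
    // [Event(110, Message = "{0}", Level = EventLevel.Informational, Version = 1)]

    // After changing the event (here, the Message format), increment Version
    // so the out-of-proc service regenerates its cached manifest entry.
    [Event(110, Message = "{0:X8}", Level = EventLevel.Informational, Version = 2)]
    public void Event110(int value)
    {
        WriteEvent(110, value);
    }
}
```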

Formatted message not formatted when run with out-of-proc service

I find that if my Event uses a custom format for the message, the formatting isn't processed when running through the out-of-proc service.

The following event produces a formatted message (hex value) when run in proc with the FlatFileLog, yet produces the literal value {0:X8} when run through the flatFileSink via the service. If the format is simply {0}, the actual int value is logged as one would expect.

Any ideas what's going on?

[Event(110, Message = "{0:X8}", Level = EventLevel.Informational)]
public void Event110(int value)
{
    WriteEvent(110, value);
}
<flatFileSink name="flatFileEventSink"  fileName="log.txt">
  <sources>
    <eventSource name="SystemEvents" level="Verbose"/>
  </sources>
  <eventTextFormatter header="+=========================================+"/>
</flatFileSink>

azurewadtable logstash plugin not working

I was trying out the azurewadtable input plugin with stdout output, and I keep seeing the error
'result not found {:level=>:warn}'. I cross-checked the WADLogsTable and it is certainly receiving logs. Any idea what is causing the malfunction?

Username issues in scripts

The mountDisks.sh script requires the username to be "elasticSearch". This parameter is, however, settable in the PowerShell scripts. Please include this in the documentation.

Can't self-host out-of-proc sinks

I want to self host instead of installing the existing windows service project. That way I can incorporate it into an existing service I already have.

I've found two issues preventing this

  1. Certain classes need to be made public instead of internal.
  2. The EventSource I'm using (from a Windows 8 project) doesn't work if the session is enabled with the default arguments.

COMException during initialization

I'm getting the following exception after calling TraceEventService.Start. It seems that the sendManifest parameter in EnableProvider is the problem; the exception occurs when it's set to true.

I'm actually using SLAB now with this flag always set to false, so I'm not sure why this is needed at all.

My .NET Framework version is 4.6.1.

System.Runtime.InteropServices.COMException (0x80071068): The GUID passed was not recognized as valid by a WMI data provider. (Exception from HRESULT: 0x80071068)
  at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)
  at System.Runtime.InteropServices.Marshal.ThrowExceptionForHR(Int32 errorCode)
  at Microsoft.Diagnostics.Tracing.Session.TraceEventSession.EnableProvider(Guid providerGuid, TraceEventLevel providerLevel, UInt64 matchAnyKeywords, TraceEventProviderOptions options)

InspectAll error with WriteEventWithRelatedActivityId

The following is producing an InspectAll error of:

The number of WriteEvent arguments and event parameters are different in event name 'Event4'

[Event(103)]
public void Event4(string name, string value)
{
    WriteEventWithRelatedActivityId(103, Guid.NewGuid(), name, value);
}

I would not expect this to cause an error. This however produces no error:

[Event(103)]
public void Event4(string name)
{
    WriteEventWithRelatedActivityId(103, Guid.NewGuid(), name);
}

Any ideas? Thanks.

.NET 4.6.1
SLAB: 2.0.1406.1
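For reference, the EventSource documentation describes a convention for transfer events: the related activity id is declared as the method's first parameter, named relatedActivityId, and is excluded from the logged payload. If that convention applies here, the analyzer may be flagging the original code because Guid.NewGuid() is passed but not declared as a parameter, producing a parameter-count mismatch. A hedged sketch of the conventional shape:

```csharp
using System;
using System.Diagnostics.Tracing;

public sealed class TransferEventSource : EventSource
{
    public static readonly TransferEventSource Log = new TransferEventSource();

    // Convention: the first parameter is Guid relatedActivityId and the
    // event uses Opcode.Send; the Guid is not counted as part of the payload.
    [Event(103, Opcode = EventOpcode.Send)]
    public void Event4(Guid relatedActivityId, string name, string value)
    {
        WriteEventWithRelatedActivityId(103, relatedActivityId, name, value);
    }
}
```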

SQL database test is failing consistently

The error I'm seeing is:

TestCleanup method Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Tests.EventListeners.when_receiving_many_events_with_imperative_flush.Cleanup threw exception. System.Data.SqlClient.SqlException: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. ---> System.ComponentModel.Win32Exception: The wait operation timed out.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async, Int32 timeout, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Tests.TestSupport.LocalDatabaseContext.DetachDatabase() in d:\Dev\OSS\semantic-logging\source\Tests\SemanticLogging.Tests\TestSupport\LocalDatabaseContext.cs:line 76
   at Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Tests.TestSupport.LocalDatabaseContext.OnCleanup() in d:\Dev\OSS\semantic-logging\source\Tests\SemanticLogging.Tests\TestSupport\LocalDatabaseContext.cs:line 58
   at Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Tests.EventListeners.when_receiving_many_events_with_imperative_flush.OnCleanup() in d:\Dev\OSS\semantic-logging\source\Tests\SemanticLogging.Tests\UsingEventListener\SqlDatabaseEventListenerTests.cs:line 46
   at Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Tests.TestSupport.ContextBase.Cleanup() in d:\Dev\OSS\semantic-logging\source\Tests\SemanticLogging.Tests\TestSupport\ContextBase.cs:line 20

Note that it is only failing in DetachDatabase, which is after the SET SINGLE_USER command and the first command to be executed on the master database.

Semantic Logging not working for certain keyword and EventSource name when running as a service on some machines

Hello,

Our MS rep Paul King asked us to send this issue along to you.

We're looking for assistance with the Semantic Logging Application Block. We have encountered a situation where, when using the out-of-process service, logging is not occurring for keyword "8" if the service is running as Local System. Running the service as an Administrator, or running it in a console window, works just fine. All other keywords work, even running as Local System.

Log files are created when the service starts, so there isn’t a security issue.
This is only happening on a couple of machines so far. Most dev boxes log this keyword just fine.
We have 6 other applications installed on the same machine (each with different EventSource name) and they all work fine for all Keywords no matter how semantic logging service is running.

_If the name of the Event source is changed for this application, the items are logged correctly._

While the last item may appear to be an easy fix, it would have rippling effects on our documentation, processes, monitoring, and scripts for many teams.

If we delete the log files before running the service, on the first call to log info we get the following in the Windows event log:

EventId : 807, Level : Warning, Message : The loss of 1 events was detected in trace session 'Microsoft-SemanticLogging-Etw-DDP'., Payload : [sessionName : Microsoft-SemanticLogging-Etw-DDP] [eventsLost : 1] , EventName : TraceEventServiceProcessEventsLostInfo, Timestamp : 2015-11-17T21:35:37.4305157Z, ProcessId : 9400, ThreadId : 11312

EventId : 807, Level : Warning, Message : The loss of 1 events was detected in trace session 'Microsoft-SemanticLogging-Etw-DDPTraffic'., Payload : [sessionName : Microsoft-SemanticLogging-Etw-DDPTraffic] [eventsLost : 1] , EventName : TraceEventServiceProcessEventsLostInfo, Timestamp : 2015-11-17T21:35:37.4461157Z, ProcessId : 9400, ThreadId : 11312

If the log files already existed when starting the service, we don't get these Windows event log entries. Note the files are created by the service without event entries, they only show up once we try to log and only if the files didn't exist when the service started.

I did some research on this, and it appears that event id 807 indicates the ETW buffers have overflowed; the recommendation is to change the buffering configuration options for the sinks to reduce the chance of overflow under typical workloads. Unfortunately, I don't see how to change this, nor does it explain why it works after simply changing the user the service runs as.

Additionally, we ran Process Monitor and couldn’t find any failures on those machines either.

Any pointers on how to get to the bottom of why this may be happening?

Attached is a sample application that shows how we are using semantic logging. (change extension from txt to zip)
EventSourceTester.txt

We’ve found that simply changing the name of the event source allows it to work.

In the zip…
  • EventSourceTester – DDP: This is the original sample, with EventSource.Name=Xerox-XGS-DDP. This works as expected on my dev box, but does not log entries with Keyword=8 on grsdeploybox8.
      o Logs -> DDP: This log ends up with 2 of the 3 keyword types, the 3rd non-logged keyword being Traffic=8.
      o Logs -> DDPTraffic: This log is empty, since the only thing that was supposed to be logged here was Keyword=Traffic=8, and for some reason that doesn't work when EventSource.Name=Xerox-GRS-DDP.
  • EventSourceTester – EST: This is exactly the same as the last sample, except with EventSource.Name=Xerox-XGS-EST. This works as expected on my dev box and grsdeploybox8.
      o Logs -> DDP: This log ends up with all three entries as it should.
      o Logs -> DDPTraffic: This log ends up with only the Traffic entries as it should.
  • SemanticLoggingService: This is the exe, dlls, and config files for the Windows service so you can see what we have in place.

EventSourceAnalyzer can't inspect Microsoft.Diagnostics.Tracing.EventSource

I'm trying to use EventSourceAnalyzer to validate my custom EventSource, which is derived from Microsoft.Diagnostics.Tracing.EventSource.

It's not allowed; I get the compile error "Cannot convert type ??? via a reference conversion ...".

The reason I adopted Microsoft.Diagnostics.Tracing.EventSource is that there is a tool (Microsoft.Diagnostics.Tracing.RegisterEvent) to generate the ETW manifest file.

EventSourceAnalyzer.InspectAll does not work with Microsoft.Diagnostics.Tracing.EventSource

Hi, I have an issue with EventSourceAnalyzer.InspectAll() after installing the Microsoft EventSource Library 1.1.28.

Description

I used Microsoft.Diagnostics.Tracing.EventSource from the .NET framework before and I created an event source class and everything worked just fine:

public class MyEventSource : Microsoft.Diagnostics.Tracing.EventSource
{
  // ...
}

In my test project I created the following test to verify the correctness of the class:

[TestClass]
public class MyEventSourceTest
{
    [TestMethod]
    public void InspectAllShouldSuccedOrWeHaveNoLogging()
    {
        EventSourceAnalyzer.InspectAll(MyEventSource.Default);
    }
}

From the signature of InspectAll I can clearly see that it only supports the built-in System.Diagnostics.Tracing.EventSource class:

public static void InspectAll(System.Diagnostics.Tracing.EventSource eventSource);

Question

I actually updated to the newest EventSource library because I read about some of its improvements compared to the built-in version.

  • However, I am wondering: why does the InspectAll method not support this EventSource?
  • What am I doing wrong?
  • Is this intended behaviour?
  • How am I supposed to inspect a custom event source? Is there another way I can achieve this?

Thanks for your feedback and reply!

Below you find the installed package versions (in an abbreviated form) for the main and the test project.

Notes

This issue somehow relates to the following issues (but I created a new issue as it does not totally fit the existing ones):

  • #41 (this is actually pretty much the problem I described; however, the issue was redirected to the issues below)
  • #27
  • #33

Packages in main project

PM> get-package | Select Id, Version | ft -autosize

Id                                                    Version        
--                                                    -------        
EnterpriseLibrary.SemanticLogging                     2.0.1406.1     
EnterpriseLibrary.SemanticLogging.EventSourceAnalyzer 2.0.1406.1     
Microsoft.Diagnostics.Tracing.EventRegister           1.1.28         
Microsoft.Diagnostics.Tracing.EventSource             1.1.28         
Microsoft.Diagnostics.Tracing.EventSource.Redist      1.1.28         

Packages in test project

PM> get-package -ProjectName TestProject | Select Id, Version | ft -autosize

Id                                                    Version        
--                                                    -------        
EnterpriseLibrary.SemanticLogging                     2.0.1406.1     
EnterpriseLibrary.SemanticLogging.EventSourceAnalyzer 2.0.1406.1     
EntityFramework                                       6.1.3          
Microsoft.Diagnostics.Tracing.EventRegister           1.1.28         
Microsoft.Diagnostics.Tracing.EventSource             1.1.28         
Microsoft.Diagnostics.Tracing.EventSource.Redist      1.1.28         
Microsoft.WindowsAzure.ConfigurationManager           2.0.0.0        

SLAB - Print the log below on a single line

What setting do I need to change to print the log below on a single line?

ProviderId : 46875d78-3924-5303-f49e-65b8bc0dc289
EventId : 11
Keywords : 1024
Level : Error
Message : Unexpected
Opcode : Info
Task : 1
Version : 1
Payload : [sessionID : e3491b01-40de-4dbf-8287-2bab1c81102e] [type : OperationCanceledException] [data : ] [innerException : ] [message : The operation was canceled.] [source : mscorlib] [stackTrace : atSystem.Threading.CancellationToken.ThrowOperationCanceledException()atSystem.Threading.CancellationToken.ThrowIfCancellationRequested()atSystem.Net.Http.HttpContentExtensions.d__01.MoveNext()---Endofstacktracefrompreviouslocationwhereexceptionwasthrown---atSystem.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()atSystem.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Tasktask)atSystem.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Tasktask)atSystem.Runtime.CompilerServices.TaskAwaiter1.GetResult()atSystem.Web.Http.ModelBinding.FormatterParameterBinding.d__0.MoveNext()---Endofstacktracefrompreviouslocationwhereexceptionwasthrown---atSystem.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()atSystem.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Tasktask)atSystem.Runtime.Com

No support for "dynamic" events introduced in .NET Framework 4.6

.NET Framework 4.6 introduced the concept of "dynamic" events via the Write<T>(String, T) method (and similar overloads). These are super handy, as I can create events dynamically without declaring them up front. It's a great way to build flexible logging that allows for variable payloads.

These dynamic events don't work with Semantic Logging because there is no event ID associated with them; they just have a name. Well, technically there is an ID, but it's -1. So from an SL perspective, these events don't really exist on the EventSource. Consequently, SL fails to look up the schema and then fails to create the EventEntry it passes to the observers.

The following console app reproduces the issue (must target Framework 4.6 to build & run). When you run this you should get a 1st-chance exception (that gets swallowed and logged by SL) in EventSourceSchemaCache.GetSchema(...)

using System;
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

namespace Net46Test
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Let's write some logs!");

            var myLogger = new EventSource("MyComponentLogger");
            var slabListener = new ObservableEventListener();
            slabListener.EnableEvents(myLogger, EventLevel.LogAlways);
            slabListener.LogToConsole();

            // Something bad happens

            var myEvent = new MyDiagEvent { Message = "Something bad happened" };
            myLogger.Write<MyDiagEvent>("MyDiagEvent", myEvent);

            Console.ReadLine();
        }
    }

    [EventData()]
    public class MyDiagEvent
    {
        public string Message { get; set; }
    }
}

I think we need to add support for "self-describing event format" events. (There are some notes on this in the EventSource source at https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Diagnostics/Eventing/EventSource.cs.)
Any chance this is on your radar for a future release?

Error installing logging service from console application

For whatever reason, I cannot install this service. Here's what I did:

  1. Created a new console application in Visual Studio 2015.
  2. Opened the NuGet Package Manager and installed the EnterpriseLibrary.SemanticLogging.Service package. Its description reads "... This package contains an out-of-proc Windows Service for SLAB."
  3. Opened the Visual Studio Command Prompt (2010) and changed the directory to: C:\Users\User\Documents\visual studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools
  4. Ran the command: install-packages.ps1. A Notepad window opened, showing install-packages.ps1.
  5. Ran the command: SemanticLogging-svc.exe -install

I got back an error message. Not really sure what it means:
"Enterprise Library Semantic Logging Service v2.0.1406.1
Microsoft Enterprise Library
Microsoft Corporation

Running a transacted installation.

Beginning the Install phase of the installation.
See the contents of the log file for the C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe assembly's progress.
The file is located at C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.InstallLog.
Installing assembly 'C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe'.
Affected parameters are:
   logtoconsole =
   assemblypath = C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe
   logfile = C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.InstallLog
An exception occurred while trying to find the installers in the C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe assembly.
System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
Aborting installation for C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe.

An exception occurred during the Install phase.
System.InvalidOperationException: Unable to get installer types in the C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe assembly.
The inner exception System.Reflection.ReflectionTypeLoadException was thrown with the following error message: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information..

The Rollback phase of the installation is beginning.
See the contents of the log file for the C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe assembly's progress.
The file is located at C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.InstallLog.
Rolling back assembly 'C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe'.
Affected parameters are:
   logtoconsole =
   assemblypath = C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe
   logfile = C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.InstallLog
An exception occurred while trying to find the installers in the C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe assembly.
System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
Aborting installation for C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe.
An exception occurred during the Rollback phase of the System.Configuration.Install.AssemblyInstaller installer.
System.InvalidOperationException: Unable to get installer types in the C:\Users\User\Documents\Visual Studio 2015\Projects\SampleLoggingApp\packages\EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1\tools\SemanticLogging-svc.exe assembly.
The inner exception System.Reflection.ReflectionTypeLoadException was thrown with the following error message: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information..
An exception occurred during the Rollback phase of the installation. This exception will be ignored and the rollback will continue. However, the machine might not fully revert to its initial state after the rollback is complete.

The Rollback phase completed successfully.

The transacted install has completed.
SemanticLogging-svc Error: 2 : The installation failed, and the rollback has been performed."

Common buffering approach in all sinks to deal with internal sink failures

Currently the remote sinks (SQL and Azure Table) use a buffering mechanism that drops messages once they reach an upper-bound count.
The built-in sinks in async mode (flat file and rolling flat file) use a different buffering mechanism that does not set an upper bound.

  1. Follow the same approach across all sinks.
  2. Investigate adding a setting to the buffering mechanism in all sinks so that we can buffer by size in addition to count.
  3. Investigate simplifying the buffering implementation to use the Task Parallel Library instead of a concurrent collection.
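The trade-off between the two schemes can be sketched in a few lines. This is a conceptual Python sketch, not SLAB's implementation; the `BoundedBuffer` class and its capacity are made up for illustration:

```python
import queue

class BoundedBuffer:
    """Drop-on-overflow buffer, like the remote (SQL / Azure Table) sinks."""
    def __init__(self, capacity):
        self.q = queue.Queue(maxsize=capacity)  # an unbounded sink would use maxsize=0
        self.dropped = 0                        # messages lost to back-pressure

    def add(self, entry):
        try:
            self.q.put_nowait(entry)
            return True
        except queue.Full:
            self.dropped += 1                   # the entry is silently discarded
            return False

buf = BoundedBuffer(capacity=2)
accepted = [buf.add(n) for n in range(4)]       # → [True, True, False, False]
```

A bounded buffer loses data under load but keeps memory flat; the unbounded variant loses nothing until the process runs out of memory, which is the failure mode item 2 above tries to address.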

JsonEventTextFormatter wrong DateTime value - UTC against local

Using: EnterpriseLibrary.SemanticLogging.OutOfService

JsonEventTextFormatter has no option to log in local time with time zone information.
The expected result is local time + time offset.
When given the timestamp format "yyyy-MM-ddTHH:mm:ss:fffzzz", the time offset information is correct, but the time value used is DateTime.UtcNow.
It should be DateTime.Now + time offset.
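The mismatch described above can be reproduced outside .NET. A minimal Python sketch (the date and the +02:00 offset are made up for illustration):

```python
from datetime import datetime, timezone, timedelta

# An event captured at 12:00 UTC, rendered by a consumer in a UTC+02:00 zone.
utc_ts = datetime(2015, 6, 1, 12, 0, 0, tzinfo=timezone.utc)
local_zone = timezone(timedelta(hours=2))

# The reported behavior: keep the UTC wall-clock value but stamp the local offset.
wrong = utc_ts.replace(tzinfo=local_zone).isoformat()  # '2015-06-01T12:00:00+02:00'

# The expected behavior: convert to local time first, then attach the offset.
right = utc_ts.astimezone(local_zone).isoformat()      # '2015-06-01T14:00:00+02:00'
```

The "wrong" value denotes a different instant in time than the one that was captured, which is why the offset looks right while the clock value is off.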

Plugin/Input for Azure Tables

Are there plans to support other tables (user configurable) than the WADLogsTable? I have a specific need to support the Orleans tables (OrleansSiloMetrics, OrleansSiloStatistics). In the more general sense, if I configure logstash.conf with a storage account and a table name, can the schema of the table be reflected automatically, or at least provided in the configuration? I can help with requirements for a contribution.

Support for custom fields within the SqlSink tables

Is there any way of adding custom fields to the SqlSink table? I want to have an audit table structure like:

    [id] [bigint] IDENTITY(1,1) NOT NULL,
    [CustomField1] [nvarchar](10),
    [CustomField2] [nvarchar](10),
    [InstanceName] [nvarchar](1000) NOT NULL,
    [ProviderId] [uniqueidentifier] NOT NULL,
    [ProviderName] [nvarchar](500) NOT NULL,
    [EventId] [int] NOT NULL,
    [EventKeywords] [bigint] NOT NULL,
    [Level] [int] NOT NULL,
    [Opcode] [int] NOT NULL,
    [Task] [int] NOT NULL,
    [Timestamp] [datetimeoffset](7) NOT NULL,
    [Version] [int] NOT NULL,
    [FormattedMessage] [nvarchar](4000) NULL,
    [Payload] [nvarchar](4000) NULL,
    [ActivityId] [uniqueidentifier],
    [RelatedActivityId] [uniqueidentifier],
    [ProcessId] [int],
    [ThreadId] [int]

I presume that I would need to amend the TracesType custom database type to add my two custom fields, but can I write a custom event sink to construct the TracesType object?

Logstash azurewadtable plugin returning 400 One of the request inputs is not valid

I have a cloud service configured to save off custom ETW logs with Azure Diagnostics. I can browse the table fine with table explorer and it has data in it. When trying to consume the table with the azurewadtable plugin, I get: Oh My, An error occurred. {:exception=>#<Azure::Core::Http::HTTPError: InvalidInput (400): One of the request inputs is not valid. I verified these tables do have PreciseTimeStamp fields and have verified the account name/key.
[screenshot of the table]

What's the best way to troubleshoot this?

Full debug output / stacktrace:

Reading config file {:file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/agent.rb", :level=>:debug, :line=>"309", :method=>"local_config"}
Compiled pipeline code:
@inputs = []
@filters = []
@outputs = []
@periodic_flushers = []
@shutdown_flushers = []

      @input_azurewadtable_1 = plugin("input", "azurewadtable", LogStash::Util.hash_merge_many({ "account_name" => ("my_account_name") }, { "access_key" => ("my_account_key") }, { "table_name" => ("WADReceivedMessageTable") }))

      @inputs << @input_azurewadtable_1

def filter_func(event)
events = [event]
@logger.debug? && @logger.debug("filter received", :event => event.to_hash)
events
end
def output_func(event)
@logger.debug? && @logger.debug("output received", :event => event.to_hash)
end {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb", :line=>"29", :method=>"initialize"}
Plugin not defined in namespace, checking for plugin file {:type=>"input", :name=>"azurewadtable", :path=>"logstash/inputs/azurewadtable", :level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/plugin.rb", :line=>"133", :method=>"lookup"}
azurewadtable plugin is using the 'milestone' method to declare the version of the plugin this method is deprecated in favor of declaring the version inside the gemspec. {:level=>:warn, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"139", :method=>"milestone"}
Using version 0.9.x input plugin 'azurewadtable'. This plugin should work but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin. {:level=>:info, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"227", :method=>"print_version_notice"}
Plugin not defined in namespace, checking for plugin file {:type=>"codec", :name=>"plain", :path=>"logstash/codecs/plain", :level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/plugin.rb", :line=>"133", :method=>"lookup"}
config LogStash::Codecs::Plain/@charset = "UTF-8" {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"111", :method=>"config_init"}
config LogStash::Inputs::AzureWADTable/@account_name = "my_account_name" {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"111", :method=>"config_init"}
config LogStash::Inputs::AzureWADTable/@access_key = "my_account_key" {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"111", :method=>"config_init"}
config LogStash::Inputs::AzureWADTable/@table_name = "WADReceivedMessageTable" {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"111", :method=>"config_init"}
config LogStash::Inputs::AzureWADTable/@debug = false {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"111", :method=>"config_init"}
config LogStash::Inputs::AzureWADTable/@codec = <LogStash::Codecs::Plain charset=>"UTF-8"> {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"111", :method=>"config_init"}
config LogStash::Inputs::AzureWADTable/@add_field = {} {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"111", :method=>"config_init"}
config LogStash::Inputs::AzureWADTable/@entity_count_to_process = 100 {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"111", :method=>"config_init"}
config LogStash::Inputs::AzureWADTable/@collection_start_time_utc = "2015-08-27 15:53:36 UTC" {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"111", :method=>"config_init"}
config LogStash::Inputs::AzureWADTable/@etw_pretty_print = false {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"111", :method=>"config_init"}
config LogStash::Inputs::AzureWADTable/@idle_delay_seconds = 15 {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/config/mixin.rb", :line=>"111", :method=>"config_init"}
Starting process method @2015-08-27 08:53:36 -0700 {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-azurewadtable-0.9.2/lib/logstash/inputs/azurewadtable.rb", :line=>"41", :method=>"run"}
Pipeline started {:level=>:info, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb", :line=>"87", :method=>"run"}
2015-08-27 15:53:36 UTC {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-azurewadtable-0.9.2/lib/logstash/inputs/azurewadtable.rb", :line=>"53", :method=>"process"}
collection time parsed successfully 2015-08-27 15:53:36 UTC {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-azurewadtable-0.9.2/lib/logstash/inputs/azurewadtable.rb", :line=>"109", :method=>"partitionkey_from_datetime"}
Logstash startup completed
Converting time to ticks {:level=>:debug, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-azurewadtable-0.9.2/lib/logstash/inputs/azurewadtable.rb", :line=>"121", :method=>"to_ticks"}
Oh My, An error occurred. {:exception=>#<Azure::Core::Http::HTTPError: InvalidInput (400): One of the request inputs is not valid.
RequestId:ab95e112-0002-006f-1ce0-e033e2000000
Time:2015-08-27T15:53:42.3490985Z>, :level=>:error, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-azurewadtable-0.9.2/lib/logstash/inputs/azurewadtable.rb", :line=>"98", :method=>"process"}
A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::AzureWADTable account_name=>"my_account_name", access_key=>"my_account_key", table_name=>"WADReceivedMessageTable", debug=>false, codec=><LogStash::Codecs::Plain charset=>"UTF-8">, entity_count_to_process=>100, collection_start_time_utc=>"2015-08-27 15:53:36 UTC", etw_pretty_print=>false, idle_delay_seconds=>15>
Error: InvalidInput (400): One of the request inputs is not valid.
RequestId:ab95e112-0002-006f-1ce0-e033e2000000
Time:2015-08-27T15:53:42.3490985Z
Exception: Azure::Core::Http::HTTPError
Stack: c:/elk/logstash/vendor/bundle/jruby/1.9/gems/azure-0.6.4/lib/azure/core/http/http_request.rb:151:in `call'
org/jruby/RubyMethod.java:116:in `call'
c:/elk/logstash/vendor/bundle/jruby/1.9/gems/azure-0.6.4/lib/azure/core/http/signer_filter.rb:29:in `call'
c:/elk/logstash/vendor/bundle/jruby/1.9/gems/azure-0.6.4/lib/azure/core/http/http_request.rb:84:in `call'
c:/elk/logstash/vendor/bundle/jruby/1.9/gems/azure-0.6.4/lib/azure/core/service.rb:47:in `call'
c:/elk/logstash/vendor/bundle/jruby/1.9/gems/azure-0.6.4/lib/azure/core/filtered_service.rb:33:in `call'
c:/elk/logstash/vendor/bundle/jruby/1.9/gems/azure-0.6.4/lib/azure/core/signed_service.rb:39:in `call'
c:/elk/logstash/vendor/bundle/jruby/1.9/gems/azure-0.6.4/lib/azure/table/table_service.rb:252:in `query_entities'
c:/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-azurewadtable-0.9.2/lib/logstash/inputs/azurewadtable.rb:57:in `process'
c:/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-azurewadtable-0.9.2/lib/logstash/inputs/azurewadtable.rb:42:in `run'
org/jruby/RubyKernel.java:1511:in `loop'
c:/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-azurewadtable-0.9.2/lib/logstash/inputs/azurewadtable.rb:40:in `run'
c:/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb:177:in `inputworker'
c:/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb:171:in `start_input' {:level=>:error, :file=>"/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb", :line=>"182", :method=>"inputworker"}

EventSourceException when using SLAB out-of-process with 4.6 runtime

On our project we are using the SLAB out-of-process service with Azure Table storage.
EnterpriseLibrary.SemanticLogging.Service.2.0.1406.1

After updating the runtime to 4.6, every time a new log entry is written by calling WriteEventWithRelatedActivityId, an additional record appears in the Azure table with the following payload:

{
  "message": "EventSourceException"
}

This error doesn't appear if WriteEvent is called.

Tried to reproduce on another machine with 4.5 runtime installed - no exception message.

TraceEventServiceFixture test consistently failing

The test:

given_traceEventService
when_logging_multiple_events_from_a_process_process_with_sampling
then_sampled_events_are_collected_and_processed

This test is currently failing with no events being captured (EVENT_SOURCE_PACKAGE is not defined in the build).

Is this a known issue?

no events to console with minimal SemanticLogging-svc.xml

I'm trying to get any events to be printed to the console, so far no luck. I added a consoleSink with an eventSource for the CLR runtime provider.

[screenshot]

SemanticLogging-svc.xml:

<?xml version="1.0" encoding="utf-8" ?>
<configuration xmlns="http://schemas.microsoft.com/practices/2013/entlib/semanticlogging/etw"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://schemas.microsoft.com/practices/2013/entlib/semanticlogging/etw SemanticLogging-svc.xsd">

  <!-- Optional settings for fine tuning performance and Trace Event Session identification-->
  <traceEventService/>

  <!-- Sink reference definitions used by this host to listen for ETW events -->
  <sinks>
    <!-- The service identity should have security permissions to access the resource according to each event sink -->
    <flatFileSink name="svcRuntime" fileName="SemanticLogging-svc.runtime.log" >
      <sources>
        <!-- The settings below show a simple configuration sample for the built-in non-transient fault tracing -->
        <!-- Remove this eventSource if you'd like, and add your own configuration according to the documentation -->
        <!-- The name attribute is from the EventSource.Name Property -->
        <eventSource name="Microsoft-SemanticLogging" level="Warning"/>
      </sources>
      <!--[Add any built-in or custom formatter here if the sink supports text formatters]-->
      <eventTextFormatter header="----------"/>
    </flatFileSink>

    <!--[Add any built-in or custom sink definition here]-->

    <consoleSink name="svcCLR">
      <sources>
        <!--<eventSource name="Microsoft-Windows-WinHttp" level="Verbose" />-->
        <!--<eventSource id="7d44233d-3055-4b9c-ba64-0d47ca40a232" level="Verbose" />-->

        <!-- CLR -->
        <eventSource id="e13c0d23-ccbc-4e12-931b-d9cc2eee27e4" level="Verbose" />
      </sources>
    </consoleSink>

  </sinks>

</configuration>

When I start a session using Microsoft Message Analyzer with the same configuration, I get a ton of event messages.
[screenshot]

Any ideas about what I'm doing wrong? I just want any event message to be printed from any ETW provider.

Mismatched Event IDs in SemanticLoggingEventSource

WriteEvent should have an id of 1000 to match. I found this by running all of the unit tests in the source solution; the error message was rather cryptic. 13 tests failed on my machine, and 33 were skipped.

    [Event(1000, Level = EventLevel.Error, Keywords = Keywords.Sink, Message = "Parsing the manifest for provider '{0}' to handle the event with ID {1} failed. Message: {2}")]
    internal void ParsingEventSourceManifestFailed(string providerName, int eventId, string message)
    {
        if (this.IsEnabled())
        {
            this.WriteEvent(903, providerName, eventId, message); // BUG: should be 1000 to match the EventAttribute above
        }
    }
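This class of bug can be scanned for mechanically. A rough regex-based sketch in Python (for illustration only; this is not how EventSourceAnalyzer actually validates event sources) that pairs each [Event(N)] attribute with the id passed to the following WriteEvent call:

```python
import re

# A snippet of C# source containing the mismatch reported above.
src = '''
[Event(1000, Level = EventLevel.Error, Keywords = Keywords.Sink)]
internal void ParsingEventSourceManifestFailed(string providerName, int eventId, string message)
{
    this.WriteEvent(903, providerName, eventId, message);
}
'''

# Pair each [Event(N, ...)] attribute with the next WriteEvent(M, ...) call.
pattern = re.compile(r'\[Event\((\d+)[^\]]*\][^[]*?WriteEvent\((\d+)', re.S)
mismatches = [(int(a), int(b)) for a, b in pattern.findall(src) if a != b]
# mismatches → [(1000, 903)]
```

A real check belongs in a unit test (or EventSourceAnalyzer itself), but even this crude scan would have flagged the 1000/903 pair.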

Subscribing to events from Microsoft-SemanticLogging itself

I would like to have a custom sink in the standard out-of-process host which is listening to events from SemanticLoggingEventSource (i.e. itself - name = "Microsoft-SemanticLogging"). This is so I can fire off an email in case of 806/807 messages. By default the out-of-process app logs to EventLog or Console.
I can configure a sink to listen on this source name, but it doesn't get notified about these internal events.
If I start another SemanticLogging-svc.exe instance configured to listen on "Microsoft-SemanticLogging" first, it is notified about the 806/807 etc. events fired from the main SemanticLogging-svc.exe instance when they are seen in the console there. However, I would need to start the instances in the correct order, and I don't want to have two instances running anyway.

Is there any way to call a custom sink inside the standard out-of-process host when the SemanticLoggingEventSource writes events?

Have ObservableEventListener see events from Microsoft.Diagnostics.Tracing

When writing events using the Microsoft.Diagnostics.Tracing library, the ObservableEventListener does not see the events. If I switch to System.Diagnostics.Tracing, then ObservableEventListener can see them. Is there a way to get the semantic logging library to see events generated from Microsoft.Diagnostics.Tracing?

SQL Database Sink stripping the contents of Payload and FormattedMessage columns after 4000 characters

The SQL database sink is stripping the contents of the Payload and FormattedMessage columns in the database after 4000 characters.
Looking at the source code of the sink, it is evident that these column lengths have been capped at 4000 characters.

  1. Any idea why there is such a limitation on the number of characters?

In our application, when the payload and/or FormattedMessage is longer than 4000 characters, it is trimmed before being logged to the database table.

  2. Is there a workaround for this one? And can we update the source code to log more than 4000 characters?
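One possible workaround on the application side (a hypothetical helper, not part of the library) is to split an oversized payload into continuation chunks before handing it to the sink, so nothing is silently truncated:

```python
def split_payload(payload, limit=4000):
    """Split a string into chunks of at most `limit` characters (hypothetical helper)."""
    if not payload:
        return ['']
    return [payload[i:i + limit] for i in range(0, len(payload), limit)]

# A 9000-character payload becomes three rows instead of one truncated row.
chunks = split_payload('x' * 9000)  # lengths: 4000, 4000, 1000
```

Each chunk would then be logged as its own event (with a sequence number in the payload), at the cost of having to reassemble them when querying.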

Seeing unexpected characters in some records written to Azure Table Storage by the out-of-process service.

Looking to gather some ideas to explain what might be going on with my ETW tracing.

Here's a screenshot of what I see when I look at the records written by the SLAB out-of-process service to the configured "SLAB" Azure table:

[screenshot: 2015-04-27_1329]

I am also noticing that the malformed records are extraneous; they aren't the correct entries. From my screenshot it would appear that duplicate records are being written for "OnActivate_Done" and "Configure_Done", but I believe the second entries for both events are the correct ones (outlined in red in the screenshot), while the ones outlined in orange are just plain messed up.

Any ideas for what's going on here? I can provide additional information if requested.

EnterpriseLibrary.SemanticLogging.EventSourceAnalyzer for .NET 4.6

Is there a version of EnterpriseLibrary.SemanticLogging.EventSourceAnalyzer targeted at .NET 4.6, or will the existing version="2.0.1406.1" work for .NET 4.6 as well?

I am using EventSourceAnalyzer.InspectAll(MyEventSource), which works fine on the .NET 4.5.2 framework but throws the following exception when tested against .NET 4.6:

"Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Utility.EventSourceAnalyzerException: The specified EventSource does not have any method decorated with EventAttribute."

I am using the following two packages

Any help is appreciated

OutOfMemoryException when RollingFlatFileSink runs out of disk space

Hi, I just registered at GitHub and I hope I register this issue correctly.

I just started to use Semantic Logging (with the RollingFlatFile sink in async mode) and I'm quite pleased with the functionality. However, before putting it into the production environment I performed some tests on how it would behave when running out of disk space.
I ran a small test application that creates a lot of events, with the log files on a small USB memory stick.

I ended up with an OutOfMemoryException! More precisely, an EventSourceException (with an inner OutOfMemoryException).

I’m not an expert on ETW but after a quick look in the code for RollingFlatFileSink I made some reflections:

Reflection 1:
In the OnNext method, the EventEntry always gets added to the pendingEntries collection (a BlockingCollection), even if the worker thread (the WriteEntries task) has terminated for some reason. It would possibly be more robust to track whether the worker thread terminates and prevent adding entries to pendingEntries (and possibly clear it, to free memory as well).

Reflection 2:
In the worker thread (WriteEntries()) all exceptions are caught when writing to the stream:

    try
    {
        this.writer.Write(formattedEntry);
    }
    catch (Exception e)
    {
        SemanticLoggingEventSource.Log.RollingFlatFileSinkWriteFailed(e.ToString());
    }

However, when the stream is flushed (at the beginning of WriteEntries()), only OperationCanceledException and ObjectDisposedException are caught. I guess there is a major risk of the worker thread terminating if some other exception occurs.
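Reflection 1 could be prototyped like this. A conceptual Python sketch, not SLAB's code; AsyncSinkSketch is a made-up class, and the 'boom' entry simulates an unrecoverable write failure such as a full disk:

```python
import queue
import threading

class AsyncSinkSketch:
    """Buffered sink whose OnNext stops accepting entries once the worker dies."""
    def __init__(self):
        self.pending = queue.Queue(maxsize=1000)
        self.failed = False
        self.worker = threading.Thread(target=self._write_entries, daemon=True)
        self.worker.start()

    def on_next(self, entry):
        if self.failed:                  # Reflection 1: stop buffering after a fault
            return False
        try:
            self.pending.put_nowait(entry)
            return True
        except queue.Full:
            return False

    def _write_entries(self):
        try:
            while True:
                entry = self.pending.get()
                if entry == 'boom':      # simulated unrecoverable write error (disk full)
                    raise OSError('disk full')
                # a real sink would format, write and periodically flush here
        except Exception:
            self.failed = True           # record the fault instead of dying silently
```

With a guard like this, once the writer faults, OnNext rejects new entries instead of accumulating them until memory runs out.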

Help Needed 1:
Are there any patterns for managing errors and error states in Semantic Logging? In "Reflection 2" above, errors result in the generation of the RollingFlatFileSinkWriteFailed event.
And if I understood correctly, I'm supposed to call
SemanticLoggingEventSource.Log.CustomFormatterUnhandledFault(ex.Message);
in case of an error in my own TextFormatter.
I need help on the best way for my application to become aware of these errors.
In the case of a GUI application I would like to inform the user about the "disk full" error.
In the case of a server application I would like the server to send an e-mail about the "disk full" error.
However… my main concern is the OutOfMemoryException.

Best Regards
/Stefan Vestin

Logstash azuretopic input plugin performance

I'm getting fewer than 5 messages per second read with the azuretopic plugin. Have you done any performance tests with this one? Are there any settings that could affect performance here?
I'm running three ES nodes in Azure on Ubuntu machines.

Semantic Logging displays incorrect Timestamp

There are three timestamps (in red below) in each exception log message, and one of them always seems off. We are assuming the first two timestamps differ because of GMT, even though neither of them mentions a time zone.
To know which one is correct, we started writing the ExceptionTime (in green below) into the exception itself.

Here is an example of an exception written using SemanticLogging:

[screenshot]

Oracle support

Hi,
I'm in need of an Oracle sink and I was thinking of implementing it myself using the MS SQL Server sink as a starting point, unless there are official plans for adding Oracle support to SLAB. Are there any such plans? If there aren't, and I implement my own sink, would you consider an Oracle sink for a pull request?
Best regards

Service Bus Input Plugin

It would be really nice to also have a Logstash plugin for Azure Service Bus that lets you pull log messages from a queue or topic.

Error when installing

When I run New-SampleELKInstance.ps1, I get the error below.
One or more of the following commands cannot be found on ENV:PATH: ssh, scp, ssh-keygen, sed.

I have configured the PATH correctly and I am able to access all the commands.
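A quick way to check what the script's session actually sees on PATH (a diagnostic sketch; the command list mirrors the error message, and note that a console opened before a PATH change will not pick the change up):

```python
import shutil

# The commands the New-SampleELKInstance.ps1 error message complains about.
required = ['ssh', 'scp', 'ssh-keygen', 'sed']

# shutil.which resolves a command against the current process's PATH,
# the same environment a script launched from this session would inherit.
missing = [cmd for cmd in required if shutil.which(cmd) is None]
print('missing:', missing)  # an empty list means this session can see them all
```

If this reports commands as missing that work in another window, the PATH change was made after that other session started, or was made for a different user/scope.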
