
nanolog's Introduction

NanoLog

  • Low Latency C++11 Logging Library.
  • It's fast. Very fast. See the latency benchmarks below.
  • NanoLog only uses standard headers, so it should work with any C++11-compliant compiler.
  • Supports typical logger features, namely multiple log levels, log file rolling and asynchronous writing to file.

Design highlights

  • Zero copying of string literals.
  • Lazy conversion of integers and doubles to ASCII; see the sketch after this list.
  • No heap memory allocation for log lines representable in less than ~256 bytes.
  • Minimal header includes; avoids the common header-only-library pattern, which helps project compile times.
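
A minimal sketch of the first two points (LogArg and make_arg are illustrative names, not NanoLog's actual internals): a string literal is captured by pointer because it has static storage duration, while numbers are stored raw and converted to ASCII later, on the consumer thread.

#include <cstddef>
#include <cstdint>

struct LogArg
{
    enum class Type : uint8_t { I64, F64, Literal };
    Type type;
    union { int64_t i; double d; const char * literal; };
};

// Capturing a string literal: store only the pointer -- no copy needed,
// since the literal lives for the lifetime of the program.
template <std::size_t N>
LogArg make_arg(const char (&lit)[N])
{
    LogArg a; a.type = LogArg::Type::Literal; a.literal = lit; return a;
}

// Capturing an integer: store the raw value; conversion to ASCII happens
// later, off the logging hot path.
inline LogArg make_arg(int64_t v)
{
    LogArg a; a.type = LogArg::Type::I64; a.i = v; return a;
}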

Guaranteed and Non Guaranteed logging

  • NanoLog supports Guaranteed logging, i.e. log messages are never dropped, even at extreme logging rates.
  • NanoLog also supports Non Guaranteed logging: log lines are held in a ring buffer, and at extreme logging rates, when the ring gets full (i.e. the consumer thread cannot pop items fast enough), the previous log line in the slot is dropped. The producer is never blocked, even when the ring buffer is full.

Usage

#include "NanoLog.hpp"

int main()
{
  // Ensure initialize is called once prior to logging.
  // This will create log files like /tmp/nanolog1.txt, /tmp/nanolog2.txt etc.
  // Log will roll to the next file after every 1MB.
  // This will initialize the guaranteed logger.
  nanolog::initialize(nanolog::GuaranteedLogger(), "/tmp/", "nanolog", 1);
  
  // Or, to use the non-guaranteed logger:
  // ring_buffer_size_mb - LogLines are pushed into an MPSC ring buffer whose
  // size is determined by this parameter. Since each LogLine is 256 bytes,
  // ring_buffer_size = ring_buffer_size_mb * 1024 * 1024 / 256.
  // In this example ring_buffer_size_mb = 3.
  // nanolog::initialize(nanolog::NonGuaranteedLogger(3), "/tmp/", "nanolog", 1);
  
  for (int i = 0; i < 5; ++i)
  {
    LOG_INFO << "Sample NanoLog: " << i;
  }
  
  // Change log level at run-time.
  nanolog::set_log_level(nanolog::LogLevel::CRIT);
  LOG_WARN << "This log line will not be logged since we are at log level CRIT";
  
  return 0;
}

Latency benchmark of Guaranteed logger

Thread Count 1 - percentile latency numbers in microseconds (lower number means better performance)

Logger 50th 75th 90th 99th 99.9th Worst Average
nanolog_guaranteed 0 1 1 4 8 68 0.347930
spdlog 3 3 3 5 11 129 2.588590
g3log 5 6 6 10 19 186 5.206230
reckless 0 0 1 1 175 1861 1.829760

Thread Count 2 - percentile latency numbers in microseconds (lower number means better performance; one row per thread)

Logger 50th 75th 90th 99th 99.9th Worst Average
nanolog_guaranteed 0 1 1 2 5 55 0.457240
nanolog_guaranteed 0 1 1 2 5 81 0.459090
spdlog 2 3 3 3 5 25 2.449580
spdlog 2 3 3 3 6 21 2.457150
g3log 4 5 6 12 18 64 4.574850
g3log 4 5 6 12 20 84 4.586590
reckless 0 1 1 11 417 1592 4.412750
reckless 0 1 1 12 417 2138 4.427810

Thread Count 3 - percentile latency numbers in microseconds (lower number means better performance; one row per thread)

Logger 50th 75th 90th 99th 99.9th Worst Average
nanolog_guaranteed 0 1 1 3 6 91 0.450700
nanolog_guaranteed 0 1 2 3 7 90 0.676050
nanolog_guaranteed 0 1 2 3 7 262 0.680430
spdlog 2 2 2 4 6 6729 1.803570
spdlog 3 3 3 5 8 25 2.679420
spdlog 3 3 3 5 10 50 2.685230
g3log 4 4 6 17 27 53 4.385530
g3log 4 4 6 16 26 55 4.435680
g3log 6 7 8 19 29 1031 5.896250
reckless 1 1 1 298 1643 3070 11.208420
reckless 1 1 1 382 2266 3006 12.310360
reckless 1 1 1 167 2839 3249 12.754520

Thread Count 4 - percentile latency numbers in microseconds (lower number means better performance; one row per thread)

Logger 50th 75th 90th 99th 99.9th Worst Average
nanolog_guaranteed 0 1 2 3 6 53 0.582140
nanolog_guaranteed 0 1 2 3 7 70 0.608980
nanolog_guaranteed 0 1 2 3 7 62 0.803630
nanolog_guaranteed 0 1 2 3 7 61 0.797270
spdlog 2 2 2 3 5 40 1.767930
spdlog 2 2 2 3 6 21 1.768640
spdlog 3 3 3 4 8 24 2.676170
spdlog 3 3 3 5 10 31 2.698580
g3log 4 4 5 17 30 7766 4.620760
g3log 6 7 9 21 35 8478 6.368940
g3log 6 7 8 22 32 1327 7.023880
g3log 7 8 9 23 36 8470 7.831750
reckless 1 1 1 506 3477 9224 18.959310
reckless 1 1 1 479 3636 8471 19.181160
reckless 1 1 1 530 2990 11658 19.245110
reckless 1 1 1 436 3641 8626 19.342780

Latency benchmark of Non guaranteed logger

  • Take a look at non_guaranteed_nanolog_benchmark.cpp for the code used to generate the latency numbers.
  • Benchmark was compiled with g++ 4.8.4, running Linux Mint 17 on an Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz.
Thread count: 1
	Average NanoLog Latency = 131 nanoseconds
Thread count: 2
	Average NanoLog Latency = 182 nanoseconds
	Average NanoLog Latency = 272 nanoseconds
Thread count: 3
	Average NanoLog Latency = 216 nanoseconds
	Average NanoLog Latency = 209 nanoseconds
	Average NanoLog Latency = 315 nanoseconds
Thread count: 4
	Average NanoLog Latency = 229 nanoseconds
	Average NanoLog Latency = 221 nanoseconds
	Average NanoLog Latency = 233 nanoseconds
	Average NanoLog Latency = 332 nanoseconds
Thread count: 5
	Average NanoLog Latency = 247 nanoseconds
	Average NanoLog Latency = 240 nanoseconds
	Average NanoLog Latency = 320 nanoseconds
	Average NanoLog Latency = 345 nanoseconds
	Average NanoLog Latency = 383 nanoseconds

Crash handling

  • g3log has support for crash handling. I do not see the point in re-inventing the wheel. Have a look at what's done there, and if it works for you, give Kjell credit and use his crash-handling code.

Tips to make it faster!

  • NanoLog uses standard-library chrono timestamps. Your platform/OS may have non-standard but faster timestamp sources; use them! A sketch follows below.
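
For example, on Linux, clock_gettime with CLOCK_REALTIME_COARSE is usually much cheaper than a precise clock, at the cost of timer-tick resolution. A sketch, assuming microsecond timestamps as used elsewhere in NanoLog (timestamp_now_coarse is an illustrative name):

#include <cstdint>
#include <time.h>

// Linux-specific: CLOCK_REALTIME_COARSE trades resolution (typically the
// timer tick, ~1-4 ms) for a much cheaper call than the precise clock.
uint64_t timestamp_now_coarse()
{
    timespec ts;
    clock_gettime(CLOCK_REALTIME_COARSE, &ts);
    return static_cast<uint64_t>(ts.tv_sec) * 1000000
         + static_cast<uint64_t>(ts.tv_nsec) / 1000;
}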

nanolog's People

Contributors

derekxgl, hellium666, iyengar111, stephanusvictor


nanolog's Issues

Support for dll builds in Win

Is this something that would interest you?
If yes, I can make a pull request, or I can paste the code here, as it is only a minor change in the header file.

Please also show worst-case latencies

Sweet library! Nice work

I found the benchmark lacking in a couple of aspects:

  1. It doesn't show how the logger performs under congestion, i.e. when the buffer gets full.
  2. Could you please also add worst-case latencies and not only averages?
    For time-critical operations, one cares less about the average than about how the slowest log call behaves (especially when the ring buffer gets full).

Thanks
Kjell

VS 2015 - ATOMIC_FLAG_INIT, ...

Hi,
I am not sure if this is the place or method, but I would like to report some issues.

I am trying out this lib in Visual Studio 2015 (v140) and had to make three modifications to get it to compile and run.

  1. Initialisation of the atomic_flag (2 locations).
    I changed "flag(ATOMIC_FLAG_INIT)" to "flag{ ATOMIC_FLAG_INIT }".
    ATOMIC_FLAG_INIT is defined as {0}, so flag(ATOMIC_FLAG_INIT) tries to invoke the copy constructor of the flag.
  2. Initialise m_current_read_buffer to nullptr in class QueueBuffer.
  3. Add a missing #include in the .cpp for format_timestamp() to work.

regards
Stephanus

nanolog probably has a problem with char arrays

Hi,

I have to use an old library which returns information as char arrays, for example a char[9] to represent the time. All char arrays have a trailing '\0', and the array space may not be fully used; for example, sometimes it returns a char[9] filled with nine '\0's. However, when I log the char array using LOG_INFO << time;, I get garbage in the log, especially in debug builds. I temporarily worked around it with LOG_INFO << std::string(time);, which gives the correct result in both debug and release builds, but slows down the logger.
Could you suggest what could be wrong?

Thanks,

Please publish bench results with more threads (e.g. 30) - spdlog is much faster than nanolog in those cases

For example, I tried with 30 threads:

spdlog:

Thread count: 30
spdlog percentile latency numbers in microseconds (one row per thread)
50th| 75th| 90th| 99th| 99.9th| Worst| Average|
1| 1| 2| 2| 2008| 5363| 7.327570|
1| 1| 2| 3| 2034| 4904| 7.465930|
1| 1| 2| 3| 2121| 4601| 7.421450|
1| 1| 2| 3| 2104| 6542| 7.751890|
1| 1| 2| 3| 2182| 6650| 7.929030|
1| 1| 2| 3| 2069| 5047| 7.685910|
1| 1| 2| 3| 2162| 6604| 7.818460|
1| 1| 2| 2| 2136| 6884| 8.048350|
1| 1| 2| 3| 2122| 5321| 8.257710|
1| 1| 2| 3| 2049| 5449| 7.846350|
1| 1| 2| 3| 2197| 6425| 8.230680|
1| 1| 2| 3| 2153| 6359| 7.982550|
1| 1| 2| 3| 2157| 5757| 8.220220|
1| 1| 2| 3| 2095| 6169| 7.888390|
1| 1| 2| 3| 2186| 7179| 8.334880|
1| 1| 2| 3| 2069| 4141| 8.039100|
1| 1| 2| 3| 2147| 5234| 8.304780|
1| 1| 2| 3| 2095| 5316| 8.281490|
1| 1| 2| 3| 2171| 5917| 8.673500|
1| 1| 2| 3| 2148| 5867| 8.482180|
1| 1| 2| 3| 2166| 5445| 8.655320|
1| 1| 2| 4| 2134| 5746| 8.466010|
1| 1| 2| 3| 2094| 5471| 8.202370|
1| 1| 2| 3| 2061| 5310| 7.812070|
1| 1| 2| 3| 2146| 5571| 8.565020|
1| 1| 2| 10| 2032| 4975| 8.168500|
1| 1| 2| 3| 2162| 5578| 8.285580|
1| 1| 2| 3| 2069| 5720| 8.202470|
1| 1| 2| 3| 2022| 5511| 8.259040|
1| 1| 2| 3| 2011| 4762| 8.241690|

Nanolog:

Thread count: 30
nanolog_guaranteed percentile latency numbers in microseconds (one row per thread)
50th| 75th| 90th| 99th| 99.9th| Worst| Average|
1| 1| 2| 3| 4310| 178761| 50.629680|
1| 1| 2| 3| 43| 264004| 52.347730|
1| 1| 2| 3| 8004| 311883| 52.771020|
1| 1| 2| 3| 12005| 147729| 54.932310|
1| 1| 2| 3| 14102| 164018| 57.345880|
1| 1| 2| 3| 7267| 192764| 57.332810|
1| 1| 2| 3| 12005| 163999| 57.516070|
1| 1| 2| 3| 3960| 198338| 59.072780|
1| 1| 2| 3| 12006| 180000| 58.336400|
1| 1| 2| 3| 17070| 208002| 58.790100|
1| 1| 2| 3| 12005| 267919| 58.528500|
1| 1| 2| 3| 15592| 196010| 58.751410|
1| 1| 2| 3| 16004| 192001| 57.880670|
1| 1| 2| 3| 23810| 172006| 59.647280|
1| 1| 2| 3| 16015| 200010| 59.560980|
1| 1| 2| 3| 16003| 235997| 59.704740|
1| 1| 2| 3| 18501| 179637| 60.219780|
1| 1| 2| 3| 20006| 168489| 60.772730|
1| 1| 2| 3| 12004| 224008| 60.729360|
1| 1| 2| 3| 15729| 134244| 60.417340|
1| 1| 2| 3| 16016| 219633| 60.976690|
1| 1| 2| 3| 19583| 212665| 59.847950|
1| 1| 2| 3| 15125| 240006| 61.047630|
1| 1| 2| 3| 17420| 167854| 61.279070|
1| 1| 2| 3| 14772| 188001| 60.885090|
1| 1| 2| 3| 18502| 171997| 61.185230|
1| 1| 2| 3| 19992| 204001| 60.691190|
1| 1| 2| 3| 11461| 231877| 60.039350|
1| 1| 2| 3| 12007| 160004| 60.264890|
1| 1| 2| 3| 15697| 212002| 60.925000|

gcc 4.8 build

I'm trying to build NanoLog using gcc 4.8 on CentOS 7.3 64-bit and it fails with two errors:

NanoLog.cpp:357:86: error: size of array ‘padding’ is too large
char padding[256 - sizeof(std::atomic_flag) - sizeof(char) - sizeof(NanoLogLine)];
^
NanoLog.cpp: In constructor ‘nanolog::RingBuffer::RingBuffer(size_t)’:
NanoLog.cpp:371:6: error: static assertion failed: Unexpected size != 256
static_assert(sizeof(Item) == 256, "Unexpected size != 256");

sizeof(NanoLogLine) is 256 here, so the padding expression 256 - sizeof(std::atomic_flag) - sizeof(char) - sizeof(NanoLogLine) goes negative and wraps around to a huge unsigned value; that seems to be the issue. Do I need a newer gcc to build this?

Fix warning messages during compilation

Hi, I have a few warning messages during compilation (clang++). It would be nice to fix them.


NanoLog/NanoLog.cpp:57:34: warning: format specifies type 'unsigned long long' but the argument has type 'unsigned long' [-Wformat]
sprintf(microseconds, "%06llu", timestamp % 1000000);
                       ~~~~~~   ^~~~~~~~~~~~~~~~~~~
                       %06lu
NanoLog/NanoLog.cpp:349:11: warning: braces around scalar initializer [-Wbraced-scalar-init]
: flag{ATOMIC_FLAG_INIT}
       ^~~~~~~~~~~~~~~~
.../include/c++/6.3.0/bits/atomic_base.h:157:26: note: expanded from macro 'ATOMIC_FLAG_INIT'
#define ATOMIC_FLAG_INIT { 0 }
                         ^~~~~
NanoLog/NanoLog.cpp:484:15: warning: braces around scalar initializer [-Wbraced-scalar-init]
, m_flag{ATOMIC_FLAG_INIT}
         ^~~~~~~~~~~~~~~~
.../include/c++/6.3.0/bits/atomic_base.h:157:26: note: expanded from macro 'ATOMIC_FLAG_INIT'
#define ATOMIC_FLAG_INIT { 0 }
                         ^~~~~

Remove full path from __FILE__ and only keep filename

I feel it would be preferable and cleaner to remove the full path of a file from the log.
Depending on how deep the directory tree is, the path can take a lot of space and clutter the log.

Assuming that you have seen g3log, this is also done there.

   std::string splitFileName(const std::string& str) {
      size_t found;
      found = str.find_last_of("(/\\");
      return str.substr(found + 1);
   }
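
A possible refinement (illustrative, not part of NanoLog): the split can also be done at compile time, so the logging hot path pays nothing. C++11 constexpr allows only a single return statement, hence the recursion.

constexpr const char * past_last_slash(const char * s, const char * last)
{
    return *s == '\0' ? last
         : (*s == '/' || *s == '\\') ? past_last_slash(s + 1, s + 1)
         : past_last_slash(s + 1, last);
}

constexpr const char * file_name(const char * path)
{
    // e.g. file_name(__FILE__) yields "NanoLog.cpp" instead of the full path
    return past_last_slash(path, path);
}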

Scalability

How does this library scale? One example is multi-process logging (possibly multiple processes writing to the same file).

Thanks.

Console sink?

(I have swapped from Boost.Log, to spdlog, to g3log, and now I'm using NanoLog.)

I like the simple interface of NanoLog, but I have lots of code that expects INFO-and-above logs to end up on the console. How do I do this in NanoLog?

Integration with a crash handling library

NanoLog does not contain crash handling code. I think this is the right decision, as that is the task of a crash-handling library. But it would be nice if NanoLog had a function that could be called when a crash happens, to be sure that all log messages have been written. Something like:

CrashHandling::setCrashCallback([] {
    nanolog::flush();
});

a better ring buffer

First of all, I think the ring buffer should be fixed-size at compile time, so the modulus operation is trivial.

I'm thinking about a better ring buffer that can support multiple producers and multiple consumers without using a spinlock. It's similar to https://en.wikipedia.org/wiki/Seqlock. Say a ring buffer of size 1024 * 1024. It has a sequence-number member variable (default 1), and each item in the ring buffer has an atomic sequence-number variable as well (instead of your atomic_flag). When the ring buffer is created, all items have the default seq no 0. When a producer wants to add an item, ringbuffer::seqNo.fetch_add(1, ...) returns the seq no of the item to be written by this producer; the index of the item (call it a bucket) is seq no % SIZE. Even if all producers want to write items at the same time, each gets a unique bucket to work on, without races. So far this is similar to what you are doing now. I think it's fair to assume copying the data into the bucket is very fast, so it won't happen that one producer logs so fast that it competes for the same bucket with another producer; basically, it should be safe to assume that one or a few producers cannot log their way around the whole ring while another producer is still working on a particular bucket. Before the producer writes data to the bucket, it sets the bucket's seq no to seq no - 1, indicating the data might be dirty; once it finishes writing, it updates the bucket's seq no to seq no.

Now back to the consumer (or consumers). The consumer knows that the first item has seq no 1, so it reads the atomic seq no of that bucket (index 1 of the ring buffer). If the atomic var holds a value less than the seq no the consumer is expecting, the data is not ready, so the consumer keeps re-reading it. If the var is greater than the expected seq no, the consumer is too slow. Assume the consumer is fast and the atomic var is 1: the consumer copies the data, then reads the bucket's seq no again to make sure the data was not corrupted (it's possible a producer was updating the same bucket while the consumer was reading, which also means the consumer is too slow). Suppose all is good; then the consumer moves on to seq no 2, 3, ... . The consumer never updates anything in the ring buffer, so multiple consumers are supported (assuming they all read the same data).

What do you think? It's similar to what I have done in the past (single producer though), but I think this should work, and it's not very complicated. A sketch follows below. I'm happy to write the code, but I think you probably have more free time than me.
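
Below is a minimal sketch of the scheme described above (SeqRingBuffer and Slot are illustrative names, not NanoLog code). Note that the consumer's copy deliberately races with a producer that laps it; strict C++ treats that as undefined behaviour, so a real implementation would need more care there.

#include <atomic>
#include <cstdint>

// SIZE is fixed at compile time so the modulus reduces to a cheap mask.
template <typename T, uint64_t SIZE>
struct SeqRingBuffer
{
    static_assert((SIZE & (SIZE - 1)) == 0, "SIZE must be a power of two");

    struct Slot
    {
        std::atomic<uint64_t> seq; // 0 = never written
        T data;
    };

    SeqRingBuffer() : m_next_seq(1)
    {
        for (Slot & slot : m_slots)
            slot.seq.store(0, std::memory_order_relaxed);
    }

    // Producer: claim a unique seq no, mark the bucket dirty (seq - 1),
    // copy the payload, then publish by storing the final seq no.
    void push(T const & item)
    {
        uint64_t seq = m_next_seq.fetch_add(1, std::memory_order_relaxed);
        Slot & slot = m_slots[seq & (SIZE - 1)];
        slot.seq.store(seq - 1, std::memory_order_release); // might be dirty
        slot.data = item;
        slot.seq.store(seq, std::memory_order_release);     // publish
    }

    // Consumer: each consumer tracks its own expected_seq, starting at 1 and
    // incrementing after every successful pop. Re-reading seq after the copy
    // detects a producer that lapped this consumer.
    bool try_pop(T & out, uint64_t expected_seq)
    {
        Slot & slot = m_slots[expected_seq & (SIZE - 1)];
        if (slot.seq.load(std::memory_order_acquire) != expected_seq)
            return false; // not written yet, or already overwritten
        out = slot.data;  // races with a lapping producer by design
        return slot.seq.load(std::memory_order_acquire) == expected_seq;
    }

private:
    std::atomic<uint64_t> m_next_seq;
    Slot m_slots[SIZE];
};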

Unnecessary/unintentional flushing of the log

It seems like you're using std::endl instead of '\n' here, which includes an implicit flush, and then doing another flush a couple of lines below for critical log entries. I'm guessing the use of std::endl was unintentional; otherwise the flush for critical entries is entirely pointless, because every single line is already being flushed. A sketch of the intended behaviour follows below.
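
For reference, a sketch of the suggested fix (write_line is an illustrative name, not NanoLog's actual code):

#include <fstream>
#include <string>

// '\n' stays in the stream buffer; std::endl writes '\n' and also flushes.
// Reserving the explicit flush for critical lines keeps both behaviours.
void write_line(std::ofstream & os, std::string const & line, bool critical)
{
    os << line << '\n'; // buffered write, no implicit flush
    if (critical)
        os.flush();     // flush only when it matters
}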

some type mismatches

Hi,

I see the following type-mismatch warnings when compiling NanoLog:

NanoLog.cpp(232): warning C4267: '=': conversion from 'size_t' to 'uint32_t', possible loss of data
NanoLog.cpp(239): warning C4267: '=': conversion from 'size_t' to 'uint32_t', possible loss of data
NanoLog.cpp(268): warning C4267: '+=': conversion from 'size_t' to 'uint32_t', possible loss of data
NanoLog.cpp(571): warning C4244: '+=': conversion from 'std::streamoff' to 'uint32_t', possible loss of data

According to the code, they may cause unexpected behaviour when a uint32_t variable overflows, so to be safe I cannot simply ignore them.

Thanks,

Add a setting to append to the existing log file

Hi, it would be nice to have an "append" setting in nanolog::initialize.
Currently the file is simply truncated on initialization. The problem is that when one runs under a supervisor and the application crashes, the supervisor restarts it and all the previous logs are lost.
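
A sketch of the requested option, assuming the writer opens a std::ofstream (open_log_file is an illustrative name, not NanoLog's actual code):

#include <fstream>
#include <string>

void open_log_file(std::ofstream & os, std::string const & name, bool append)
{
    std::ofstream::openmode mode = std::ofstream::out
        | (append ? std::ofstream::app     // keep previous contents
                  : std::ofstream::trunc); // current behaviour
    os.open(name, mode);
}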

benchmark

Hi,

Something I thought of: the higher the percentile, the worse the latency, right? (Unless my skim-reading was bad.)

I think it would be even clearer if you explained that in your benchmark result tables. Also, maybe add the numbers from the 0-50 percentile range as one percentile "bucket" to highlight the kick-ass NanoLog even more :)

Try out linked list based data structure

struct LinkedList : public BufferBase
{
    LinkedList() : m_head(nullptr)
    {
    }

    struct Item
    {
        Item(NanoLogLine && logline_) : next(nullptr), logline(std::move(logline_))
        {}
        std::atomic < Item * > next;
        NanoLogLine logline;
    };

    void push(NanoLogLine && logline) override
    {
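        // Treiber-stack push: link the new item in front of the current
        // head, retrying the CAS if another producer races us.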
        Item * item = new Item(std::move(logline));
        Item * head = m_head.load(std::memory_order_relaxed);
        do
        {
            item->next = head;
        } while(!m_head.compare_exchange_weak(head, item, std::memory_order_release, std::memory_order_relaxed));
    }

    bool try_pop(NanoLogLine & logline) override
    {
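        // Hand out lines from the already-claimed batch first; otherwise
        // claim the whole stack and reverse it into FIFO order below.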
        if (!m_read_buffer.empty())
        {
            logline = std::move(m_read_buffer.front());
            m_read_buffer.pop_front();
            return true;
        }
        Item * head = get_current_head();
        while (head != nullptr)
        {
            Item * next = head->next.load(std::memory_order_acquire);
            m_read_buffer.push_front(std::move(head->logline));
            delete head;
            head = next;
        }
        if (m_read_buffer.empty())
            return false;
        else
            return try_pop(logline);
    }

    Item * get_current_head()
    {
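        // Atomically take ownership of the entire list by swapping in nullptr.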
        Item * current_head = m_head.load(std::memory_order_acquire);
        while (!m_head.compare_exchange_weak(current_head, nullptr, std::memory_order_release, std::memory_order_relaxed));
        return current_head;
    }


private:
    std::atomic < Item * > m_head;
    std::deque < NanoLogLine > m_read_buffer;
};
