A few days ago I took an impromptu snapshot of one of the Azure machines I have running Seq. Cloud storage being what it is, an I/O problem during the snapshot process crashed the Seq service, but I didn’t notice until the following day. The machine was running horrifically slowly at this point, so I restarted it and all came good.
When I logged into Seq to check some queries, I expected to see a big gap during the outage. It took me a few seconds to realize why there wasn’t one – all of the events raised during the outage period were there! – and that made me think this feature of the Seq client for Serilog might not be as well-known as I’d assumed.
When you set up Serilog to write events to Seq, you typically use code like:
```csharp
Log.Logger = new LoggerConfiguration()
    .WriteTo.Seq("http://my-seq")
    .CreateLogger();
```
This sends events to the server in batches over HTTP; network problems and hard application crashes can result in event loss.
Some time ago, another parameter became available:
```csharp
Log.Logger = new LoggerConfiguration()
    .WriteTo.Seq("http://my-seq", bufferBaseFilename: @"C:\Logs\myapp")
    .CreateLogger();
```
If a buffer location is specified, the Seq client writes events to a set of rolling files like "myapp-2014-06-27.jsnl" (newline-separated JSON) and ships them to the server whenever it is reachable. Almost nothing to configure, and voilà! Reliable log collection.
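To illustrate the format, each line in a buffer file is one complete JSON-encoded event. The entry below is a hypothetical sketch — the exact property names and layout are defined by the sink's wire format, not shown in this post:

```json
{"Timestamp":"2014-06-27T21:15:03.872+10:00","Level":"Information","MessageTemplate":"Processed {OrderId}","Properties":{"OrderId":1234}}
```

Because each event occupies a single line, a partially written final line (say, from a crash mid-write) can be detected and skipped without corrupting the rest of the file.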
There’s still no substitute for a redundant, transactional data store for truly critical events: machines can be lost, disks fill up, and so on. For general application monitoring and diagnostics, however, I’m really pleased with how much “bang for the buck” this feature provides.