About Nicholas Blumhardt

Bio:
I'm a software engineer living in Brisbane, Australia. In my spare time I take photos, surf like mad, and run the Autofac and Stateless open source projects, which I founded.
Website:
http://nblumhardt.com

Posts by Nicholas Blumhardt

serilog-generator.exe and fun with key-value settings

Some things are just fun to build, and without much concern for its general utility I thought I’d share one of them :-)

serilog-generator.exe is a handy little app for exercising Serilog sinks and log servers without needing a test harness project for each one. It’s published on NuGet as a ‘tools’ package and can be installed into a sink project like any other NuGet package:

Install-Package serilog-generator

(It’s also on Chocolatey but currently fails because of some command-line handling that Chocolatey shims seem to do.)

At a command prompt in a folder containing a Serilog sink assembly, the sink can be loaded, configured, and sample data pushed through it with a command-line like this one that exercises the Seq sink:

serilog-generator --using="Serilog.Sinks.Seq" --write-to:Seq.serverUrl="http://localhost:5341"

Unless --quiet is specified, the log will also be written to the console so you can compare it with what shows up in your sink:

[Screenshot: serilog-generator.exe console output]

You might have noticed that the command-line syntax for configuring the sink is very similar to the <appSettings> format supported by Serilog 1.5. Compare the command-line above with:

<appSettings>
  <add key="serilog:using" value="Serilog.Sinks.Seq" />
  <add key="serilog:write-to:Seq.serverUrl" value="http://localhost:5341" />
</appSettings>

The XML version uses a serilog: prefix to disambiguate from other settings often found in config files, but apart from that you can see the basic using/write-to directives (as well as the minimum-level and enrich options not shown here) work the same way.

That’s because the underlying Serilog configuration support is key-value based, not specific to XML settings files. You can read Serilog configuration from any medium that can represent key-value pairs.

Adding a new configuration provider is as easy as finding a way to load the pairs:

var pairs = new Dictionary<string, string>
{
  { "using", "Serilog.Sinks.Seq" },
  { "write-to:Seq.serverUrl", "http://localhost:5341" }
};

Log.Logger = new LoggerConfiguration()
  .ReadFrom.KeyValuePairs(pairs)
  .CreateLogger();

You can see how serilog-generator.exe uses this general support to implement ReadFrom.CommandLine() in its source. There are a couple of subtle points in there but it’s really only a few lines of code.
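
The actual parsing lives in the serilog-generator source, but the idea fits in a short extension method. Here's a rough sketch, not the real implementation (the class name and details below are illustrative): split each --key=value argument into a key and a value, then hand the pairs to ReadFrom.KeyValuePairs().

using System;
using System.Collections.Generic;
using Serilog;
using Serilog.Configuration;

// Illustrative only; not the actual serilog-generator source.
static class CommandLineSettingsExtensions
{
    public static LoggerConfiguration CommandLine(
        this LoggerSettingsConfiguration readFrom, IEnumerable<string> args)
    {
        var pairs = new Dictionary<string, string>();
        foreach (var arg in args)
        {
            // Only arguments of the form --key or --key=value are settings
            if (!arg.StartsWith("--", StringComparison.Ordinal))
                continue;

            var setting = arg.Substring(2);
            var split = setting.IndexOf('=');
            if (split < 0)
                pairs[setting] = "";                    // e.g. --write-to:ColoredConsole
            else
                pairs[setting.Substring(0, split)] =
                    setting.Substring(split + 1).Trim('"');
        }

        return readFrom.KeyValuePairs(pairs);
    }
}

With something like this in place, a console app can bootstrap its logger with new LoggerConfiguration().ReadFrom.CommandLine(args).CreateLogger().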

Reading settings from the command-line might not be particularly useful for many (most!) apps, but if you’re using a different configuration system and want to tap into Serilog’s built-in settings support it can come together quite smoothly.

A Monster Serilog Update for April 2015

Serilog is an open-source logging framework for .NET that seamlessly integrates text-based logging and fully-structured events. You can write events to regular text files, or use Serilog in combination with a structured log server for the easiest log filtering and navigation on the planet!

TL;DR we’ve been busy – Serilog 1.5 is out with new external configuration support; the project has been reorganized; and if you’re in Portland next week you can learn more about Serilog at .NET/FRINGE/.

First up: 1.5 is live!

Serilog 1.4 has been a long-lived and stable release. In fact, well over half of Serilog’s 69,304 downloads have been of the 1.4 series!

Update-Package Serilog

Well – we hope it’s been worth the wait: you can now pick up Serilog 1.5.1 hot off the NuGet press :-)

What’s new? Core support for external configuration sources.

The feature responsible for a minor-version bump is integrated “external settings” support.

Serilog was, is, and always will remain a primarily code-driven library. For a long while, however, the project offered a simple <appSettings>-based configuration option that could be used to specify the minimum logging level, enriching properties, and sink parameters in XML App.config and Web.config files.

This package, Serilog.Extras.AppSettings, proved popular and useful enough in practice that we felt the functionality warranted support in the core.

If you install the Serilog package into a .NET project with XML configuration support, you can now use the .ReadFrom.AppSettings() method to pull logger configuration directly from an XML configuration file:

var log = new LoggerConfiguration()
  .ReadFrom.AppSettings()
  .WriteTo.ColoredConsole()
  .CreateLogger();

XML configuration is very simple and relies on the <appSettings> element found in standard config files:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="serilog:minimum-level" value="Debug" />
    <add key="serilog:write-to:File.path" value="C:\ProgramData\MyApp\my-log.txt" />
    <add key="serilog:enrich:with-property:MachineName" value="%COMPUTERNAME%" />
  </appSettings>
</configuration>

But, we haven’t simply shoehorned the old “extras” package into the core. Serilog’s new configuration support is open and will be fitted to the new ASP.NET 5 configuration system when it’s finalized. There’s already a simple key-value pair setting implementation, .ReadFrom.KeyValuePairs(), that you may find useful if XML isn’t your style.

You can read more about using this feature in the documentation.

New Community Projects

Though the split has been in progress for some time, 1.5 marks the end of a big push from Serilog’s core contributors to break out many of the useful “extras” projects into their own top-level organisations and repositories.

Michiel van Oudheusden led the charge with SerilogMetrics, a Metrics.NET-inspired project that adds timer, gauge, and counter features on top of Serilog. This replaces the original Serilog.Extras.Timing package.

[Image: SerilogMetrics logo]

Destructurama! enables Serilog to accurately serialize exotic things like F# discriminated unions and JSON.NET dynamic objects. These packages replace the earlier versions that were contributed and baked in as "extras".

Finally, consistent support for logging web application requests is the goal of the SerilogWeb project. The ASP.NET and OWIN support libraries have already moved over, and I have high hopes that they'll find some interesting synergy with an ASP.NET 5 integration when the time comes.

If you’re building something that adds to Serilog, let us know – we’d love to list you on the community projects wiki page we’re maintaining for this purpose.

The Great Sink Split

We’ve bunkered down to make sure Serilog “the project” stays lean and healthy. It’s easy for open source projects that integrate with many technologies to grow and sprawl, and as it evolved Serilog hit nearly fifty interrelated projects in its core solution.

Starting from Serilog 1.5, all of the various provided sinks live in standalone GitHub repositories. Hat tip to Matthew Erbs for helping push this huge task to completion! Combined with the move to establish our new community projects, this brings the Serilog solution itself back to a pair of projects – the portable core, and additional full .NET framework support.

[Screenshot: the slimmed-down Serilog solution in Solution Explorer]

The 1.x series of releases is slowing down, but we’re making sure that we have the right project structure for Serilog to stay fresh and current well into the future.

Who contributed to this release?

We now count 57 individuals on our contributors page. This number isn’t an accurate representation any longer since new (sink) code now lives outside the Serilog repository, so there are even more wonderful folk not listed here.

Anything else?

Yes!!! If you're lucky enough to have a ticket, you can catch Daniel Plaisted speaking on Serilog at .NET/FRINGE/ next Monday, 13th April, in Portland, OR.

[Image: .NET/FRINGE/ logo]

The conference is about breaking the rules and pushing the boundaries of .NET, so I’m especially excited to hear Daniel will be presenting Serilog there.

Serilog 1.5 is now on NuGet, go get it!

C# 6 string interpolation and Serilog

It’s easy to spot the similarity between Serilog message templates:

// Serilog
var loggedInUserId = "nblumhardt";
Log.Information("Logged in {UserId}", loggedInUserId);

and C# 6 interpolated strings:

// C# 6
Log.Information($"Logged in {loggedInUserId}");

This is naturally because of their shared common ancestor, the .NET format string. Under the hood, interpolated strings in C# are converted by the compiler into an object carrying the classic format string and the argument values, logically along the lines of:

// C# 6 desugared, conceptually (the compiler actually emits a call to FormattableStringFactory.Create)
Log.Information(new FormattableString
  {
    Format = "Logged in {0}",
    Args = new object[] { loggedInUserId }
  });

The similarity has led a few of us, since seeing the new C# feature, to wonder whether a few keystrokes might be shaved off of the Serilog API if only the C# compiler would preserve the names of the “holes” in the interpolated string, i.e. loggedInUserId in this example.

There was initially mention of Serilog on the design thread for the feature, and more recently Andrey Shchekin most graciously raised a ticket on the Roslyn issue tracker as a prompt for some support in C# 7.

I’ve had some time to think about this feature now, and while the concision is appealing, I’m no longer convinced string interpolation (name preservation or not) is a good fit for Serilog.

1. A good variable name is not necessarily a good property name

Let’s take the example above; in our hypothetical C# 6 example the complete snippet would look like:

var loggedInUserId = "nblumhardt";
Log.Information($"Logged in {loggedInUserId}");

The variable name loggedInUserId obviously reflects what this value is being used for by the program. But, if that name was carried through to the log event, e.g.:

{ { "Properties": { "loggedInUserId": "nblumhardt",

This would lead us to query the log with filters like:

loggedInUserId == "nblumhardt"

Fine if we only want to find events where the user was logged in, but in the context of a larger and more complex application, we’ll often want to find as many events as possible that are related to a specific user (who might be having some technical trouble!).

In that case, a more general property name is a better fit:

Log.Information("Logged in {UserId}", loggedInUserId);

// Elsewhere
Log.Information("Password for {UserId} successfully changed", loggedInUserId);

This small change allows us to query UserId == "nblumhardt" and find both matching events.

Log event property names are important enough for diagnostics that they deserve their own identity, separate from the variable names around them.

2. Holes don’t always have obvious names

What does the property name for this statement look like?

Log.Information($"Enabling categories {new[]{1, 2, 3}}");

Here the hole is filled with an expression that’s not a simple name, so some kind of munging would be necessary in order to turn it into a valid log event property name (a JSON identifier or similar).

Going down this path would definitely reduce the overall quality of log events, and elevating log events to first-class status is what Serilog’s all about.

So what’s the relationship between Serilog and C# 6, then?

String interpolation is a great feature, one I’ve looked forward to in C# for a long time. The idea of providing direct support for Serilog is a very interesting one and worth exploring, but I’m increasingly convinced it’s unnecessary.

Interpolation is nice when it keeps code DRY, cutting out the redundant clutter of {0} and {1} and preventing parameter mismatches.

In the case of Serilog, I think it’s incorrect to consider the property names like {UserId} as redundant; in a well-implemented logging strategy they’re an incredibly important part of the picture that deserve their own consideration. You wouldn’t let variable names determine the table and column names in a relational database – it’s exactly the same trade-off being considered here.

What do you think? I’d love to be convinced there’s more synergy to be found here, but I’m not seeing it at present.

Tagging log events for effective correlation

Correlation, at least when we’re talking diagnostic logs, is the process of finding events that are related to each other in some way. It’s typical to correlate events raised during a transaction, on a specific thread, on a machine or from steps in some kind of process.

Correlation is king in the world of operating complex, distributed and asynchronous apps: it’s how the signal is picked out of the noise – the wheat from the irrelevant chaff – the diamonds that will help you solve a critical issue from the mud of day-to-day happenings.

I gave a presentation a few weeks back at DDD Brisbane on building better apps by taking an active approach to logging. The last post should give you an idea of the flavor of it. After the talk some nice folks approached me and said I’d filled in a missing link for them – it made me realize I’ve been using a little strategy for correlation without really thinking about it, and you might find this useful too.

Contextual correlation

For a long time, diagnostic logs have incorporated contextual information into events. Context tells you what was happening when an event was raised, or where the event was raised from. In its earliest form it might be the thread ID you sometimes see formatted into text logs:

2015-01-22 10:55:05.776 INFO [22] Serving catalog index
2015-01-22 10:55:05.902 WARN [8] Query took 109 ms
2015-01-22 10:55:05.902 INFO [22] Responding with OK

The thread ID in this case is included in each log message between brackets after the level. It tells us that the warning on the second line was raised on thread 8, which isn’t the same thread as was concurrently serving the product catalog (thread 22), so the source of the warning was elsewhere.

Searching a simple text log like this we could grep for /\[8\]/ to see what else was happening on thread 8 and hopefully narrow down on the source of the slow query.

Serilog does contextual correlation in a few different ways.

Enrichers can be used to attach different contextual properties to events:

var log = new LoggerConfiguration()
  .Enrich.With<ThreadIdEnricher>()
   // Other config here, then…
  .CreateLogger();

Enrichers work well when the context of an event can be determined statically – i.e. it’s something ambient like the thread id, machine name, the logged-in user, and so-on.
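
For reference, an enricher like the ThreadIdEnricher used above needs only a few lines against Serilog's ILogEventEnricher interface. This is a typical implementation sketch rather than the exact type referenced, attaching the managed thread id to every event:

using System.Threading;
using Serilog.Core;
using Serilog.Events;

class ThreadIdEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        // AddPropertyIfAbsent keeps any ThreadId attached earlier in the pipeline
        logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty(
            "ThreadId", Thread.CurrentThread.ManagedThreadId));
    }
}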

For context that’s more localized, there’s a special enricher that can be added to enable the Log Context:

  .Enrich.FromLogContext()

Once the log context is enabled, a block of related messages can be tagged using the methods on the LogContext class:

using (LogContext.PushProperty("MessageId", GetCurrentMessageId()))
{
  Log.Information("Processing {OrderId}", message.Order.Id);

  // Later
  Log.Information("Order processing complete");
}

If we look at these logs in Seq, we’ll see a MessageId property attached to each event within the using block:

[Screenshot: events in Seq, each carrying the MessageId property]

This can be used as a filter, e.g. MessageId == 12345, to view the events raised when processing the message.

Contextual correlation is bread-and-butter stuff. If your application processes work asynchronously, you need to be able to filter down to a single task, and properties tagging events from a context make this as painless as possible.

It’s worth noting that contexts overlap – an event can belong to a thread as well as to a message. (This is one of the arguments for structured logs – it’s much easier to work with events with real, named properties than it is to devise readable, unambiguous ways to embed all of these into the format of a text log message.)
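
For example, reusing the names from the snippet above (requestId here is assumed to come from the surrounding web plumbing), a handler might run inside both a request context and a message context, so its events carry both properties:

using (LogContext.PushProperty("RequestId", requestId))
using (LogContext.PushProperty("MessageId", GetCurrentMessageId()))
{
    // Tagged with RequestId and MessageId, plus any enricher-supplied properties
    Log.Information("Processing {OrderId}", message.Order.Id);
}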

Setting up and using the log context does add some noise to application code, so it’s something to be used judiciously. To tag events for correlation without additional machinery, another technique is useful.

Lateral correlation

If you think about all the contexts in your running application, you might come up with a mental picture like this:

[Diagram: contextual correlation, with event sequences tagged by RequestId and MessageId]

The green lines show sequences of events tagged with the same RequestId property, while blue shows a MessageId property that might be added to events during back-end processing. There’s some simplification here, given the way contexts overlap, but essentially each context is a temporal sequence of correlated events.

When it comes time to use these logs however, scenarios frequently cross these little islands of navigability, so something else is needed.

Let’s say we’ve received an issue report tagged with an exception message. We find the exception using the text:

[Screenshot: locating the exception event by text search]

And then follow the MessageId contextual property to see what happened in the message handler that failed.

[Screenshot: events sharing the failing handler's MessageId]

Now, we know somehow we were sent an order without any items – where did it come from? This is where a lateral correlation identifier comes in handy. The second event in the three above implements this:

Log.Information("Processing {OrderId}", message.Order.Id);

OrderId is a business-domain identifier included in the message itself that links the event across lines of causality to others related to the same order.

Once we jump across contexts using the OrderId:

[Screenshot: filtering on the OrderId property]

We can now see an event from the HTTP request that initiated the order.

[Screenshot: the event from the HTTP request that initiated the order]

(A keen-eyed reader might note here that searching on ItemCount == 0 would have narrowed things down a little faster – but identifying this opportunity still requires some navigation through the logs from the point of failure too.)

Correlating by HTTP request Id we get all of the events from the request and should (on a real event stream) be able to get to the bottom of the issue:

[Screenshot: all events from the HTTP request]

Patching this into our mental model, OrderId let us navigate laterally between the two sequences of events.

[Diagram: lateral correlation via OrderId between the two event sequences]

Logging in two dimensions

Armed with a tool like structured logging that can easily be overused and create clutter, the last thing anyone wants to do is bung every possible property onto every event in an attempt to ease diagnostics later on.

Keeping a clear distinction between the contextual and lateral correlation links makes it possible to navigate effectively through events raised by the app with a minimum of syntactic noise.

It turns out that in most situations I’ve encountered, architectural and infrastructural concerns dominate the notion of context. HTTP request id, thread id, message id – these are all mechanical concerns that get buried into plumbing and attached using enrichers or the log context. This is pretty convenient – the noisier APIs for adding contextual properties are best kept in infrastructural code when possible.

Identifiers from the business domain provide lateral navigation through logs – customer ids, order ids and the like – letting you skip between the various contexts in which an entity was referenced. Using structured logging these links can be created without much fanfare – just logging an event with the identifier in its message text is enough to make the connection.

It’s a simple perspective that helps me use structured logs more effectively, and it can be a nice foundation for guidelines shared with a team. If you use this strategy already, or another similar one, I’d love to hear more about your experiences!

Using Serilog with F# Discriminated Unions

[Image: Serilog logo]

This week GitHub user vlaci called us out on awful F# support in Serilog. In their issue report they show how a nice clean discriminated union type like “Option” is represented hideously in structured data:

type Shape =
    | Circle of Radius : double
    | Rectangle of Width : double *  Height : double
    | Arc // ...
let shape = Circle 5.
Log.Information("Drawing a {@Shape}", shape)

Prints:

2015-01-14 16:52:06 [Information] Drawing a Circle { Radius: 5, Tag: 0, IsCircle:
   True, IsRectangle: False, IsArc: False }

The verbosity gets worse for types with more values.

I took this as a chance to crank up Visual Studio and get reacquainted with F#, using the example from @vlaci as a starting point. Using Serilog.Extras.FSharp you can now enjoy your discriminated unions encoded pleasantly like:

2015-01-14 16:58:31 [Information] Drawing a Circle { Radius: 5 }

Depending on the log storage you use you’ll be able to query on the tag as well as the fields (like Radius) from the union.

You can enable this by installing the package and adding Destructure.FSharpTypes() to your logger configuration:

open Serilog

[<EntryPoint>]
let main argv =

    Log.Logger <- LoggerConfiguration()
        .Destructure.FSharpTypes()
        .WriteTo.ColoredConsole()
        .CreateLogger()

    Log.Information("Drawing a {@Shape}", Circle 5.)

    0

Thanks again to vlaci for kicking this off. I was especially keen to see this in because I’m sure someone with better F# skills than mine can add a lot to the Serilog/F# experience. If you have ideas we’d love to hear from you on the project site!

Give your instrumentation some love in 2015!

This is the year for better instrumentation.

  • On premises, applications are more integrated, distributed and asynchronous than ever
  • The move to the cloud is driving architectures with fine-grained modularity and scale-out across many remote machines

If you still treat logging as an afterthought, operating and debugging apps in 2015 is looking like a nightmare.

Thankfully, tools and practices are catching up. And, just as it was with requirements, builds, testing and deployment, many improvements are coming from an integrated approach to what was once “somebody else’s problem” or a siloed responsibility.

If your instrumentation isn’t all it could be, here are three suggestions to get you off to a new start this year.

1. Take another look at Structured Logs

First, a new generation of tooling has come online. Text logs from distributed apps are an ergonomic disaster, requiring complex, cumbersome tools to manipulate in all but the most trivial scenarios, so alternatives have been baking away for a long time and are finally hitting the mainstream.

In .NET, we’ve got SLAB from Microsoft Patterns and Practices, and more recently the open-source Serilog, that make structured logging as quick, easy and painless to implement as text logging (think log4net, NLog) but with miraculous advantages when it comes time to consume events for diagnostic purposes.

Unlike text logs, when you write a structured log all of the data associated with an event are kept in a structured format (think “JSON document”) that can be rendered out to a more traditional text message, but also sorted, searched and filtered just like documents in a NoSQL database might be.

What’s the difference between this (log4net):

log.Warn("Cache reached {0} items", cache.Count);

And this? (Serilog):

log.Warning ("Cache reached {CacheSize} items", cache.Count);

To write, very little – and when viewed at the console, not much either. But with appropriate storage, the structured version supports filtering with queries like:

CacheSize > 1000

… and not a regex in sight.

With structured logs you can use clean data types and familiar operators to quickly answer questions that would be time-consuming and awkward using plain text.

If you’re not using structured logs already, give this technique a try this year – it’ll change the way you look at logs for the better.

2. Use Logs to validate new Behavior

Automated tests are great – they verify that a chunk of code in isolation does what you expect. If you’re like me though, you spend a good amount of time smoke testing your work by hand, too – putting features through their paces the old fashioned way and trying to break them.

While you do this, logs are a great way to peer into the internals of an application. Sometimes, below the surface, all is not what it seems – that warning you added: "This code should only be called on even numbered years"? Well, how do you know it isn’t triggering right now? Watching each step in a process logged out clearly is a great way to understand your system better, and it really does improve the quality of the code you check in.

Think about it this way – unit tests validate your expectations, but logs will show you things you didn’t expect.

For this strategy to be effective, you need to have logs written to a log viewer on your development machine. Console terminals and text files are under-powered and quickly become a blur when things heat up. There are lots of great tools for this; our log server Seq is free for development use, or check some of the other options you can find with a quick web search. What tooling you use is a secondary consideration though – the important thing is to make feedback from logs a first-class part of your basic development process.
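
As a sketch of what that can look like with Serilog and Seq (this assumes the Serilog.Sinks.Seq package and a Seq instance on its default local address), a development configuration might write to both the console and the log viewer:

using Serilog;

class Program
{
    static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Debug()
            .WriteTo.ColoredConsole()             // quick glance while coding
            .WriteTo.Seq("http://localhost:5341") // searchable, filterable history
            .CreateLogger();

        Log.Debug("Development-time diagnostics are flowing");
    }
}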

And, here’s the trick: while you’re using logs in this way, you’re implicitly shaping them to be useful for diagnosing issues in production. If your logs show you clearly what’s happening during development, there’s a good chance they’ll be useful later down the track too.

3. Match Issues with relevant Logs

Here’s another great strategy I’ve seen used to brilliant effect.

When features are formally tested, pipe logs from the test environment to a log server. Whenever someone raises an issue, ask them to include a link to relevant events from the log. (This will quickly validate the quality of your error messages!)

For example, if the user is raising purchase order #12345, they can search the log from the test environment for “purchase order 12345” and should get relevant results that they can link to from an issue.

Again, here’s the trick: how will the tester always find relevant events, you ask? Well – that’s a good question! When a real customer hits an issue raising a PO, you’ll need to do the same thing. This makes test-time issues a drill for the real thing – one that validates the quality of your instrumentation and improves your readiness for dealing with issues in production.

As a bonus, you’ll soon be able to link to logs for issues in the production environment, too, which is absolutely golden if you’re the developer in the hot seat.

Conclusion

That’s it for this post. Three things you can try to get your instrumentation kicked off in the right direction this year:

  • Write structured logs
  • Use them to validate behavior during development
  • Link relevant logs from your test team’s issue reports

Once we take instrumentation back into the realm of our responsibility as developers and stop just throwing it over the wall, there are lots more interesting things to consider about writing smarter, more effective logs in the first place. I’d love to hear what works for you.

Uncorking the Seq API

The web interface of Seq connects to the server via a simple API using JSON/HTTP. (I’d call it REST, but it’s not fully IEEE 1337.00 REST-1.1 compliant ;-))

Quite a few customers integrate with the API, either for querying events or for automating administrative tasks like adding users, creating/modifying queries and views, etc.

Though it’s possible to just jump right in using JSON, it’s a pain to figure out the dynamic structure of the entities that are sent and received and hand-code C# types that serialize appropriately using JSON.NET.
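
To give a feel for the boilerplate that removes, here's roughly what the hand-rolled approach looks like; the endpoint path and the ViewStub type below are made up purely for illustration:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

// A throwaway, hand-maintained stand-in for one of the API's entity types
class ViewStub
{
    public string Id { get; set; }
    public string Title { get; set; }
}

class HandRolledExample
{
    static async Task ListViewTitlesAsync()
    {
        using (var http = new HttpClient())
        {
            // Illustrative URL only; the real resource paths come from the API itself
            var json = await http.GetStringAsync("http://my-seq/prd/api/some-resource");
            var views = JsonConvert.DeserializeObject<ViewStub[]>(json);
            foreach (var view in views)
                Console.WriteLine(view.Title);
        }
    }
}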

To fix this, we've just published the entire set of Seq API types, as well as higher-level wrappers, in the new seq-api project on GitHub, also available via NuGet:

Install-Package Seq.Api

You can use the package to connect and work with items from your Seq instance:

var connection = new SeqConnection("http://my-seq/prd");

var views = await connection.Views.ListAsync();
foreach (var view in views)
{
  Console.WriteLine(view.Title);
}

Things should be pretty self-explanatory – just dot your way through the properties on SeqConnection to find your way around.

It's a brand-new project, so expect some rough edges; for example, you may need to upgrade to Seq 1.6.7 in order for some features to work. You can already do some interesting things with it, though!

The seq-tail.exe app that’s included as a sample in the repository implements something like the Unix tail for logs from your remote Seq server.

[Screenshot: seq-tail.exe streaming log events to the console]

Syntax is:

seq-tail.exe <server> [--filter=<f>] [--apikey=<k>] [--window=<w>]

(Nerdy note – if you haven’t seen docopt.net in action before, check out how the command-line parsing in the example works.)

Now that the API is published I’m looking forward to actively improving it – it’s crying out for some Rx love, for starters. If you crank it up I’d love to hear about your experiences.

Viewing logs from C# mobile apps

I’ve recently been writing some code for Windows Phone, cross-compiled for iOS and Android using the Xamarin tools.

The app does quite a lot of background processing, but during development and testing it started to feel like a black box. A few lines of code got events posting from Serilog via HTTP to a Seq instance running on my development machine.

I liked the results so much that tonight I decided to make a quick experimental port of Serilog.Sinks.Seq to a portable class library:

Seq.Client.Portable

You can install it in your Windows Phone or Xamarin project directly from NuGet:

Install-Package Seq.Client.Portable

Configure the logger in portable code:

Log.Logger = new LoggerConfiguration()
  .WriteTo.SeqPortable("http://169.254.80.80:5341")
  .CreateLogger();

And write log messages from your mobile app straight back to Seq running on your development machine:

Log.Information("Hello, Seq!");

Single-user installs of Seq are free, so you can use this during development even if your team uses something else for production logging.

Emulator Networking

You’ll need to use the address of your development machine assigned by the emulator – don’t try connecting to Seq as localhost – this will loop back to the emulator itself. Windows Phone lets me get at my desktop via "http://169.254.80.80:5341" while Android works with "http://10.0.2.2:5341". (5341 is the default port Seq listens on.)
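
If the same portable code needs to run on more than one emulator, a small helper can choose the address; the conditional compilation symbols here are illustrative and depend on how your platform projects are set up:

static class SeqAddress
{
    // The development machine's address as seen from the current emulator
    public static string Url
    {
        get
        {
#if __ANDROID__
            return "http://10.0.2.2:5341";      // Android emulator alias for the host
#else
            return "http://169.254.80.80:5341"; // Windows Phone emulator host address
#endif
        }
    }
}

The logger configuration then becomes .WriteTo.SeqPortable(SeqAddress.Url).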

I haven’t tested this on iOS yet – it’s possible you’ll find issues there if you beat me to it.

Why ‘Experimental’?

It’s somewhat uncharted territory for me – Serilog is still working the last kinks out of iOS/Android support, and the Serilog features the sink relies on aren't all portable, because of missing APIs like System.Threading.Timer. Eventually I hope the code here can be fed back into Serilog to improve this.

Where’s the code?

Here, on GitHub. Please send a PR if you find any improvements to make. Enjoy!

Dynamically changing the Serilog level

In the Serilog project we’re frequently asked how to adjust the logging level at runtime, and until recently we haven’t had a great answer.

Many larger/distributed apps need to run at a fairly restricted level of logging, say, Information (my preference) or Warning, and only turn up the instrumentation to Debug or Verbose when a problem has been detected and the overhead of collecting a bit more data is justified.

Setting the level to its minimum (Verbose) and dynamically filtering events has been our suggestion so far, and it does save on network or disk I/O bandwidth, but the overhead of generating the events in the first place is still a significant consideration.
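
For completeness, that filtering approach looks something like the sketch below, with a shared field holding the current threshold (the names are illustrative). Every event below the threshold is still constructed and has its properties captured, which is exactly the overhead mentioned above:

using Serilog;
using Serilog.Events;

class LevelFilteringExample
{
    // Change this field at runtime to adjust what gets through the filter
    static LogEventLevel CurrentLevel = LogEventLevel.Information;

    static void Main()
    {
        var log = new LoggerConfiguration()
            .MinimumLevel.Verbose()                                    // generate everything...
            .Filter.ByIncludingOnly(evt => evt.Level >= CurrentLevel)  // ...then discard the rest
            .WriteTo.ColoredConsole()
            .CreateLogger();

        log.Debug("Suppressed while CurrentLevel is Information");
        CurrentLevel = LogEventLevel.Debug;
        log.Debug("Now visible");
    }
}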

In Serilog 1.4.11 there's a new type, LoggingLevelSwitch, that provides this feature with minimal performance overhead.

Using LoggingLevelSwitch

If an app needs dynamic level switching, the first thing to create is an instance of this type:

var levelSwitch = new LoggingLevelSwitch();

You need to keep the switch around in an accessible place.

This object defaults the current minimum level to Information, so if you want to be more restricted, set its minimum level up-front:

levelSwitch.MinimumLevel = LogEventLevel.Warning;

Now, when configuring the logger, provide the switch using MinimumLevel.ControlledBy():

var log = new LoggerConfiguration()
  .MinimumLevel.ControlledBy(levelSwitch)
  .WriteTo.ColoredConsole()
  .CreateLogger();

Now, events written to the logger will be filtered according to the switch’s MinimumLevel property.

If you need to turn the level up or down at runtime, perhaps in response to a command sent over the network, it’s as easy as:

levelSwitch.MinimumLevel = LogEventLevel.Verbose;

log.Verbose("This will now be logged");

Querying collection properties in Seq 1.5

Seq implements a filtering syntax that’s loosely based on C#. The following expression matches the events you’d expect it to, if you’re used to working with C# or a similar language:

Tag == "seq"

Which is to say: “Match events where the Tag property has the value "seq"”. Sub-properties of objects can be accessed using the familiar dotted syntax (Property.Subproperty.Subproperty), and so on.

Seq 1.5, currently a preview, steps into some new territory.

What about a collection of tags?

It’s not uncommon for Serilog events to carry properties that are collections. For example we might log events on a Q&A site like:

Log.Information("User {Username} posted a question tagged {Tags}", username, tags);

Resulting in events like:

[Screenshot: an event carrying a collection-valued Tags property]

Instead of being marked with a single tag, each event carries a collection of zero-or-more.

Matt Hamilton asked whether there's a way to match events on the contents of a collection property like this one. This is where Seq 1.4 draws a blank: translating his question to the Q&A example, about the best we can manage is a text search for "seq" combined with a check that the Tags property exists, using Has(Tags).

This is a bit of a shame – the promise of Seq and structured logging in general is to make querying log events precise – so why aren’t collections covered?

The C# way

Seq isn’t only about precise queries; the whole foundation of the app is the idea that it should also be easy and natural to query log data.

I’d thought about collection queries quite a lot when putting the original filtering syntax together. C#-style the query would look something like:

Any(Tags, tag => tag == "seq")

Seq doesn’t use extension method-style calls, but this is pretty close to the Enumerable.Any() call you’d write using Linq.

I don’t like it!

In a full-featured programming language this syntax makes sense, but it’s too noisy, complex, and easy to mistype to make a good filtering syntax for Seq. And so, that’s where things have sat for the last twelve months.

Pattern matching

With a bit more thought, Matt and I both ended up thinking the problem might be better suited to a pattern-matching syntax like:

Tags[?] == "seq"

So, this is what Seq 1.5 provides. The question mark wildcard ? matches any element in the collection, while an asterisk * wildcard only matches if all elements satisfy the condition.

[Screenshot: events matched by Tags[?] == "seq"]

Wildcards work in any comparison where one side is a property path (including indexers and dotted sub-properties). Multiple wildcards along the path are supported.

You can write the following expression to find questions where all answers are "Yes!":

Answers[*].Content == "Yes!"

(Under the hood, Seq rewrites these expressions into the equivalent lambda-style syntax – much the same way as the C# compiler transforms Linq query comprehensions. Working through implementing this reminded me just how fantastic the C# compiler really is!)
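
If it helps to build intuition in C# terms, the two wildcards map roughly onto Enumerable.Any and Enumerable.All; this is only an analogy, not Seq's actual implementation:

using System;
using System.Linq;

class WildcardAnalogy
{
    static void Main()
    {
        var tags = new[] { "logging", "seq", "serilog" };

        var anyIsSeq = tags.Any(tag => tag == "seq");   // Tags[?] == "seq"  -> true
        var allAreSeq = tags.All(tag => tag == "seq");  // Tags[*] == "seq"  -> false

        Console.WriteLine("{0} {1}", anyIsSeq, allAreSeq);
    }
}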

Try it out!

You can download a preview build of Seq 1.5 here. It’s not ready for production use yet, but should be just fine for dev/non-mission-critical servers. We’d love to hear what you think!