Serilog 2.0 Release Plans

.NET Core and ASP.NET Core are expected to release-to-web by the end of June 2016. There have been a few related announcements recently surrounding the split of the runtime release (end of June) from the tooling release and the return to MSBuild (tied to a later Visual Studio release).

This post is just a quick note to clarify what’s happening with Serilog.

What’s in 2.0?

As many of you noticed, we made an early leap into the DNX tooling and related libraries. Version 1.5 of Serilog, the most recent master release, has been sitting stable for quite a while now, while our long-lived dev branch has been tracking .NET Core for the last year, collecting changes and improvements in readiness for the new .NET.

Here’s what we’ve been focusing on in Serilog 2.0.

  • Packaging — the core Serilog package is now a lightweight, low-dependency package with core types like Log, ILogger and LoggerConfiguration; sinks, enrichers, and facilities like <appSettings> support have moved out.
  • Portability — much more of Serilog now works across the board; for example, we can support LogContext on all target platforms; there’s no more Serilog.FullNetFx.dll.
  • Usability — a few rough spots in the API, especially concerning logger shut-down & disposal, have been cleaned up.
  • Performance — for example Serilog.Sinks.PeriodicBatching now accepts and enqueues writes without lock contention.
  • Integration — Serilog.Extensions.Logging is a high-fidelity logging provider for ASP.NET Core that gets structured log data from your apps and the platform out to a huge variety of mature and supported sinks of all kinds – databases, log servers, the cloud, rolling files, …

There are plenty more goodies in there. (One of my favourites is this PR which landed support for fixed-length level names. In 2.0 you can use {Level,3:u} in output templates to get output like DBG, INF, WRN and so-on.) When we flick the switch on the final version we’ll publish full release notes.
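To illustrate the fixed-length levels, a console logger configured along these lines (a sketch using the Literate Console sink) prints the three-character level names:

Log.Logger = new LoggerConfiguration()
    .WriteTo.LiterateConsole(
        outputTemplate: "[{Timestamp:HH:mm:ss} {Level,3:u}] {Message}{NewLine}{Exception}")
    .CreateLogger();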

What do the recent announcements mean for Serilog?

By far the biggest change has been the packaging work. Serilog used to include its own file, console and trace sinks, for example. These now live in their own repositories and packages: Serilog.Sinks.RollingFile, for example, lives here on GitHub. This doesn’t just reduce dependency bloat – it’s also an important milestone in scaling out the Serilog project to support growth and community contributions (like this excellent alternative rolling file package, which can now stand on an equal footing with the original).

Re-packaging Serilog has meant a lot of grappling with the in-progress tooling. DNX + DNU (soon dotnet) and project.json have brought game-changing improvements for us – especially the package-building capabilities that mean a single build can compile (and then test!) for multiple target platforms. The earlier build system we used to emulate this in Serilog 1.5 was miserable. I’m sad to see project.json go, but happy that the major benefits are being carried through to the final version (even if we’re stuck with XML). I’m not thrilled about going through another round of churn, but I trust the .NET and ASP.NET engineers in their commitment to deliver a fully-baked, high-quality tooling release, so however we have to get there – we’ll get there!

Roadmap to 2.0

Anyway, I digress 🙂. Here’s how things will go for Serilog:

  1. We’re going all-in on RC2. It’s going to be a close, if not identical, reflection of the RTM runtime, so once it drops, the Serilog core and as many sinks as possible will be ported. This also means nailing down the target framework monikers that the packages support, and paring back our declared dependencies to the bare minimum. The Serilog package will move to 2.0.0-rc-* versioning to reflect this.
  2. Serilog 2.0 will RTW as soon as .NET Core does, in June. The Serilog package will drop first (ASAP) followed by the supported sinks in roughly the order of usage.
  3. We’ll continue building and shipping Serilog with RC2 and later previews of the project tooling until the Visual Studio release lets us move forwards. From RC2, we’ll treat the tooling as if it’s RTM, and make sure Serilog and its sink/enricher/configuration packages work flawlessly with it, as well as with the released Visual Studio 2013 and 2015 tools.

The Serilog contributors and community have put in a lot of work to get us to where we are today. Serilog’s the most mature and full-featured logging solution you’ll find on .NET Core, thanks to a huge effort tracking the platform changes to date. When .NET Core hits RTM, Serilog is going to be there, as we’ve always planned! I can’t wait for June.

When should I upgrade?

RC2. We’ve treated RC1 as a “best-effort” target so far, and supporting Serilog 2.0.0-beta-* has focused mostly on unblocking early adopters of the new runtime. From RC2 I’ll personally consider .NET Core a supported platform and pursue any Serilog issues on it as if they’re in an RTM. I think the promise of a .NET Core/ASP.NET Core RTW shortly afterwards will mean a lot of Serilog users and contributors will be moving forwards, so Serilog 2.0.0-rc-* is going with us.

How can I help?

If your favorite Serilog sink isn’t ported to .NET Core yet, or isn’t ready at RTW, please consider how you can chip some effort in – most porting work is a straightforward project file change and recompile. There are a lot of people in the Serilog community ready to help out; the best way to get started is to jump onto Gitter and mention you’re interested in sending a PR.

Which logging abstraction should I use?

If you’re writing .NET apps or libraries for your own use, well, you can choose and use whatever logging facility you like.

If you want to write packages for others to consume, your best bet is to avoid API designs that call for logging at all. Strong invariants, exceptions and unsurprising behavior will inform users a lot more than log messages, which they may not always collect or see.

Some kinds of libraries, especially framework-style ones that call user code rather than the other way around, can’t make this simplification. If you’re writing the next NancyFx, Topshelf, or MassTransit, you’ll face the problem of how to write log events for developers to consume.

Applications get to choose the logging subsystem, and your library has to play well with that.

In the past, we’ve dealt with this through abstractions like Common.Logging. The abstraction defines an interface, and a framework-specific adapter implements this in terms of whatever logging library is in use.

While logging abstractions have been a great asset to the ecosystem, they suffer some serious downsides.

First, abstractions target the lowest common denominator. Why do you choose one logging library over another? Often it’s because you value some power that its API affords. Abstractions usually have to wipe away these advantages in order to fit a variety of implementations.

Second, abstractions bring dependencies. A single-assembly or single-package library that supports a shared abstraction becomes four or more dependencies in actual usage: the library itself, the abstraction library, the binding between the abstraction and the logging library, and the logging library itself. All of these need to be maintained and versioned.

Third, abstractions proliferate. Because of the first problem, someone’s always looking to create a better abstraction, and because of the second problem, this leaves applications peppered with implementations of multiple similar abstractions.

Given this unsatisfactory situation, most popular libraries end up defining an interface that they expose for the user to plug logging in. The library can define the abstraction at the level it considers “ideal”, and since the abstraction is part of the library at least some dependency proliferation is avoided.

Still, this isn’t a great place for applications to be. Libraries get to stay relatively “clean”, but now since every library potentially defines its own logging interface, the job of binding all of these and wiring the result up is a chore. Making matters worse, IntelliSense for `ILog…` in Visual Studio now presents a dozen suggested types to choose from. Every time. Ick.

Enter LibLog.

LibLog is not Serilog, so why am I writing about it? Well, it’s a bit cynical of you to ask 😉 but since you did: LibLog is different. Serilog is for apps. LibLog is built specifically for libraries, to address the pain points I mentioned above. I’m an admirer because we’ve lived with this problem for so long, I’d given up on an elegant solution ever emerging.

When you install the LibLog package, a source file is embedded directly into your project. There’s no third-party assembly or runtime dependency at all. The source file defines an ILog that is internal to your project: there’s nothing polluting your public API surface. The source file also includes some reflection-based code to sniff out which concrete logging library is actually in use, and some Expression-based code generation to efficiently connect to it.

All of this happens automatically, behind the scenes, without a user of your library having to know or care how things are wired up. It’s like magic! It’s the future.

LibLog is also smart: you can use message templates with it to do structured logging à la Serilog, Microsoft.Framework.Logging, Logary 4, and maybe even a future version of NLog.

using ClassLibrary1.Logging;

namespace ClassLibrary1
{
    public class Class1
    {
        public static void SayHello(int number)
        {
            var log = LogProvider.For<Class1>();
            log.InfoFormat("Hello, the number is {Number}", number);
        }
    }
}

Used in an application with Serilog, this event has the Number property and its value attached. If the underlying logger doesn’t support message templates, the message is translated into a regular .NET format string and passed through with full fidelity.

If you want something more from your logging interface – say, you want to be able to pass an integer event type with every message to cater to some new framework – it’s just a matter of tweaking the code file that’s already part of your app. Admittedly there’s got to be a common denominator, but you get to choose what it is, and it doesn’t have to be the lowest.

What are the downsides of using LibLog? Your library gains a few kilobytes of adapter code internally that need to be carried around. The weight of the code is comparable to what an external abstraction would cost, so it’s zero-sum in reality.

Why is LibLog the future? From where we stand right now, amidst a mature ecosystem of .NET software, no one – not Microsoft, nor you, nor me – can define the one logging abstraction to rule them all. We can’t put the genie back into the bottle! We have so much code between us in libraries and applications that all of the logging libraries we have, the abstractions, adapters and other gunk, will live on practically forever. Any new abstraction, no matter how clean, will just add one more to the long list we already drag along with us.

So, how about letting your library’s users off the treadmill? LibLog for libraries, Serilog (if you like it) for apps. Couldn’t have said it better myself :-).

To get started with LibLog check out the wiki or this brief introduction from Christos Matskas.

Serilog Timings

TL;DR: I’ve just kicked off development of Serilog Timings, a tiny package that adds streamlined operation timings on top of Serilog.

Though there are a million ways to collect timing data from applications, it’s often very useful to include it in regular logs:

  • Logs are visible all the time, even in development, while there isn’t always a metrics server handy
  • Logs make it easy to trace individual timings back to the conditions and processes that caused them
  • Structured log events already carry the contextual properties around which the timings can be sliced and diced

It will come as no surprise that there are already a few options for recording timing information through Serilog. At its most basic, System.Diagnostics.Stopwatch and a handwritten log event can suffice. I’ve used SerilogMetrics, which includes timings along with counters and gauges, and I’ve also written several hand-rolled implementations for various apps.
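The hand-rolled version looks something like this (a sketch – SubmitPayment and claim are stand-ins for your own code):

var sw = System.Diagnostics.Stopwatch.StartNew();
SubmitPayment(claim); // the operation being timed
sw.Stop();
Log.Information("Submitted payment for {ClaimId} in {Elapsed:0.0} ms",
    claim.Id, sw.Elapsed.TotalMilliseconds);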

As I’ve done this, the picture of what I want from operation timings has become a lot clearer:

Limited noise – when timings are recorded, it’s easy to end up with a crowd of log events: one declaring the operation started and assigning it an identifier, one describing the operation with a bunch of app-specific data, and finally the event completing the operation with the elapsed time. Serilog Timings was built with the goal of compressing all of this into a single event.

Natural messages – related to the first goal, I don’t really want generic “operation started” and “operation ended” messages in the log. Logs need to be optimized for human pattern recognition – the message printed by one operation should be visually distinct from any other so that they’re easy to pick out from a large body of text.

Stable event types – furthermore, the event type (essentially a hash of the message template) should be stable for a particular operation, regardless of whether the operation succeeded, failed, or completed with a custom result. Stable (and unique) event types make it much easier to find all instances of an operation, even though “successful” ones may vastly outnumber and hence obscure the unsuccessful ones.

A compact syntax, .NET Core support, LogContext integration, custom results, … okay, I’m particular in the extreme … you kinda get the picture 🙂

I won’t promise it’s the last attempt I’ll ever make to cover all of this, but Serilog Timings ticks all of the boxes right now. Here’s how it’s used.

Once you’ve configured Serilog and installed the NuGet package, wrap the code you want to time in a using block and describe the operation:

using (Operation.Time("Submitting payment for {ClaimId}", claim.Id))
{
    // Timed code goes here...
}

When the using block is completed, an event will be written to the log that looks like this one:

11:20:01 [INF] Submitting payment for claim-41546 completed in 11.2 ms

The description of the operation is a regular Serilog message template, which Serilog Timings extends with " {Outcome} in {Elapsed:0.0} ms". The properties from the description, plus the Outcome and Elapsed properties, are all included on the event if it’s sent to a structured log collector.

The completion event and any other events written within the block will also be tagged with an OperationId property, if Serilog’s LogContext is enabled.
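Enabling LogContext is a one-liner in the logger configuration, for example:

Log.Logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .WriteTo.Seq("http://localhost:5341")
    .CreateLogger();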

Operation.Time() assumes the operation always succeeds, so the message will be identical regardless of whether or not an exception is thrown.

Operations that need to distinguish between timings for successful completion vs failures use Operation.Begin() and Complete():

using (var op = Operation.Begin("Saving invoice {Filename} to blob storage", filename))
{
    // Timed code goes here...
   
    op.Complete();
}

Successfully completing the operation will produce an event exactly like the one in the first example. If Complete() is not called on an operation started with Begin(), perhaps because an exception was thrown, then it will be logged as abandoned:

11:29:37 [WRN] Saving invoice INF_CX7890.pdf to blob storage abandoned in 30978.8 ms

Notice that although the Outcome property is "abandoned" instead of "completed", and the level is warning, the message template (and thus the event type) is identical to the one from the success case. This means that while completed and abandoned timings can still be differentiated, given one of them it’s always simple to find the other.

To bail out of an operation altogether and suppress the warning message, an operation can be Cancel()ed.

Complete() can optionally accept a result property name and value to be added to the event: if you’re timing a network request you can tag the event with BytesReceived, if you’re running a SQL query you might tag it with Rows, and so-on. More detailed instructions and documentation are in the project README.
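For example (a sketch – Download and url are stand-ins for your own code):

using (var op = Operation.Begin("Downloading {Url}", url))
{
    var bytesReceived = Download(url); // hypothetical helper
    op.Complete("BytesReceived", bytesReceived);
}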

Events generated by Serilog Timings display nicely in log files or the console. If you collect logs using SQL Server or another database you might elevate the Elapsed property to column-level for some interesting analysis opportunities (the Serilog sink for SQL Server recently acquired property-to-column mapping capability).

This screenshot from Seq shows a timing event from my test app with all of its attached information:

[Screenshot: a timing event in Seq with its attached properties]

The stable event type (B7B24E3B shown as Type in the screenshot above) lets us drill down to just the timings of the operation we’re interested in:

[Screenshot: events filtered by event type]

And once we’ve pushed this over into a signal we can run more interesting queries to follow the Elapsed value over time:

[Screenshot: a timeseries chart of the Elapsed value]

(Seq 3.1 just added the timeseries charting shown here, so my current obsession is trawling through historical log data to see what kinds of trends I can detect 😉 .)

What do you think? I’d love to hear from you if you give Serilog Timings a try, and issue reports or PRs to the GitHub project are welcome. Have fun!

Exploring Rust

For the past few years I’ve followed the evolution of Rust with interest. Rust is a new systems programming language (like C or C++) with greater emphasis on safety. I’ll spare you my reasons and opinions on its relevance since the Internet is replete with those 🙂

Instead, I thought I’d write this brief post to share my perspective on learning Rust as a developer who’s otherwise mostly writing C#. The post assumes you’ll jump over and check out the Rust documentation where topics are necessarily summarised here.

No, this isn’t one of those “I’m leaving .NET! …for Rust!” kinds of posts. Rust and C# are different enough to be complementary, in much the same way C# and JavaScript are. Learning Rust also involves digesting concepts that have (so far) escaped C#’s all-consuming multi-paradigm expansion. Perhaps there are some ideas to bring back and apply in .NET, too?

So, you’re comfortable with C# – how does the knowledge you already have map onto this new thing? Here are my notes after spending a few days writing some Rust.

Projects, builds and compilation

The first thing you’ll probably write in Rust is not in the language itself, but at the command-line:

> cargo new --bin hello
...
> cd hello
> cargo run
...
Hello, world!

Cargo is Rust’s package manager, and combined with the repository at crates.io, it plays the role of NuGet in the Rust ecosystem.

Unlike C# and .NET, Rust came of age in the post-NPM world. The Rust standard library is smaller than the original .NET one, and feels a lot more like the upcoming .NET Core. HTTP clients, web frameworks, async I/O and JSON serialization live out in the crates.io ecosystem where they can evolve and improve through Darwinian pressures.

You’ll also probably note that NuGet.exe doesn’t have a run command like the one above. This is another parallel between Rust and .NET Core: the cargo command-line (like dotnet) provides a standardised interface to the whole toolchain including dependency management, builds, a test runner and so-on.

Testing

In C#, adding unit tests to a project is a decision point. In Rust, the decision is made for you, and every (Cargo) project gets testing built in. Tests sit right alongside the code being tested, often in the same source file.

Here’s an example:

fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[test]
fn two_plus_two_makes_four() {
    let result = add(2, 2);
    assert_eq!(result, 4);
}
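Running the tests uses the same toolchain entry point we saw earlier:

> cargo test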

Side-by-side tests are not just a feel-good novelty: including tests in the same compilation unit as the code-under-test means there are no visibility issues to worry about. No need for internal and [InternalsVisibleTo] just to drive unit tests!

Code and its tests also always move together – no more parallel folder hierarchies in test projects to keep in sync. I like this enough to miss it in C# projects already. Conditional compilation in Rust means there’s no issue with code or dependency bloat, something that might be trickier to handle cleanly in the .NET world.
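In practice, tests usually sit in a #[cfg(test)] module, which the compiler omits from non-test builds entirely – a minimal sketch:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn adds_small_numbers() {
        assert_eq!(add(2, 3), 5);
    }
}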

The baked-in assertion macros are very basic – I’m sure there are alternatives but in my limited exploration so far I haven’t felt the need for anything more.

Syntax

Rust has an expression-oriented curly-braces syntax. If you stare at C# (or C/C++, or Java, or JavaScript) all day, Rust isn’t that surprising. It’s not a language you’d immediately recognize by its syntax, like Python, Lisp or Smalltalk.

Coming at Rust from C# you might find the name: Type declaration order, -> Type return type declarations and implicit “return” unusual, but the tiny add() function above uses all of these and the meaning is pretty clear.

Approaching the Rust syntax is very much like approaching the syntax of any new language – it takes some time. So far nothing has left me scratching my head for too long.

Move semantics, lifetimes and borrowing

Here’s where Rust begins to diverge from “nice, generic, modern language” to strike out on its own. Two of Rust’s big selling points – memory safety without garbage collection, and freedom from data races – rely on statically determining the “owner” of an object as well as possible references to it (aliases).

let c1 = vec![1, 2, 3];
let c2 = c1;

// Fails to compile - the value has been moved out of c1 into c2
println!("Count: {}", c1.len());

You get the general idea from the example above: assignments in Rust are moves by default. This takes a bit of getting used to: in C#, assignments are always copies, either of a simple value or of a reference to an object, so after an assignment in C# both the source and destination variables are still usable.
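If you need both variables to remain usable, you can clone the value or borrow it instead (a sketch):

let c1 = vec![1, 2, 3];
let c2 = c1.clone(); // deep-copies the data; both vectors stay usable
let c3 = &c1;        // or borrow a reference - no move occurs
println!("Counts: {} {} {}", c1.len(), c2.len(), c3.len());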

For memory safety, the advantage of move semantics is that the compiler knows exactly when a value goes “out of scope” and can insert code to deallocate it at precisely the right moment, without needing a GC to scan for other references.

Now, this is obviously a pretty limited model, so Rust has a clever system of lifetimes and borrowing to make more complicated patterns expressible. A proper treatment of the idea is well beyond what I could attempt in this post (already starting to spill beyond the initial goal of “brief”), but at the risk of inadvertently bungling the analogy I’d like to try to translate the concept of a “lifetime” into C# terms, since I haven’t seen this attempted elsewhere.

Ready? Read the ownership, borrowing, and lifetimes chapters of the Rust book? Ok, here it is.

Dispose-safety in C#

Let’s imagine a typical use of the using statement in C#:

using (var archive = OpenZipArchive(@"C:\backup.zip"))
{
    using (var index = archive.OpenFile("index.json"))
    {
        var contents = ReadBackup(archive, index);
        // Read files from contents
    }
}

It’s a bit contrived, but you can imagine the object returned from ReadBackup(archive, index) may have a reference to either archive or index.

That’s fine in the example above, since we know at the time we // Read files from contents that nothing has been disposed yet, so all is well.

How about this code?

using (var archive = OpenZipArchive(@"C:\backup.zip"))
{
    BackupReader contents;
    using (var index = archive.OpenFile("index.json"))
    {
        contents = ReadBackup(archive, index);
    }
    // Read files from contents
}

Here, we might have a problem. The lifetime represented by the second using statement has finished, and index has been disposed. If the BackupReader holds a reference to index then we’ll have trouble, probably in the form of an ObjectDisposedException when we try to use it.

How can we tell if this code is safe? In C#, we can’t – statically – without examining the code of ReadBackup() and potentially any code it calls.

The analogy is not precise, since in Rust it’s deallocating the memory rather than calling Dispose() that we’re concerned with, but otherwise, this is the safety issue Rust lifetimes solve. The Rust compiler must know, when examining this code, whether the second usage of contents is valid, because allowing it otherwise could lead to reading from memory that’s already been freed.

Before translating the example into Rust, let’s invent a hypothetical dispose-safety checker for C#, where we can represent these kinds of things in method signatures using attributes. Here’s the original:

static BackupReader ReadBackup(Archive archive, Index index)
{
    var manifestFilename = index.GetManifestFilename();
    return new BackupReader(archive, manifestFilename);
}

Ah – so the index argument is only used to determine a filename, and the result only depends on index. Here’s the “dispose-safe C#” that expresses this:

[return: In("a")]
static BackupReader ReadBackup([In("a")] Archive archive, [In("b")] Index index)
{
    var manifestFilename = index.GetManifestFilename();
    return new BackupReader(archive, manifestFilename);
}

We’ve added some labels like [In("a")] to the arguments. These represent which using block the value came from. “Dispose-safe C#” requires these whenever a method accepts an IDisposable argument.

Annotating the return value with [In("a")] means that the result is valid in the same scope that the archive parameter is valid in – both are tagged with a.

The return value doesn’t have any relationship at all to index, so the b tag doesn’t appear anywhere else.

Back at the call-site, the checker does the mapping of labels to using blocks for us implicitly:

// 1
using (var archive = OpenZipArchive(@"C:\backup.zip"))
{
    BackupReader contents;

    // 2
    using (var index = archive.OpenIndex())
    {
        // Infers that "a" is block 1 and "b" is block 2
        contents = ReadBackup(archive, index);
    }

    // Thus, contents depends only on block 1, so this is fine
    // Read files from contents
}

Now, what would have happened if ReadBackup() had a different implementation?

[return: In("b")]
static BackupReader ReadBackup([In("a")] Archive archive, [In("b")] Index index)
{
    var files = archive.LoadAllFiles();
    return new BackupReader(files, index);
}

In this version of the method, the returned BackupReader is declared to depend on b, the lifetime of the index argument. The mapping of the using statements is the same, but the code is no longer valid and the checker will reject it:

// 1
using (var archive = OpenZipArchive(@"C:\backup.zip"))
{
    BackupReader contents;

    // 2
    using (var index = archive.OpenIndex())
    {
        // Infers that a is block 1 and b is block 2
        contents = ReadBackup(archive, index);
    }

    // Error: contents depends on block 2, which has ended
    // Read files from contents
}

By giving the checker a bit more information about the intentions of the code, it’s helpfully saved us from an ObjectDisposedException at runtime — awesome!

Lifetime variables in Rust

I hope this little “dispose-safe C#” thought experiment sheds some light now on the corresponding (memory-safe) Rust:

fn read_backup<'a,'b>(archive: &'a Archive, index: &'b Index) -> BackupReader<'a> {
    let manifest_filename = index.manifest_filename();
    BackupReader::new(archive, manifest_filename)
}

Don’t let the reuse of the angle-brackets “generic” syntax throw you. The little 'a and 'b annotations in there are lifetime variables, exactly the same as the [In("a")] and [In("b")] examples in C#.

At the call-site, there’s no special using-like statement to delineate lifetimes in Rust, which are implied by the natural boundaries of blocks:

// 1
let archive = open_zip_archive(r"C:\backup.zip");
let contents: BackupReader;

// 2
{
    let index = archive.open_index();
    contents = read_backup(&archive, &index);
}

// Read files from contents

When the compatible implementation of read_backup() is used, this compiles fine; when the second implementation of read_backup() is used here, the Rust compiler rejects the code with error: index does not live long enough.

I hope my analogy hasn’t stretched too far and led you astray, as my own understanding of Rust is far from perfect. My hope is that it makes the whole lifetime concept a bit more approachable. Ownership, borrowing and lifetimes are interesting concepts that are worth spending some time on.

The stack and the heap

There’s one more place I’ve spotted where C# and Rust differ fundamentally, and that’s how memory is allocated between the stack and the heap.

C# has two families of data structures that determine allocation behaviour: structs and classes. If a type is a struct then values of that type will be allocated on the stack:

// Here, now is the actual DateTime value
var now = new DateTime(2016, 03, 23);

If a type is a class, then values of that type are allocated on the heap, and we work with it through references:

// Here, user is a pointer to some memory holding a User
var user = new User("nblumhardt");

C# also has an “escape hatch” for moving structs to the heap, in the form of ‘boxing’:

// Here, now is a pointer to memory holding the DateTime
var now = (object)new DateTime(2016, 03, 23);

Boxed structs in C# however are a bit of a second-class citizen – there’s no static type to represent them, so they end up as objects.

After the last section, you might feel some relief to hear that Rust is simpler on this point. In Rust, all values are essentially C# structs:

// Here, user is the actual User value
let user = User::new("nblumhardt");

To move a value onto the heap, as in C#, the value is boxed:

// Here, user is a Box, pointing to memory on the heap
let user = Box::new(User::new("nblumhardt"));

Boxed values are first-class in Rust, so in practice they’re more like C#’s classes in usage, enjoying the benefits of static typing for example.

The declaration and construction syntax for structs in Rust is quite a bit different from the C# equivalent, but I’ve found that getting clear about the differences in the allocation model has made the biggest difference to my understanding of Rust code.

Language machinery

Rust is a modern language. Its designers have drawn widely from existing languages and experience to exclude the worst footguns (nullable references, mutability-by-default) and include the best available tools for abstraction (closures, generics, traits, pattern matching, algebraic data types and more).

One of the refreshing aspects of Rust is that, like C#, it’s a multi-paradigm language with functional features. Many of the new languages I’ve encountered in recent years are presented as functional languages, which is nice enough, but there’s a place in the future for variety.

I’d love to continue poking at different corners of Rust with this post, but writing it is already pushing at the limits of my attention, so before I lose your interest too I’d better wrap up.

In closing

I’m having a great time exploring Rust. If you’re working in one of the currently popular industrial languages like C#, and want to learn something that’s different in more than just syntax, Rust could be what you’re looking for. Have fun!

Reading logger configuration from appsettings.json

ASP.NET Core has a completely new configuration subsystem that adds JSON, environment variables and other sources in addition to the classic XML format we’ve had since .NET 1.0. The appeal of Microsoft.Extensions.Configuration for me is that it’s easier to work with than the System.Configuration APIs.

Having the flexible JSON format is also a big plus, and it makes Serilog configuration on .NET Core quite a bit more natural than the XML <appSettings> support that ships in 1.5.

This post is about Serilog 2.0, currently in beta. Unless you’re targeting ASP.NET Core, you probably need the Serilog 1.5 package. But all of this will land later this year, so read on!

Here’s the Serilog configuration for colored console logging:

Log.Logger = new LoggerConfiguration()
  .WriteTo.ColoredConsole()
  .CreateLogger();

With the new Serilog.Settings.Configuration package, ReadFrom.Configuration() lets us pull the same information from appsettings.json.

var configuration = new ConfigurationBuilder()
  .AddJsonFile("appsettings.json")
  .Build();

Log.Logger = new LoggerConfiguration()
  .ReadFrom.Configuration(configuration)
  .CreateLogger();

Log.Information("Hello, world!");

In the JSON file itself, the relevant section is called "Serilog":

{
  "Serilog": {
    "WriteTo": ["ColoredConsole"]
  }
}

Simple, hey? As long as the sink package – in this case Serilog.Sinks.ColoredConsole – is installed, the right assemblies will be found. The names of sinks (and the arguments they might accept) are the same as the method names in the C# API, because behind the scenes the JSON configuration is translated into exactly the same method calls as you’d make directly in C#.

Let’s look at a more complete example:

{
  "Serilog": {
    "MinimumLevel": "Debug",
    "WriteTo": [
      { "Name": "ColoredConsole" },
      { "Name": "Seq", "Args": { "serverUrl": "http://localhost:5341" } }
    ],
    "Enrich": ["FromLogContext", "WithMachineName", "WithThreadId"],
    "Properties": {
        "Application": "Sample",
        "Environment": "Test"
    }
  }
}

This one specifies a minimum level, and shows how "WriteTo" works to specify more complex sink configuration. Instead of just the name of the sink some arguments are included too. Seq is one of the network-based sinks for Serilog, and WriteTo.Seq() accepts a serverUrl argument in the C# API, so that’s passed under "Args" in the JSON version.

"Enrich" works the same way as "WriteTo": the options here are the same ones that you’ll see on the Enrich configuration API in C#, and arguments can be passed in the same way if they’re needed.

As a syntactic shortcut, Enrich.WithProperty("Application", "Sample") in C# is specified using the JSON "Properties" collection. (This has the side-effect of meshing nicely with environment variable configuration support, which we get “for free” by supporting the IConfiguration interface. More on that another day…)
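Putting it all together, the JSON above corresponds to roughly this C# configuration (note that in Serilog 2.0 the machine name and thread id enrichers live in their own Serilog.Enrichers.* packages):

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.ColoredConsole()
    .WriteTo.Seq("http://localhost:5341")
    .Enrich.FromLogContext()
    .Enrich.WithMachineName()
    .Enrich.WithThreadId()
    .Enrich.WithProperty("Application", "Sample")
    .Enrich.WithProperty("Environment", "Test")
    .CreateLogger();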

What do you think? Let me know in the comments here, or join the Gitter chat we’ve just opened up for the Serilog project.

Remote Level Control in Serilog using Seq

Logging is indispensable when it’s needed, but too much logging can be a drain on performance and resources when everything is going smoothly and you don’t need really fine-grained log data.

Rather than adding more machines and buying fatter pipes, the largest apps I’ve worked on have all implemented some form of dynamic level control. In this system, logging usually defaults to something sensible (my preference is Information) and can be turned up or down from a central control panel when bugs need to be shaken out.

A service bus is usually the tool of choice for implementing it: publish a message like new LoggingLevelChange { App = "Ordering", Level = "Debug" } and the participating apps will notice, say “Hey, I do ordering!” and update their logging level accordingly.

There are always a few challenges to overcome before it all works smoothly. Ideally, every app would get this capability, but it’s not always high enough on the backlog to get done early on.

Alongside the next Serilog version comes a small change in the Seq sink that I’m really excited about. By sharing a LoggingLevelSwitch between the logging pipeline and the Seq sink, it’s now possible to control the logging levels of connected apps easily and efficiently from the Seq server.

var levelSwitch = new LoggingLevelSwitch();

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.ControlledBy(levelSwitch)
    .WriteTo.LiterateConsole()
    .WriteTo.Seq("http://localhost:5341",
                 apiKey: "yeEZyL3SMcxEKUijBjN",
                 controlLevelSwitch: levelSwitch)
    .CreateLogger();

In this configuration, levelSwitch is passed to the MinimumLevel.ControlledBy() method and the WriteTo.Seq() statement.

Now, whenever a batch of events is sent to Seq (or every two minutes, when things are quiet), the logging level associated with the corresponding API key is returned by Seq. The sink applies this to controlLevelSwitch, so the whole logging pipeline will be adjusted up or down.

Why is this so much better than what was possible before? Seq has had per-API key level control for a while, but this was only used as a filter: log events had to be generated at the highest required level and then discarded for the scheme to work effectively, using more CPU and memory in the client application than was strictly required. Now that the Seq sink can control the level of the logging pipeline, events that aren’t required are not generated in the first place.

Pre-release versions of the Seq sink including this feature work today on .NET 4.5+, but a pre-release Serilog 2.0 version is required as well. Depending on which other Serilog packages you use, this may or may not be possible to try out in your application. Some details of Serilog 2.0 status can be found in this post.

If you’re able to check it out I’d love to hear from you.

Serilog Tip – Don’t Serialize Arbitrary Objects

Serilog is built around the idea of structured event data, so complex structures are given first-class support.

var identity = new { Name = "Alice", Email = "alice@example.com" };
Log.Information("Started a new session for {@Identity}", identity);

If you’ve spent some time with Serilog you will have encountered the @ ‘destructuring’ operator. This tells Serilog that instead of calling ToString() on identity, the properties of the object should be serialized and stored in structured form.

{"Identity": {"Name": "Alice", "Email": "alice@example.com"}}

You might also have wondered – since Serilog is built around structured data, why isn’t serialization the default – or, why is @ required at all?

If you consider the process of serialization in general, this starts to make sense. Have you ever tried to convert an entity from EntityFramework to JSON with a general-purpose serializer? A System.Type? Give it a try! If you’re lucky enough to get a result, you’ll probably find it’s a very big blob of text indeed. Some serializers will bail out when circular references are detected; others will chew up RAM indefinitely. Most objects aren’t designed for serialization, so they’re interconnected with many others that themselves link to yet more objects, and so-on.

Serializers – Serilog included – are not made for serializing arbitrarily-connected object graphs. Serilog has some safeguards to make sure your application survives a “mass serialization” event, but the effects on the health of your logging pipeline won’t be pretty.

In a logging library there’s a delicate balance to make between safety and runtime cost, so the best that most Serilog sinks can do is drop or reject events that are too large before trying to ship them elsewhere (the v2 version of the Seq sink now defaults to a 256 KB cap on event bodies).

What’s the TL;DR? When you serialize data as part of a Serilog event, make sure you know exactly what will be included. The @ operator is a powerful but sharp tool that needs to be used with precision.
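A safe pattern is to project exactly the fields you want before logging (user here stands in for a richer entity):

// Risky - an ORM entity can drag a whole object graph along:
// Log.Information("Started a session for {@User}", user);

// Safer - a deliberate projection of known size:
Log.Information("Started a session for {@User}", new { user.Name, user.Email });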

Serilog 2.0 Progress Update

TL;DR: There are now 2.0.0-rc2-final-x Serilog packages on NuGet. These are primarily for the adventurers building apps for .NET Core and ASP.NET Core (previously ASP.NET 5). If you’re not pushing forwards into the next version of .NET just yet, then Serilog 1.5.x is the version you should use.

Blazing a trail, or curious to hear what we’ve been up to? Great! Read on…

Why Serilog v2?

.NET is changing – that’s possibly an understatement. The days of the big, bulky, slow-moving, preinstalled framework are drawing to a close, and in its place we’ll soon be building on a pay-for-play framework that is light enough to ship alongside your app. Targeting .NET Core means your app only has to carry with it the packages that it needs.

This spells trouble for the “batteries-included” packaging approach we took in Serilog 1.0. Including a bunch of sinks, enrichers and extensions in the core Serilog package means that the transitive closure of dependencies is large. Most apps only use one or two logging sinks, so shipping every package required by every core sink is a waste. In a year’s time, no one will want to use a logging framework that pulls in multiple megabytes of unnecessary junk.

So, fundamentally, v2 of Serilog is about packaging. Instead of the Serilog package carrying the rolling file and console sinks, for example, those will now live in Serilog.Sinks.RollingFile and Serilog.Sinks.Console, respectively. Without going too crazy, other bits and pieces like <appSettings> support will also be moving out.

Since a good package factoring will mean a few breaking changes, we’ll capitalize on the opportunity to make a number of smaller improvements that couldn’t otherwise be included in 1.5.

Who is working on this?

There are dozens of active contributors to the various Serilog sub-projects. I’d love to list them here This Week in Rust-style, but I’m not organized enough to get that right. Everything that’s great about Serilog at this point is the product of a dedicated bunch of contributors, and all of that will be brought across into Serilog v2. We’ve also landed many PRs in the v2 branch that will ship along with the v2 release – thanks everyone who has sent one or more of these!

On v2 specifically though, I have to mention the work done by @khellang on the platform mechanics. .NET Core has been a moving target, and over the course of almost a year we’ve had to adjust to the twists and turns that the new platform has taken while keeping an eye on the eventual end goal. Kristian has helped the project navigate all of this and I don’t think we’d even have a working build right now if not for his guidance and hard work.

The flip-side – maintaining a usable API and an intelligible packaging story – has been in large part the work of @MatthewErbs, who’s landed some of the broadest and most detailed PRs we’ve yet attempted over the last few months. Likewise, v2 would be nowhere without Matthew’s efforts – and bulletproof patience! 🙂

What’s done, what’s remaining?

At the time of writing, the v2 Serilog package as well as the previously-built-in sinks work on .NET Core identically to the full .NET Framework. On the packaging side, there’s still some work to go, but things are coming together quickly.

Beyond that set of basic sinks, along with a few extras like the Seq sink and my favourite Literate Console, practically nothing works. If you are working on a .NET Core app and the sinks you prefer aren’t available, helping to update them is a great way to support the project.

We’ve also shipped Serilog.Extensions.Logging, a provider for the logging abstractions used throughout ASP.NET Core. You can read more about it and see some example output in this article on the Seq blog.

Remaining is a large amount of cleanup and polish. One task in the midst of this is the removal of .NET 4.0 support from the codebase. 4.0 is a respectable platform, but it is a static one. New features and improvements going into Serilog are mostly in support of new platform features and don’t carry as much value for .NET 4 apps. Couple this with the observation that .NET 4 apps aren’t likely to adopt new technology (including Serilog) in significant numbers, and it is hard to justify carrying forward the #if spaghetti into the new version.

This doesn’t mean Serilog is abandoning .NET 4 – only that Serilog 1.5 will remain the recommended version for this platform. Platform-specific dependency support in NuGet also means that projects wishing to actively support .NET 4 with Serilog can still do so by selecting different required Serilog versions per target framework version in project.json.
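In project.json terms that might look something like the snippet below (framework monikers and version numbers are illustrative only):

{
  "frameworks": {
    "net40": {
      "dependencies": { "Serilog": "1.5.14" }
    },
    "netstandard1.3": {
      "dependencies": { "Serilog": "2.0.0" }
    }
  }
}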

When will Serilog 2.0 ship?

The core Serilog package will ship in line with .NET Core/ASP.NET Core. There’s no solid date on this yet, but the ASP.NET Roadmap is the location to watch.

I would guess this post raises as many questions as it answers – please ask away! If you’d like to help us cross the last mile (or two) then the best way to get involved is to grab an issue by leaving a comment, or raising a new issue for work you might want to complete such as porting one of the many remaining sinks.

2015 in Review

Another year is almost over! I’ve looked back through the posts on this blog from the past year and made some notes in case you’re looking for some holiday reading.

This was a huge year for Serilog. January kicked off with a few tens of thousands on the NuGet package download counter, but closing out the year it’s over 250k and accelerating. With Serilog-style message template support set to land in ASP.NET 5, I think it’s safe to say 2016 is the year we’ll see structured logging go mainstream in .NET.

Seq has also seen huge (triple-digit) growth this year, especially since v2 shipped in June. Keeping up has been a challenge! Along with a new major version in the first quarter next year, there’s a lot coming for us in 2016 – stay tuned for some updates after the break.

2015

  • Give your instrumentation some love in 2015! — I started this year aware that the vast majority of .NET teams are still writing plain-text logs, collecting them with Remote Desktop and reading them in Notepad. It feels like this is improving but there’s still a long way to go before we’re all using the current generation of tools effectively.
  • Using Serilog with F# Discriminated Unions — Serilog gained some better F# support this year. (Also on the F# front, Adam Chester’s implementation of Message Templates in F# has opened up some possibilities with that language. Logary 4 also promises some Serilog-style structured goodness for F# users sometime in the coming year.)
  • Tagging log events for effective correlation — Some tips for tracing related paths of execution through your application logs.
  • Diagnostic logging in DNX/ASP.NET 5 — The ASP.NET 5/CoreCLR platform has changed significantly since this first tentative post describing Serilog support went out in May, but the fundamentals are still pretty well summed-up here. ASP.NET 5 and CoreCLR are the big focus areas for Serilog’s upcoming 2.0 release, which has been ticking away on GitHub for a few months now. The platform reset going on in .NET right now is going to take some getting used to, but in a few years we’ll be able to thank the current ASP.NET and CoreFX teams, as well as the mass of community contributors, for the continued relevance and growth of .NET. 2016’s going to be a year for us all to rally and show some support for this work.
  • Seq/2 Update, Seq/2 Beta and Announcing Seq 2 RTW — It’s hard to believe Seq 2 has only been out since June. These few posts track the release of Seq 2, which was a complete UI rewrite and major overhaul of Seq v1. (2.1 followed, as did 2.2 and 2.3. Seq is now at version 2.4).
  • Filtering with Signals in Seq 2 — Explains the new filtering system in Seq 2.
  • Contender for .NET’s Prettiest Console? — If you’re not using Serilog’s Literate Console sink, you need to check out this post.
  • Contextual logger injection for Autofac — If you prefer to inject ILogger using your IoC container, this post is for you.
  • Assigning event types to Serilog events — Seq’s “event type” system can be implemented in simple flat-file logs too, for improved searching/filtering.
  • Aggregate Queries in Seq Part 1: Goals — The first of a series of posts documenting a spike through Seq v3’s SQL Query interface. (Parts 2, 3, 4, 5 and 6.)
  • How to notify Slack using logs from your .NET app — Seq + Slack = <3.

Thanks for visiting and commenting this year. Have a happy and safe holiday season, and see you in 2016!

How to notify Slack using logs from your .NET app

If your team uses Slack, it’s a great place to centralize notifications that might otherwise end up cluttering your email inbox. Commits, build results, deployments, incoming bug reports – Slack keeps your team informed without making everyone feel overloaded with information, which is why I think I like it so much.

The next step down the road to notification heaven, after setting up integrations for the third party apps and services you use, is to integrate your own apps into Slack.

Doing this directly – posting to the Slack API when interesting things happen – is a bit too much coupling, but if your app writes log events these can be used to trigger notifications in Slack with very little effort.

So that Slack isn’t flooded with irrelevant events, we’ll forward them to Seq first so that interesting events can be selected from the stream.

1. Write and send the log events

In the demo app below, Serilog is configured to send events both to Seq and the local console.

First up install some prerequisite packages:

Install-Package Serilog
Install-Package Serilog.Sinks.Seq
Install-Package Serilog.Sinks.Literate

We’re using Serilog’s Literate Console sink because it reveals more about the structure of the log events (using colour) than the standard console can.

Here’s the code:

class Program
{
    static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Seq("http://localhost:5341")
            .WriteTo.LiterateConsole()
            .CreateLogger();

        Log.Information("Starting up");

        var rng = new Random();
        while (true)
        {
            var amount = rng.Next() % 100 + 1;

            Log.Information("Received an order totalling ${Amount:0.00}", amount);

            Thread.Sleep(1000);
        }
    }
}

This program generates a random integer between 1 and 100 each second, and writes this to the log as a property called Amount. Let’s imagine we’re processing sales on a website, or reading measurements from a sensor – the same approach covers a lot of scenarios.

2. Install Seq

If you don’t have Seq running on your development machine, install it from the Seq downloads page – just click through the installer dialog and choose “Browse Seq” at the end to view these log events.

3. Choose events to notify on

Here’s a twist; so that we’re not overwhelmed with notifications we’ll only raise one if the value of the “sale” is $90 or more. To find these events in Seq, we first write the filter Amount >= 90. Once we’re confident the right events are selected, the filter can be saved as a signal.

The name of the signal is important since we’ll use it to configure the Slack integration in a moment.

4. Add the Slack integration for Seq

The Slack integration for Seq is developed and maintained on GitHub by David Pfeffer. Seq plug-in apps are published on NuGet – this one has the package id Seq.App.Slack.

To install it into your Seq instance go to Settings, then Apps, and choose Install from NuGet. Enter the package name Seq.App.Slack and install.

A few seconds later the app should appear in your app list and be ready to configure. To the right of the app name, choose Start new instance….

5. Configure the WebHook

Give the instance a name like “Big Sales Incoming!” and un-check Only trigger the app manually. The name of the signal we created earlier should now be there in a drop down to select.

The last thing to set is the WebHook URL setting. The easiest way to get one of these is to open the channel you’re posting to in Slack and choose + Add an app or custom integration. This will take you to a Slack page which at the time of writing has just gone through a major overhaul. The current path through the site to add a webhook is:

  1. Choose Build your Own in the top right-hand corner of the page
  2. Under Something just for my team choose Make a Custom Integration
  3. Here’s where you can choose Incoming WebHooks and follow a couple of prompts to get the URL

It’s a bit of an obscure way to do something that’s fairly common – I’m hopeful this will be improved once the redesign settles in 🙂

Back to Seq, paste in the URL, Save Changes and you’re good to go! Events will now start appearing in Slack whenever the Amount is $90 or more.

Happy logging!