Aggregate Queries in Seq Part 4: Planning

Seq is a log server designed to collect structured log events from .NET apps. This month I’m working on adding support for aggregate queries and blogging my progress as a diary here. This is the fourth installment – you can catch up on Goals, Syntax, and Parsing in the first three posts.

So, this post and the next are about “planning” and “execution”. We left off a week ago having turned a query like:

select count(*)
where ApplicationName == "Admissions"
group by time(1d), ExceptionType

Into an expression tree:


Pretty as it looks, it’s not obvious how to take a tree structure like this, run it over the event stream, and output a rowset. We’ll break the problem into two parts – creating an execution plan, then running it. The first task is the topic of this post.


In a relational database, “planning” is the process of taking a query and converting it into internal data structures that describe a series of executable operations to perform against the tables and indexes, eventually producing a result. The hard parts, if you were to implement one, seem mostly to revolve around choosing an optimal plan given heuristics that estimate the cost of each step.

Things are much simpler in Seq’s data model, where there’s just a single stream of events indexed by (timestamp, arrival order), and given the absence of joins in our query language so far. The goal of planning our aggregate queries is pretty much the same, but the target data structures (the “plans”) only need to describe a small set of pre-baked execution strategies. Here they are.

Simple projections

Let’s take just about the simplest query the engine will support:

select MachineName, ThreadId

This query isn’t an aggregation at all: it doesn’t have any aggregate operators in the list of columns, so the whole rowset can be computed by running along the event stream and plucking out the two values from each event. We’ll refer to this as the “simple projection” plan.

A simple projection plan is little more than a filter (in the case that there’s a where clause present) and a list of (expression, label) pairs representing the columns. In Seq this looks much like:

class SimpleProjectionPlan : QueryPlan
{
    public FilterExecutionPlan Filter { get; }
    public ProjectedColumn[] Columns { get; }

    public SimpleProjectionPlan(
        ProjectedColumn[] columns,
        FilterExecutionPlan filter = null)
    {
        if (columns == null) throw new ArgumentNullException(nameof(columns));
        Columns = columns;
        Filter = filter;
    }
}
We won’t concern ourselves much with FilterExecutionPlan right now; it’s shared with Seq’s current filter-based queries and holds things like the range in the event stream to search, a predicate expression, and some information allowing events to be efficiently skipped if the filter specifies any required or excluded event types.
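For orientation only, its shape is roughly along these lines (an illustrative reconstruction from that description, not Seq's actual source):

class FilterExecutionPlan
{
    public DateTime? RangeStart { get; }            // where in the stream to begin
    public DateTime? RangeEnd { get; }              // and where to stop
    public Func<LogEvent, bool> Predicate { get; }  // the compiled filter expression
    public uint[] RequiredEventTypes { get; }       // events without these are skipped cheaply
    public uint[] ExcludedEventTypes { get; }       // events with these are skipped cheaply
}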

Within the plan, expressions can be stored in their compiled forms. Compilation can’t be done any earlier because of the ambiguity posed by a construct like max(Items): syntactically this could be either an aggregate operator or a scalar function call (like length(Items) would be). Once the planner has decided what the call represents, it can be converted into the right representation. Expression compilation is another piece of the existing Seq filtering infrastructure that can be conveniently reused.
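In sketch form, the decision might be as simple as a name lookup (illustrative code; the exact set of aggregate names is assumed):

// Names reserved for aggregate operators; a call in the select list whose
// name isn't in this set is compiled as an ordinary scalar function like length().
static readonly HashSet<string> KnownAggregates =
    new HashSet<string> { "count", "distinct", "min", "max", "mean", "percentile" };

static bool IsAggregateCall(string callName)
{
    return KnownAggregates.Contains(callName);
}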


Aggregations

Stepping up the level of complexity one notch:

select distinct(MachineName) group by Environment

Now we’re firmly into aggregation territory. There are two parts to an aggregate query – the aggregates to compute, like distinct(MachineName), and the groupings over which the aggregates are computed, like Environment. If there’s no grouping specified, then a single group containing all events is implied.

class AggregationPlan : QueryPlan
{
    public FilterExecutionPlan Filter { get; }
    public AggregatedColumn[] Columns { get; }
    public GroupingInstruction[] Groupings { get; set; }

    public AggregationPlan(
        AggregatedColumn[] columns,
        GroupingInstruction[] groupings,
        FilterExecutionPlan filter = null)
    {
        if (columns == null) throw new ArgumentNullException(nameof(columns));
        if (groupings == null) throw new ArgumentNullException(nameof(groupings));
        Filter = filter;
        Columns = columns;
        Groupings = groupings;
    }
}

This kind of plan can be implemented (naively perhaps, but that’s fine for a first-cut implementation) by using the groupings to create “buckets” for each group, and in each bucket keeping the intermediate state for the required aggregates until a final result can be produced.
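In sketch form (the types here are illustrative stand-ins, not Seq's internals), that strategy looks like:

var buckets = new Dictionary<string, IAggregateState[]>();

foreach (var evt in events)
{
    if (filter != null && !filter.Matches(evt))
        continue;

    // One bucket per distinct combination of grouping values.
    var key = string.Join("|", groupings.Select(g => g.Evaluate(evt)));

    IAggregateState[] states;
    if (!buckets.TryGetValue(key, out states))
    {
        // Each aggregated column contributes one piece of intermediate state,
        // e.g. a running count, or the values seen so far for distinct().
        states = columns.Select(c => c.CreateState()).ToArray();
        buckets.Add(key, states);
    }

    foreach (var state in states)
        state.Update(evt);
}

// Once the input is exhausted, each bucket yields one result row.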

Aggregated columns, in addition to the expression and a label, carry what are effectively the constructor parameters for creating the bucket that computes the aggregate. This isn’t immediately obvious in the case of distinct, but another example makes the purpose clearer:

percentile(Elapsed, 95)

This expression is an aggregation producing the 95th percentile for the Elapsed property. An AggregatedColumn representing this computation has to carry the name of the aggregate ("percentile") and the argument 95.
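So an AggregatedColumn ends up shaped something like this (the property names are my guesses, for illustration):

class AggregatedColumn
{
    public string Aggregate { get; }           // e.g. "percentile"
    public object[] Arguments { get; }         // e.g. { 95 }
    public CompiledExpression Source { get; }  // e.g. the compiled Elapsed accessor
    public string Label { get; }               // the column heading in the result
}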

Time slicing

Finally, the example we began with:

select count(*)
where ApplicationName == "Admissions"
group by time(1d), ExceptionType

Planning this thing out reveals a subtlety around time slices in queries. You’ll note that the time(1d) group is in the first (dominant) position among the grouped columns. It turns out the kind of plan we need is completely different depending on the position of the time grouping.

In the time-dominant example here, the query first breaks the stream up into time slices, then computes an aggregate on each group. Let’s refer to this as the “time slicing plan”.

class TimeSlicingPlan : QueryPlan
{
    public TimeSpan Interval { get; }
    public AggregationPlan Aggregation { get; }

    public TimeSlicingPlan(
        TimeSpan interval,
        AggregationPlan aggregation)
    {
        if (aggregation == null) throw new ArgumentNullException(nameof(aggregation));
        Interval = interval;
        Aggregation = aggregation;
    }
}

The plan is straightforward – there’s an interval over which the time groupings will be created, and an aggregation plan to run on the result.
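The slicing itself needs nothing more than arithmetic on the event timestamps; a minimal sketch:

// Round a timestamp down to the start of its slice; with a 1d interval,
// every event from the same (UTC) day lands in the same slice, and each
// slice is then fed through the nested aggregation plan.
static DateTime SliceStart(DateTime timestamp, TimeSpan interval)
{
    var ticks = timestamp.Ticks - timestamp.Ticks % interval.Ticks;
    return new DateTime(ticks, DateTimeKind.Utc);
}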

The output from this query will be a single time series at one-day resolution, where each element in the series is a rowset containing (exception type, count) pairs for that day.

The alternative formulation, where time is specified last, would produce a completely different result.

select count(*)
where ApplicationName == "Admissions"
group by ExceptionType, time(1d)

The output of this query would be a result set where each element contains an exception type and a timeseries with counts of that exception type each day. We’ll refer to this as the “timeseries plan”.

Both data sets contain the same information, but the first form is more efficient when exploring sparse data, while the second is more efficient for retrieving a limited set of timeseries for graphing or analytics.

To keep things simple (this month!) I’m not going to tackle the timeseries formulation of this query, instead working on the time slicing one because I think this is closer to the way aggregations on time will be used in the initial log exploration scenarios that the feature is targeting.

Putting it all together

So, to recap – what’s the planning component? For our purposes, planning will take the syntax tree of a query and figure out which of the three plans above – simple projection, aggregation, or time slicing – should be used to execute it.

The planner itself is a few hundred lines of fairly uninteresting code; I’ll leave you with one of the tests for it which, like many of the tests for Seq, is heavily data-driven.

[TestCase("select MachineName", typeof(SimpleProjectionPlan))]
[TestCase("select max(Elapsed)", typeof(AggregationPlan))]
[TestCase("select MachineName where Elapsed > 10", typeof(SimpleProjectionPlan))]
[TestCase("select StartsWith(MachineName, \"m\")", typeof(SimpleProjectionPlan))]
[TestCase("select max(Elapsed) group by MachineName", typeof(AggregationPlan))]
[TestCase("select percentile(Elapsed, 90) group by MachineName", typeof(AggregationPlan))]
[TestCase("select max(Elapsed) group by MachineName, Environment", typeof(AggregationPlan))]
[TestCase("select distinct(ProcessName) group by MachineName", typeof(AggregationPlan))]
[TestCase("select max(Elapsed) group by time(1s)", typeof(TimeSlicingPlan))]
[TestCase("select max(Elapsed) group by time(1s), MachineName", typeof(TimeSlicingPlan))]
[TestCase("select count(*)", typeof(AggregationPlan))]
public void QueryPlansAreDetermined(string query, Type planType)
{
    var tree = QueryParser.ParseExact(query);

    QueryPlan plan;
    string[] errors;
    Assert.IsTrue(QueryPlanner.Plan(tree, out plan, out errors));
    Assert.IsInstanceOf(planType, plan);
}

Part five will look at what has to be done to turn the plan into a rowset – the last thing to do before the API can be hooked up!

Aggregate Queries in Seq Part 3: An Opportunistic Parser

It turns out the parser wasn’t a huge departure from Seq’s existing filter parser. Seq already uses Sprache to parse filter expressions, and Sprache parsers compose very nicely.

After making the current FilterExpressionParser “root” expression public, and defining some new AST nodes like Projection and so on, things just get bolted together:

static readonly Parser<ExpressionValue> ExpressionValue =
    FilterExpressionParser.Expr.Token().Select(e => new ExpressionValue(e));

static readonly Parser<Projection> Projection =
    from v in AggregateValue.Or(ExpressionValue)
    from l in Label.Optional()
    select new Projection(v, l.GetOrDefault());

Here you can see how a projection like count(*) as Total is constructed from a parser for values and a parser for optional ‘as’ labels. I had to define a separate parser for some aggregations, like count(*), that aren’t otherwise valid Seq filter syntax, but any existing expression that FilterExpressionParser supports can be used as the value of a projected column.
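A dedicated count(*) parser can be bolted together in exactly the same style; a sketch (the AggregateValue constructor arguments here are assumed):

static readonly Parser<AggregateValue> CountAll =
    from name in Parse.IgnoreCase("count").Text()
    from open in Parse.Char('(').Token()
    from star in Parse.Char('*')
    from close in Parse.Char(')').Token()
    select new AggregateValue("count");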

Heading further up towards the root of the grammar, we get something like:

static readonly Parser<Query> Query =
    from @select in Select
    from @where in Where.XOptional()
    from groupBy in GroupBy.XOptional()
    select new Query(@select, @where.GetOrDefault(), groupBy.GetOrDefault());

The resulting Query parser can take some text input and give back a tree of objects representing the parts of the query. Success!
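For example, using Sprache's Parse() extension method (the Query AST type is the one assembled above):

// Parses the text into a Query node carrying the select, where and group by parts.
var ast = Query.Parse("select distinct(MachineName) group by Environment");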

There was one subtle problem here that you can spot by way of the oddly-named XOptional combinator. Sprache on Github provides Optional, which works as advertised, but upon failing a match will backtrack and return success regardless of whether a partial parse was possible or not.

This leads to error messages without a lot of information, for example:

select distinct(ExceptionType) group ApplicationName

is missing the ‘by’ required next to ‘group’. Using Optional the parser reports:

Syntax error (col 31): unexpected 'g'.

Hmmm. Not so good – there’s nothing at all wrong with that ‘g’! The problem is that upon failing to parse the ‘by’, Sprache’s Optional returned a zero-length successful parse, so parsing picks back up at that position and fails because there are no more tokens to match.

The ‘X’ in XOptional is for eXclusive, meaning that the token is optional, but only if it consumes no input whatsoever. As soon as ‘group’ is parsed, the optional branch is considered “taken”, and failures will propagate up. (Sprache ships ‘X’ versions of several parsers already, such as an exclusive Many called XMany.)

Here it is:

public static Parser<IOption<T>> XOptional<T>(this Parser<T> parser)
{
    if (parser == null) throw new ArgumentNullException(nameof(parser));
    return i =>
    {
        var result = parser(i);
        if (result.WasSuccessful)
            return Result.Success(new Some<T>(result.Value), result.Remainder);

        if (result.Remainder.Equals(i))
            return Result.Success(new None<T>(), i);

        return Result.Failure<IOption<T>>(result.Remainder, result.Message, result.Expectations);
    };
}

The divergence from the built-in Optional is that a zero-length successful parse is returned only if (result.Remainder.Equals(i)), i.e. when no input at all was consumed.

Using XOptional:

Syntax error (col 38): unexpected 'A', expected keyword 'by'.


If you haven’t used parser combinators before this whole thing might be a bit surprising – where’s the EBNF? The esoteric command line tools with animal names? It turns out that combinators make parsing into a regular (somewhat imperative) programming task without a lot of mystery surrounding it.

There are some limitations in Sprache’s implementation I’d like to address someday – for example, the error reported on ‘A’ above rather than ‘ApplicationName’ is the result of parsing the raw character stream instead of a tokenised one – but these are minor inconveniences that can be worked around if need be.

If you haven’t looked into combinator-based parsing, there are some great tutorials and examples linked from Sprache’s README. It’s a technique worth adding to your tool belt regardless of the kind of programming you usually do. Little languages are everywhere, waiting to be cracked open!

The most enjoyable and challenging part of any language processing task for me is not so much the parsing though, but taking a tree of syntactic nodes like we have here, and turning it into something executable. That’s coming up next :-)

Seq 2.4

Hot off the press and ready to download from the Seq site.

Seq 2.4 is largely a maintenance release with a host of bug fixes, but amongst those are some substantial improvements.

Filtering performance improvements

Seq implements its own query engine (soon to get some interesting new capabilities ;-)) optimized for the kind of messy, ad-hoc filtering that we do over log data. Based on the kinds of queries generated through real-world use of the 2.0 “signals” feature, two great little optimizations have landed in 2.4.

First, an overdue implementation of short-circuiting && (AND) and || (OR). This just means that the expression @Level == "Error" && Environment == "UAT" won’t bother evaluating the second comparison if the first one is false. Seq has always done some short-circuiting in queries, but only in certain cases. 2.4 extends this to all logical operations.

Second, and closely related, is term reordering based on expected execution cost. Some predicates are extremely cheap and others costly; for example, an event type comparison ($FEE7C01D) can be evaluated several thousand times faster than a full-document text search (Contains(@Document, "penguin")). This means that, given short-circuiting operations, $FEE7C01D && Contains(@Document, "penguin") is much more efficient than the reverse Contains(@Document, "penguin") && $FEE7C01D in the common case of mostly-negative results. Seq 2.4 uses heuristics to weigh up the relative execution cost of each branch of an expression to make sure the fastest comparisons are performed first.
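As an illustration of the idea (a sketch only, with invented cost weights and types, not Seq's actual code):

// Put the cheaper operand first: with a short-circuiting &&, a false
// left-hand side means the right-hand side is never evaluated at all.
static Expression ReorderAnd(Expression left, Expression right)
{
    return EstimateCost(left) <= EstimateCost(right)
        ? new AndExpression(left, right)
        : new AndExpression(right, left);
}

static int EstimateCost(Expression e)
{
    if (e is EventTypeComparison) return 1;     // e.g. $FEE7C01D, very cheap
    if (e is FullDocumentSearch) return 10000;  // e.g. Contains(@Document, ...)
    return 10;                                  // typical property comparison
}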

Both of these changes add up to substantial gains when using signals with a large number of applied filters.

Restricted signals

Since Seq uses signals for retention processing, an accidental change to one of those signals could lead to data loss. For this reason Seq 2.4 introduces lockable signals, requiring administrative privileges to modify.


Online compaction

Seq uses the ESENT storage engine to manage files on disk. It’s an amazing piece of technology and very mature, but until recently it was unable to compact data files while they remained in use. Although retention policies would remove events from the file, Seq would periodically need to take a slice of the event stream offline to free the empty space in the file, copying the old file into a new one. Mostly this background operation was quick and transparent, but on heavily loaded servers the disk space and I/O required would sometimes significantly impact performance.

The new version 2.4, when running on a capable operating system (Windows 8.1+ or Server 2012 R2+), now takes advantage of ESENT’s sparse file support to perform compaction in real time, spreading out the load and avoiding additional disk usage and I/O spikes.

You can download the new version here on the Seq website.

Happy logging! :-)

Set the asterisk in project.json version numbers

I have a feeling I’ve bothered the friendly people on Jabbr twice now about how to set a value for the * (‘wildcard’) placeholder in DNX’s project.json files, so here it is for next time… :-)

DNX project.json files use a version syntax that makes it easy to set a tag (e.g. the branch name) in the JSON file itself, while adding a unique numeric suffix at build time.

Here’s one in the first property below:

    "version": "1.0.0-beta-*",
    "description": "Serilog provider for Microsoft.Framework.Logging",

(I’m setting up CI for a Serilog provider for Microsoft.Framework.Logging that the ASP.NET team put together and contributed to the project.)

These work both in the package’s own version field, and in dependency versions, so multiple projects being built at the same time can depend on each other this way.

However, if you build a project versioned this way in Visual Studio, or at the command line with dnu build, you’ll get a version number like 1.0.0-beta. That’s not what you’re after.

The key is setting the DNX_BUILD_VERSION environment variable, e.g. to a counter supplied by your build server:


With this done you’ll get nice unique package versions like 1.0.0-beta-234.

Thanks David Fowler and Alex Koeplinger for the pointer.

Seq 2.2: Memory Efficiency, One-click Auto-refresh, Filter History

It took some restraint to get Seq 2.0 over the line: there are so many possibilities that there’s really no way everything I’d like to have done could make the cut. Now the “big bang” is over, it’s great to be able to make more regular point releases like Seq 2.2, which this post is about.

Improved Memory Efficiency

Seq is a disk-backed data store that uses memory caching extensively. When data from older events does spill over from RAM, I/O is needed and queries slow down.

Seq 2.2 performs additional de-duplication of string data to represent more events in the same cache space. This reduces the need for I/O and places less overall burden on the CPU. The net effect is that queries can run noticeably faster on busy servers.
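The general idea is classic string interning; roughly (a sketch, not Seq's actual cache):

// Return a canonical instance for each distinct string value, so that
// repeated property values share a single allocation in the cache.
var pool = new Dictionary<string, string>();

string Deduplicate(string value)
{
    string existing;
    if (pool.TryGetValue(value, out existing))
        return existing;

    pool.Add(value, value);
    return value;
}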

One-click “Auto-refresh on”

The much-used “Auto-refresh on” option has been promoted from a drop-down menu item to a top-level button. It’s represented by the little “infinity” icon in the image above.


Recent Filter History

Seq 1.6 used deep-linking to tie the current filter expression to the web browser’s history. The basic idea was sound – it’s nice to enter a filter, try another, then press “Back” to go to the previous one.

In practice log navigation is so fluid that what you thought was the last filter often turns out to be a few clicks back, which ends up being a clunky back-back-back experience. Seq 2.0 therefore booted out back button support for filters, instead providing coarse-grained history between the dash, settings, and events screens (deep-linking of filters is still supported, but they don’t go on the browser’s back-stack).

Seq 2.2 brings back the notion of history in the guise of a “recent filters” drop-down you can see in the right of the filter box. Clicking on one of the history entries will set the filter to that text:


The release includes other small enhancements and bug fixes, listed in the complete release notes.

There will be more point releases in the 2.x cycle at about the same 2-4 week interval, but 2.2 really is one not to miss. You can download it now from the Seq website — let us know what you think!

Seq 2.1 Refresh

Since Seq 2.0 shipped a couple of weeks ago, enough small fixes and improvements have been collected to make a new point release.

Seq 2.1 is ready to download, and includes:

  • Send to app is back (#301)
  • Find with adjacent works in locales that use comma as a digit separator; finally! (#194)
  • Downgrading to the free version now completes successfully even if the admin user has been deleted (#302)
  • TotalMilliseconds() enables parsing of .NET TimeSpan properties that have been serialized to strings (#152)
  • Refreshing the /login page now works as expected (#303)
  • Seq will now stop recording events when a (configurable) minimum disk space limit is reached – by default, 128 MB (#304)

There are some minor performance and usability tweaks in there too, including some pixel shuffling for better in-browser scaling at 50, 75 and 90% on the events screen.

Hope you’re enjoying the new release! Comments and feedback welcome as always :-)

Announcing Seq 2 RTW

It’s with pleasure I can announce that the RTW build of Seq 2.0 is now available.

What is Seq?

Seq is a log server designed specifically to take advantage of modern structured logging techniques. Libraries like Serilog and SLAB produce log events that are not only readable messages, but richly structured objects that can be searched and correlated with ease. Seq is the fastest way for .NET teams to capture and investigate these logs when debugging code in complex environments.

With Seq it’s trivially easy to find all events raised in a particular HTTP request, from a particular machine, or when processing a particular order, for example. Not only that, but queries unimaginable before the rise of structured logging – “show me all the orders processed with an item over $100” – can be expressed in a clean C#-style syntax and answered directly from logs.

What’s New?

The centrepiece of the 2.0 release is a completely redesigned query interface, featuring auto-completion and syntax help for filtering, click-to-apply signals, and easier sharing for users working in larger teams.

Seq 2.0

Seq 2.0 also paves the way to much easier temporal navigation with a new timeline view and to-the-millisecond date and time filtering.

In addition, almost every corner of the interface has been re-thought, and many small improvements made to ergonomics and usability.

Installing and Upgrading

If you’re not using Seq today, you can install it for free under the complimentary single-user license, or get a trial key that enables authentication and other team-oriented features on our website.

If you purchased Seq in the last year, you can upgrade under your existing license: just install the new version and your data will be migrated in place. Easy!

If you’ve used up your 12 months included upgrades already – you’re in luck: to celebrate the release of the new version we’re offering existing customers discounted upgrade pricing this month. Email Support and we’ll organize it for you and make sure everything goes seamlessly.

Documentation and Support

With this release we’re launching a new combined documentation and forum site. Check out the new Getting Started page and keep an eye out for new content as we migrate it from the older Confluence site.

Thanks and Acknowledgements

This release was made possible by our wonderful customers, and a brave bunch of early adopters who’ve given us invaluable feedback and bug reports during the beta period. Thank you all!

Download Seq 2.0 now.

Filtering with Signals in Seq 2

Structured logs are designed for easy filtering and correlation. If you monitor apps in multiple environments for example, you’ll pretty quickly turn to structured properties like Environment as a way to distinguish between logs from different locations:

Log.Logger = new LoggerConfiguration()
  .Enrich.WithProperty("Environment", "Production")
  // Other configuration, then...
  .CreateLogger();

When you collect and view these logs with Seq, you might end up writing the filter Environment == "Production" a lot.


Instead of typing this a hundred and twenty times a day, Seq 2 lets you save your filter as a signal. Press the “move to signal” (») button in the filter bar, and one will be created for you:


It’s easy to build up more complex filters by adding them to the signal with ».

The various “exclude” options on event properties and types make it quick to eliminate noise:


Once you’ve given the signal a name, saved, and closed it, you’ll see it in the signal list ready to activate at the click of a mouse:


(Here’s where Signals in Seq 2 start to shine – to apply multiple signals like “Production” and “Errors”, just click to apply each one, and only events matching all of the applied signals will be displayed.)

Signals are really fundamental to Seq, so busy teams quickly create a lot. To see the list of signals everyone has created, and add them to your own view, use the Create or add signal… link to search the list:


There’s a fresh beta build now available so you can get started with Seq 2. It’d be great to have your feedback as we work through the beta process – please feel free to post here, contact our support address or raise tickets on our issue tracker.

Seq/2 Beta

Last month I posted the first sneak peek of what you can expect from the upcoming Seq “v2” release.

Since then, the final pieces of the UI have been put (back!) into place, bugs have been squashed, and we’ve incorporated feedback from some helpful early adopters.

Today, though there remains plenty to do, we’re ready to open up the beta to a wider audience. You can now download the latest installer from the Seq homepage.


Shown above is the fledgling “Timeline” view, that displays the distribution of matches for any Seq query inline.

What’s inside?

Recapping from the earlier post, the new release is strongly focused on user experience, both by simplifying the UI and underlying model, as well as improving common scenarios and considering the needs of larger projects and teams.

A few of the more noticeable differences:

  • The UI is much more responsive in basic scenarios
  • Queries and views have been combined into a much simpler and more powerful “signal” concept
  • Each user gets their own view of the available signals in the system – signals created by other users can be added explicitly
  • Property and built-in function names are auto-completed in the filter bar
  • Filter syntax help is shown under the filter box when an expression isn’t recognized
  • The interface is much friendlier towards touch devices
  • The dash layout is more compact, getting more info onto the screen in a more useful format
  • Filtering by date and time is easier
  • You can see the distribution of events across a time span without adding a dash item
  • It’s simpler to create a CSV export from all of the currently tagged properties (no more “Properties of Interest”)
  • Oh, and it’s now much easier to select text from log messages

A lot of small bugs and annoyances have been fixed too, and the theme updated to be clearer and more consistent throughout the settings screens.

Installing Seq/2

The goal of the beta release is to get your feedback, and from that angle we’d love you to install it. Here’s what you need to know first.


The current release is a beta. It’s not perfect – we wouldn’t recommend upgrading critical servers just yet – but it’s usable enough for real-world installations.

Bug reports are greatly appreciated – you can post those on the issue tracker or email us at our support address. Some known issues are listed below as well.


As before, Seq/2 comes with a free single-user license that you can use out of the box.

If you’ve purchased Seq in the last twelve months, your license will cover your upgrade to version 2 for free; just go right ahead.

If you have an older license, or just want to try Seq for the first time, you can grab a trial key from the Seq website or contact our support address for help.

Known issues

At the time of writing:

  • The date/time pickers in the timeline view don’t show popup calendars
  • Substantial problems have been reported when using Google Chrome “Canary”
  • Manual event deletion is not available
  • Deep linking into a missing/deleted signal will leave the UI stalled
  • The API client has not been updated to the latest version

What’s next?

We’re hard at work to put the finishing touches on the final version. We plan to post another beta in the next week or two, and at present we’re aiming for a mid-year release.

Thanks for your help – go get it! :-)

Diagnostic logging in DNX/ASP.NET 5

If you’re writing ASP.NET apps with the latest tooling, you will inevitably encounter the Microsoft.Framework.Logging package. It’s a dependency of ASP.NET MVC, WebForms and much of the related infrastructure. “What is this thing?” you ask. Great question!

The Framework writes log events

When an MVC controller is selected, when a static file is served, when an error is caught… ASP.NET and other framework libraries helpfully log this information.

In past .NET versions, System.Diagnostics.Trace was the output of choice, but the world has moved on and the newer package is an alternative with a more modern-feeling API.

What does it look like?

Like ASP.NET 5 itself, your applications and libraries can write events to the framework’s logging pipeline:

var loggerFactory = new LoggerFactory().AddConsole();
var logger = loggerFactory.CreateLogger(typeof(Program).FullName);

logger.LogInformation("Handled in {ExecutionTime} ms", executionTime);

An ILoggerFactory creates ILoggers, and these provide the typical methods you’d expect for logging errors, warnings, debug events and so on.

You might be surprised (or, you might not!) that just like Serilog, Microsoft.Framework.Logging supports {Named} holes in its format strings, in addition to the typical {0} and {1}. This technique enables structured loggers to provide first-class search operations on not just the raw text of the log message, but the individual property values as well:

ExecutionTime > 300

The great thing is that if you’re not using a structured approach, messages will just come through the pipeline as strings – everyone’s looked after.

Microsoft.Framework.Logging vs. Serilog, NLog, SLAB, log4net…

The designers of ASP.NET faced the same dilemma as other library authors in .NET: which logging library should they use to write events? The choice of logger is typically one the application author will make, so frameworks and libraries need a way to support more than one option.

Over time we’ve had a few solutions to this – Common.Logging has been popular, and more recently LibLog does a good job of providing a dependency-free API on top of several loggers.

Microsoft.Framework.Logging today is another of these, though it’s somewhat more capable (and in many ways it’s a very nice one).

Although out-of-the-box implementations are provided for basic console logging and a few other targets, you’ll need a logging back-end like Serilog or NLog to gain the kind of functionality expected by non-trivial apps.

So, how does that work?

Getting started

We’ll assume you want to use Serilog. (Instructions for NLog are pretty similar, and an example that uses it can be found here.) The ASP.NET Team wrote the original Serilog back-end for Microsoft.Framework.Logging, and transferred it to the Serilog project where it’s currently supported and maintained.

First, install the Serilog.Framework.Logging NuGet package into your web or console app.

Next, in your application’s Startup() method, configure Serilog first:

using Serilog;

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        Log.Logger = new LoggerConfiguration()
#if !DNXCORE50
            .WriteTo.Trace()
#endif
            .CreateLogger();
        // Other startup code
    }
}

The conditional compilation (#if) is necessary if you’re targeting the CoreCLR runtime, for which there are currently few Serilog sinks. If you’re targeting the full .NET framework you can just use .WriteTo.Trace(), or any other available sink.

Finally, in your Startup class’s Configure() method, call AddSerilog() on the provided loggerFactory.

  public void Configure(IApplicationBuilder app, IHostingEnvironment env,
                        ILoggerFactory loggerFactory)
  {
      loggerFactory.AddSerilog();
      // Other configuration
  }

That’s it! With the level bumped up a little you should see log output like:

2015-05-15 22:14:44.646 +10:00 [Debug] RouteCollection.RouteAsync
    Handled? True
2015-05-15 22:14:44.647 +10:00 [Debug] RouterMiddleware.Invoke
    Handled? True
2015-05-15 22:14:45.706 +10:00 [Debug] /lib/jquery/jquery.js not modified
2015-05-15 22:14:45.706 +10:00 [Debug] /css/site.css not modified
2015-05-15 22:14:45.741 +10:00 [Debug] Handled. Status code: 304 File: /css/site.css

An important note on levels

If you want to get more or less information from the log you’ll need to change the level.

  • You need to set MinimumLevel on both the Serilog LoggerConfiguration and the ILoggerFactory (see the sketch after this list)
  • Serilog and ASP.NET assign different priorities to the Debug and Trace levels; Serilog’s Debug is ASP.NET’s Trace, and vice-versa
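A minimal sketch, assuming the beta-era MinimumLevel property on ILoggerFactory described above:

// Raise Serilog's minimum level...
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.Trace()
    .CreateLogger();

// ...and the framework's, otherwise the more restrictive of the two wins.
loggerFactory.MinimumLevel = LogLevel.Debug;
loggerFactory.AddSerilog();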

Which API should my application use?

Personally, I’m content to let ASP.NET write events with the Microsoft.Framework.Logging API, and to use Serilog’s native API in my application code. The designers of the framework API have gone out of their way to support a wide range of logging styles, so using either is reasonable. I’m obviously somewhat biased, but I prefer the more focused API of Serilog to the more general one from the framework. Your preferences might differ :-).

The important thing is – don’t miss the opportunity posed by ASP.NET 5 to take your logging to the next level and get some structured events through your logging pipeline.

Getting help

If you have questions or comments please feel free to raise an issue on the tracker. We’re also frequently in the Serilog Jabbr chat and happy to help if you get stuck!