Last updated at Mon, 06 Nov 2017 20:35:50 GMT


Application logging is the software world’s version of archeology. At runtime, your application lives in a rich, colorful, 3-dimensional world of flowing aqueducts, packed coliseums, and bustling streets. There’s more going on than can possibly be captured.

When you’re trying to reproduce and correct a reported issue, you play archeologist. The vibrant, live world is gone, and you’re left to piece reality back together using only decorated pots, spearheads, and fragments of frescoes. In other words, the log file is a set of artifacts from which you must extrapolate a much richer reality.

Log All The Things!

It is for this reason that just about every developer, sooner or later, has a “log all the things!” epiphany. Someone excavating Roman ruins can’t just go tell the ancient Romans to be sure to write more things down. But an application archeologist has this power.

The result of this epiphany is log data. Massive amounts of log data. Reams and gobs of data, filling files, folders, and disks. Megabytes, gigabytes and, if you’re really bitten by the logging bug, terabytes.

As this reality sets in, the joy of the epiphany is replaced by a lesson in the Law of Unintended Consequences. “Log all the things!” does a great job of ensuring that nothing is missed. But it also creates a ridiculously low signal-to-noise ratio. The missing-data problem is replaced by the problem of finding the data that is actually relevant in a sea of “entered method GetFoo() at 2015-12-13.13:25:22:133”.

Parse All The Logs!

If the programmer is to arrive back in a place of joy, another epiphany is needed. And, usually, that comes in the form of, “let’s write some code that parses and mines the log files.” Properly executed, it’s a winning strategy, because it lets you have the best of both worlds – you can avoid lost data and, at the same time, crank up the signal-to-noise ratio.

The code you write toward this end is predictable. I bet you can picture it clearly right now, in your language of choice. It’s a medley of for, while, if, and else. If the command-line flag is “--timestamp” and there are two arguments, look for entries between those two times. While the line read is not null, read in another line. For entry 0 to entry total, see if each entry is an exception code. If it is, display it. You get the idea.

In this fashion, you wind up with an internal utility to which you’re constantly adding. It takes one of your log files as input and, according to a wide range of options, performs selection and projection for you, returning a narrow set of information to you.
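In Python, such a utility might sketch out like this. Everything here is a hypothetical stand-in, not any real tool: the `--timestamp` flag, the log format (“ISO timestamp, space, message”), and the file name are assumptions for illustration.

```python
import sys
from datetime import datetime

# A sketch of the kind of imperative utility described above.
# The "<ISO timestamp> <message>" log format and the --timestamp
# flag are hypothetical stand-ins.

def entries_between(lines, start, end):
    """Select entries whose timestamp falls in [start, end]."""
    selected = []
    for line in lines:                      # while there are lines, read one
        line = line.rstrip("\n")
        if not line:
            continue
        stamp, _, message = line.partition(" ")
        when = datetime.fromisoformat(stamp)
        if start <= when <= end:            # check the boundary condition
            selected.append((when, message))
    return selected

if __name__ == "__main__":
    # e.g.: python logtool.py --timestamp 2015-12-13T13:00:00 2015-12-13T14:00:00
    if len(sys.argv) == 4 and sys.argv[1] == "--timestamp":
        start = datetime.fromisoformat(sys.argv[2])
        end = datetime.fromisoformat(sys.argv[3])
        with open("app.log") as f:
            for when, message in entries_between(f, start, end):
                print(when, message)
```

Every new requirement means another flag, another branch, another loop: hence the constant additions the next paragraph describes.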

Declarative and Imperative

If you’re familiar with RDBMS theory, the terms *selection* and *projection* may be familiar. If you’re not, they mean, loosely, “filtering the resultant rows” and “filtering the shown columns.” They’re means of removing noise and leaving only signal.
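As a toy illustration (in-memory Python dicts standing in for database rows, not any particular database API), the two operations look like this:

```python
# Toy "rows", as a database or log-query engine might see them.
rows = [
    {"time": "08:15", "level": "ERROR", "message": "timeout"},
    {"time": "08:42", "level": "INFO", "message": "heartbeat"},
    {"time": "09:05", "level": "ERROR", "message": "disk full"},
]

# Selection: keep only the rows you care about.
errors = [r for r in rows if r["level"] == "ERROR"]

# Projection: keep only the columns you care about.
summary = [{"time": r["time"], "message": r["message"]} for r in errors]

print(summary)
```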

But they’re also database constructs, so you may wonder why I’m talking about them alongside mention of control flow statements and utility code that iterates through file lines. Well, the reason I’m offering this juxtaposition is to highlight the difference between declarative and imperative programming.

Imperative programming is what you’re rolling up your sleeves to do when you parse a log file. It involves telling the compiler/interpreter how to do things. Iterate through this loop, check this boundary condition, perform this action if the condition is met, but this other action if the condition is not met. Imperative programming is essentially micromanagement.

Declarative programming, on the other hand, is delegation. It involves telling the compiler/interpreter, “I want a list of entries that occurred between 8 and 9 AM and I don’t care how you do it.” In other words, declarative programming is a paradigm in which you specify what but not how.
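The contrast shows up even in a few lines of code. Here is a contrived example using Python’s built-in sqlite3 module; the entries and timestamps are made up, and real log timestamps would of course be richer than plain strings.

```python
import sqlite3

entries = [
    ("07:55", "startup complete"),
    ("08:20", "request handled"),
    ("08:45", "cache miss"),
    ("09:10", "shutdown requested"),
]

# Imperative: spell out how -- loop, test the boundary, accumulate.
between = []
for stamp, message in entries:
    if "08:00" <= stamp <= "09:00":
        between.append(message)

# Declarative: state what you want; SQLite decides how to get it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (stamp TEXT, message TEXT)")
con.executemany("INSERT INTO log VALUES (?, ?)", entries)
declared = [row[0] for row in con.execute(
    "SELECT message FROM log WHERE stamp BETWEEN '08:00' AND '09:00'"
)]

assert between == declared  # same result, very different instructions
```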

In the context of these logging epiphanies, this distinction is subtle and yet incredibly important. What you gear up to write is imperative code for traversing the log files. But what you actually want is the ability to issue ad-hoc, declarative queries. Do you actually want to write and maintain internal application code for parsing log files? Or do you just want to tell some utility, “get me entries that meet this description?”

I’m betting it’s the latter.

Declare All The Queries!

I’ve mentioned two epiphanies, but there’s a third, fairly rare one to be had. You can capture massive amounts of data and sift it to cut through massive amounts of noise. But you can also do so declaratively, without getting mired in log-parsing application code. This, a query language for your logs, is truly the best of all worlds.

Log Entries Query Language (LEQL) is a means by which you can achieve this. When letting Logentries manage your log files, you get access to this declarative construct. The purpose of this post is neither documentation nor tutorial, so I won’t go into much detail, but let’s take a look at an example of what’s possible.

Let’s say that I had a web server access log and I wanted to get a sense of the server errors that have occurred over the lifespan of the log in question. I don’t need to use regex in my search. I don’t need to write any shell scripts or build any little applications to do parsing. All I need to do is use LEQL to say this:

where(status>=500) groupby(status) calculate(count) sort(desc)

That’s it. Just that bit of code to say, “I want a count, in descending order, of how many 5xx (server) errors have occurred.” And if I want to change it, I have a whole arsenal of LEQL constructs at my disposal.
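To make the comparison concrete, here is roughly what that one-liner would cost you imperatively, sketched in Python. The list of status codes is a made-up stand-in; in reality you would first have to extract the codes from raw access-log lines yourself.

```python
from collections import Counter

# Hypothetical status codes already parsed out of an access log.
statuses = [200, 500, 404, 503, 500, 200, 502, 500]

# The moral equivalent of:
#   where(status>=500) groupby(status) calculate(count) sort(desc)
counts = Counter(s for s in statuses if s >= 500)
report = sorted(counts.items(), key=lambda pair: pair[1], reverse=True)

print(report)
```

Even this tidy version hard-codes the question; every new question means more code, which is exactly what the declarative query spares you.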

As you can see, it’s possible to dump an awful lot of log data and then search through it in a hurry. And, best of all, you don’t need to write supporting log-parsing code. Logentries has done that for you and abstracted it into a declarative engine. Armed with these tools, you can leverage all three of these logging epiphanies and give yourself an enormous advantage in the field of code archeology.