Pedantic Orderliness

Visibility is Mission Critical

📝 2019-02-12T00:26:27Z

When building services, one of the best developer-ergonomics improvements is logging. The value of well-structured logs cannot be overstated. Quality logs make issue investigation, exploration, and iterative system improvements much easier. Without logs, it’s nearly impossible to reason about what your service is doing or has done.

Essentials

Logs are the journals a service keeps about what it’s doing. In a first cut, a developer usually writes unstructured text to stdout & stderr or a file. After a while, a logging library is introduced to the project, and slowly the writes to stdout/stderr are replaced with calls to the logging library. Save yourself the time and use a logging library from the start.

The contents of a log vary depending on its purpose. It’s not uncommon for services to keep multiple logs, writing them out to different files or a centralized logging system. Start-up details (ports, DB connection status, values of crucial environment variables), errors, stack traces, scheduled tasks, etc… go in the service log. Requests can go in the service log, but most services write them to a separate file. Logging performance information (slow operations, queries, requests) is also common.
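As a minimal sketch of keeping separate logs, here’s how Python’s standard logging module (rather than the libraries shown later in this post) can route request entries to their own file while service entries go to stderr. The logger names and the `requests.log` path are illustrative:

```python
import logging
import sys

# Illustrative logger names; "service.requests" is a child of "service".
service_log = logging.getLogger("service")
request_log = logging.getLogger("service.requests")
request_log.propagate = False  # keep request entries out of the service log

service_log.addHandler(logging.StreamHandler(sys.stderr))
request_log.addHandler(logging.FileHandler("requests.log"))  # hypothetical path
service_log.setLevel(logging.INFO)
request_log.setLevel(logging.INFO)

service_log.info("Listening on port 8080")           # goes to stderr
request_log.info('::1 - - "GET / HTTP/1.1" 200 5682')  # goes to requests.log
```

The same idea applies in any logging library that supports multiple loggers or transports: one destination per kind of log.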

Which entries are written into the log is often configurable, and the configuration varies by environment (dev, testing, staging, production, etc…). A person working in their development environment wants to see detailed information relevant to their current task, while someone investigating a problem in production is looking mostly for errors. To support these use cases, logging libraries provide severity levels. A minimum severity level can be set, and only entries of that level or higher are written to the logs.
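The minimum-severity mechanism can be sketched with Python’s standard logging module (the behavior is the same in the libraries shown below): entries under the configured level are simply dropped.

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)

log = logging.getLogger("severity-demo")
log.addHandler(handler)
log.setLevel(logging.WARNING)  # minimum severity for this environment

log.debug("cache miss for key %s", "user:42")      # filtered out
log.info("request handled")                        # filtered out
log.warning("PG_URL not provided, using default")  # written

print(stream.getvalue())  # → PG_URL not provided, using default
```

In production you might set WARNING; in development, DEBUG.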

Another key feature of logging libraries is attaching additional context to each log entry. It’s common to include time, hostname, PID (process ID), environment, and more. Different kinds of logs will include different contextual details.
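One way to picture this context tagging with Python’s standard library is a LoggerAdapter, which stamps the same fields onto every entry it writes (the env and host values here are placeholders):

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
# Pull the adapter's extra fields into the output line.
handler.setFormatter(logging.Formatter("%(env)s %(host)s %(message)s"))

base = logging.getLogger("ctx-demo")
base.addHandler(handler)
base.setLevel(logging.INFO)

# LoggerAdapter attaches the same context dict to every entry.
log = logging.LoggerAdapter(base, {"env": "local", "host": "desktop-1"})
log.info("starting server")

print(stream.getvalue())  # → local desktop-1 starting server
```

Logrus’s `WithFields` and Winston’s `defaultMeta`, shown later, serve the same role.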

Logging libraries also support writing entries in one or more formats. When writing to stdout, the format should be human-readable (possibly with color). However, when writing to a central logging system the format should be structured for indexing of key details/tags.
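A rough sketch of switching formats by destination, using Python’s standard logging — the JsonFormatter class here is hand-rolled for illustration, not part of any library:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Structured output for a central logging system (illustrative sketch)."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname.lower(),
            "msg": record.getMessage(),
        })

env = "production"  # e.g. read from an ENV variable
handler = logging.StreamHandler()
if env == "production":
    handler.setFormatter(JsonFormatter())  # machine-indexable
else:
    handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))  # human-readable

log = logging.getLogger("fmt-demo")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("starting server")  # emits {"level": "info", "msg": "starting server"}
```

The Logrus, Winston, and Logmatic examples below all do the equivalent by plugging in a JSON formatter.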

Below are logs from a project written in Go using Logrus. Note the level, seconds from start, message, and tag (port). These logs are written to stdout and are intended to be human-readable.

INFO[0000] Starting...
WARN[0000] PG_URL not provided, using default
INFO[0000] Listening... port=8080
INFO[0001] Metric request

Below are logs from this blog. It’s the same library as above (Logrus), but it’s been configured to write structured logs in JSON with more details. Some of the log entries are from requests and contain an Apache-formatted HTTP request log entry.

{"env":"local","host":"DESKTOP-GKDCTUP","level":"info","msg":"starting server","time":"2019-02-05T20:03:29-08:00"}
{"env":"local","host":"DESKTOP-GKDCTUP","level":"info","msg":"::1 - - [05/Feb/2019:20:03:32 -0800] \"GET / HTTP/1.1\" 200 5682","time":"2019-02-05T20:03:32-08:00"}
{"env":"local","host":"DESKTOP-GKDCTUP","level":"info","msg":"::1 - - [05/Feb/2019:20:03:32 -0800] \"GET /static/style.css?m=c9c82ed84e35f71f9533b81494a6f2a6 HTTP/1.1\" 200 3931","time":"2019-02-05T20:03:32-08:00"}
{"env":"local","host":"DESKTOP-GKDCTUP","level":"info","msg":"::1 - - [05/Feb/2019:20:03:32 -0800] \"GET /static/logo.png?m=30fac5d7c5602071f356c220903432f4 HTTP/1.1\" 200 12525","time":"2019-02-05T20:03:32-08:00"}
{"env":"local","host":"DESKTOP-GKDCTUP","level":"info","msg":"::1 - - [05/Feb/2019:20:03:32 -0800] \"GET /static/ryanolds.jpg?m=213e216ee736fcf35e6216e901f2947f HTTP/1.1\" 200 158955","time":"2019-02-05T20:03:32-08:00"}

Implementations

Let’s look over logging implementations for Go (Logrus), Node.js (Winston), and Python (Logmatic). Each example writes the message, environment, hostname, and time to stdout. They all follow the same pattern: get the logger, set the logging level, set a formatter that outputs JSON, add some metadata (hostname and environment name), and then write an info-level entry.

Go w/ Logrus

package main

import (
  "os"

  "github.com/sirupsen/logrus"
)

func main() {
  logger := logrus.New()
  logger.SetLevel(logrus.InfoLevel)
  logger.SetFormatter(&logrus.JSONFormatter{})
  logger.SetOutput(os.Stdout)
  log := logrus.NewEntry(logger)

  env := os.Getenv("ENV")
  if env == "" {
    env = "local"
  }

  hostname, err := os.Hostname()
  if err != nil {
    log.WithError(err).Error("Problem getting hostname")
    hostname = "unknown"
  }

  log = log.WithFields(logrus.Fields{
    "env":  env,
    "host": hostname,
  })

  log.WithField("foo", "bar").Info("Starting....")
  // {"env":"local","foo":"bar","host":"DESKTOP-GKDCTUP","level":"info","msg":"Starting....","time":"2019-02-05T20:03:29-08:00"}

  // ...
}

Node.js w/ Winston

const os = require("os");
const winston = require("winston");

const env = process.env.ENV || "local";
const hostname = os.hostname();

const log = winston.createLogger({
  level: "info",
  defaultMeta: {
    env: env,
    host: hostname
  },
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [
    new winston.transports.Console()
  ]
});

log.info("Starting...", {"foo": "bar"});
// {"message":"Starting...","level":"info","env":"local","host":"DESKTOP-GKDCTUP","foo":"bar","timestamp":"2019-02-10T23:17:02.615Z"}

// ...

Python w/ Logmatic

import os
import platform
import logging
import logmatic

env = os.getenv("ENV", "local")
host = platform.node()

log = logging.getLogger(__name__)
log.setLevel(logging.INFO)

handler = logging.StreamHandler()
formatter = logmatic.JsonFormatter(extra={
  "host": host,
  "env": env
})
handler.setFormatter(formatter)
log.addHandler(handler)

log.info("Starting...", extra={"foo": "bar"})
# {"asctime": "2019-02-10T15:54:14Z-0800", "name": "__main__", "processName": "MainProcess", "filename": "log.py", "funcName": "<module>", "levelname": "INFO", "lineno": 20, "module": "log", "threadName": "MainThread", "message": "Starting...", "foo": "bar", "timestamp": "2019-02-10T15:54:14Z-0800", "host": "DESKTOP-GKDCTUP", "env": "local"}

# ...

Wrap-up

The quality of the data in your logs has a direct impact on issue investigation and development speed. Putting a little thought into how you’re logging, the metadata you’re including, and where the logs are stored will pay off many times over. As we saw in the last section, logging libraries are not difficult to set up or use. When combined with centralized logging, your team will be able to monitor, debug, and explore behavior across your entire system from a single place, reducing downtime, making customers happier, and improving developers’ and operators’ quality of life.