Which Java Logging Framework Has the Best Performance?

Java is blessed with three major logging frameworks: Java Util Logging, Log4j 2, and Logback. When picking one for your project, have you ever wondered about their performance? After all, it would be silly to slow down your application just because you picked a sluggish logging framework or a suboptimal configuration.

Whenever the subject of logging in Java comes up on Twitter, you can be sure you’ll be in for a lot of ranting.

There are actually quite a lot of people who are “lucky” enough to remain isolated from the question of logging in Java:

  • Junior developers are typically not empowered to make this type of choice.
  • Intermediate developers are often working on a code base where the choice has already been made.
  • Senior developers can have the choice forced on them by their CTO or some other external decision maker.

Sooner or later, however, you will reach the point where you have to choose a logging framework. In 2017 we can break the choice down into four options:

  1. Refuse to make a choice and use a logging shim
  2. Use Java Util Logging
  3. Use Log4j 2.x
  4. Use Logback

Logging Shims

If you are…

  • developing an API
  • developing a shared library
  • developing an application that will be deployed to an execution environment that is not your responsibility (for example a .war or .ear file that gets deployed in a shared application container)

… then the choice of logging framework is not your responsibility and you should use a logging shim.

You would be forgiven for thinking that there is a choice:

  1. Apache Commons Logging
  2. SLF4J

But the Apache Commons Logging API is so basic and restricted that the only sensible recommendation is: do not use it, unless you like writing your logging statements like this:

// Commons Logging has no parameterized messages, so every debug
// statement needs a guard to avoid paying for the string
// concatenation when debug logging is disabled
if (log.isDebugEnabled()) {
    log.debug("Applying widget id " + id + " flange color " + color);
}
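
For contrast, here is roughly the same statement written against SLF4J. This is a minimal sketch (the WidgetService class and applyWidget method are invented for illustration); SLF4J’s parameterized messages defer formatting until the framework knows the statement will actually be logged, so the guard is only needed when computing the arguments themselves is expensive:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WidgetService {
    private static final Logger log = LoggerFactory.getLogger(WidgetService.class);

    void applyWidget(String id, String color) {
        // no isDebugEnabled() guard needed: the {} placeholders are only
        // substituted when DEBUG is enabled for this logger
        log.debug("Applying widget id {} flange color {}", id, color);
    }
}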

Recommendation: SLF4J

Logging Frameworks

If you are developing a microservice, or some other application where you have control over the execution environment, you are probably responsible for selecting the logging framework. There may be cases where you decide to defer picking one by using a logging shim, but you are still going to have to make a choice eventually.

Changing from one logging framework to another is typically a simple search and replace, or at worst a more advanced “structural” search and replace in the more capable IDEs. If logging performance is important to you, a logging framework’s idiomatic API will always outperform a shim, and given that the cost of switching is lower than with most other APIs, you are probably better off coding directly against the logging framework.

I should also mention that there is a philosophical difference between the frameworks:

  • Java Util Logging comes from the school of thought that logging should be the exception and not the norm.
  • Log4j 2.x and Logback both show their shared heritage and place importance on performance when logging.

All three are hierarchical logging frameworks that allow runtime modification of configuration and support various output destinations. Confusingly, though, the frameworks sometimes use different terminology for the same things:

  • Handler vs Appender – these are the same thing, responsible for delivering the log message to a destination such as a log file.
  • Formatter vs Layout – these are the same thing, responsible for rendering the log message in a specific format.
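
To make the mapping concrete, here is a minimal sketch that wires up Java Util Logging programmatically (the file name and logger name are arbitrary); the comments note the equivalent Log4j 2.x / Logback terms:

import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class JulSetup {
    public static void main(String[] args) throws Exception {
        // a Handler (an Appender in Log4j 2.x / Logback terms) delivers
        // log records to a destination, here a file
        FileHandler handler = new FileHandler("app.log");
        // a Formatter (a Layout in Log4j 2.x / Logback terms) renders
        // each record as text
        handler.setFormatter(new SimpleFormatter());
        Logger logger = Logger.getLogger("com.example");
        logger.addHandler(handler);
        logger.info("Hello from Java Util Logging");
    }
}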

We could make a feature comparison, but in my opinion the only feature worth differentiating on is mapped diagnostic contexts (MDCs), as all other feature differences can be coded around relatively easily. Mapped diagnostic contexts provide a way to enhance the log messages with information that would not be accessible to the code where the logging actually occurs. For example, a web application might inject the requesting user and some other details of a request into the MDC, which allows any log statements on the request handling thread to include those details.
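
As a concrete illustration, here is a minimal sketch using the SLF4J MDC API; the RequestHandler class, the handleRequest method, and the key names are invented for the example:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestHandler {
    private static final Logger LOGGER = LoggerFactory.getLogger(RequestHandler.class);

    void handleRequest(String userId, String requestId) {
        // anything put into the MDC is visible to every log statement
        // made on this thread until it is removed
        MDC.put("user", userId);
        MDC.put("requestId", requestId);
        try {
            process();
        } finally {
            MDC.clear(); // avoid leaking context onto pooled threads
        }
    }

    private void process() {
        // with an output pattern containing %X{user} and %X{requestId},
        // this line picks up the request details automatically
        LOGGER.info("processing request");
    }
}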

There are some cases where a lack of MDC support may not be as important, for example:

  • In a microservice architecture, there is no shared JVM so you have to build equivalent functionality into the service APIs, typically by explicitly passing the state that would be in the MDC.
  • In a single user application, there typically is only ever one context and typically not much concurrency, so there is less need for MDCs.

We could compare based on ease of configuration. Sadly, all three are equally bad at documenting how to configure logging output. None provide a sample “best-practice” configuration file for common use cases:

  • Sending logging to a log file, with rotation
  • Sending logging to operating-system-level logging systems, e.g. syslog, the Windows Event Log, etc.

The information is typically there, but hidden in JavaDocs and couched in such a way as to leave the software engineer or operations engineer wondering if the configuration is compromising the application.
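
To illustrate the first use case, here is a hedged sketch of configuring a rolling log file with Logback programmatically; the file names, output pattern, and 30-day retention are arbitrary choices for the example, not recommendations from the Logback documentation:

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.rolling.RollingFileAppender;
import ch.qos.logback.core.rolling.TimeBasedRollingPolicy;
import org.slf4j.LoggerFactory;

public class RollingFileSetup {
    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // encoder (layout): how each log line is formatted
        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(context);
        encoder.setPattern("%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n");
        encoder.start();

        // appender: where the log lines go, here a rolling file
        RollingFileAppender<ILoggingEvent> appender = new RollingFileAppender<>();
        appender.setContext(context);
        appender.setFile("app.log");
        appender.setEncoder(encoder);

        // roll over daily, compress old files, keep 30 days of history
        TimeBasedRollingPolicy<ILoggingEvent> policy = new TimeBasedRollingPolicy<>();
        policy.setContext(context);
        policy.setParent(appender);
        policy.setFileNamePattern("app.%d{yyyy-MM-dd}.log.gz");
        policy.setMaxHistory(30);
        policy.start();

        appender.setRollingPolicy(policy);
        appender.start();

        ch.qos.logback.classic.Logger root =
                context.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        root.addAppender(appender);

        LoggerFactory.getLogger(RollingFileSetup.class).info("rolling file configured");
    }
}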

This leaves us with performance.

Logging Framework Performance

OK, let’s benchmark the different logging frameworks then. There is a lot you can mess up when trying to write micro-benchmarks in Java, and the wise Java engineer knows to use JMH for micro-benchmarking.

One of the things that is often missed in logging benchmarks is the cost of not logging. Ideally, a logging statement for a logger that is not going to log would be eliminated from the execution path entirely. But one of the features of a good logging framework is the ability to change the logging levels of a running application, so in reality we need at least a periodic check for a change in the logger’s level, and the logging statement can never be eliminated completely.

Another aspect of logging benchmarks is output configuration. Each logging framework comes with its own default output format. If we do not configure all the loggers to use the same output format, we will not be comparing like with like. Conversely, some logging frameworks may have special-case optimisations that work for specific logging formats, which could distort the results.

So here are the cases we want to test:

  • logging configured to not log
  • logging configured to output in the Java Util Logging default style
  • logging configured to output in the Log4j 2.x default style
  • logging configured to output in the Logback default style

We will just look at the performance of writing the logs to a rolling log file. We could test things like syslog handlers / appenders or logging to the console, but the risk there is that side effects from the external system (syslogd or the terminal) may compromise the test.

The source code for the benchmarks is available on GitHub.

Structure of the Benchmark

Each benchmark is essentially the following:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class LoggingBenchmark {
    // the logger type and initialisation depend on the framework under test
    private static final Logger LOGGER = ...;
    private int i;

    @Benchmark
    public void benchmark() {
        LOGGER.info("Simple {} Param String", new Integer(i++));
    }
}

  • The reference to the logger is stored in a static final field.
  • Each thread has its own independent state i, which prevents false sharing when accessing the state field.
  • To prevent the string concatenation from being optimised away, the parameter is mutated on each invocation of the benchmark method.
  • The construction new Integer(i++) is necessary to sidestep the autoboxing cache: because i is very rarely in the range -128..127, the implicit Integer.valueOf(int) would almost always check the cache and miss, polluting the benchmark results. Explicit boxing keeps every invocation on the same code path.
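
A quick illustration of the cache behaviour the last point refers to (the class name is invented for the example):

public class BoxingCache {
    public static void main(String[] args) {
        // values in -128..127 are cached, so valueOf returns the same instance
        System.out.println(Integer.valueOf(100) == Integer.valueOf(100));   // true
        // outside that range, valueOf allocates after a failed cache lookup
        System.out.println(Integer.valueOf(1000) == Integer.valueOf(1000)); // false
        // new Integer(...) always allocates, skipping the cache check entirely
        System.out.println(new Integer(100) == new Integer(100));           // false
    }
}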

JMH provides a number of different modes to run in; the two we are most interested in are:

  • Throughput mode – which measures how many calls to the target method can be completed per second. Higher is better.
  • Sample mode – which measures how long each call takes to execute (including percentiles). Lower is better.
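
In JMH these map onto Mode.Throughput and Mode.SampleTime. A minimal sketch of requesting both modes via annotations (the class name and time unit are arbitrary choices):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

public class ModesExample {
    @Benchmark
    @BenchmarkMode({Mode.Throughput, Mode.SampleTime}) // ops/sec and per-call sample times
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public int benchmark() {
        return 42; // placeholder for the method under test
    }
}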

The Point of the Strawman

When benchmarking and comparing differences between vendors it can sometimes be helpful to know how far performance can be pushed. For this reason, my benchmarks contain a number of different strawman implementations. In the graphs that follow, I have selected the best strawman for each scenario. In providing the strawman, my aim is not to attack the performance of the different vendor solutions. My aim is to provide a context within which we can compare the performance of the different logging frameworks.

Each strawman achieves as high a performance as I could manage for the equivalent operation. The cost of this higher performance is a loss of general utility. By providing the strawman, however, we can see what the test machine is capable of.

By way of an example, the strawman for logging to nowhere does the following:

  • Sets up a background daemon thread that wakes up once every second and sets an enabled field based on the presence of a file that does not exist.
  • Each log statement checks the enabled field and only if true will it proceed to log.
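
A minimal sketch of that strawman (the class, field, and file names are invented here; the real benchmark code is in the GitHub repository):

import java.io.File;

public class NoOpStrawman {
    // volatile, so the periodic write is visible to reader threads and the
    // JIT cannot fold the field into a constant
    private static volatile boolean enabled;

    static {
        Thread poller = new Thread(() -> {
            while (true) {
                // the marker file never exists, so this always writes false,
                // but the JVM cannot prove that ahead of time
                enabled = new File("enable-logging.marker").exists();
                try {
                    Thread.sleep(1000L);
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        poller.setDaemon(true);
        poller.start();
    }

    public static void info(String message) {
        if (enabled) {
            System.out.println(message); // never reached in this benchmark
        }
    }
}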

It would be easy to just have an empty log statement that didn’t even check an enabled field, but the JVM would inline that empty method and remove the logging completely. And if we didn’t have the background thread periodically writing to the field (the fact that it always writes false is unknowable to the JVM), the JVM could conclude that nobody will ever write to it and fold the check into a constant. Where the strawman gains performance, in this case, is by dropping hierarchical configuration.

The other strawman implementations all have to write log messages in the same formats, and their performance gains come from hard-coding the log formatter / layout for things like date formatting and string concatenation. The strawmen all parse the parameterized logging message and format it on the fly, but without some of the safety checks of the frameworks.
