A Simple Single-Threaded Performance Test of Java Logging Frameworks

Introduction

A recent project required optimizing the performance of a computation task from every angle. When logging performance came up, a bit of research showed that the commonly used frameworks today include java.util.logging (Logging), Logback, Log4j, and Log4j2.

Logback and Log4j2 are reported to clearly outperform the other two, with Log4j2 possibly faster still than Logback.

But knowledge from paper is always shallow; to truly understand, you have to try it yourself!

So below is a simple single-threaded performance test of these four frameworks.

Test tool: JMH
Measurement mode: average time
Measurement strategy: each framework runs single-threaded, writing 100, 1,000, and 10,000 log statements per invocation to a log file; the average time per invocation is then compared

Logging

java.util.logging is the logging framework bundled with the JDK. On to the code.

Performance test class

package com.smxknife.log.java.demo03;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

import java.io.IOException;
import java.util.concurrent.TimeUnit;
import java.util.logging.*;

/**
 * @author smxknife
 * 2018-12-19
 */
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Thread)
public class PerformLogTest {

	private static final Logger logger = Logger.getLogger(PerformLogTest.class.getName());

	@Param({"100", "1000", "10000"})
	int loopNum;

	int[] loopIdx;

	@Setup
	public void setup() {
		loopIdx = new int[loopNum];
		for (int i = 0; i < loopNum; i++) {
			loopIdx[i] = i;
		}

		try {
			logger.setLevel(java.util.logging.Level.FINEST);
			FileHandler fileHandler = new FileHandler("./target/perform.log");
			fileHandler.setLevel(Level.ALL);
			fileHandler.setFormatter(new Formatter() {
				@Override
				public String format(LogRecord record) {
					return record.getLevel().getName()
							+ " : " + record.getThreadID()
							+ " " + record.getSourceClassName()
							+ "." + record.getSourceMethodName()
							+ " " + record.getMessage()
							+ "\r\n";
				}
			});
			logger.addHandler(fileHandler);
		} catch (IOException e) {
			e.printStackTrace();
		}
	}

	@Benchmark
	public void test(Blackhole blackhole) {
		// Each invocation writes loopNum records through the FileHandler.
		for (int i = 0; i < loopIdx.length; i++) {
			logger.finer("test " + i);
		}
	}
}

The JMH plumbing isn't the focus here, so skip over it and look at the logging part: the test method is the benchmark, looping 100, 1,000, or 10,000 times and writing each record to a log file. Since no application framework such as Spring is involved and java.util.logging writes to the console by default, a FileHandler is added so that the output goes to a file.
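One detail about this setup: a java.util.logging Logger also forwards records to its parent handlers. With the default root ConsoleHandler (level INFO) the FINER messages used here are filtered out anyway, but if console propagation should be excluded entirely, setUseParentHandlers(false) keeps everything in the file. A minimal sketch of such a file-only setup, assuming ./target exists; the class name is just an example, not part of the original benchmark:

import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class JulFileOnlySetup {
	public static void main(String[] args) throws Exception {
		Logger logger = Logger.getLogger(JulFileOnlySetup.class.getName());
		logger.setLevel(Level.FINEST);

		// Stop records from propagating to the root logger's handlers,
		// so only the FileHandler below receives them.
		logger.setUseParentHandlers(false);

		FileHandler fileHandler = new FileHandler("./target/perform.log");
		fileHandler.setLevel(Level.ALL);
		fileHandler.setFormatter(new SimpleFormatter());
		logger.addHandler(fileHandler);

		logger.finer("file-only logging test");
	}
}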

Runner class

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class Main {
	public static void main(String[] args) throws RunnerException {
		Options options = new OptionsBuilder()
				.include(PerformLogTest.class.getName())
				.warmupIterations(5)
				.measurementIterations(5)
				.forks(1)
				.build();
		new Runner(options).run();
	}
}
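As an aside, the warm-up, measurement, and fork settings can also be declared with JMH annotations on the benchmark class itself instead of through the OptionsBuilder. A sketch with the equivalent settings (the class name is made up; this is not from the original project):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

// Sketch: the runner settings expressed as class-level JMH annotations.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 5)       // same as .warmupIterations(5)
@Measurement(iterations = 5)  // same as .measurementIterations(5)
@Fork(1)                      // same as .forks(1)
@State(Scope.Thread)
public class AnnotatedPerformLogTest {

	@Benchmark
	public void test() {
		// benchmark body would go here
	}
}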

With this setup: 5 warm-up iterations, 5 measurement iterations, a single fork, single-threaded. The results are as follows (other output omitted, only the summary shown):

Benchmark            (loopNum)  Mode  Cnt    Score    Error  Units
PerformLogTest.test        100  avgt    5    2.383 ±  0.818  ms/op  
PerformLogTest.test       1000  avgt    5   23.277 ±  5.684  ms/op  
PerformLogTest.test      10000  avgt    5  228.775 ± 29.695  ms/op  
  • With loopNum = 100, one invocation takes 2.383 ms on average
  • With loopNum = 1000, one invocation takes 23.277 ms on average
  • With loopNum = 10000, one invocation takes 228.775 ms on average

Logback

Logback is a small, flexible logging framework used in many projects, and Spring Boot provides a starter for it. Its advantages are written up all over the web, so I won't repeat them here. On to the code.

Performance test class

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Thread)
public class PerformLogTest {

	private static Logger logger;

	@Param({"100", "1000", "10000"})
	int loopNum;

	int[] loopIdx;

	@Setup
	public void setup() {
		loopIdx = new int[loopNum];
		for (int i = 0; i < loopNum; i++) {
			loopIdx[i] = i;
		}
		LoadXmlConfig.loadConfig("logback.demo02.xml");
		logger = LoggerFactory.getLogger(PerformLogTest.class);
	}

	@Benchmark
	public void test(Blackhole blackhole) {
		// Each invocation writes loopNum records at ERROR level.
		for (int i = 0; i < loopIdx.length; i++) {
			logger.error("test " + i);
		}
	}
}

The benchmark is again the test method, with the same loop lengths and run counts as above. The runner class is also the same, so it isn't repeated here.
Because Logback is configured through an XML file and I have several different configurations that I don't want to keep swapping by hand, I wrote a small helper class, LoadXmlConfig, that loads a named XML configuration:

import java.io.File;

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.joran.JoranConfigurator;
import ch.qos.logback.core.joran.spi.JoranException;
import ch.qos.logback.core.util.StatusPrinter;
import org.slf4j.LoggerFactory;

public class LoadXmlConfig {

	/**
	 * Loads a custom logback XML configuration file.
	 * @param fileName file name under src/main/resources
	 */
	public static void loadConfig(String fileName) {
		if (fileName == null || "".equals(fileName)) return;
		String resPath = System.getProperty("user.dir") + File.separator
				+ "src" + File.separator
				+ "main" + File.separator
				+ "resources" + File.separator;
		File file = new File(resPath + fileName);
		LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
		JoranConfigurator joranConfigurator = new JoranConfigurator();
		joranConfigurator.setContext(loggerContext);
		loggerContext.reset();
		try {
			joranConfigurator.doConfigure(file);
		} catch (JoranException e) {
			e.printStackTrace();
		}
		StatusPrinter.printInCaseOfErrorsOrWarnings(loggerContext);
	}
}

This is kept deliberately simple: it just looks under src/main/resources for a configuration file with the given name. A classpath-based variant is sketched right after.
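The path-based lookup only works when the benchmark is started from the project root. A variant of the same helper that reads the file from the classpath instead is sketched below (my own variation, assuming a logback version that still offers JoranConfigurator.doConfigure(InputStream); the class name is an example):

import java.io.IOException;
import java.io.InputStream;

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.joran.JoranConfigurator;
import ch.qos.logback.core.joran.spi.JoranException;
import ch.qos.logback.core.util.StatusPrinter;
import org.slf4j.LoggerFactory;

public class LoadXmlConfigFromClasspath {

	/** Loads a logback XML file that sits on the classpath (e.g. under src/main/resources). */
	public static void loadConfig(String resourceName) {
		if (resourceName == null || resourceName.isEmpty()) return;

		LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
		JoranConfigurator configurator = new JoranConfigurator();
		configurator.setContext(loggerContext);
		loggerContext.reset();

		try (InputStream in = LoadXmlConfigFromClasspath.class
				.getClassLoader().getResourceAsStream(resourceName)) {
			if (in == null) return; // resource not found on the classpath
			configurator.doConfigure(in);
		} catch (JoranException | IOException e) {
			e.printStackTrace();
		}
		StatusPrinter.printInCaseOfErrorsOrWarnings(loggerContext);
	}
}

Either way, the configuration file used for the benchmark is the following: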

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false" scan="true" scanPeriod="30 second">

	<property name="ROOT" value="target/logs/" />
	<property name="FILE_SIZE" value="500MB" />
	<property name="MAX_HISTORY" value="100" />
	<timestamp key="DATETIME" datePattern="yyyy-MM-dd HH:mm:ss" />
	<!-- Console output -->
	<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
		<encoder charset="utf-8">
			<pattern>[%-5level] %d{${DATETIME}} [%thread] %logger{36} - %m%n
			</pattern>
		</encoder>
	</appender>
	<!-- ERROR output to file, rolled by date and size -->
	<appender name="ERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">
		<encoder charset="utf-8">
			<pattern>[%-5level] %d{${DATETIME}} [%thread] %logger{36} - %m%n
			</pattern>
		</encoder>
		<filter class="ch.qos.logback.classic.filter.LevelFilter">
			<level>ERROR</level>
			<onMatch>ACCEPT</onMatch>
			<onMismatch>DENY</onMismatch>
		</filter>
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<fileNamePattern>${ROOT}%d/error.%i.log</fileNamePattern>
			<maxHistory>${MAX_HISTORY}</maxHistory>
			<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
				<maxFileSize>${FILE_SIZE}</maxFileSize>
			</timeBasedFileNamingAndTriggeringPolicy>
		</rollingPolicy>
	</appender>

	<!-- WARN output to file, rolled by date and size -->
	<appender name="WARN" class="ch.qos.logback.core.rolling.RollingFileAppender">
		<encoder charset="utf-8">
			<pattern>[%-5level] %d{${DATETIME}} [%thread] %logger{36} - %m%n
			</pattern>
		</encoder>
		<filter class="ch.qos.logback.classic.filter.LevelFilter">
			<level>WARN</level>
			<onMatch>ACCEPT</onMatch>
			<onMismatch>DENY</onMismatch>
		</filter>
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<fileNamePattern>${ROOT}%d/warn.%i.log</fileNamePattern>
			<maxHistory>${MAX_HISTORY}</maxHistory>
			<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
				<maxFileSize>${FILE_SIZE}</maxFileSize>
			</timeBasedFileNamingAndTriggeringPolicy>
		</rollingPolicy>
	</appender>

	<!-- INFO output to file, rolled by date and size -->
	<appender name="INFO" class="ch.qos.logback.core.rolling.RollingFileAppender">
		<encoder charset="utf-8">
			<pattern>[%-5level] %d{${DATETIME}} [%thread] %logger{36} - %m%n
			</pattern>
		</encoder>
		<filter class="ch.qos.logback.classic.filter.LevelFilter">
			<level>INFO</level>
			<onMatch>ACCEPT</onMatch>
			<onMismatch>DENY</onMismatch>
		</filter>
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<fileNamePattern>${ROOT}%d/info.%i.log</fileNamePattern>
			<maxHistory>${MAX_HISTORY}</maxHistory>
			<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
				<maxFileSize>${FILE_SIZE}</maxFileSize>
			</timeBasedFileNamingAndTriggeringPolicy>
		</rollingPolicy>
	</appender>
	<!-- DEBUG output to file, rolled by date and size -->
	<appender name="DEBUG" class="ch.qos.logback.core.rolling.RollingFileAppender">
		<encoder charset="utf-8">
			<pattern>[%-5level] %d{${DATETIME}} [%thread] %logger{36} - %m%n
			</pattern>
		</encoder>
		<filter class="ch.qos.logback.classic.filter.LevelFilter">
			<level>DEBUG</level>
			<onMatch>ACCEPT</onMatch>
			<onMismatch>DENY</onMismatch>
		</filter>
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<fileNamePattern>${ROOT}%d/debug.%i.log</fileNamePattern>
			<maxHistory>${MAX_HISTORY}</maxHistory>
			<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
				<maxFileSize>${FILE_SIZE}</maxFileSize>
			</timeBasedFileNamingAndTriggeringPolicy>
		</rollingPolicy>
	</appender>
	<!-- TRACE output to file, rolled by date and size -->
	<appender name="TRACE" class="ch.qos.logback.core.rolling.RollingFileAppender">
		<encoder charset="utf-8">
			<pattern>[%-5level] %d{${DATETIME}} [%thread] %logger{36} - %m%n
			</pattern>
		</encoder>
		<filter class="ch.qos.logback.classic.filter.LevelFilter">
			<level>TRACE</level>
			<onMatch>ACCEPT</onMatch>
			<onMismatch>DENY</onMismatch>
		</filter>
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<fileNamePattern>${ROOT}%d/trace.%i.log</fileNamePattern>
			<maxHistory>${MAX_HISTORY}</maxHistory>
			<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
				<maxFileSize>${FILE_SIZE}</maxFileSize>
			</timeBasedFileNamingAndTriggeringPolicy>
		</rollingPolicy>
	</appender>

	<!-- Logger for the test package -->
	<logger name="com.smxknife.log.logback" level="INFO" additivity="true">
	</logger>

	<!-- Root logger -->
	<root level="ERROR">
		<appender-ref ref="ERROR" />
	</root>
</configuration>

The results are as follows:

Benchmark            (loopNum)  Mode  Cnt   Score    Error  Units
PerformLogTest.test        100  avgt    5   0.991 ±  0.420  ms/op
PerformLogTest.test       1000  avgt    5   9.655 ±  2.735  ms/op
PerformLogTest.test      10000  avgt    5  94.608 ± 17.062  ms/op
  • With loopNum = 100, one invocation takes 0.991 ms on average
  • With loopNum = 1000, one invocation takes 9.655 ms on average
  • With loopNum = 10000, one invocation takes 94.608 ms on average

Judging by these results, Logback does indeed look considerably faster than java.util.logging.
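A side note on the measurement itself: all four benchmarks build the message with string concatenation ("test " + i). With SLF4J/Logback the idiomatic form is a parameterized message, which only builds the final string if the statement is actually logged; since ERROR is enabled here it mostly affects allocations rather than the ranking, but for completeness, a sketch (the class name is just an example):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ParameterizedLoggingSketch {

	private static final Logger LOGGER = LoggerFactory.getLogger(ParameterizedLoggingSketch.class);

	public static void main(String[] args) {
		for (int i = 0; i < 100; i++) {
			// "{}" is substituted only if the statement is actually logged,
			// so no string is concatenated when the level is disabled.
			LOGGER.error("test {}", i);
		}
	}
}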

Log4j

On to the code.

package com.smxknife.log.log4j.demo02;

import org.apache.log4j.Logger;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

import java.util.concurrent.TimeUnit;

/**
 * @author smxknife
 * 2018-12-19
 */
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Thread)
public class PerformLogTest {

	private static Logger logger;

	@Param({"100", "1000", "10000"})
	int loopNum;

	int[] loopIdx;

	@Setup
	public void setup() {
		loopIdx = new int[loopNum];
		for (int i = 0; i < loopNum; i++) {
			loopIdx[i] = i;
		}
		logger = Logger.getLogger(this.getClass().getSimpleName());
	}

	@Benchmark
	public void test(Blackhole blackhole) {
		// Each invocation writes loopNum records at ERROR level.
		for (int i = 0; i < loopIdx.length; i++) {
			logger.error("test " + i);
		}
	}
}

The configuration is as follows:

### set log levels ###
log4j.rootLogger = ERROR,File

### Console appender ###
log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.Target=System.out
log4j.appender.Console.layout=org.apache.log4j.PatternLayout
log4j.appender.Console.layout.ConversionPattern= %d{ABSOLUTE} %5p %c{1}:%L - %m%n


### File appender ###
log4j.appender.File=org.apache.log4j.RollingFileAppender
log4j.appender.File.File=${user.dir}/target/logs/app.log
# Note: DatePattern is only honored by DailyRollingFileAppender; RollingFileAppender ignores it and rolls by MaxFileSize.
log4j.appender.File.DatePattern=_yyyyMMdd'.log'
log4j.appender.File.MaxFileSize=500MB
log4j.appender.File.Threshold=ALL
log4j.appender.File.layout=org.apache.log4j.PatternLayout
log4j.appender.File.layout.ConversionPattern=[%p][%d{yyyy-MM-dd HH\:mm\:ss,SSS}][%c]%m%n
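Log4j 1.x picks this log4j.properties file up automatically from the root of the classpath. If, as in the Logback example, several named configuration files need to be switched between, the configuration can also be loaded explicitly with PropertyConfigurator; a sketch (the class and file names are examples, not from the original project):

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class Log4jExplicitConfig {
	public static void main(String[] args) {
		// Load a specific properties file from the classpath instead of
		// relying on the default log4j.properties lookup.
		// getResource() returns null if the file is not on the classpath.
		PropertyConfigurator.configure(
				Log4jExplicitConfig.class.getClassLoader().getResource("log4j.demo02.properties"));

		Logger logger = Logger.getLogger(Log4jExplicitConfig.class);
		logger.error("explicit configuration loaded");
	}
}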

Test results

Benchmark            (loopNum)  Mode  Cnt   Score    Error  Units
PerformLogTest.test        100  avgt    5   1.069 ±  0.350  ms/op
PerformLogTest.test       1000  avgt    5  10.558 ±  5.448  ms/op
PerformLogTest.test      10000  avgt    5  93.073 ± 19.872  ms/op
  • With loopNum = 100, one invocation takes 1.069 ms on average
  • With loopNum = 1000, one invocation takes 10.558 ms on average
  • With loopNum = 10000, one invocation takes 93.073 ms on average

From these three test runs, Log4j and Logback perform roughly the same. This is not what I expected: I assumed Logback would be clearly faster than Log4j, but in these scenarios it isn't.

Log4j2

This result was the one I did not see coming, and I'm not sure whether something is wrong somewhere. The code first:

package com.smxknife.log.log4j2.demo02;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

import java.util.concurrent.TimeUnit;

/**
 * @author smxknife
 * 2018-12-19
 */
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Thread)
public class PerformLogTest {

	private static Logger logger;

	@Param({"100", "1000", "10000"})
	int loopNum;

	int[] loopIdx;

	@Setup
	public void setup() {
		loopIdx = new int[loopNum];
		for (int i = 0; i < loopNum; i++) {
			loopIdx[i] = i;
		}
		logger = LogManager.getLogger(this.getClass().getSimpleName());
	}

	@Benchmark
	public void test(Blackhole blackhole) {
		// Each invocation writes loopNum records at ERROR level.
		for (int i = 0; i < loopIdx.length; i++) {
			logger.error("test " + i);
		}
	}
}
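Log4j2 finds its configuration on its own by looking for log4j2.xml on the classpath. To point it at a specific file, the usual way is the log4j.configurationFile system property, set before the first LogManager call; a sketch (the class and file names are examples):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Log4j2ExplicitConfig {
	public static void main(String[] args) {
		// Must be set before LogManager initializes the logging context.
		System.setProperty("log4j.configurationFile", "src/main/resources/log4j2.demo02.xml");

		Logger logger = LogManager.getLogger(Log4j2ExplicitConfig.class);
		logger.error("explicit configuration loaded");
	}
}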

log4j2.xml

<?xml version="1.0" encoding="UTF-8"?>
<Configuration monitorInterval="1" status="ERROR" strict="true" name="LogConfig">

	<Properties>
		<Property name="logbasedir">target/logs</Property>
		<Property name="log.layout">%d %-5p %t (%c:%L) - %m%n</Property>
		<property name="APP_NAME">app</property>
	</Properties>

	<!-- Two kinds of appenders are used here; RollingFile is a rolling appender that starts a new log file once the policy conditions are met -->
	<Appenders>
		<Appender type="Console" name="STDOUT">
			<Target>SYSTEM_OUT</Target>
			<Layout type="PatternLayout" pattern="${log.layout}"/>
		</Appender>
		<Appender type="RollingFile" name="FILE" fileName="${logbasedir}/jutap-${sys:APP_NAME}.log"
				  filePattern = "${logbasedir}/jutap-${sys:APP_NAME}-%d{yyyy-MM-dd}.%i.log">
			<Policies>
				<TimeBasedTriggeringPolicy />
				<SizeBasedTriggeringPolicy size="1000 MB"/>
			</Policies>
			<Layout type="PatternLayout">
				<Charset>GBK</Charset>
				<Pattern>${log.layout}</Pattern>
			</Layout>
		</Appender>
		<!--<Appender type="RollingFile" name="ExceptionLog" fileName="${logbasedir}/exception-${sys:APP_NAME}.log"-->
				  <!--filePattern = "${logbasedir}/exception-${sys:APP_NAME}-%d{yyyy-MM-dd}.%i.log">-->
			<!--<Policies>-->
				<!--<TimeBasedTriggeringPolicy />-->
				<!--<SizeBasedTriggeringPolicy size="100 MB"/>-->
			<!--</Policies>-->
			<!--<Layout type="PatternLayout">-->
				<!--<Charset>GBK</Charset>-->
				<!--<Pattern>${log.layout}</Pattern>-->
			<!--</Layout>-->
		<!--</Appender>-->
	</Appenders>

	<Loggers>
		<!--<Logger name="exception" level="error" additivity="false">-->
			<!--<AppenderRef ref="ExceptionLog"/>-->
		<!--</Logger>-->

		<Root level="info">
			<!--<AppenderRef ref="STDOUT"/>-->
			<AppenderRef ref="FILE"/>
		</Root>
		<Logger name="om.smxknife.log.log4j2" level="error"/>
	</Loggers>

</Configuration>

The test results are as follows:

Benchmark            (loopNum)  Mode  Cnt    Score    Error  Units
PerformLogTest.test        100  avgt    5    6.474 ±  2.481  ms/op
PerformLogTest.test       1000  avgt    5   59.621 ±  5.881  ms/op
PerformLogTest.test      10000  avgt    5  609.273 ± 69.103  ms/op
  • With loopNum = 100, one invocation takes 6.474 ms on average
  • With loopNum = 1000, one invocation takes 59.621 ms on average
  • With loopNum = 10000, one invocation takes 609.273 ms on average

A surprising result: Log4j2 is roughly six times slower here than Logback and Log4j.
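One plausible factor, which I have not verified in this setup: Log4j2's headline throughput numbers are usually quoted for its asynchronous loggers, which require the LMAX Disruptor on the classpath and are enabled through a context-selector system property, whereas this benchmark uses plain synchronous loggers writing straight to the RollingFile appender. A sketch of how all-async loggers would be switched on (an assumption for future experiments, not something measured here):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Log4j2AsyncSketch {
	public static void main(String[] args) {
		// Requires the com.lmax:disruptor dependency on the classpath.
		// Must be set before the logging context is created.
		System.setProperty("Log4jContextSelector",
				"org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");

		Logger logger = LogManager.getLogger(Log4j2AsyncSketch.class);
		logger.error("async logger test");
	}
}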

Summary

That's as far as I've taken it for now; there are still quite a few open questions to be settled by further experiments.

  • Because of environment limits, all tests were run on my own laptop, so a real high-load stress test wasn't possible.
  • Warm-up should really run more iterations; the 5 used here are a little thin.
  • One variable wasn't controlled well: the log output format differs between the frameworks and should be made consistent (or at least comparable) in a follow-up.
  • Each framework was benchmarked in a separate run; they should be combined into a single benchmark run (see the sketch after this list).
  • Only average-time mode was measured; the other JMH modes should be added later.
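For the point about running everything in one go, JMH's include() takes a regular expression, so a single runner can pick up the PerformLogTest class from all four demo packages; a sketch (the package pattern follows the names used above):

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class AllBenchmarks {
	public static void main(String[] args) throws RunnerException {
		Options options = new OptionsBuilder()
				// include() takes a regex, so one pattern matches the
				// PerformLogTest class in each of the four demo packages.
				.include("com\\.smxknife\\.log\\..*\\.PerformLogTest")
				.warmupIterations(5)
				.measurementIterations(5)
				.forks(1)
				.build();
		new Runner(options).run();
	}
}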
