1. What is slf4j?
Slf4j stands for Simple Logging Facade for Java. Generally speaking, slf4j is a set of logging interfaces: a typical facade pattern design. slf4j, log4j and logback share the same author, Ceki Gülcü. Log4j was developed first; later the unified logging interface slf4j was extracted from it, and logback was then built on slf4j as a redesigned, optimized successor to log4j. As a result, logback beats log4j in both usability and performance.
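Because application code programs only against the slf4j interfaces, the actual backend can be swapped by changing jars on the classpath, with no code changes. A minimal sketch of coding against the facade (the class name and message are illustrative):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FacadeDemo {
    // Obtained through the facade; which backend (log4j, logback, ...) handles
    // the call is decided by the binding jar present on the classpath.
    private static final Logger log = LoggerFactory.getLogger(FacadeDemo.class);

    public static void main(String[] args) {
        log.info("Hello from slf4j; the backend is chosen at deployment time");
    }
}
```

Switching from log4j to logback later means replacing the binding jar; this code does not change.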
2. log4j introduction and configuration
2.1 log4j introduction
Log4j (Log for Java) is an Apache open source project. With log4j we can direct log output to the console, files, GUI components, even a socket server, the NT event log, or a UNIX syslog daemon; we can control the output format of each log; and by assigning a level to each log message we can control the logging process in finer detail. Best of all, all of this can be configured flexibly through a configuration file, without modifying application code.
2.2 log4j's three components
- Loggers mainly control which log messages are output;
Loggers have five levels: DEBUG, INFO, WARN, ERROR and FATAL, ordered FATAL > ERROR > WARN > INFO > DEBUG. Log4j's level rule is simple: only messages at or above the configured level are output. If a logger's level is set to INFO, then INFO, WARN, ERROR and FATAL messages are all output, while DEBUG messages, being below INFO, are suppressed.
- Appenders (output destinations) mainly control where logs are written, such as the console, files, etc;
The commonly used classes are as follows:
  - org.apache.log4j.ConsoleAppender (console)
  - org.apache.log4j.FileAppender (file)
  - org.apache.log4j.DailyRollingFileAppender (one log file per day)
  - org.apache.log4j.RollingFileAppender (starts a new file when the current one reaches a specified size)
  - org.apache.log4j.WriterAppender (sends log information as a stream to any specified destination)
- Layouts is mainly used to control the form of log output;
Sometimes users want to format logs to their own taste; log4j attaches a Layout to an Appender to do this. Layouts provide four output styles: an HTML style, a freely specified pattern style, a style containing the log level and message, and a style containing the log time, thread, category and other information. The commonly used classes are:
  - org.apache.log4j.HTMLLayout (an HTML tabular layout)
  - org.apache.log4j.PatternLayout (the layout pattern can be specified flexibly)
  - org.apache.log4j.SimpleLayout (the log level and message string)
  - org.apache.log4j.TTCCLayout (includes log time, thread, category and other information)
The sketch after this list wires all three components together programmatically.
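As an illustration (not part of the original configuration), the three components can also be wired in code with the log4j 1.x API; the pattern string below is just an example:

```java
import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class Log4jComponentsDemo {
    public static void main(String[] args) {
        // Layout: controls the output format
        PatternLayout layout = new PatternLayout("%d{ABSOLUTE} %5p %c{1}:%L - %m%n");
        // Appender: controls where the output goes (here, the console)
        ConsoleAppender console = new ConsoleAppender(layout);
        // Logger: controls which messages are output
        Logger root = Logger.getRootLogger();
        root.addAppender(console);
        root.setLevel(Level.INFO);

        root.debug("suppressed: DEBUG is below the INFO threshold");
        root.info("printed: INFO is at the threshold");
    }
}
```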
2.3 log4j pom dependency introduction
Basic mode:

```xml
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>${log4j.version}</version>
</dependency>
```

The following pattern instead brings in the log4j2 artifacts (log4j-api, with log4j-core as its runtime):

```xml
<!-- Middleware bridging slf4j to log4j2: tells slf4j to use log4j2 -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>${log4j.version}</version>
</dependency>
```
2.4 configuring log4j
Log4j supports two configuration file formats: XML (an application of SGML, the Standard Generalized Markup Language) and the Java properties file log4j.properties (key=value).
Here we use log4j.properties; later, for log4j2, we will use XML.
```properties
### set log levels ###
log4j.rootLogger=DEBUG,Console,File

### Output to console ###
log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.Target=System.out
log4j.appender.Console.layout=org.apache.log4j.PatternLayout
log4j.appender.Console.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n

### Output to log file ###
log4j.appender.File=org.apache.log4j.RollingFileAppender
log4j.appender.File.File=${project}/WEB-INF/logs/app.log
log4j.appender.File.DatePattern=_yyyyMMdd'.log'
log4j.appender.File.MaxFileSize=10MB
log4j.appender.File.Threshold=ALL
log4j.appender.File.layout=org.apache.log4j.PatternLayout
log4j.appender.File.layout.ConversionPattern=[%p][%d{yyyy-MM-dd HH\:mm\:ss,SSS}][%c]%m%n
```
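With this file on the classpath, application code only needs a logger handle; a minimal usage sketch (the class name is illustrative):

```java
import org.apache.log4j.Logger;

public class App {
    // log4j.properties is picked up automatically from the classpath root
    private static final Logger logger = Logger.getLogger(App.class);

    public static void main(String[] args) {
        logger.debug("goes to both Console and File (rootLogger is DEBUG)");
        logger.error("errors are routed through the same appenders");
    }
}
```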
3. Introduction and configuration of logback
3.1 introduction to logback
logback is an open source Java logging component, rewritten and optimized by the founder of log4j on the basis of log4j; its performance is better than log4j's. It currently consists of three modules:
- logback-core: the core module
- logback-classic: an improved version of log4j that natively implements the slf4j API, making it easy to switch to another logging component later
- logback-access: integrates with Servlet containers to provide access to logs over HTTP
3.2 logback pom dependency introduction
```xml
<logback.version>1.2.3</logback.version>

<!-- This dependency transitively brings in logback-core as well as slf4j-api -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>${logback.version}</version>
</dependency>
```
3.3 logback configuration template
<?xml version="1.0" encoding="UTF-8"?> <configuration scan="true" scanPeriod="60 seconds" debug="false"> <!--set up app Name, followed by log path--> <property name="APP_NAME" value="metadata-manage" /> <springProfile name="dev"> <!--Development environment path--> <property name="LOG_HOME" value="/Users/aaa/IdeaProjects/log"/> </springProfile> <springProfile name="test,production"> <!--Test and production environment path--> <property name="LOG_PATH" value="/data/logs/${APP_NAME}" /> </springProfile> <!--Configuration Console appender--> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"> <!--Format output:%d Indicates the date,%thread Represents the thread name,%-5level: The level is displayed 5 characters wide from the left%msg: Log messages,%n Is a newline character--> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern> </encoder> </appender> <!--to configure info file appender--> <appender name="INFO" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${LOG_PATH}/info.log</file> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <!--The file name of the log file output--> <FileNamePattern>${LOG_PATH}/info.%d{yyyy-MM-dd}.%i.tar.gz</FileNamePattern> <!--Log file retention days--> <maxHistory>10</maxHistory> <totalSizeCap>15GB</totalSizeCap> <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP"> <!--The maximum number of files reached 128 MB Will be compressed and cut --> <maxFileSize>1024MB</maxFileSize> </timeBasedFileNamingAndTriggeringPolicy> </rollingPolicy> <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern> </encoder> <filter class="ch.qos.logback.classic.filter.LevelFilter"> <level>INFO</level> <onMatch>ACCEPT</onMatch> <onMismatch>NEUTRAL</onMismatch> </filter> <filter class="ch.qos.logback.classic.filter.LevelFilter"> <level>DEBUG</level> <onMatch>ACCEPT</onMatch> <onMismatch>DENY</onMismatch> </filter> </appender> <!--to configure WARN file appender--> <appender name="WARN" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${LOG_PATH}/warn.log</file> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <!--The file name of the log file output--> <FileNamePattern>${LOG_PATH}/warn.%d{yyyy-MM-dd}.%i.tar.gz</FileNamePattern> <!--Log file retention days--> <maxHistory>10</maxHistory> <totalSizeCap>10GB</totalSizeCap> <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP"> <!--The maximum number of files reached 128 MB Will be compressed and cut --> <maxFileSize>1024MB</maxFileSize> </timeBasedFileNamingAndTriggeringPolicy> </rollingPolicy> <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern> </encoder> <filter class="ch.qos.logback.classic.filter.LevelFilter"> <level>WARN</level> <onMatch>ACCEPT</onMatch> <onMismatch>DENY</onMismatch> </filter> </appender> <!--to configure ERROR file appender--> <appender name="ERROR" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${LOG_PATH}/error.log</file> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <!--The file name of the log file output--> <FileNamePattern>${LOG_PATH}/error.%d{yyyy-MM-dd}.%i.log</FileNamePattern> 
<!--Log file retention days--> <maxHistory>10</maxHistory> <totalSizeCap>10GB</totalSizeCap> <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP"> <!--The maximum number of files reached 128 MB Will be compressed and cut --> <maxFileSize>512MB</maxFileSize> </timeBasedFileNamingAndTriggeringPolicy> </rollingPolicy> <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern> </encoder> <filter class="ch.qos.logback.classic.filter.LevelFilter"> <level>ERROR</level> <onMatch>ACCEPT</onMatch> <onMismatch>DENY</onMismatch> </filter> </appender> <!--Configuring asynchronous printing of log files appender--> <appender name="asyncFileAppender" class="ch.qos.logback.classic.AsyncAppender"> <discardingThreshold>0</discardingThreshold> <queueSize>512</queueSize> <appender-ref ref="INFO" /> </appender> <!--<logger name="com.package....." level="DEBUG"/>--> <!-- development environment --> <springProfile name="dev,test,prod"> <root level="DEBUG"> <appender-ref ref="STDOUT" /> </root> </springProfile> <!-- production environment --> <springProfile name="release,test"> <root level="INFO"> <appender-ref ref="ERROR" /> <appender-ref ref="WARN" /> <appender-ref ref="INFO" /> </root> </springProfile> </configuration>
3.4 logback summary
- In my experience, the ideal log format should include (besides the log message itself, of course): the current time (no date, millisecond precision, since the surrounding files are already separated by date), the log level, the thread name, the simple logger name (not the fully qualified name) and the message. In logback this looks as follows:
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder> <pattern>%d{HH:mm:ss.SSS} %-5level [%thread][%logger{0}] %m%n</pattern> </encoder> </appender> Of course, you can also bring the date <encoder> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern> <charset>UTF-8</charset> </encoder>
- When printing logs, use parameterized placeholders instead of string concatenation;
log.debug("Found {} records matching filter: '{}'", records, filter); Error example log.debug("Found " + records + " recordsmatching filter: '" + filter + "'");
- Try to print logs asynchronously
4. Comparison between logback and log4j
- Faster execution: logback is based on log4j but rewrites the internal implementation; in some specific scenarios it can be up to 10 times faster, while its components also require less memory.
- logback-classic natively implements SLF4J: the Logger class in logback-classic implements the SLF4J API directly, so when you use logback-classic as the underlying implementation there are no binding issues to worry about. logback-classic strongly encourages using SLF4J as the client logging API: if you later need to switch to log4j or something else, you only have to replace one jar, without touching any code written against the SLF4J API. This greatly reduces the cost of changing logging frameworks.
- Automatic reloading of the configuration file: logback-classic can automatically reload its configuration after the file is modified. The scanning process is fast and contention-free, and scales to hundreds of threads issuing millions of calls per second. It also works well with application servers and is common in JEE environments, because it does not create a separate scanning thread.
- Graceful recovery from I/O errors: FileAppender and its subclasses, including RollingFileAppender, can recover gracefully from I/O failures. So if a file server goes down temporarily, you no longer need to restart your application to get logging working again; as soon as the server comes back, the relevant logback appender recovers from the previous error transparently and quickly.
- Automatic removal of old log archives: you can control the maximum number of archived logs by setting the maxHistory property of TimeBasedRollingPolicy or SizeAndTimeBasedFNATP. If your rollover policy rolls monthly and you want to keep one year of logs, simply set maxHistory to 12; archives older than 12 months are cleared automatically.
- Automatic compression of archived log files: RollingFileAppender can compress archived log files automatically during rollover. Compression runs asynchronously, so even for large log files the application is not blocked.
- Conditional processing in configuration files: developers often need to adapt logback configuration files to different target environments, such as development, test and production. These files are largely identical except for a few sections. To avoid duplication, logback supports conditional processing in the configuration file: with `<if>`, `<then>` and `<else>`, the same configuration file can serve different environments.
- Filtering: logback's filtering capabilities are much richer than log4j's. For example, suppose a business-critical application is deployed in production; given the volume of transactions, the logging level is set to WARN so that only warnings and errors are recorded. Now imagine a bug that occurs in production but is hard to reproduce on the test platform because of unknown differences between the environments. With log4j your only option is to lower the production level to DEBUG and try to find the problem; unfortunately this generates a huge volume of logs, making analysis difficult, and worse, the extra logging hurts production performance. With logback you can keep WARN-level logging for all users except the one related to the problem, say Alice: whenever Alice logs into the system, she is logged at DEBUG level while everyone else stays at WARN. This takes only four extra lines of XML in the configuration file (look up MDCFilter in the relevant documentation); a usage sketch of the MDC side follows this list.
- logback natively supports splitting logs by date and by file size at the same time, while with log4j you have to write that code yourself.
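The per-user filtering above relies on values stored in the MDC (Mapped Diagnostic Context). A minimal sketch of the application side, assuming a logback filter keyed on a `user` entry (the key name and request flow are illustrative):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcDemo {
    private static final Logger log = LoggerFactory.getLogger(MdcDemo.class);

    public static void handleRequest(String userName) {
        MDC.put("user", userName); // a logback filter can match on this key
        try {
            log.debug("processing request"); // kept for Alice, dropped for others
        } finally {
            MDC.remove("user"); // always clean up; the MDC is thread-local
        }
    }
}
```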
5. log4j2 introduction and configuration
5.1 log4j2 introduction
logback was developed and designed on the basis of log4j, while log4j2 was in turn upgraded and optimized standing on logback's shoulders. Although it closely resembles logback in most respects, it delivers stronger performance and concurrency, especially with asynchronous loggers.
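Application code uses the log4j2 API directly; a minimal usage sketch follows. (As an aside, asynchronous loggers are typically enabled via the `Log4jContextSelector` system property pointing at the `AsyncLoggerContextSelector`; check your log4j2 version's documentation for the exact switch.)

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Log4j2Demo {
    // LogManager is log4j2's analogue of slf4j's LoggerFactory
    private static final Logger log = LogManager.getLogger(Log4j2Demo.class);

    public static void main(String[] args) {
        // log4j2 supports the same {} placeholder style
        log.info("started with {} args", args.length);
    }
}
```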
5.2 log4j2 common components
- LoggerContext: log system context;
- Configuration: each LoggerContext has a valid configuration. Configuration includes all Appender, Filter, LoggerConfig, StrSubstitutor references and Layout format settings;
- Logger: loggers inherit from AbstractLogger. When the configuration is modified, a Logger is associated with a different LoggerConfig, which changes its behavior;
- LoggerConfig: a LoggerConfig object is created when a Logger is declared. It contains a set of Appender references for handling events and a set of Filters for filtering events before they are passed to those Appenders; it effectively acts as the Logger's collection of Appenders;
- Appender: log4j2 also allows a logging request to be delivered to multiple targets, called Appenders. Current Appender types include the console, files, sockets, Apache Flume, JMS, remote UNIX syslog daemons, and various database APIs. Users can route logs to different targets as needed, and one Logger may have multiple Appenders attached in its configuration;
- Filter: log4j2 provides Filters for filtering message events; they can be applied before and after an event is passed to a LoggerConfig, like pre- and post-interceptors. A Filter returns one of three results: ACCEPT, DENY or NEUTRAL. ACCEPT and DENY mean the event is accepted or rejected outright, and no further filters are consulted; NEUTRAL means the decision is left to the remaining filters. If no filter is configured, the event is processed directly;
- Layout: log4j2 can not only write to different target Appenders, it also supports custom log formats per target. The Layout, typically configured as a PatternLayout, is responsible for formatting log events;
- Policy and Strategy: a Policy controls when log files are rolled over, while a Strategy controls how they are rolled over. If RollingFile or RollingRandomAccessFile is configured, a Policy must be configured;
5.3 log4j2 configuration file
<?xml version="1.0" encoding="UTF-8"?> <configuration debug="false"> <!--Define the storage address of the log file LogBack Relative paths are used in the configuration of--> <property name="LOG_HOME" value="${app.log.root}"/> <!-- console output --> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"> <!--Format output,%d:date;%thread:Thread name;%-5level: level,Display 5 characters wide from left;%msg:Log message;%n:Line feed--> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern> </encoder> </appender> <appender name="ERROR" class="ch.qos.logback.core.ConsoleAppender"> <!-- filter Filter the type or level of output required --> <filter class="ch.qos.logback.classic.filter.LevelFilter"> <level>ERROR</level> <onMatch>ACCEPT</onMatch> <onMismatch>DENY</onMismatch> </filter> <!-- Format mode --> <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"> <!--Format output,%d:date;%thread:Thread name;%-5level: level,Display 5 characters wide from left;%msg:Log message;%n:Line feed--> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern> </encoder> </appender> <!-- Generate log files on a daily basis --> <appender name="SYS" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${LOG_HOME}/sys.log</file> <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy"> <!-- rollover daily --> <fileNamePattern>${LOG_HOME}/sys-%d{yyyy-MM-dd}.%i.log</fileNamePattern> <!-- each file should be at most 100MB, keep 60 days worth of history, but at most 20GB --> <maxFileSize>20MB</maxFileSize> <maxHistory>60</maxHistory> <totalSizeCap>2GB</totalSizeCap> </rollingPolicy> <encoder> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern> <charset>UTF-8</charset> </encoder> </appender> <appender name="APP" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${LOG_HOME}/app.log</file> <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy"> <!-- rollover daily --> <fileNamePattern>${LOG_HOME}/app-%d{yyyy-MM-dd}.%i.log</fileNamePattern> <!-- each file should be at most 100MB, keep 60 days worth of history, but at most 20GB --> <maxFileSize>20MB</maxFileSize> <maxHistory>60</maxHistory> <totalSizeCap>2GB</totalSizeCap> </rollingPolicy> <encoder> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern> <charset>UTF-8</charset> </encoder> </appender> <appender name="MONITOR" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${LOG_HOME}/monitor.log</file> <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy"> <!-- rollover daily --> <fileNamePattern>${LOG_HOME}/monitor-%d{yyyy-MM-dd}.%i.log</fileNamePattern> <!-- each file should be at most 100MB, keep 60 days worth of history, but at most 20GB --> <maxFileSize>20MB</maxFileSize> <maxHistory>60</maxHistory> <totalSizeCap>2GB</totalSizeCap> </rollingPolicy> <encoder> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern> <charset>UTF-8</charset> </encoder> </appender> <appender name="DYEING" class="ch.qos.logback.core.rolling.RollingFileAppender"> <file>${LOG_HOME}/dyeing.log</file> <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy"> <!-- rollover daily --> <fileNamePattern>${LOG_HOME}/dyeing-%d{yyyy-MM-dd}.%i.log</fileNamePattern> <!-- each file should be at most 100MB, keep 60 
days worth of history, but at most 20GB --> <maxFileSize>20MB</maxFileSize> <maxHistory>60</maxHistory> <totalSizeCap>2GB</totalSizeCap> </rollingPolicy> <encoder> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level - %msg%n</pattern> <charset>UTF-8</charset> </encoder> </appender> <appender name="ASYNC_DYEING" class="ch.qos.logback.classic.AsyncAppender"> <appender-ref ref="DYEING"/> </appender> <logger name="dyeingLogger" level="INFO"> <appender-ref ref="ASYNC_DYEING"/> </logger> <!--Bind specific packages appender --> <logger name="com.package.aaa" level="INFO"> <appender-ref ref="APP"/> </logger> <!-- Log output level --> <root level="INFO"> <appender-ref ref="ERROR"/> </root> </configuration>
5.4 log4j2 four queues for asynchronous logs
- ArrayBlockingQueue: the default queue, implemented with Java's native ArrayBlockingQueue.
- DisruptorBlockingQueue: a high-performance queue implemented by the disruptor package.
- JCToolsBlockingQueue: a lock-free queue implemented by JCTools.
- LinkedTransferQueue: implemented with the LinkedTransferQueue natively available since Java 7.
6. Source code analysis of disruptor
6.1 ring buffer
In an earlier article on MySQL, we saw that MySQL's redo log is written circularly through a ring-shaped storage area to guarantee both performance and consistency.
In the Linux kernel, the FIFO used for inter-process communication is also implemented as a ring storage area.
Benefits of a RingBuffer (see the sketch after this list):
- Because it is backed by an array whose memory is reused, it avoids repeated allocation, reclamation and resizing.
- With a single reader and a single writer, reads and writes happen at different positions of the ring, so neither side needs a lock, which makes the buffer very efficient.
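To make the idea concrete, here is a simplified single-producer/single-consumer ring buffer sketch. This is not disruptor source code, just an illustration of the principle; the capacity is assumed to be a power of two so the modulo becomes a bit mask:

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified SPSC ring buffer: one writer thread, one reader thread, no locks.
public class SpscRingBuffer<E> {
    private final Object[] slots;
    private final int mask;                           // capacity - 1 (capacity is a power of two)
    private final AtomicLong head = new AtomicLong(); // next read position
    private final AtomicLong tail = new AtomicLong(); // next write position

    public SpscRingBuffer(int capacityPowerOfTwo) {
        slots = new Object[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    public boolean offer(E e) {
        long t = tail.get();
        if (t - head.get() == slots.length) return false; // ring is full
        slots[(int) (t & mask)] = e;                      // slot index wraps around
        tail.lazySet(t + 1);                              // publish the slot to the reader
        return true;
    }

    @SuppressWarnings("unchecked")
    public E poll() {
        long h = head.get();
        if (h == tail.get()) return null;                 // ring is empty
        E e = (E) slots[(int) (h & mask)];
        slots[(int) (h & mask)] = null;                   // allow GC of the element
        head.lazySet(h + 1);                              // free the slot for the writer
        return e;
    }
}
```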
Borrowing this idea, disruptor uses a ring queue as its buffer. Because the RingBuffer reuses its space in a circle, it can stay in memory permanently after being allocated once, which relieves GC pressure and improves buffering performance. On top of the RingBuffer, disruptor offers several models (single producer, multiple producers, single consumer, and multiple consumer groups) for flexible use in different scenarios. In all these modes disruptor avoids locks as far as possible, relying on CAS operations from the Unsafe package combined with spinning, which keeps the whole implementation concise and efficient.
6.2 single producer model
Compared with the multi producer model, the single producer model is obviously simpler. Let's see how it is implemented:
```java
// Disruptor
public <A> void publishEvent(final EventTranslatorOneArg<T, A> eventTranslator, final A arg) {
    ringBuffer.publishEvent(eventTranslator, arg);
}

// RingBuffer
public <A> void publishEvent(EventTranslatorOneArg<E, A> translator, A arg0) {
    long sequence = this.sequencer.next();
    this.translateAndPublish(translator, sequence, arg0);
}

public long next() {
    return this.next(1);
}

public long next(int n) {
    if (n < 1) {
        throw new IllegalArgumentException("n must be > 0");
    } else {
        // Position of the last written entry
        long nextValue = this.pad.nextValue;
        // Position to write this time
        long nextSequence = nextValue + (long) n;
        // Compute the wrap point
        long wrapPoint = nextSequence - (long) this.bufferSize;
        // The consumers' next consumption position
        long cachedGatingSequence = this.pad.cachedValue;
        // Not enough free slots: spin-wait
        if (wrapPoint > cachedGatingSequence || cachedGatingSequence > nextValue) {
            long minSequence;
            while (wrapPoint > (minSequence = Util.getMinimumSequence(this.gatingSequences, nextValue))) {
                Thread.yield();
            }
            // Waking up means consumers have made progress: update the cached consumption position
            this.pad.cachedValue = minSequence;
        }
        // Record and return the insertion position
        this.pad.nextValue = nextSequence;
        return nextSequence;
    }
}
```
As you can see, obtaining the write position is not complicated: the producer claims a slot in the RingBuffer, and if there is not enough free space it calls yield and waits for consumers to catch up. Once enough space is available, the write position is returned and translateAndPublish is called to publish the data.
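For context, this is roughly how the publishing path above is reached from user code; a minimal sketch assuming the LMAX disruptor 3.x DSL (the event class and handler are illustrative):

```java
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.YieldingWaitStrategy;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.dsl.ProducerType;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class SingleProducerDemo {
    static class LongEvent { long value; }

    public static void main(String[] args) {
        // Buffer size must be a power of two
        Disruptor<LongEvent> disruptor = new Disruptor<>(
                LongEvent::new, 1024, DaemonThreadFactory.INSTANCE,
                ProducerType.SINGLE, new YieldingWaitStrategy());

        disruptor.handleEventsWith(
                (EventHandler<LongEvent>) (event, sequence, endOfBatch) ->
                        System.out.println("consumed " + event.value));
        disruptor.start();

        // publishEvent ends up in the next()/translateAndPublish path shown above
        disruptor.publishEvent((event, sequence, arg) -> event.value = arg, 42L);
        disruptor.shutdown();
    }
}
```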
6.3 multi producer model
Under the multi-producer model, disruptor keeps production conflict-free by isolating producers from each other: each producer may only write to its own independent region allocated on the RingBuffer. This introduces a new problem, though: since the written data on the RingBuffer is no longer contiguous, how do consumers know where to read? The solution is simple: disruptor introduces an extra buffer, availableBuffer, with the same length as the RingBuffer, so its slots correspond one-to-one to the RingBuffer's. When data is written, the corresponding availableBuffer position is set to 1, and set back to 0 after consumption, so the next readable position is always known.
Although multiple producers divide the availableBuffer into multiple regions, each producer in fact treats its own segment as a small RingBuffer. This ingenious transformation lets the multi-producer model reuse the single-producer implementation. As a result, the write process under multiple producers differs little from the single-producer model; the differences lie only in the implementation of the next and publish methods:
```java
public long next(int n) {
    if (n < 1) {
        throw new IllegalArgumentException("n must be > 0");
    } else {
        long current;
        long next;
        do {
            while (true) {
                // Get the next write position
                current = this.cursor.get();
                next = current + (long) n;
                // Compute the wrap point of the buffer segment held by this producer
                long wrapPoint = next - (long) this.bufferSize;
                // The consumers' next consumption position
                long cachedGatingSequence = this.gatingSequenceCache.get();
                if (wrapPoint <= cachedGatingSequence && cachedGatingSequence <= current) {
                    break;
                }
                // Spin-wait
                long gatingSequence = Util.getMinimumSequence(this.gatingSequences, current);
                if (wrapPoint > gatingSequence) {
                    LockSupport.parkNanos(1L);
                } else {
                    this.gatingSequenceCache.set(gatingSequence);
                }
            }
        } while (!this.cursor.compareAndSet(current, next));
        return next;
    }
}
```
Thus, by means of spinning, CAS and the segmented availableBuffer, the multi-producer model also avoids locks and achieves efficient production.
6.4 consumer implementation
EventProcessor is the framework for consumer-side event processing. The EventProcessor interface extends Runnable and has two main implementations:
- BatchEventProcessor: single-threaded batch processing
- WorkProcessor: multi-threaded processing
Single-consumer and multi-consumer setups are implemented differently (see the sketch after this list):
- Broadcast mode: use the handleEventsWith method to pass in multiple EventHandlers; internally, multiple BatchEventProcessors are associated with multiple threads for execution. This is a typical publish-subscribe pattern: the same event is consumed by multiple consumers in parallel, which suits one event triggering several operations. Each BatchEventProcessor is a single-threaded task chain, so task execution is ordered and very fast.
- Cluster consumption mode: use the handleEventsWithWorkerPool method to pass in multiple WorkHandlers; internally, multiple WorkProcessors are associated with multiple threads for execution. Similar to the JMS point-to-point model, each event is consumed by exactly one consumer out of the group, which suits scaling out consumer throughput. Processing across the WorkProcessor pool is multithreaded, so the order of task execution cannot be guaranteed.
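A minimal sketch contrasting the two wiring styles with the LMAX 3.x DSL (the event and handler classes are illustrative):

```java
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.WorkHandler;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class ConsumerModesDemo {
    static class Msg { String text; }

    public static void main(String[] args) {
        Disruptor<Msg> disruptor =
                new Disruptor<>(Msg::new, 1024, DaemonThreadFactory.INSTANCE);

        // Broadcast: BOTH handlers see EVERY event (publish-subscribe)
        EventHandler<Msg> audit = (event, seq, end) -> System.out.println("audit: " + event.text);
        EventHandler<Msg> index = (event, seq, end) -> System.out.println("index: " + event.text);
        disruptor.handleEventsWith(audit, index);

        // Cluster consumption: each event goes to exactly ONE worker (point-to-point)
        WorkHandler<Msg> worker1 = event -> System.out.println("w1: " + event.text);
        WorkHandler<Msg> worker2 = event -> System.out.println("w2: " + event.text);
        disruptor.handleEventsWithWorkerPool(worker1, worker2);

        disruptor.start();
        disruptor.publishEvent((event, seq, text) -> event.text = text, "hello");
        disruptor.shutdown();
    }
}
```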
7. Comparison between logback and log4j2
7.1 log4j2 more feature support
- Data loss is rare, so it can even be used for auditing. Moreover, exceptions raised internally are surfaced, which logback and log4j do not do;
- log4j2 uses the disruptor technology; in multi-threaded environments its performance is more than 10 times that of logback;
- Garbage-free: previous-generation loggers create many temporary objects, causing frequent GC; log4j2 is optimized to create as few temporary objects as possible. This reduction in GC is, again, largely enabled by the disruptor technology;
- Support lambda expressions;
- More powerful support for the function of filter;
- The Syslog appender supports both TCP and UDP;
- Supports writing to Kafka queues;
7.2 summary
- Turning off console output in production improves performance;
- For the same workload, log4j2 performs better;
- When log output volume is high, consider replacing logback with log4j2;