This article introduces how to use the JDK's built-in tools to analyze and troubleshoot problems in Java programs.
Using the JDK's built-in tools to inspect the JVM
The JDK ships with many command-line and even graphical tools that help us inspect the JVM. For example, listing the JDK's bin directory with ls on my machine shows that JDK 8 provides quite a few tools and programs:
Next, I will introduce some of the common monitoring tools. The following figure summarizes the basic function of each tool:
To test these tools, let's write a piece of code: start 10 threads that loop forever, each allocating a string of roughly 10MB and then sleeping for 10 seconds. As you can imagine, this program puts pressure on the GC:
//Start 10 threads
IntStream.rangeClosed(1, 10).mapToObj(i -> new Thread(() -> {
    while (true) {
        //Each thread is an endless loop: build ~10MB of data, sleep 10 seconds, print the length
        String payload = IntStream.rangeClosed(1, 10000000)
                .mapToObj(__ -> "a")
                .collect(Collectors.joining("")) + UUID.randomUUID().toString();
        try {
            TimeUnit.SECONDS.sleep(10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(payload.length());
    }
})).forEach(Thread::start);

TimeUnit.HOURS.sleep(1);
Modify pom.xml to configure the main class of the jar packaged by the spring-boot-maven-plugin:
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <mainClass>org.geekbang.time.commonmistakes.troubleshootingtools.jdktool.CommonMistakesApplication</mainClass>
    </configuration>
</plugin>
Then start the process with java -jar, setting the JVM parameters so that both the minimum and maximum heap sizes are 1GB:
java -jar common-mistakes-0.0.1-SNAPSHOT.jar -Xms1g -Xmx1g
After completing these preparations, we can use the tools provided by the JDK to observe and analyze the test program.
jps
First, use jps to get the list of Java processes; it's more convenient than using ps:
➜ ~ jps
12707
22261 Launcher
23864 common-mistakes-0.0.1-SNAPSHOT.jar
15608 RemoteMavenServer36
23243 Main
23868 Jps
22893 KotlinCompileDaemon
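By the way, if you want a program to report its own PID (for example, to feed into jinfo, jstat or jcmd from a script), you can read it from the RuntimeMXBean. This is a minimal sketch of mine, not part of the example project; it relies on the conventional pid@hostname format of the bean name on JDK 8, which is not guaranteed by the specification:

import java.lang.management.ManagementFactory;

public class ShowPid {
    public static void main(String[] args) {
        // On HotSpot JDK 8 the runtime name is conventionally "pid@hostname"
        String name = ManagementFactory.getRuntimeMXBean().getName();
        System.out.println("pid = " + name.split("@")[0]);
    }
}

(On JDK 9 and later, ProcessHandle.current().pid() does the same thing without relying on the name format.)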
jinfo
Then, you can use jinfo to print various parameters of the JVM:
➜ ~ jinfo 23864
Java System Properties:
#Wed Jan 29 12:49:47 CST 2020
...
user.name=zhuye
path.separator=\:
os.version=10.15.2
java.runtime.name=Java(TM) SE Runtime Environment
file.encoding=UTF-8
java.vm.name=Java HotSpot(TM) 64-Bit Server VM
...

VM Flags:
-XX:CICompilerCount=4 -XX:ConcGCThreads=2 -XX:G1ConcRefinementThreads=8 -XX:G1HeapRegionSize=1048576 -XX:GCDrainStackTargetSize=64 -XX:InitialHeapSize=268435456 -XX:MarkStackSize=4194304 -XX:MaxHeapSize=4294967296 -XX:MaxNewSize=2576351232 -XX:MinHeapDeltaBytes=1048576 -XX:NonNMethodCodeHeapSize=5835340 -XX:NonProfiledCodeHeapSize=122911450 -XX:ProfiledCodeHeapSize=122911450 -XX:ReservedCodeCacheSize=251658240 -XX:+SegmentedCodeCache -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseG1GC

VM Arguments:
java_command: common-mistakes-0.0.1-SNAPSHOT.jar -Xms1g -Xmx1g
java_class_path (initial): common-mistakes-0.0.1-SNAPSHOT.jar
Launcher Type: SUN_STANDARD
Looking at the VM Arguments section of the output, we can see that our JVM parameters were set incorrectly: -Xms1g and -Xmx1g were treated as startup arguments of the Java program, not as JVM options. And the -XX:MaxHeapSize=4294967296 entry under VM Flags shows that the JVM's maximum heap is currently about 4GB, not 1GB.
Therefore, whenever we suspect that the JVM configuration is wrong, the first thing to do is use a tool to confirm the actual parameters. Besides using external tools, you can also have the program print its own VM options and program arguments:
System.out.println("VM options"); System.out.println(ManagementFactory.getRuntimeMXBean().getInputArguments().stream().collect(Collectors.joining(System.lineSeparator()))); System.out.println("Program arguments"); System.out.println(Arrays.stream(args).collect(Collectors.joining(System.lineSeparator())));
Put the JVM parameters before -jar and restart the program. You will see the following output, which confirms that this time the JVM parameters are configured correctly:
➜ target git:(master) ✗ java -Xms1g -Xmx1g -jar common-mistakes-0.0.1-SNAPSHOT.jar test
VM options
-Xms1g
-Xmx1g
Program arguments
test
jvisualvm
Next, let's bring up a heavier-weight tool, jvisualvm, to observe the program. Its Overview panel confirms once more that the JVM parameters took effect:
Moving on to the Monitor panel, we can see that the JVM's GC activity happens roughly every 10 seconds, heap usage fluctuates between about 250MB and 900MB, and there are 22 live threads. The Monitor panel shows the basic state of the JVM, and also lets us trigger a GC manually or take a heap dump right there:
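As a side note, the same kind of heap dump jvisualvm takes can be triggered from code, which is handy on servers without a GUI. Below is a minimal sketch using the HotSpot-specific HotSpotDiagnosticMXBean; the output path /tmp/heap.hprof is just an assumption for illustration, and the API is not available on non-HotSpot JVMs:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diagnostic =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // live=true dumps only reachable objects (a GC runs first), like jvisualvm's heap dump
        diagnostic.dumpHeap("/tmp/heap.hprof", true);
    }
}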
jconsole
If you want to see a GC curve for each memory area, you can use jconsole. jconsole is another comprehensive graphical monitoring tool; one thing it does more conveniently than jvisualvm is plot all kinds of data as curves over time, including MBean attribute values:
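The curves jconsole draws are backed by the platform MBeans, which any Java code can also read directly. As a minimal sketch (the collector names, such as G1 Young Generation, depend on which GC is in use):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcWatcher {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            // One MXBean per collector, e.g. "G1 Young Generation" and "G1 Old Generation"
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: count=%d, time=%dms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            Thread.sleep(5000);
        }
    }
}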
jstat
Similarly, if a graphical interface isn't available (after all, on Linux servers we mainly rely on command-line tools) and you want to see GC trends, you can use the jstat tool.
jstat outputs various JVM monitoring metrics at a fixed interval. For example, use -gcutil to print a GC and memory usage summary every 5 seconds, 100 times in total. In the output you can see that young GCs are quite frequent, while full GCs happen roughly every 10 seconds:
➜ ~ jstat -gcutil 23940 5000 100
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT    CGC    CGCT     GCT
  0.00 100.00   0.36  87.63  94.30  81.06    539   14.021    33    3.972   837    0.976   18.968
  0.00 100.00   0.60  69.51  94.30  81.06    540   14.029    33    3.972   839    0.978   18.979
  0.00   0.00   0.50  99.81  94.27  81.03    548   14.143    34    4.002   840    0.981   19.126
  0.00 100.00   0.59  70.47  94.27  81.03    549   14.177    34    4.002   844    0.985   19.164
  0.00 100.00   0.57  99.85  94.32  81.09    550   14.204    34    4.002   845    0.990   19.196
  0.00 100.00   0.65  77.69  94.32  81.09    559   14.469    36    4.198   847    0.993   19.659
  0.00 100.00   0.65  77.69  94.32  81.09    559   14.469    36    4.198   847    0.993   19.659
  0.00 100.00   0.70  35.54  94.32  81.09    567   14.763    37    4.378   853    1.001   20.142
  0.00 100.00   0.70  41.22  94.32  81.09    567   14.763    37    4.378   853    1.001   20.142
  0.00 100.00   1.89  96.76  94.32  81.09    574   14.943    38    4.487   859    1.007   20.438
  0.00 100.00   1.39  39.20  94.32  81.09    575   14.946    38    4.487   861    1.010   20.442
In this output:

- S0 and S1: percentage of the Survivor0 / Survivor1 space used
- E: percentage of the Eden space used
- O: percentage of the old generation used
- M: percentage of Metaspace used
- CCS: percentage of the compressed class space used
- YGC and YGCT: young GC count and total young GC time
- FGC and FGCT: full GC count and total full GC time
- CGC and CGCT: concurrent GC count and total concurrent GC time
- GCT: total time spent in GC
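If you'd rather sample similar per-area percentages from inside the process instead of via jstat, the memory pool MXBeans expose them. A minimal sketch under the assumption of a G1 setup (pool names like G1 Eden Space vary by collector, and the percentage arithmetic here is mine, not jstat's):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            // getMax() can be -1 (undefined); fall back to the committed size, guarding against 0
            long max = usage.getMax() > 0 ? usage.getMax() : Math.max(usage.getCommitted(), 1);
            System.out.printf("%-25s %6.2f%%%n",
                    pool.getName(), 100.0 * usage.getUsed() / max);
        }
    }
}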
The jstat command supports many other options, such as -class, -compiler and -gc; for the full list for the Java 8 jstat tool on Linux/Unix platforms, see the official documentation. jstat's ability to print metrics at a fixed interval makes it convenient to observe a program's indicators continuously.
Switching to jvisualvm's Threads panel, we can see that the many threads whose names start with Thread- basically wake up on a rhythmic 10-second cycle and sleep the rest of the time, which matches our code's logic:
Clicking the Thread Dump button on this panel captures an instantaneous snapshot of every thread's stack:
jstack
The command-line tool jstack can also be used to grab the thread stack:
➜ ~ jstack 23940
2020-01-29 13:08:15
Full thread dump Java HotSpot(TM) 64-Bit Server VM (11.0.3+12-LTS mixed mode):
...
"main" #1 prio=5 os_prio=31 cpu=440.66ms elapsed=574.86s tid=0x00007ffdd9800000 nid=0x2803 waiting on condition  [0x0000700003849000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
    at java.lang.Thread.sleep(java.base@11.0.3/Native Method)
    at java.lang.Thread.sleep(java.base@11.0.3/Thread.java:339)
    at java.util.concurrent.TimeUnit.sleep(java.base@11.0.3/TimeUnit.java:446)
    at org.geekbang.time.commonmistakes.troubleshootingtools.jdktool.CommonMistakesApplication.main(CommonMistakesApplication.java:41)
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@11.0.3/Native Method)
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@11.0.3/NativeMethodAccessorImpl.java:62)
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@11.0.3/DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(java.base@11.0.3/Method.java:566)
    at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:51)
    at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:52)

"Thread-1" #13 prio=5 os_prio=31 cpu=17851.77ms elapsed=574.41s tid=0x00007ffdda029000 nid=0x9803 waiting on condition  [0x000070000539d000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
    at java.lang.Thread.sleep(java.base@11.0.3/Native Method)
    at java.lang.Thread.sleep(java.base@11.0.3/Thread.java:339)
    at java.util.concurrent.TimeUnit.sleep(java.base@11.0.3/TimeUnit.java:446)
    at org.geekbang.time.commonmistakes.troubleshootingtools.jdktool.CommonMistakesApplication.lambda$null$1(CommonMistakesApplication.java:33)
    at org.geekbang.time.commonmistakes.troubleshootingtools.jdktool.CommonMistakesApplication$$Lambda$41/0x00000008000a8c40.run(Unknown Source)
    at java.lang.Thread.run(java.base@11.0.3/Thread.java:834)
...
After capturing the dump, you can analyze it with an online tool such as fastthread (https://fastthread.io/).
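A thread dump can also be produced from inside the application, for example behind a diagnostics endpoint, via ThreadMXBean. Here is a minimal sketch; note that ThreadInfo.toString() truncates each stack to 8 frames, so for full stacks you would format the StackTraceElement array yourself:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public class ThreadDumper {
    public static void main(String[] args) {
        // Arguments: include locked monitors and locked ownable synchronizers
        ThreadInfo[] infos = ManagementFactory.getThreadMXBean().dumpAllThreads(true, true);
        for (ThreadInfo info : infos) {
            System.out.print(info);
        }
    }
}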
jcmd
Finally, let's take a look at the Native Memory Tracking (NMT) feature of the Java HotSpot VM.
With NMT we can observe the JVM's memory usage at a fine granularity. Start the JVM with -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=detail to enable NMT, then use the jcmd tool to view the NMT data.
Let's restart the program once more, this time adding the JVM parameters to enable NMT in detail mode:
-Xms1g -Xmx1g -XX:ThreadStackSize=256k -XX:NativeMemoryTracking=detail
Here we also added the -XX:ThreadStackSize parameter and set its value to 256k, expecting a 256KB thread stack. Let's use NMT to check whether the setting took effect.
After starting the program, run the following jcmd command to print the NMT data in summary form. You can see that there are currently 32 threads and that their stacks reserve about 4GB of memory in total. We clearly configured a maximum thread stack of 256KB, so where does this outlandish 4GB figure come from?
➜ ~ jcmd 24404 VM.native_memory summary
24404:

Native Memory Tracking:

Total: reserved=6635310KB, committed=5337110KB
-                 Java Heap (reserved=1048576KB, committed=1048576KB)
                            (mmap: reserved=1048576KB, committed=1048576KB)

-                     Class (reserved=1066233KB, committed=15097KB)
                            (classes #902)
                            (malloc=9465KB #908)
                            (mmap: reserved=1056768KB, committed=5632KB)

-                    Thread (reserved=4209797KB, committed=4209797KB)
                            (thread #32)
                            (stack: reserved=4209664KB, committed=4209664KB)
                            (malloc=96KB #165)
                            (arena=37KB #59)

-                      Code (reserved=249823KB, committed=2759KB)
                            (malloc=223KB #730)
                            (mmap: reserved=249600KB, committed=2536KB)

-                        GC (reserved=48700KB, committed=48700KB)
                            (malloc=10384KB #135)
                            (mmap: reserved=38316KB, committed=38316KB)

-                  Compiler (reserved=186KB, committed=186KB)
                            (malloc=56KB #105)
                            (arena=131KB #7)

-                  Internal (reserved=9693KB, committed=9693KB)
                            (malloc=9661KB #2585)
                            (mmap: reserved=32KB, committed=32KB)

-                    Symbol (reserved=2021KB, committed=2021KB)
                            (malloc=1182KB #334)
                            (arena=839KB #1)

-    Native Memory Tracking (reserved=85KB, committed=85KB)
                            (malloc=5KB #53)
                            (tracking overhead=80KB)

-               Arena Chunk (reserved=196KB, committed=196KB)
                            (malloc=196KB)
Next, run jcmd again with the VM.native_memory detail parameter:
jcmd 24404 VM.native_memory detail
We can see 16 suspicious thread stacks, each reserving 262144KB, i.e. 256MB, of memory (as shown in the red box in the figure below, searching the output for the keyword "262144KB for Thread Stack from" yields 16 matches):
In fact, the unit of the ThreadStackSize parameter is already KB, so to set a 256KB thread stack we should pass 256, not 256k (which means 262144KB, i.e. 256MB). After resetting the parameter correctly, use jcmd again to verify:
Beyond viewing NMT data, jcmd has many other functions. We can list all of them with help:
jcmd 24781 help
Besides jps, jinfo, jcmd, jstack, jstat, jconsole and jvisualvm, the JDK ships with a few more tools; you can find the full introduction in the official documentation.
Official documents: https://docs.oracle.com/javas...