JUC multithreading and high concurrency
1, Please talk about your understanding of volatile
Package java.util.concurrent -> AtomicInteger, Lock, ReadWriteLock
1. volatile is a lightweight synchronization mechanism provided by the Java virtual machine
It guarantees visibility, does not guarantee atomicity, and forbids instruction reordering
-
Ensure visibility
When multiple threads access the same variable and one thread modifies its value, the other threads can immediately see the modified value
Example when volatile keyword is not added:
package com.jian8.juc; import java.util.concurrent.TimeUnit; /** * 1 Verify the visibility of volatile * 1.1 If int num = 0, the number variable is not decorated with volatile keyword * 1.2 volatile is added to solve the problem of visibility */ public class VolatileDemo { public static void main(String[] args) { visibilityByVolatile();//Verify the visibility of volatile } /** * volatile It can ensure visibility and timely notify other threads that the value of the main physical memory has been modified */ public static void visibilityByVolatile() { MyData myData = new MyData(); //First thread new Thread(() -> { System.out.println(Thread.currentThread().getName() + "\t come in"); try { //Thread pause 3s TimeUnit.SECONDS.sleep(3); myData.addToSixty(); System.out.println(Thread.currentThread().getName() + "\t update value:" + myData.num); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } }, "thread1").start(); //The second thread is the main thread while (myData.num == 0) { //If the num of myData is always zero, the main thread will always loop here } System.out.println(Thread.currentThread().getName() + "\t mission is over, num value is " + myData.num); } } class MyData { // int num = 0; volatile int num = 0; public void addToSixty() { this.num = 60; } }
Output result:
thread1	 come in
thread1	 update value:60
//The main thread never sees the change and spins in an infinite loop
When we add volatile keyword, volatile int num = 0; The output result is:
thread1	 come in
thread1	 update value:60
main	 mission is over, num value is 60
//No infinite loop; the program finishes normally
-
Atomicity is not guaranteed
Atomicity: indivisibility and completeness. While a thread is performing a specific piece of business logic it cannot be interrupted or split partway through; the whole thing must complete as a unit, either all succeeding or all failing
Validation example (adding volatile keyword to variable and not adding synchronized keyword to method):
package com.jian8.juc; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; /** * 1 Verify the visibility of volatile * 1.1 If int num = 0, the number variable is not decorated with volatile keyword * 1.2 volatile is added to solve the problem of visibility * * 2.Verifying that volatile does not guarantee atomicity * 2.1 What does atomicity mean * Indivisibility and integrity, that is, when a thread is doing a specific business, the middle can not be blocked or divided. It needs to be complete as a whole, either succeed at the same time or fail at the same time */ public class VolatileDemo { public static void main(String[] args) { // visibilityByVolatile();// Verify the visibility of volatile atomicByVolatile();//Verifying that volatile does not guarantee atomicity } /** * volatile It can ensure visibility and timely notify other threads that the value of the main physical memory has been modified */ //public static void visibilityByVolatile(){} /** * volatile Atomicity is not guaranteed * And using Atomic to ensure atomicity */ public static void atomicByVolatile(){ MyData myData = new MyData(); for(int i = 1; i <= 20; i++){ new Thread(() ->{ for(int j = 1; j <= 1000; j++){ myData.addSelf(); myData.atomicAddSelf(); } },"Thread "+i).start(); } //Wait for the above threads to complete the calculation, and then use the main thread to obtain the final result value try { TimeUnit.SECONDS.sleep(4); } catch (InterruptedException e) { e.printStackTrace(); } while (Thread.activeCount()>2){ Thread.yield(); } System.out.println(Thread.currentThread().getName()+"\t finally num value is "+myData.num); System.out.println(Thread.currentThread().getName()+"\t finally atomicnum value is "+myData.atomicInteger); } } class MyData { // int num = 0; volatile int num = 0; public void addToSixty() { this.num = 60; } public void addSelf(){ num++; } AtomicInteger atomicInteger = new AtomicInteger(); public void atomicAddSelf(){ atomicInteger.getAndIncrement(); } }
The results of three times of execution are:
//1. main	 finally num value is 19580	 main	 finally atomicnum value is 20000
//2. main	 finally num value is 19999	 main	 finally atomicnum value is 20000
//3. main	 finally num value is 18375	 main	 finally atomicnum value is 20000
//num never reaches 20000
-
Forbidding instruction reordering
Ordering: when a computer executes a program, compilers and processors often **reorder instructions** to improve performance. Reordering generally falls into three types: compiler-optimization reordering, instruction-level-parallelism reordering, and memory-system reordering
In a single-threaded environment, reordering must still guarantee that the final result of the program is the same as the result of executing the code in order.
When reordering, the processor must take the **data dependencies** between instructions into account
In a multi-threaded environment threads execute alternately, and because of compiler reordering it cannot be determined whether the variables used by two threads stay consistent, so the result cannot be predicted
Rearrange code instance:
Declared variables: int a = 0, b = 0, x = 0, y = 0;
| Thread 1 | Thread 2 |
| --- | --- |
| x = a; | y = b; |
| b = 1; | a = 2; |

Normal result: x = 0, y = 0. If the compiler reorders this code, the following may happen:

| Thread 1 | Thread 2 |
| --- | --- |
| b = 1; | a = 2; |
| x = a; | y = b; |

Possible result: x = 2, y = 1. This shows that in a multi-threaded environment, because of compiler reordering, it is uncertain whether the variables used by the two threads stay consistent.
volatile forbids instruction reordering, which avoids out-of-order execution of the program in a multi-threaded environment
A memory barrier (also called a memory fence) is a CPU instruction with two functions:
- Ensure the execution sequence of specific operations
- Ensure the memory visibility of some variables (use this feature to realize the memory visibility of volatile)
Both the compiler and the processor can perform instruction-reordering optimizations. Inserting a memory barrier tells the compiler and the CPU that no instruction may be reordered across it; that is, the barrier forbids reordering the instructions before and after it. Another function of the memory barrier is to force the cached data of the various CPUs to be flushed, so a thread on any CPU can read the latest version of that data.
The original notes include a diagram of the JMM barrier-insertion strategy for volatile; summarized:

- Volatile write: a StoreStore barrier is inserted before the write (normal writes above may not be reordered below the volatile write), and a StoreLoad barrier is inserted after it (the volatile write may not be reordered with a possible volatile read/write below). After the write-back, the shared variable's value in working memory is flushed back to main memory.
- Volatile read: a LoadLoad barrier and a LoadStore barrier are inserted after the read (normal reads and normal writes below may not be reordered above the volatile read). The shared variable is read from main memory.
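To make these guarantees concrete, here is a minimal sketch (not from the original notes; class and field names are illustrative) of the classic "data plus volatile flag" handoff that the barriers enable:

```java
// Minimal sketch: because ready is volatile, the write to data cannot be reordered
// after the write to ready, and a reader that sees ready == true also sees data == 42.
public class VolatileHandoff {
    private int data = 0;                // plain field
    private volatile boolean ready = false;

    public void writer() {               // called by one thread
        data = 42;                       // normal write
        ready = true;                    // volatile write: StoreStore barrier inserted before it
    }

    public void reader() {               // called by another thread
        if (ready) {                     // volatile read: LoadLoad/LoadStore barriers after it
            System.out.println(data);    // guaranteed to print 42, never 0
        }
    }
}
```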
2. JMM (java memory model)
JMM (Java Memory Model) itself is an abstract concept and does not really exist. It describes a set of rules or specifications that define the access mode of various variables in the program (including instance fields, static fields and elements constituting array objects).
JMM regulations on synchronization:
- Before the thread is unlocked, the value of the shared variable must be flushed back to the main memory
- Before a thread locks, it must read the latest value of the main memory to its own working memory
- The same lock when locking and unlocking
Since the entity that actually runs a JVM program is the thread, the JVM creates a working memory (sometimes called stack space) for each thread when the thread is created. Working memory is the private data area of each thread. The Java Memory Model stipulates that all variables are stored in **main memory**, which is a shared memory area accessible to all threads. However, a thread's operations on a variable (reading, assignment, etc.) must be carried out in its working memory: it first copies the variable from main memory into its own working memory, operates on the copy, and then writes the variable back to main memory after the operation completes; it cannot operate on the variable in main memory directly. The working memory of each thread therefore holds **a copy** of the variables in main memory, so different threads cannot access each other's working memory, and communication (value passing) between threads must go through main memory. The access process is shown in the figure below. (Figure omitted: interaction between thread working memory and main memory.)
- Visibility
- Atomicity
- Ordering
3. Where have you used volatile?
When the plain lazy singleton pattern runs under multiple threads:
public class SingletonDemo { private static SingletonDemo instance = null; private SingletonDemo() { System.out.println(Thread.currentThread().getName() + "\t Construction method SingletonDemo()"); } public static SingletonDemo getInstance() { if (instance == null) { instance = new SingletonDemo(); } return instance; } public static void main(String[] args) { //The constructor is executed only once // System.out.println(getInstance() == getInstance()); // System.out.println(getInstance() == getInstance()); // System.out.println(getInstance() == getInstance()); //After concurrent multithreading, the constructor will execute multiple times in some cases for (int i = 0; i < 10; i++) { new Thread(() -> { SingletonDemo.getInstance(); }, "Thread " + i).start(); } } }
Its constructor may be executed multiple times in some cases
Solution:
-
Singleton mode DCL code
DCL (Double-Checked Locking) checks whether the instance is null both before and after acquiring the lock
```java
public static SingletonDemo getInstance() {
    if (instance == null) {
        synchronized (SingletonDemo.class) {
            if (instance == null) {
                instance = new SingletonDemo();
            }
        }
    }
    return instance;
}
```
In most runs the constructor is executed only once, but because of instruction reordering there is still a very small chance that the program goes wrong: another thread can observe a half-constructed instance
The DCL (double-checked locking) mechanism is not necessarily thread safe, because instruction reordering can sometimes occur; adding volatile forbids that reordering
The reason is that when a thread performs the first check and reads that instance is not null, the object referenced by instance may not have finished initializing. instance = new SingletonDemo(); can be broken down into the following three steps (pseudocode):
```java
memory = allocate();  // 1. Allocate memory for the object
instance(memory);     // 2. Initialize the object
instance = memory;    // 3. Point instance at the allocated memory; from this moment instance != null
```
There is no data dependency between steps 2 and 3, and in a single thread the result of the program is the same whether or not they are reordered, so this reordering optimization is allowed. If step 3 executes before step 2, instance is already non-null but has not yet been initialized
However, instruction reordering only guarantees consistency of serial semantics (within a single thread); it does not care about semantic consistency across threads.
Therefore, when another thread sees a non-null instance, that instance may not have been initialized yet, which causes the thread-safety problem.
-
Singleton pattern volatile code
To solve the above problem, add volatile to the SingletonDemo instance field
private static volatile SingletonDemo instance = null;
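Putting the two fixes together, a minimal sketch of the complete volatile + DCL singleton (same class name as in the notes above):

```java
public class SingletonDemo {
    // volatile forbids reordering of "allocate -> initialize -> publish"
    private static volatile SingletonDemo instance = null;

    private SingletonDemo() { }

    public static SingletonDemo getInstance() {
        if (instance == null) {                          // first check, without locking
            synchronized (SingletonDemo.class) {
                if (instance == null) {                  // second check, under the lock
                    instance = new SingletonDemo();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        // Every thread ends up with the same, fully initialized instance
        for (int i = 0; i < 10; i++) {
            new Thread(() -> System.out.println(SingletonDemo.getInstance()), "Thread " + i).start();
        }
    }
}
```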
2, Do you know CAS?
1. compareAndSet ---- compare and exchange
AtomicInteger.compareAndSet(int expect, int update)
```java
public final boolean compareAndSet(int expect, int update) {
    return unsafe.compareAndSwapInt(this, valueOffset, expect, update);
}
```
The first parameter is the expected current value. If the actual value matches the expected value, it is set to update and true is returned; if they differ, the data has already been modified by another thread, so the assignment is cancelled and false is returned
Example:
package com.jian8.juc.cas; import java.util.concurrent.atomic.AtomicInteger; /** * 1.CAS What is it? * 1.1 Compare and exchange */ public class CASDemo { public static void main(String[] args) { checkCAS(); } public static void checkCAS(){ AtomicInteger atomicInteger = new AtomicInteger(5); System.out.println(atomicInteger.compareAndSet(5, 2019) + "\t current data is " + atomicInteger.get()); System.out.println(atomicInteger.compareAndSet(5, 2014) + "\t current data is " + atomicInteger.get()); } }
The output result is:
true	current data is 2019
false	current data is 2019
2. CAS underlying principle? Understanding of Unsafe
Compare the value in the current working memory with the value in main memory. If they are the same, perform the specified operation; otherwise keep retrying until the value in working memory and the value in main memory agree
-
atomicInteger.getAndIncrement();
```java
public final int getAndIncrement() {
    return unsafe.getAndAddInt(this, valueOffset, 1);
}
```
-
Unsafe
-
Unsafe is the core class behind CAS. Since Java methods cannot access the underlying system directly, they must go through native methods; Unsafe acts as a kind of back door through which specific memory can be manipulated directly. The Unsafe class lives in the sun.misc package, and its methods can operate on memory directly, like C pointers, because the execution of CAS operations in Java depends on the methods of the Unsafe class.
All methods in the Unsafe class are native; that is, the methods in the Unsafe class call the underlying resources of the operating system directly to perform the corresponding task
-
The variable valueOffset represents the offset address of the value field in memory, because Unsafe locates data by its memory offset
-
The variable value is declared volatile to guarantee visibility between threads
-
What is CAS
CAS stands for compare-and-swap; it is a CPU concurrency primitive
Its function is to check whether the value at a memory location equals an expected value and, if so, change it to a new value; the whole check-and-update is atomic.
The embodiment of the CAS primitive in the Java language is the set of methods in the sun.misc.Unsafe class. When a CAS method of Unsafe is called, the JVM generates the CAS assembly instruction for us. This capability depends entirely on the hardware, and atomic operations are implemented through it. Because CAS is a system primitive, it belongs to the operating-system level: it consists of several instructions that together complete one function, and its execution must be continuous and may not be interrupted. In other words, CAS is an atomic CPU instruction and does not cause data inconsistency.
```java
// unsafe.getAndAddInt
public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));
    return var5;
}
```
- var1: the AtomicInteger object itself
- var2: the memory offset (valueOffset) of the value field in that object
- var4: the amount to add
- var5: the real value found in main memory via var1 and var2
- CAS then compares the object's current value with var5:
  - if they are equal, it updates the value to var5 + var4 and the loop ends (the old value var5 is returned);
  - if they differ, it keeps reading and retrying until the update succeeds
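The same spin-CAS pattern can be reproduced with only the public AtomicInteger API; the sketch below (class and method names are illustrative, not from the original notes) mirrors what unsafe.getAndAddInt does:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasAddDemo {
    // Re-implements "get and add" with a user-level CAS loop, mirroring unsafe.getAndAddInt
    static int getAndAdd(AtomicInteger atomic, int delta) {
        int current;
        do {
            current = atomic.get();                                      // read the latest value
        } while (!atomic.compareAndSet(current, current + delta));       // retry if another thread changed it
        return current;                                                  // return the old value
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(5);
        System.out.println(getAndAdd(counter, 3)); // prints 5 (the old value)
        System.out.println(counter.get());         // prints 8
    }
}
```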
3. CAS disadvantages
-
**Long cycle time and high overhead**
For example, getAndAddInt contains a do-while loop: if the CAS fails it keeps retrying, and if it keeps failing for a long time this can put great overhead on the CPU
-
Only atomic operations of one shared variable can be guaranteed
When operating on multiple shared variables, cyclic CAS cannot guarantee the atomicity of the operation. At this time, locks can be used to ensure the atomicity
-
ABA problem
3, ABA problem of atomic class AtomicInteger? Atomic update reference?
1. How ABA is produced
An important premise of the CAS algorithm is that it takes the data out of memory at one moment and compares-and-swaps it at a later moment; within that time gap the data may have changed.
For example, thread 1 takes A out of memory location V; thread 2 also takes A out, performs some operations that change the value to B, and then changes the data at V back to A. When thread 1 now performs its CAS operation, it finds that the value in memory is still A, so its operation succeeds.
Although thread 1's CAS operation succeeds, that does not mean the whole process was problem-free
2. How to solve it? Atomic reference
Example code:
package juc.cas; import lombok.AllArgsConstructor; import lombok.Getter; import lombok.ToString; import java.util.concurrent.atomic.AtomicReference; public class AtomicRefrenceDemo { public static void main(String[] args) { User z3 = new User("Zhang San", 22); User l4 = new User("Li Si", 23); AtomicReference<User> atomicReference = new AtomicReference<>(); atomicReference.set(z3); System.out.println(atomicReference.compareAndSet(z3, l4) + "\t" + atomicReference.get().toString()); System.out.println(atomicReference.compareAndSet(z3, l4) + "\t" + atomicReference.get().toString()); } } @Getter @ToString @AllArgsConstructor class User { String userName; int age; }
Output results
true	User(userName=Li Si, age=23)
false	User(userName=Li Si, age=23)
3. Timestamped atomic reference (AtomicStampedReference)
Add a new mechanism: a version number (stamp) that is bumped on every modification
package com.jian8.juc.cas; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicReference; import java.util.concurrent.atomic.AtomicStampedReference; /** * ABA Problem solving * AtomicStampedReference */ public class ABADemo { static AtomicReference<Integer> atomicReference = new AtomicReference<>(100); static AtomicStampedReference<Integer> atomicStampedReference = new AtomicStampedReference<>(100, 1); public static void main(String[] args) { System.out.println("=====Below ABA Generation of problems====="); new Thread(() -> { atomicReference.compareAndSet(100, 101); atomicReference.compareAndSet(101, 100); }, "Thread 1").start(); new Thread(() -> { try { //Ensure that thread 1 completes an ABA operation TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println(atomicReference.compareAndSet(100, 2019) + "\t" + atomicReference.get()); }, "Thread 2").start(); try { TimeUnit.SECONDS.sleep(2); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("=====Below ABA Problem solving====="); new Thread(() -> { int stamp = atomicStampedReference.getStamp(); System.out.println(Thread.currentThread().getName() + "\t First version number" + stamp); try { TimeUnit.SECONDS.sleep(2); } catch (InterruptedException e) { e.printStackTrace(); } atomicStampedReference.compareAndSet(100, 101, atomicStampedReference.getStamp(), atomicStampedReference.getStamp() + 1); System.out.println(Thread.currentThread().getName() + "\t Second version number" + atomicStampedReference.getStamp()); atomicStampedReference.compareAndSet(101, 100, atomicStampedReference.getStamp(), atomicStampedReference.getStamp() + 1); System.out.println(Thread.currentThread().getName() + "\t 3rd version number" + atomicStampedReference.getStamp()); }, "Thread 3").start(); new Thread(() -> { int stamp = atomicStampedReference.getStamp(); System.out.println(Thread.currentThread().getName() + "\t First version number" + stamp); try { TimeUnit.SECONDS.sleep(4); } catch (InterruptedException e) { e.printStackTrace(); } boolean result = atomicStampedReference.compareAndSet(100, 2019, stamp, stamp + 1); System.out.println(Thread.currentThread().getName() + "\t Is the modification successful" + result + "\t Current latest actual version number:" + atomicStampedReference.getStamp()); System.out.println(Thread.currentThread().getName() + "\t Current latest actual value:" + atomicStampedReference.getReference()); }, "Thread 4").start(); } }
Output result:
=====Below ABA Generation of problems=====
true	2019
=====Below ABA Problem solving=====
Thread 3	First version number: 1
Thread 4	First version number: 1
Thread 3	Second version number: 2
Thread 3	Third version number: 3
Thread 4	Is the modification successful: false	Current latest actual version number: 3
Thread 4	Current latest actual value: 100
4, We know that ArrayList is thread unsafe. Please write an unsafe case and give a solution
HashSet and HashMap have the same problem as ArrayList
The underlying structure of HashSet is a HashMap: the added value is stored as the HashMap key, and the map value is a shared static Object constant named PRESENT
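This can be checked against the JDK source; the sketch below is a simplified mirror of java.util.HashSet.add (the class name SimplifiedHashSet is only for illustration):

```java
import java.util.HashMap;

// Simplified sketch mirroring the JDK's java.util.HashSet (most details omitted)
class SimplifiedHashSet<E> {
    private final HashMap<E, Object> map = new HashMap<>();
    // Shared dummy value used for every key in the backing map
    private static final Object PRESENT = new Object();

    public boolean add(E e) {
        // The element goes into the map as the key; PRESENT is the constant value
        return map.put(e, PRESENT) == null;
    }
}
```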
1. Thread unsafe
package com.jian8.juc.collection; import java.util.ArrayList; import java.util.List; import java.util.UUID; /** * Collection class insecurity * ArrayList */ public class ContainerNotSafeDemo { public static void main(String[] args) { notSafe(); } /** * Fault phenomenon * java.util.ConcurrentModificationException */ public static void notSafe() { List<String> list = new ArrayList<>(); for (int i = 1; i <= 30; i++) { new Thread(() -> { list.add(UUID.randomUUID().toString().substring(0, 8)); System.out.println(list); }, "Thread " + i).start(); } } }
report errors:
Exception in thread "Thread 10" java.util.ConcurrentModificationException
2. Cause
Caused by concurrent contention while modifying
One thread is writing while another grabs the list at the same time, producing inconsistent data and a concurrent modification exception
3. Solution: **CopyOnWriteArrayList**
```java
List<String> list = new Vector<>();                                   // Vector is thread safe
List<String> list = Collections.synchronizedList(new ArrayList<>()); // synchronized wrapper class
List<String> list = new CopyOnWriteArrayList<>();                    // copy-on-write, read/write separation

Map<String, String> map = new ConcurrentHashMap<>();
Map<String, String> map = Collections.synchronizedMap(new HashMap<>());
```
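For the HashSet and HashMap mentioned at the start of this section, the thread-safe replacements follow the same pattern; a small illustrative sketch (not part of the original code):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

public class SafeSetDemo {
    public static void main(String[] args) {
        Set<String> set1 = Collections.synchronizedSet(new HashSet<>()); // synchronized wrapper
        Set<String> set2 = new CopyOnWriteArraySet<>();                  // copy-on-write, backed by CopyOnWriteArrayList
        set1.add("a");
        set2.add("b");
        System.out.println(set1 + " " + set2);
    }
}
```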
CopyOnWriteArrayList.add method:
A CopyOnWrite container copies on write. When adding an element, instead of adding it directly to the current Object[] container, it first copies the current Object[] into a new container Object[] newElements, adds the element to the copy, and then points the reference of the original container at the new one via setArray(newElements). The benefit is that the CopyOnWrite container can be read concurrently without locking, because the container being read never has elements added to it. So CopyOnWrite embodies the idea of read/write separation: reads and writes use different containers.
```java
public boolean add(E e) {
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        Object[] elements = getArray();
        int len = elements.length;
        Object[] newElements = Arrays.copyOf(elements, len + 1);
        newElements[len] = e;
        setArray(newElements);
        return true;
    } finally {
        lock.unlock();
    }
}
```
5, Fair lock, unfair lock, reentrant lock, recursive lock, spin lock? Handwritten spin lock
1. Fair lock, unfair lock
-
What is it?
Fair locks are first come, first served; non-fair locks allow queue jumping. Lock lock = new ReentrantLock(boolean fair); the default is non-fair.
-
**Fair lock** means that multiple threads acquire the lock in the order in which they applied for it, similar to queueing for a meal: first come, first served.
-
**Unfair lock** means that multiple threads do not acquire the lock in the order in which they applied for it; a later applicant may acquire the lock first. Under high concurrency this may cause priority inversion or thread starvation.
-
The difference between the two
-
Threads acquire a fair lock in the order in which they requested it
A fair lock is very fair: in a concurrent environment, each thread that wants to acquire the lock first looks at the waiting queue maintained by the lock. If the queue is empty, or the current thread is first in the queue, it takes the lock; otherwise it joins the waiting queue, and threads later obtain the lock from the queue in FIFO order.
-
a nonfair lock permits barging: threads requesting a lock can jump ahead of the queue of waiting threads if the lock happens to be available when it is requested
A non-fair lock is ruder: an arriving thread simply tries to grab the lock directly; only if that attempt fails does it fall back to something like the fair-lock queueing approach.
-
other
For Java's ReentrantLock, whether the lock is fair or not is specified through the constructor; the default is a non-fair lock. The advantage of a non-fair lock is that its throughput is greater than that of a fair lock
synchronized is also a non-fair lock
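A minimal sketch (illustrative, not from the original notes) showing how fairness is chosen through the ReentrantLock constructor:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock nonFair = new ReentrantLock();      // no-arg constructor: non-fair by default
        ReentrantLock fair    = new ReentrantLock(true);  // true: fair lock, threads queue in FIFO order

        System.out.println(nonFair.isFair()); // false
        System.out.println(fair.isFair());    // true
    }
}
```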
2. Reentrant lock (recursive lock)
-
What is a recursive lock
After an outer method of a thread acquires the lock, inner methods called recursively by the same thread can still enter code protected by that lock. When the same thread holds the lock in an outer method, entering an inner method automatically acquires the lock; that is, a thread can enter any code block synchronized on a lock it already owns
-
ReentrantLock and synchronized are both typical reentrant locks
-
The biggest benefit of reentrant locks is avoiding deadlock
-
Code example
```java
package com.jian8.juc.lock;

// Class name assumed here; the original declaration was lost in formatting.
public class SynchronizedReentrantDemo {
    public static void main(String[] args) {
        Phone phone = new Phone();
        new Thread(() -> {
            try {
                phone.sendSMS();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, "Thread 1").start();
        new Thread(() -> {
            try {
                phone.sendSMS();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, "Thread 2").start();
    }
}

class Phone {
    public synchronized void sendSMS() throws Exception {
        System.out.println(Thread.currentThread().getName() + "\t -----invoked sendSMS()");
        Thread.sleep(3000);
        sendEmail(); // re-enters another synchronized method guarded by the same lock
    }

    public synchronized void sendEmail() throws Exception {
        System.out.println(Thread.currentThread().getName() + "\t +++++invoked sendEmail()");
    }
}
```
package com.jian8.juc.lock; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantLock; public class ReentrantLockDemo { public static void main(String[] args) { Mobile mobile = new Mobile(); new Thread(mobile).start(); new Thread(mobile).start(); } } class Mobile implements Runnable{ Lock lock = new ReentrantLock(); @Override public void run() { get(); } public void get() { lock.lock(); try { System.out.println(Thread.currentThread().getName()+"\t invoked get()"); set(); }finally { lock.unlock(); } } public void set(){ lock.lock(); try{ System.out.println(Thread.currentThread().getName()+"\t invoked set()"); }finally { lock.unlock(); } } }
3. Exclusive lock (write lock) / shared lock (read lock) / mutex lock
-
concept
-
Exclusive lock: a lock that can be held by only one thread at a time. ReentrantLock and synchronized are both exclusive locks
-
Shared lock: a lock that may be held by multiple threads at the same time.
For ReentrantReadWriteLock, the read lock is a shared lock and the write lock is an exclusive lock
-
Mutual exclusion: the shared read lock makes concurrent reads very efficient, while read/write, write/read and write/write are all mutually exclusive
-
Code example
package com.jian8.juc.lock; import java.util.HashMap; import java.util.Map; import java.util.concurrent.TimeUnit; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantReadWriteLock; /** * There is no problem for multiple threads to read a resource class at the same time, so in order to meet the concurrency, reading shared resources should be carried out at the same time. * however * If a thread is like taking and writing shared resources, it should not be free. Other threads can read or write resources * summary * Reading can coexist * Reading and writing cannot coexist * Writing cannot coexist */ public class ReadWriteLockDemo { public static void main(String[] args) { MyCache myCache = new MyCache(); for (int i = 1; i <= 5; i++) { final int tempInt = i; new Thread(() -> { myCache.put(tempInt + "", tempInt + ""); }, "Thread " + i).start(); } for (int i = 1; i <= 5; i++) { final int tempInt = i; new Thread(() -> { myCache.get(tempInt + ""); }, "Thread " + i).start(); } } } class MyCache { private volatile Map<String, Object> map = new HashMap<>(); private ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock(); /** * Write operation: atomic + exclusive * The whole process must be a complete unity, which cannot be divided or interrupted * * @param key * @param value */ public void put(String key, Object value) { rwLock.writeLock().lock(); try { System.out.println(Thread.currentThread().getName() + "\t Writing:" + key); TimeUnit.MILLISECONDS.sleep(300); map.put(key, value); System.out.println(Thread.currentThread().getName() + "\t Write complete"); } catch (Exception e) { e.printStackTrace(); } finally { rwLock.writeLock().unlock(); } } public void get(String key) { rwLock.readLock().lock(); try { System.out.println(Thread.currentThread().getName() + "\t Reading:" + key); TimeUnit.MILLISECONDS.sleep(300); Object result = map.get(key); System.out.println(Thread.currentThread().getName() + "\t Read complete: " + result); } catch (Exception e) { e.printStackTrace(); } finally { rwLock.readLock().unlock(); } } public void clear() { map.clear(); } }
4. Spin lock
-
spinlock
It means that a thread trying to acquire the lock does not block immediately, but keeps trying to acquire it in a loop. The benefit is less thread context-switch overhead; the drawback is that the looping consumes CPU
```java
public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));
    return var5;
}
```
Handwritten spin lock:
package com.jian8.juc.lock; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicReference; /** * Realize spin lock * The advantage of spin lock is that the successful position is known by cyclic comparison, and there is no blocking like wait * * Thread A can only wait for 5 seconds for the lock to be held by thread A, and then it can only wait for the lock to be held by thread B through spin. After that, thread A can only wait for 5 seconds for the lock to be released by thread B */ public class SpinLockDemo { public static void main(String[] args) { SpinLockDemo spinLockDemo = new SpinLockDemo(); new Thread(() -> { spinLockDemo.mylock(); try { TimeUnit.SECONDS.sleep(3); }catch (Exception e){ e.printStackTrace(); } spinLockDemo.myUnlock(); }, "Thread 1").start(); try { TimeUnit.SECONDS.sleep(3); }catch (Exception e){ e.printStackTrace(); } new Thread(() -> { spinLockDemo.mylock(); spinLockDemo.myUnlock(); }, "Thread 2").start(); } //Atomic reference thread AtomicReference<Thread> atomicReference = new AtomicReference<>(); public void mylock() { Thread thread = Thread.currentThread(); System.out.println(Thread.currentThread().getName() + "\t come in"); while (!atomicReference.compareAndSet(null, thread)) { } } public void myUnlock() { Thread thread = Thread.currentThread(); atomicReference.compareAndSet(thread, null); System.out.println(Thread.currentThread().getName()+"\t invoked myunlock()"); } }
6, Have you used CountDownLatch/CyclicBarrier/Semaphore
1. Countdown latch
-
It allows one or more threads to wait until the operations of other threads are completed. For example, the main thread of an application wants to execute after the thread responsible for starting the framework service has started all the framework services
-
CountDownLatch mainly has two methods. When one or more threads call await(), the calling threads block. When other threads call countDown(), the counter is decremented by 1. When the counter reaches 0, the threads blocked on await() are woken up and continue to execute
-
Code example:
package com.jian8.juc.conditionThread; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; public class CountDownLatchDemo { public static void main(String[] args) throws InterruptedException { // general(); countDownLatchTest(); } public static void general(){ for (int i = 1; i <= 6; i++) { new Thread(() -> { System.out.println(Thread.currentThread().getName()+"\t After self-study, leave the classroom"); }, "Thread-->"+i).start(); } while (Thread.activeCount()>2){ try { TimeUnit.SECONDS.sleep(2); } catch (InterruptedException e) { e.printStackTrace(); } } System.out.println(Thread.currentThread().getName()+"\t=====The monitor finally closed the door and left"); } public static void countDownLatchTest() throws InterruptedException { CountDownLatch countDownLatch = new CountDownLatch(6); for (int i = 1; i <= 6; i++) { new Thread(() -> { System.out.println(Thread.currentThread().getName()+"\t Destroyed"); countDownLatch.countDown(); }, CountryEnum.forEach_CountryEnum(i).getRetMessage()).start(); } countDownLatch.await(); System.out.println(Thread.currentThread().getName()+"\t=====Qin Tongyi"); } }
2. CyclicBarrier (gather seven dragon balls to summon the divine dragon)
-
CyclicBarrier
A barrier that can be reused cyclically. A group of threads block when they reach the barrier (also called the synchronization point); the barrier opens only when the last thread arrives, and then all the threads blocked at the barrier continue working. Threads enter the barrier through CyclicBarrier's await() method
-
Code example:
package com.jian8.juc.conditionThread; import java.util.concurrent.BrokenBarrierException; import java.util.concurrent.CyclicBarrier; public class CyclicBarrierDemo { public static void main(String[] args) { cyclicBarrierTest(); } public static void cyclicBarrierTest() { CyclicBarrier cyclicBarrier = new CyclicBarrier(7, () -> { System.out.println("====Summon the Dragon====="); }); for (int i = 1; i <= 7; i++) { final int tempInt = i; new Thread(() -> { System.out.println(Thread.currentThread().getName() + "\t Collected to No" + tempInt + "Dragon Ball"); try { cyclicBarrier.await(); } catch (InterruptedException e) { e.printStackTrace(); } catch (BrokenBarrierException e) { e.printStackTrace(); } }, "" + i).start(); } } }
3. Semaphore semaphore
Can replace synchronized and Lock
-
Semaphores are mainly used for two purposes: mutual exclusion over multiple shared resources, and controlling the number of concurrent threads
-
Code example:
Example of parking space grabbing:
package com.jian8.juc.conditionThread; import java.util.concurrent.Semaphore; import java.util.concurrent.TimeUnit; public class SemaphoreDemo { public static void main(String[] args) { Semaphore semaphore = new Semaphore(3);//Simulate three parking spaces for (int i = 1; i <= 6; i++) {//Simulate 6 cars new Thread(() -> { try { semaphore.acquire(); System.out.println(Thread.currentThread().getName() + "\t Grab the parking space"); try { TimeUnit.SECONDS.sleep(3);//Stop for 3s } catch (InterruptedException e) { e.printStackTrace(); } System.out.println(Thread.currentThread().getName() + "\t Stop 3 s Leave the parking space after"); } catch (InterruptedException e) { e.printStackTrace(); } finally { semaphore.release(); } }, "Car " + i).start(); } } }
7, Blocking queue
- **ArrayBlockingQueue** is a bounded blocking queue backed by an array; it orders elements FIFO
- **LinkedBlockingQueue** is a blocking queue backed by a linked list; it orders elements FIFO, and its throughput is usually higher than ArrayBlockingQueue
- **SynchronousQueue** is a blocking queue that does not store elements: each insert operation must wait for another thread's remove operation, otherwise the insert stays blocked. Its throughput is usually higher than LinkedBlockingQueue and ArrayBlockingQueue
1. Queue and blocking queue
-
First of all, a blocking queue is a queue; its role in the data structure is roughly as follows:
Thread 1 adds elements to the blocking queue, while thread 2 removes elements from the blocking queue
When the blocking queue is empty, the operation of getting elements from the queue will be blocked
When the blocking queue is full, the operation of adding elements to the queue is blocked
Threads trying to get elements from an empty blocking queue will be blocked until other threads insert new elements into an empty queue.
A thread trying to add a new element to a full blocking queue is also blocked, until some other thread removes one or more elements or clears the queue entirely, making the queue free again so elements can be added.
2. Why? What are the benefits?
-
In the multi-threading field, "blocking" means a thread is suspended under certain conditions; once the conditions are met, the suspended thread is woken up automatically
-
Why BlockingQueue
The advantage is that we don't need to care about when to block the thread and when to wake up the thread, because all this is done by BlockingQueue
Before the concurrent package was released, in a multi-threaded environment every programmer had to control these details personally, while also taking efficiency and thread safety into account, which added great complexity to our programs
3. The core method of BlockingQueue
| Method type | Throws exception | Special value | Blocks | Times out |
| --- | --- | --- | --- | --- |
| Insert | add(e) | offer(e) | put(e) | offer(e, time, unit) |
| Remove | remove() | poll() | take() | poll(time, unit) |
| Inspect | element() | peek() | Not available | Not available |
| Method type | Behavior |
| --- | --- |
| Throws exception | When the blocking queue is full, add(e) throws IllegalStateException: Queue full. When the queue is empty, remove() throws NoSuchElementException. |
| Special value | The insert method returns true on success, false on failure. The remove method returns the dequeued element on success, or null if the queue is empty. |
| Blocks | When the queue is full, a producer thread calling put blocks until it can put the data or it exits in response to an interrupt. When the queue is empty, a consumer thread calling take blocks until data becomes available. |
| Times out | When the queue is full, the queue blocks the producer thread for at most the given time; once the time limit is exceeded, the producer thread gives up and returns. |
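A small sketch (illustrative, not from the original notes) exercising the four method families on a bounded ArrayBlockingQueue of capacity 2:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingQueueApiDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        System.out.println(queue.add("a"));                        // true
        System.out.println(queue.offer("b"));                      // true
        System.out.println(queue.offer("c"));                      // false: full, special-value family
        // queue.add("c");                                         // would throw IllegalStateException: Queue full
        System.out.println(queue.offer("c", 1, TimeUnit.SECONDS)); // false after waiting 1s, timeout family

        System.out.println(queue.element());                       // "a": inspect without removing
        System.out.println(queue.take());                          // "a": blocking family
        System.out.println(queue.poll());                          // "b"
        System.out.println(queue.poll());                          // null: empty, special-value family
        System.out.println(queue.poll(1, TimeUnit.SECONDS));       // null after waiting 1s
    }
}
```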
4. Architecture combing + Category Analysis
-
Species analysis
- ArrayBlockingQueue: a bounded blocking queue backed by an array.
- LinkedBlockingQueue: a bounded (but defaulting to Integer.MAX_VALUE) blocking queue backed by a linked list.
- PriorityBlockingQueue: an unbounded blocking queue that supports priority ordering.
- DelayQueue: an unbounded delayed blocking queue implemented with a priority queue.
- SynchronousQueue: a blocking queue that does not store elements, i.e. a queue holding at most a single element.
- LinkedTransferQueue: an unbounded blocking queue backed by a linked list.
- LinkedBlockingDeque: a double-ended blocking queue backed by a linked list.
-
SynchronousQueue
-
Theory: SynchronousQueue has no capacity. Unlike other blockingqueues, SynchronousQueue is a BlockingQueue that does not store elements. Each put operation must wait for a take operation, otherwise elements cannot be added, and vice versa.
-
Code example
package com.jian8.juc.queue; import java.util.concurrent.BlockingQueue; import java.util.concurrent.SynchronousQueue; import java.util.concurrent.TimeUnit; /** * ArrayBlockingQueue Is a bounded blocking queue based on array structure, which sorts elements according to FIFO principle * LinkedBlockingQueue It is a blocking queue based on linked list structure. This queue sorts elements according to FIFO, and the throughput is usually higher than ArrayBlockingQueue * SynchronousQueue It is a blocking queue that does not store elements. An insert operation must wait until another thread calls the remove operation. Otherwise, the insert operation is always blocked, and the throughput is usually higher than * 1.queue * 2.Blocking queue * 2.1 Is there a good side to blocking queues * 2.2 Have to block, how do you manage */ public class SynchronousQueueDemo { public static void main(String[] args) throws InterruptedException { BlockingQueue<String> blockingQueue = new SynchronousQueue<>(); new Thread(() -> { try { System.out.println(Thread.currentThread().getName() + "\t put 1"); blockingQueue.put("1"); System.out.println(Thread.currentThread().getName() + "\t put 2"); blockingQueue.put("2"); System.out.println(Thread.currentThread().getName() + "\t put 3"); blockingQueue.put("3"); } catch (InterruptedException e) { e.printStackTrace(); } }, "AAA").start(); new Thread(() -> { try { TimeUnit.SECONDS.sleep(5); System.out.println(Thread.currentThread().getName() + "\ttake " + blockingQueue.take()); TimeUnit.SECONDS.sleep(5); System.out.println(Thread.currentThread().getName() + "\ttake " + blockingQueue.take()); TimeUnit.SECONDS.sleep(5); System.out.println(Thread.currentThread().getName() + "\ttake " + blockingQueue.take()); } catch (InterruptedException e) { e.printStackTrace(); } }, "BBB").start(); } }
-
5. Where is it used
-
Producer consumer model
-
Traditional edition
package com.jian8.juc.queue; import java.util.concurrent.locks.Condition; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantLock; /** * For a variable with an initial value of zero, two threads operate alternately, one plus 1 and one minus 1, for 5 rounds * 1. Thread operation resource class * 2. Judgment work notice * 3. Mechanism to prevent false arousal */ public class ProdConsumer_TraditionDemo { public static void main(String[] args) { ShareData shareData = new ShareData(); for (int i = 1; i <= 5; i++) { new Thread(() -> { try { shareData.increment(); } catch (Exception e) { e.printStackTrace(); } }, "ProductorA " + i).start(); } for (int i = 1; i <= 5; i++) { new Thread(() -> { try { shareData.decrement(); } catch (Exception e) { e.printStackTrace(); } }, "ConsumerA " + i).start(); } for (int i = 1; i <= 5; i++) { new Thread(() -> { try { shareData.increment(); } catch (Exception e) { e.printStackTrace(); } }, "ProductorB " + i).start(); } for (int i = 1; i <= 5; i++) { new Thread(() -> { try { shareData.decrement(); } catch (Exception e) { e.printStackTrace(); } }, "ConsumerB " + i).start(); } } } class ShareData {//Resource class private int number = 0; private Lock lock = new ReentrantLock(); private Condition condition = lock.newCondition(); public void increment() throws Exception { lock.lock(); try { //1. Judgment while (number != 0) { //Waiting cannot produce condition.await(); } //2. Work number++; System.out.println(Thread.currentThread().getName() + "\t" + number); //3. Notice condition.signalAll(); } catch (Exception e) { e.printStackTrace(); } finally { lock.unlock(); } } public void decrement() throws Exception { lock.lock(); try { //1. Judgment while (number == 0) { //Waiting cannot be consumed condition.await(); } //2. Consumption number--; System.out.println(Thread.currentThread().getName() + "\t" + number); //3. Notice condition.signalAll(); } catch (Exception e) { e.printStackTrace(); } finally { lock.unlock(); } } }
-
Blocking queue version
package com.jian8.juc.queue; import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.BlockingQueue; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; public class ProdConsumer_BlockQueueDemo { public static void main(String[] args) { MyResource myResource = new MyResource(new ArrayBlockingQueue<>(10)); new Thread(() -> { System.out.println(Thread.currentThread().getName() + "\t Production thread start"); try { myResource.myProd(); } catch (Exception e) { e.printStackTrace(); } }, "Prod").start(); new Thread(() -> { System.out.println(Thread.currentThread().getName() + "\t Consumer thread start"); try { myResource.myConsumer(); } catch (Exception e) { e.printStackTrace(); } }, "Consumer").start(); try { TimeUnit.SECONDS.sleep(5); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("5s after main Stop, end of thread"); try { myResource.stop(); } catch (Exception e) { e.printStackTrace(); } } } class MyResource { private volatile boolean flag = true;//It is enabled by default for production + consumption private AtomicInteger atomicInteger = new AtomicInteger(); BlockingQueue<String> blockingQueue = null; public MyResource(BlockingQueue<String> blockingQueue) { this.blockingQueue = blockingQueue; System.out.println(blockingQueue.getClass().getName()); } public void myProd() throws Exception { String data = null; boolean retValue; while (flag) { data = atomicInteger.incrementAndGet() + ""; retValue = blockingQueue.offer(data, 2, TimeUnit.SECONDS); if (retValue) { System.out.println(Thread.currentThread().getName() + "\t Insert queue" + data + "success"); } else { System.out.println(Thread.currentThread().getName() + "\t Insert queue" + data + "fail"); } TimeUnit.SECONDS.sleep(1); } System.out.println(Thread.currentThread().getName() + "\t The big boss stopped, flag=false,End of production"); } public void myConsumer() throws Exception { String result = null; while (flag) { result = blockingQueue.poll(2, TimeUnit.SECONDS); if (null == result || result.equalsIgnoreCase("")) { flag = false; System.out.println(Thread.currentThread().getName() + "\t More than 2 s No cake, consumption exit"); System.out.println(); return; } System.out.println(Thread.currentThread().getName() + "\t Consumption queue" + result + "success"); } } public void stop() throws Exception { flag = false; } }
-
Thread pool
-
Message Oriented Middleware
6. What's the difference between synchronized and lock? What are the benefits of using the new lock? Please give an example
difference
-
Original composition
-
synchronized is a keyword and belongs to the JVM level.
Its underlying implementation relies on the monitorenter and monitorexit instructions, which work through the monitor object. Methods such as wait/notify also depend on the monitor object, which is why wait/notify can only be called inside a synchronized block or method.
-
Lock is a concrete class, an API-level lock (java.util.concurrent.locks.Lock)
-
-
usage method
- Synchronized does not require the user to manually release the lock. When the synchronized code is executed, the system will automatically let the thread release the occupation of the lock
- ReentrantLock requires the user to release the lock manually. If the lock is not released actively, it may lead to deadlock. It needs the lock() and unlock() methods to cooperate with the try/finally statement block
-
Is waiting interruptible
- synchronized cannot be interrupted unless an exception is thrown or normal operation is completed
- ReentrantLock can be interrupted: use the timed tryLock(long timeout, TimeUnit unit), or use lockInterruptibly() in the code block, in which case calling interrupt() breaks the wait (see the sketch below).
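A minimal sketch (illustrative, not from the original notes) of the timed and interruptible acquisition APIs mentioned above:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main thread holds the lock for the whole demo

        Thread worker = new Thread(() -> {
            try {
                // Timed attempt: gives up after 1 second instead of blocking forever
                if (!lock.tryLock(1, TimeUnit.SECONDS)) {
                    System.out.println("tryLock timed out");
                }
                // Interruptible attempt: the blocked wait below can be cancelled by interrupt()
                lock.lockInterruptibly();
                try {
                    System.out.println("never reached in this demo");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("interrupted while waiting for the lock");
            }
        });

        worker.start();
        TimeUnit.SECONDS.sleep(2); // let the timed attempt fail first
        worker.interrupt();        // wake the thread blocked in lockInterruptibly()
        worker.join();
        lock.unlock();
    }
}
```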
-
Is locking fair
- synchronized: non-fair lock
- ReentrantLock: can be either; the default is a non-fair lock. The constructor can take a boolean value: true gives a fair lock, false a non-fair lock
-
Lock binding multiple conditions
- synchronized: not supported
- ReentrantLock: combined with Condition, it can wake up exactly the threads that need to be woken (precise wake-up), instead of waking one thread at random or all threads as synchronized does.
package com.jian8.juc.lock; import java.util.concurrent.locks.Condition; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantLock; /** * synchronized What's the difference between and lock * <p===lock Multiple conditions can be bound=== * Call the threads in sequence to realize the startup of three threads a > b > C. the requirements are as follows: * AA Print 5 times, BB print 10 times, CC print 15 times * Then * AA Print 5 times, BB print 10 times, CC print 15 times * . . . . * Ten rounds */ public class SyncAndReentrantLockDemo { public static void main(String[] args) { ShareData shareData = new ShareData(); new Thread(() -> { for (int i = 1; i <= 10; i++) { shareData.print5(); } }, "A").start(); new Thread(() -> { for (int i = 1; i <= 10; i++) { shareData.print10(); } }, "B").start(); new Thread(() -> { for (int i = 1; i <= 10; i++) { shareData.print15(); } }, "C").start(); } } class ShareData { private int number = 1;//A:1 B:2 C:3 private Lock lock = new ReentrantLock(); private Condition condition1 = lock.newCondition(); private Condition condition2 = lock.newCondition(); private Condition condition3 = lock.newCondition(); public void print5() { lock.lock(); try { //judge while (number != 1) { condition1.await(); } //work for (int i = 1; i <= 5; i++) { System.out.println(Thread.currentThread().getName() + "\t" + i); } //notice number = 2; condition2.signal(); } catch (Exception e) { e.printStackTrace(); } finally { lock.unlock(); } } public void print10() { lock.lock(); try { //judge while (number != 2) { condition2.await(); } //work for (int i = 1; i <= 10; i++) { System.out.println(Thread.currentThread().getName() + "\t" + i); } //notice number = 3; condition3.signal(); } catch (Exception e) { e.printStackTrace(); } finally { lock.unlock(); } } public void print15() { lock.lock(); try { //judge while (number != 3) { condition3.await(); } //work for (int i = 1; i <= 15; i++) { System.out.println(Thread.currentThread().getName() + "\t" + i); } //notice number = 1; condition1.signal(); } catch (Exception e) { e.printStackTrace(); } finally { lock.unlock(); } } }
8, Has the thread pool been used? ThreadPoolExecutor talk about your understanding
1. Use of Callable interface
package com.jian8.juc.thread; import java.util.concurrent.Callable; import java.util.concurrent.ExecutionException; import java.util.concurrent.FutureTask; import java.util.concurrent.TimeUnit; /** * In multithreading, the third way to obtain multithreading */ public class CallableDemo { public static void main(String[] args) throws ExecutionException, InterruptedException { //FutureTask(Callable<V> callable) FutureTask<Integer> futureTask = new FutureTask<Integer>(new MyThread2()); new Thread(futureTask, "AAA").start(); // new Thread(futureTask, "BBB").start();// Reuse, take values directly, and do not restart two threads int a = 100; int b = 0; //b = futureTask.get();// It is required to obtain the calculation result of the Callable thread. If the calculation is not completed, it will be forced, which will lead to congestion until the calculation is completed while (!futureTask.isDone()) {//Value after futureTask is completed b = futureTask.get(); } System.out.println("*******Result" + (a + b)); } } class MyThread implements Runnable { @Override public void run() { } } class MyThread2 implements Callable<Integer> { @Override public Integer call() throws Exception { System.out.println("Callable come in"); try { TimeUnit.SECONDS.sleep(5); } catch (InterruptedException e) { e.printStackTrace(); } return 1024; } }
2. Why use thread pools
-
The main job of a thread pool is to control the number of running threads: submitted tasks are put into a queue and executed once threads are available. If the number of tasks exceeds the maximum number of threads, the excess tasks queue up and wait; when some thread finishes executing, a task is taken from the queue and executed
-
main features
Thread reuse, control the maximum number of concurrent threads, and manage threads
- Reduce resource consumption: reuse already-created threads to reduce the cost of thread creation and destruction
- Improve response speed: when a task arrives it can be executed immediately, without waiting for a thread to be created
- Improve thread manageability: threads are a scarce resource; creating them without limit consumes system resources and reduces system stability. A thread pool allows unified allocation, tuning and monitoring
3. How to use thread pool
-
Architecture description
The thread pool in Java is implemented through the Executor framework, whose core types are Executor, Executors, ExecutorService and ThreadPoolExecutor
-
Coding implementation
There are five common implementations. Executors.newScheduledThreadPool() is for timed scheduling, and Java 8 introduced Executors.newWorkStealingPool(int), which uses the processors currently available on the machine as its parallelism level
The three most important ones are:
-
Executors.newFixedThreadPool(int)
Suitable for executing long-running tasks; performance is much better
Creates a fixed-length thread pool that controls the maximum number of concurrent threads; tasks beyond that wait in the queue.
newFixedThreadPool creates a pool whose corePoolSize and maximumPoolSize are equal, and it uses a LinkedBlockingQueue
-
Executors.newSingleThreadExecutor()
Suitable for scenarios where tasks must be executed one at a time
Creates a single-threaded pool; it uses its only worker thread to execute tasks, guaranteeing that all tasks execute in the specified order
newSingleThreadExecutor sets both corePoolSize and maximumPoolSize to 1 and uses a LinkedBlockingQueue
-
Executors.newCachedThreadPool()
Suitable for executing many short-lived asynchronous tasks, or for lightly loaded servers
Creates a cacheable thread pool. If the pool grows beyond what is needed, idle threads can be flexibly reclaimed; if no reusable thread exists, a new thread is created.
newCachedThreadPool sets corePoolSize to 0 and maximumPoolSize to Integer.MAX_VALUE and uses a SynchronousQueue: when a task arrives, a thread is created (or reused) to run it, and a thread idle for more than 60s is destroyed
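The three claims above can be checked against the JDK source of Executors; the sketch below reproduces the equivalent ThreadPoolExecutor constructions (variable names are illustrative):

```java
import java.util.concurrent.*;

public class ExecutorsInternals {
    public static void main(String[] args) {
        int nThreads = 4; // example pool size

        // Equivalent of Executors.newFixedThreadPool(nThreads)
        ExecutorService fixed = new ThreadPoolExecutor(nThreads, nThreads,
                0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());

        // Equivalent of Executors.newSingleThreadExecutor() (minus its delegating wrapper)
        ExecutorService single = new ThreadPoolExecutor(1, 1,
                0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());

        // Equivalent of Executors.newCachedThreadPool(): threads idle longer than 60s are reclaimed
        ExecutorService cached = new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());

        fixed.shutdown();
        single.shutdown();
        cached.shutdown();
    }
}
```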
-
-
ThreadPoolExecutor
4. Introduction to several important parameters of thread pool
```java
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler)
```
- corePoolSize: the number of resident core threads in the thread pool
- After the thread pool is created, when a request task comes, the thread in the pool will be arranged to execute the request task
- When the number of threads in the thread pool reaches the corePoolSize, the arriving tasks will be put into the cache queue
- maximumPoolSize: the maximum number of threads that can be executed simultaneously in the thread pool. It must be greater than or equal to 1
- keepAliveTime: the survival time of redundant idle threads
- When the current number of threads exceeds corePoolSize and a thread has been idle for keepAliveTime, the redundant idle threads are destroyed until only corePoolSize threads remain
- Unit: the unit of keepAliveTime
- workQueue: the task queue, holding tasks that have been submitted but not yet executed
- threadFactory: refers to the thread factory that generates the working threads in the thread pool. It is used to create threads. Generally, the default is used
- handler: the rejection policy, which says how to reject new Runnable requests when the queue is full and the number of worker threads is greater than or equal to maximumPoolSize
5. The underlying working principle of thread pool
technological process
-
After the thread pool is created, it waits for submitted task requests.
-
When calling the execute() method to add a request task, the thread pool will make the following judgment
2.1 If the number of running threads is less than corePoolSize, create a thread and run this task immediately;
2.2 If the number of running threads is greater than or equal to corePoolSize, put the task into the queue;
2.3 If the queue is full and the number of running threads is less than maximumPoolSize, create a non-core thread and run the task immediately;
2.4 If the queue is full and the number of running threads is greater than or equal to maximumPoolSize, trigger the saturation (rejection) policy.
-
When a thread completes a task, it will execute the next task from the queue
-
When a thread has nothing to do for more than a certain time (keepAliveTime), the thread pool will judge:
If the number of currently running threads is greater than corePoolSize, this thread is stopped. So after all the tasks of the thread pool are finished, it eventually shrinks to at most corePoolSize threads
9, Has the thread pool been used? How do you set reasonable parameters in production
1. Rejection policy of thread pool
-
What is a rejection policy
The waiting queue is already full and cannot hold any more new tasks, and at the same time the number of threads in the pool has reached maximumPoolSize and cannot serve new tasks either. At this point the rejection policy mechanism is needed to handle the problem reasonably.
-
JDK built-in rejection policy
-
AbortPolicy (default)
Throw the RejectedExecutionException exception directly to prevent the normal operation of the system
-
CallerRunsPolicy
A "caller runs" adjustment mechanism: it neither discards the task nor throws an exception, but pushes the task back to the caller, thereby reducing the flow of new tasks
-
DiscardOldestPolicy
Discard the longest waiting task in the queue, then add the current task to the queue and try to submit the current task again
-
DiscardPolicy
Directly discard the task without any processing or exception. If the task is allowed to be lost, this is the best solution
-
-
All of them implement the RejectedExecutionHandler interface
2. Which of the three single / fixed / variable methods you use to create thread pools in your work
We can only use custom ones in production!!!!
Why?
Do not create thread pools with Executors; use ThreadPoolExecutor instead, to avoid the risk of resource exhaustion
FixedThreadPool and SingleThreadPool allow a request queue length of Integer.MAX_VALUE, which may accumulate a huge number of requests; CachedThreadPool and ScheduledThreadPool allow up to Integer.MAX_VALUE threads to be created, which may create a huge number of threads and lead to OOM
3. How do you use thread pool in your work? Have you customized the use of thread pool
package com.jian8.juc.thread; import java.util.concurrent.*; /** * The fourth way to obtain java multithreading -- thread pool */ public class MyThreadPoolDemo { public static void main(String[] args) { ExecutorService threadPool = new ThreadPoolExecutor(3, 5, 1L, TimeUnit.SECONDS, new LinkedBlockingDeque<>(3), Executors.defaultThreadFactory(), new ThreadPoolExecutor.DiscardPolicy()); //new ThreadPoolExecutor.AbortPolicy(); //new ThreadPoolExecutor.CallerRunsPolicy(); //new ThreadPoolExecutor.DiscardOldestPolicy(); //new ThreadPoolExecutor.DiscardPolicy(); try { for (int i = 1; i <= 10; i++) { threadPool.execute(() -> { System.out.println(Thread.currentThread().getName() + "\t Handle the business"); }); } } catch (Exception e) { e.printStackTrace(); } finally { threadPool.shutdown(); } } }
4. How do you consider reasonably configuring thread pools?
-
CPU intensive
CPU intensive means that the task requires a lot of computation without blocking, and the CPU runs at full speed all the time
CPU intensive tasks can only be accelerated (through multithreading) on real multi-core CPUs
On a single-core CPU, no matter how many simulated threads you open, the task cannot be accelerated, because the total computing power of the CPU is only that much
CPU intensive tasks are configured with as few threads as possible:
General formula: a thread pool of (number of CPU cores + 1) threads
-
IO intensive
-
Since IO intensive task threads are not always executing tasks, they should be configured with as many threads as possible, such as CPU cores * 2
-
IO intensive, that is, the task requires a lot of IO, that is, a lot of blocking.
Running IO-intensive tasks on a single thread wastes a lot of CPU computing power in waiting.
Therefore, using multithreading in IO intensive tasks can greatly speed up the running of programs. Even on a single core CPU, this acceleration mainly takes advantage of the wasted blocking time.
In IO intensive mode, most threads are blocked, so you need to configure the number of threads:
Reference formula: CPU cores / (1 - blocking coefficient), where the blocking coefficient is between 0.8 and 0.9
Eight-core CPU: 8 / (1 - 0.9) = 80
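A small sketch (illustrative, not from the original notes) that turns the two formulas into code; the blocking coefficient is an assumed estimate:

```java
public class PoolSizeEstimator {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-intensive: roughly "number of cores + 1"
        int cpuBoundThreads = cores + 1;

        // IO-intensive: cores / (1 - blocking coefficient), coefficient usually 0.8 ~ 0.9 (assumed 0.9 here)
        double blockingCoefficient = 0.9;
        int ioBoundThreads = (int) (cores / (1 - blockingCoefficient));

        System.out.println("cores = " + cores);
        System.out.println("CPU-bound pool size ~ " + cpuBoundThreads);
        System.out.println("IO-bound pool size ~ " + ioBoundThreads);
    }
}
```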
-
10, Deadlock coding and location analysis
-
What is it?
Deadlock is the phenomenon in which two or more processes (or threads), while executing, wait for each other because they are competing for resources; without outside intervention none of them can proceed. If system resources are sufficient and every process's resource requests can be satisfied, the probability of deadlock is low; otherwise, competing for limited resources can lead to deadlock.
-
Main causes of deadlock
- Insufficient system resources
- The sequence of process running is not appropriate
- Improper allocation of resources
-
Deadlock example
package com.jian8.juc.thread; import java.util.concurrent.TimeUnit; /** * Deadlock refers to the phenomenon that two or more processes wait for each other due to competing for resources in the process of implementation. If there is no external interference, they will not be able to move forward, */ public class DeadLockDemo { public static void main(String[] args) { String lockA = "lockA"; String lockB = "lockB"; new Thread(new HoldThread(lockA,lockB),"Thread-AAA").start(); new Thread(new HoldThread(lockB,lockA),"Thread-BBB").start(); } } class HoldThread implements Runnable { private String lockA; private String lockB; public HoldThread(String lockA, String lockB) { this.lockA = lockA; this.lockB = lockB; } @Override public void run() { synchronized (lockA) { System.out.println(Thread.currentThread().getName() + "\t Own:" + lockA + "\t Try to get:" + lockB); try { TimeUnit.SECONDS.sleep(2); } catch (InterruptedException e) { e.printStackTrace(); } synchronized (lockB) { System.out.println(Thread.currentThread().getName() + "\t Own:" + lockB + "\t Try to get:" + lockA); } } } }
-
solve
- Use jps -l to locate the process number
- Use jstack <pid> to view the thread dump and locate the deadlock