Before JDK 1.5, synchronized and volatile were the only coordination mechanisms for shared objects. JDK 1.5 added a new mechanism, ReentrantLock. It was not born to replace synchronized, but to provide advanced, optional capabilities for cases where synchronized is not suitable.
synchronized is an exclusive, pessimistic lock that is implemented implicitly by the JVM; it allows only one thread to operate on a resource at a time.
In Java, every object implicitly contains a monitor object, and locking is really the process of competing for that monitor. When a thread executes the monitorenter bytecode instruction it acquires the monitor, and it releases the monitor when it executes monitorexit. A thread that fails to obtain the monitor must block and wait until the monitor becomes available.
ReentrantLock is the default implementation of the Lock interface. It is built on AQS (AbstractQueuedSynchronizer) and is non-fair by default. AQS maintains a state field that indicates whether the lock is occupied: if state is 0, the lock is free, and a thread can change state to 1 to successfully acquire the lock, while threads that fail to acquire it must queue up and wait for the lock resource.
Both synchronized and ReentrantLock provide locking with mutual exclusion and visibility guarantees. In JDK 1.5, the performance of synchronized was much lower than that of ReentrantLock, but after the optimizations in JDK 1.6 the performance of synchronized is only slightly lower than that of ReentrantLock. Their main differences are as follows:
- synchronized is implemented implicitly by the JVM, while ReentrantLock is an API provided by the Java language;
- ReentrantLock can be configured as a fair lock, but synchronized cannot;
- ReentrantLock can only lock code blocks, while synchronized can be applied to both methods and code blocks;
- ReentrantLock must be locked and released manually; if you forget to release it, the resource is occupied permanently, whereas synchronized releases the lock automatically;
- ReentrantLock can tell you whether the lock was successfully acquired, but synchronized cannot.
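To illustrate the last point, here is a minimal sketch (the class and method names are ours, not from the original text) showing how ReentrantLock.tryLock() lets the caller know whether the lock was acquired instead of blocking:

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockExample {
    private static final ReentrantLock lock = new ReentrantLock();

    public static boolean tryDoWork() {
        // tryLock() returns immediately: true if the lock was acquired, false otherwise
        if (lock.tryLock()) {
            try {
                return true; // we know for certain that we hold the lock here
            } finally {
                lock.unlock();
            }
        }
        return false; // the lock is held elsewhere; we can react instead of blocking
    }

    public static void main(String[] args) {
        System.out.println(tryDoWork()); // uncontended, so this prints true
    }
}
```

With synchronized there is no equivalent: the thread either enters the block or blocks indefinitely.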
Analysis of ReentrantLock source code
First, let's look at the two constructors of ReentrantLock:
```java
public ReentrantLock() {
    sync = new NonfairSync(); // non-fair lock
}

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}
```
The no-argument constructor creates a non-fair lock; with the second constructor, the caller can pass a boolean to decide whether thread scheduling should use a fair lock.
- Fair lock vs. non-fair lock
A fair lock means that threads acquire the lock in the order in which they requested it. A non-fair lock allows "queue jumping": if the lock happens to become available just as a thread requests it, that thread can skip all the threads waiting in the queue and take the lock directly.
However, a fair lock incurs extra overhead from suspending and resuming threads, so its performance is worse than that of a non-fair lock. For this reason, both ReentrantLock and synchronized are non-fair by default.
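A small sketch of the two construction styles (the class name FairnessExample is ours); ReentrantLock.isFair() reports which mode was chosen:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessExample {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();   // no-argument constructor: non-fair
        ReentrantLock fair = new ReentrantLock(true); // boolean constructor: fair, FIFO hand-off

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```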
ReentrantLock obtains the lock through lock() and releases it through unlock(). The code is as follows:
```java
Lock lock = new ReentrantLock();
lock.lock(); // acquire the lock before entering the try block
try {
    // ... business processing
} finally {
    // always release the lock
    lock.unlock();
}
```
In ReentrantLock, lock() delegates to sync.lock(), but lock() in the Sync class is an abstract method that must be implemented by the subclasses NonfairSync and FairSync. The source code of lock() in NonfairSync is as follows:
```java
final void lock() {
    // fast path: try to CAS state from 0 to 1 without queuing
    if (compareAndSetState(0, 1))
        // set the current thread as the holder of this lock
        setExclusiveOwnerThread(Thread.currentThread());
    else
        acquire(1);
}
```
The source code of lock() in FairSync is as follows:
```java
final void lock() {
    acquire(1);
}
```
As you can see, the non-fair lock differs from the fair lock only by the extra compareAndSetState call. This method tries to change the state value from 0 to 1; if it succeeds, no other thread currently holds the lock, so the current thread can take it immediately without queuing. Otherwise, it must go through the acquire method.
The source code of acquire is as follows:
```java
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
```
The tryAcquire method attempts to acquire the lock; if it fails, the thread is added to the blocking queue. Let's look at the source code of tryAcquire (the fair-lock version shown here):
```java
protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // The fair lock has one extra check compared with the non-fair lock:
        // !hasQueuedPredecessors()
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) { // attempt to acquire the lock
            setExclusiveOwnerThread(current);  // success: mark the lock as owned
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) { // re-entry by the owner
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc); // state = state + acquires
        return true;
    }
    return false;
}
```
In this method, the fair lock has only one extra check compared with the non-fair lock: !hasQueuedPredecessors(), which checks whether any thread in the queue has been waiting longer than the current one. If not, the method attempts to acquire the lock; on success, the lock is marked as occupied.
If lock acquisition fails, the addWaiter method is called to wrap the thread in a Node object and append it to the queue. addWaiter itself does not attempt to acquire the lock; the acquireQueued method then tries to acquire it, and if that also fails, the Node is suspended. The source code is as follows:
```java
/**
 * A thread in the queue tries to acquire the lock; if it fails, it is suspended.
 */
final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true; // whether the acquisition ultimately failed
    try {
        boolean interrupted = false; // whether the thread was interrupted
        for (;;) {
            // get the previous node (predecessor)
            final Node p = node.predecessor();
            // only the node right behind the head may try to acquire the lock
            if (p == head && tryAcquire(arg)) {
                setHead(node);   // make the current node the new head
                p.next = null;   // the old head leaves the queue, eligible for GC
                failed = false;  // acquisition succeeded
                return interrupted;
            }
            // decide whether the thread can be suspended after a failed attempt
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                // parkAndCheckInterrupt() returns true if the thread was interrupted
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}
```
This method uses a for (;;) infinite loop to keep trying to acquire the lock. If acquisition fails, it calls the shouldParkAfterFailedAcquire method to decide whether the current thread may be suspended. The source code is as follows:
```java
/**
 * Determines whether the thread can be suspended.
 */
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    // get the state of the predecessor node
    int ws = pred.waitStatus;
    // if the predecessor's state is SIGNAL, the current thread can be suspended (parked)
    if (ws == Node.SIGNAL)
        return true;
    if (ws > 0) {
        do {
            // the predecessor was CANCELLED: keep skipping backwards
            // until a node in a normal waiting state is found
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        // link the current node behind it
        pred.next = node;
    } else {
        // change the predecessor's state to SIGNAL
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}
```
The precondition for a queued thread to be suspended is that its predecessor's state is SIGNAL, which means "the successor node is waiting; after the current node releases the lock, it will wake up its successor." So the code above first checks the predecessor's state: if it is SIGNAL, the current thread can be suspended and the method returns true. If the predecessor's state is greater than 0, the predecessor has been cancelled, so the thread keeps walking backwards until it finds a predecessor in a normal waiting state and links itself behind it. If the predecessor is normal (not cancelled), its state is changed to SIGNAL.
At this point, the whole locking process is complete: threads that fail to get the lock end up suspended in the queue, and when the thread that owns the lock releases it, it wakes up the next waiting thread to contend for the lock.
Compared with lock, unlock is much simpler. The source code is as follows:
```java
public void unlock() {
    sync.release(1);
}

public final boolean release(int arg) {
    // try to release the lock
    if (tryRelease(arg)) {
        // released successfully
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h); // wake up the thread of the head's successor
        return true;
    }
    return false;
}
```
The release process is: first call tryRelease to attempt to release the lock. If the release succeeds, check whether the head node's waitStatus is non-zero (i.e. SIGNAL); if so, wake up the thread associated with the head node's successor and return true. If the release fails, return false.
The source code of tryRelease is as follows:
```java
/**
 * Tries to release the lock held by the current thread.
 */
protected final boolean tryRelease(int releases) {
    int c = getState() - releases; // state after release; 0 means fully released
    // throw an exception if the calling thread does not own the lock
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) { // the lock is fully released
        free = true;
        setExclusiveOwnerThread(null); // clear the exclusive owner thread
    }
    setState(c); // update the state; 0 means the lock is free
    return free;
}
```
The tryRelease method first checks whether the current thread owns the lock; if not, it throws an exception. If it does, it computes the new state value, getState() - releases. If the result is 0, the lock can be fully released, so the exclusive owner thread is cleared. Finally, it updates the lock state and returns the result.
JDK 1.6 lock optimization
- Adaptive spin lock
When JDK 1.5 was upgraded to JDK 1.6, the HotSpot virtual machine team put great effort into lock optimization, for example by implementing adaptive spinning and lock upgrading.
JDK 1.6 introduced the adaptive spin lock, which means the spin time is no longer fixed. For example, if spinning recently succeeded in acquiring a given lock, the virtual machine assumes the next spin is also likely to succeed, so it allows a relatively long spin-wait. Conversely, if spinning rarely succeeds for a lock, future acquisitions may skip the spin phase entirely to avoid wasting CPU resources. This is what makes the spin lock "adaptive".
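The JVM's spinning is internal, but the core idea of spinning on a CAS instead of parking the thread can be sketched with a toy spin lock. This is our own illustration (non-adaptive, non-reentrant), not HotSpot's implementation; Thread.onSpinWait() requires JDK 9+:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal spin lock: a contending thread busy-waits on a CAS
// instead of being suspended by the operating system.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // spin until we succeed in flipping false -> true
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

Spinning pays off only when the lock is held briefly; that is exactly the trade-off the adaptive policy tunes at runtime.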
- Lock upgrade
Lock upgrading is the process of going from a biased lock to a lightweight lock and then to a heavyweight lock. This optimization was introduced in JDK 1.6 and is also known as lock inflation.
A biased lock is a lock state intended for the uncontended case: the lock is "biased" toward the first thread that acquires it. When the lock object is acquired for the first time, the flag in the object header is set to "01" to indicate biased mode, and the ID of the owning thread is recorded in the object header. From then on, each time the biased thread enters the lock, no synchronization operation is performed at all. Biased mode ends only when another thread attempts to acquire the lock. Biased locking improves the performance of programs that use synchronization but have no actual contention; however, if most locks are always accessed by different threads, biased mode is pure overhead, and you can disable it with -XX:-UseBiasedLocking to improve performance.
The lightweight lock is defined relative to the heavyweight lock. Before JDK 1.6, synchronized was implemented with the operating system's mutex, which requires switching between user mode and kernel mode and is therefore expensive. This traditional implementation is called the heavyweight lock.
The lightweight lock is implemented with CAS (Compare-and-Swap): the virtual machine uses CAS to update the object's Mark Word (an area in the object header) with a reference to the current thread's lock record. If the update succeeds, the current thread owns the lock. If it fails, the virtual machine first checks whether the object's Mark Word already points to the current thread's stack frame; if it does, the current thread already owns the lock, otherwise the lock is held by another thread. When two or more threads contend for the same lock, the lightweight lock inflates into a heavyweight lock. This is the lock-upgrade process, and it completes the JDK 1.6 lock optimizations.
Pessimistic lock and optimistic lock
A pessimistic lock takes a conservative attitude toward external modification of data: it assumes that other threads will readily modify the data, so it holds a lock for the entire duration of the modification. Only after one thread has finished can other threads proceed.
Let's look at how a pessimistic lock works, using synchronized as an example:
```java
public class LockExample {
    public static void main(String[] args) {
        synchronized (LockExample.class) {
            System.out.println("lock");
        }
    }
}
```
The results of decompilation are as follows:
```
Compiled from "LockExample.java"
public class com.lagou.interview.ext.LockExample {
  public com.lagou.interview.ext.LockExample();
    Code:
       0: aload_0
       1: invokespecial #1      // Method java/lang/Object."<init>":()V
       4: return

  public static void main(java.lang.String[]);
    Code:
       0: ldc           #2      // class com/lagou/interview/ext/LockExample
       2: dup
       3: astore_1
       4: monitorenter          // lock
       5: getstatic     #3      // Field java/lang/System.out:Ljava/io/PrintStream;
       8: ldc           #4      // String lock
      10: invokevirtual #5      // Method java/io/PrintStream.println:(Ljava/lang/String;)V
      13: aload_1
      14: monitorexit           // release the lock
      15: goto          23
      18: astore_2
      19: aload_1
      20: monitorexit
      21: aload_2
      22: athrow
      23: return
    Exception table:
       from    to  target type
           5    15    18   any
          18    21    18   any
}
```
As you can see, a block modified by synchronized executes the monitorenter instruction to lock before the block runs and the monitorexit instruction to release the lock afterwards. The code is locked for the entire execution period, which is a typical pessimistic-lock implementation.
Optimistic locking is the exact opposite of pessimistic locking: it assumes that, in the common case, modifications do not conflict, so it does not lock before accessing the data and checks for conflicts only when the change is submitted.
Most optimistic locks in Java are implemented with the CAS (Compare-and-Swap) operation, an atomic instruction for multi-thread synchronization. A CAS operation involves three pieces of information: the memory location, the expected original value, and the new value. If the value at the memory location equals the expected original value, the location is updated to the new value; otherwise, no modification is made.
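These three pieces of information map directly onto compareAndSet(expected, new) in java.util.concurrent.atomic.AtomicInteger; a minimal illustration:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasExample {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);

        // the expected original value matches -> the update succeeds
        boolean updated = value.compareAndSet(10, 20);
        System.out.println(updated + " " + value.get()); // true 20

        // the expected original value no longer matches -> no modification is made
        boolean failed = value.compareAndSet(10, 30);
        System.out.println(failed + " " + value.get()); // false 20
    }
}
```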
CAS can cause the ABA problem. Suppose a thread reads the expected original value A, but before it performs the CAS, other threads preempt it and change the value from A to B, and then from B back to A. The value is no longer "the original A", but the first thread has no way of knowing this: its CAS only compares against the expected value A, so the update succeeds anyway. This is the ABA problem.
Here is an analogy. Suppose someone keeps a box of cash at home and plans to carry it away in a few minutes. While he is not looking, a thief sneaks in and swaps the box full of money for an identical empty box. When the owner returns, the box looks exactly the same, so he assumes it is the original and takes it away. Clearly something has gone wrong, because the box is already empty: this is the ABA problem.
The common fix for ABA is to add a version number and bump it on every modification. Continuing the example above, suppose the box's position changes slightly every time it is moved, and this position acts as the "version number". When the owner returns and notices that the position has changed, he knows someone has tampered with the box and abandons his plan. This solves the ABA problem.
JDK 1.5 provides the AtomicStampedReference class, which solves the ABA problem in the same way: it maintains a "version number" called the stamp, and every update compares not only the current value but also the stamp.
The related source code is as follows:
```java
public class AtomicStampedReference<V> {
    private static class Pair<T> {
        final T reference;
        final int stamp; // the "version number"
        private Pair(T reference, int stamp) {
            this.reference = reference;
            this.stamp = stamp;
        }
        static <T> Pair<T> of(T reference, int stamp) {
            return new Pair<T>(reference, stamp);
        }
    }

    // compare and set: checks both the value and the stamp
    public boolean compareAndSet(V expectedReference,
                                 V newReference,
                                 int expectedStamp, // expected version number
                                 int newStamp) {    // new version number
        Pair<V> current = pair;
        return
            expectedReference == current.reference &&
            expectedStamp == current.stamp &&
            ((newReference == current.reference &&
              newStamp == current.stamp) ||
             casPair(current, Pair.of(newReference, newStamp)));
    }
    // ... other source code omitted
}
```
As you can see, a modification compares both the original value and the version number, and the value and the stamp are updated only when both comparisons succeed.
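A short usage sketch (our own example, not from the original text) showing how the stamp defeats an A → B → A sequence that a plain CAS would miss:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaExample {
    public static void main(String[] args) {
        AtomicStampedReference<String> ref =
                new AtomicStampedReference<>("A", 0); // initial value "A", stamp 0

        int stamp = ref.getStamp(); // our snapshot of the version number

        // another thread changes A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", stamp, stamp + 1);
        ref.compareAndSet("B", "A", stamp + 1, stamp + 2);

        // the value is back to "A", but the stamp gives the tampering away
        boolean success = ref.compareAndSet("A", "C", stamp, stamp + 1);
        System.out.println(success); // false: value matches, stamp does not
    }
}
```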
Tip: optimistic locking has the advantage that it only locks at commit time, so it won't cause deadlock.
Reentrant lock
A reentrant lock, also called a recursive lock, allows a thread that already holds the lock in an outer method to acquire it again in an inner method. In Java, both ReentrantLock and synchronized are reentrant locks.
Now let's use synchronized to demonstrate what a reentrant lock is. The code is as follows:
```java
public class LockExample {
    public static void main(String[] args) {
        reentrantA(); // reentrant lock
    }

    /**
     * Reentrant method A
     */
    private synchronized static void reentrantA() {
        System.out.println(Thread.currentThread().getName() + ": implement reentrantA");
        reentrantB();
    }

    /**
     * Reentrant method B
     */
    private synchronized static void reentrantB() {
        System.out.println(Thread.currentThread().getName() + ": implement reentrantB");
    }
}
```
The execution results of the above code are as follows:
```
main: implement reentrantA
main: implement reentrantB
```
As the output shows, both reentrantA and reentrantB execute on the "main" thread: we call reentrantA, which in turn calls reentrantB while already holding the lock. If synchronized were not reentrant, the thread would block forever at this point.
A reentrant lock works by storing a thread ID in the lock to identify the current owner, and by maintaining an internal counter. When the lock is idle, the counter is 0. Each time the owning thread acquires or re-enters the lock, the counter is incremented by 1; each release decrements it by 1, and when it drops back to 0, the lock is idle again.
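ReentrantLock exposes this counter through getHoldCount(); the following sketch (our own) shows the counter rising on re-entry and falling on each unlock:

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountExample {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) {
        System.out.println(lock.getHoldCount()); // 0: the lock is idle
        lock.lock();
        lock.lock(); // re-entry by the same thread: the counter rises, no blocking
        System.out.println(lock.getHoldCount()); // 2
        lock.unlock();
        System.out.println(lock.getHoldCount()); // 1: still held until the count reaches 0
        lock.unlock();
        System.out.println(lock.getHoldCount()); // 0: the lock is free again
    }
}
```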
Shared lock and exclusive lock
A lock that can only be held by a single thread is called an exclusive lock, while a lock that can be held by multiple threads is called a shared lock.
An exclusive lock means that at most one thread can hold the lock at any time; synchronized, for example, is exclusive. ReadWriteLock, by contrast, allows multiple threads to read at the same time, so its read lock is a shared lock.
An exclusive lock can be understood as a pessimistic lock: a mutex must be taken on every access to the resource. A shared lock can be understood as an optimistic lock: it relaxes the locking condition and allows multiple threads to access the resource concurrently.
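A brief sketch (our own) with ReentrantReadWriteLock: a second reader can share the read lock, while a writer is blocked as long as any reader holds it:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteExample {
    private static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public static void main(String[] args) throws InterruptedException {
        rw.readLock().lock(); // the main thread holds a read lock

        Thread reader = new Thread(() -> {
            // a second reader acquires the shared read lock concurrently
            boolean acquired = rw.readLock().tryLock();
            System.out.println("reader acquired: " + acquired); // true
            if (acquired) rw.readLock().unlock();
        });
        reader.start();
        reader.join();

        // the exclusive write lock cannot be taken while a read lock is held
        System.out.println("writer acquired: " + rw.writeLock().tryLock()); // false

        rw.readLock().unlock();
    }
}
```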
Statement:
- Based on my own notes from the Lagou course "Java Source Code Analysis in 34 Lectures".