Thread application enhancement - Thread01

1, Strengthening the understanding of processes and threads

1.1 How to understand processes and threads?

  • Process: the basic unit of resource allocation in the operating system (for example, a browser, an app, or a JVM instance);
  • Thread: the smallest execution unit within a process and the basic unit of CPU scheduling (it can be understood as a sequential flow of execution);

Note: the threads within a process share all the resources of that process.

1.2 How to understand concurrency and parallelism in multithreading?

  • Concurrency: multiple threads compete for the CPU and do not necessarily execute at the same instant; the focus is on the alternating execution of multiple tasks.

Current operating systems, whether Windows, Linux, or macOS, are multi-user, multi-task time-sharing operating systems. Users of these systems can appear to do many things "at the same time". In fact, a computer with a single CPU can only do one thing at any given instant. To create the appearance of "doing several things at once", the time-sharing operating system divides CPU time into slices of roughly equal length, called "time slices", and allocates these time slices to the thread tasks in turn. What looks like "doing several things at the same time" is actually completed concurrently through CPU time-slicing.

  • Parallelism: multiple threads run on different CPUs or cores at the same instant, so multiple tasks are truly executed simultaneously.

In short: parallelism only occurs on machines with multiple CPUs or multiple cores, and parallelism can be understood as a special case of concurrency.

1.3 How to understand the life cycle and state changes of threads?

The process by which a thread is created, runs, and is finally destroyed is called the thread's life cycle. During this life cycle, the thread may pass through the following states:


These states can be summarized as: the new state, the runnable (ready) state, the running state, the blocked state, and the terminated (dead) state.

2, Strengthening the understanding of thread concurrency safety

2.1 How to understand thread safety and thread unsafety?

  • When multiple threads execute concurrently and the correctness of the data can still be guaranteed, we call this thread safety;
  • When multiple threads execute concurrently and the correctness of the data cannot be guaranteed, we call this thread unsafety (a thread-safety problem).

Case 1: simulate multiple threads selling tickets at the same time

Write the ticket-selling task class:

class TicketTask implements Runnable{
	int ticket=20;//shared ticket count: all threads running this task operate on the same field
	@Override
	public void run() {
		doTicket();
	}
	public void doTicket() {
		while(true) {
			//The check and the decrement below are not atomic, so the output may repeat or go below 1
			if(ticket<=0)break;
			System.out.println(ticket--);
		}
	}
}

Prepare ticket selling test method:

public static void main(String[] args) {
	TicketTask task=new TicketTask();
	//The three threads share the same task instance, so they all compete for the same ticket field
	Thread t1=new Thread(task);
	Thread t2=new Thread(task);
	Thread t3=new Thread(task);
	
	t1.start();
	t2.start();
	t3.start(); 
}

Case 2: simulate multiple threads counting at the same time

class Counter{
	private int count;
	public void doCount() {
		count++;//count++ is not atomic: it reads, adds 1, and writes back, so updates can be lost
	} 
	public int getCount() {//accessor so the result can be inspected
		return count;
	}
}
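
To make the problem visible, here is a minimal test sketch (the class name TestCounter is only illustrative, and it relies on the getCount() accessor shown above): several threads call doCount() many times and the final count is usually smaller than the expected total.

public class TestCounter {
	public static void main(String[] args) throws Exception {
		Counter counter = new Counter();
		Thread[] threads = new Thread[4];
		for (int i = 0; i < threads.length; i++) {
			threads[i] = new Thread(() -> {
				for (int j = 0; j < 100000; j++) {
					counter.doCount();//each call is a non-atomic read, add, write back
				}
			});
			threads[i].start();
		}
		for (Thread t : threads) {
			t.join();//wait for all worker threads to finish
		}
		//Expected 400000, but lost updates usually make the printed value smaller
		System.out.println(counter.getCount());
	}
}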

2.2 What are the factors leading to thread unsafety?

(1) Multiple threads execute concurrently.
(2) The concurrently executing threads share data (critical resources).
(3) The operations on the shared data are not atomic (an atomic operation is one that cannot be split up).

Case:

  1. Design a thread safe counter
  2. Design a thread safe Container

2.3 How to ensure the safety of concurrent threads?

  1. Restrict (block) access to the shared data (for example, with locks: synchronized, Lock): multiple threads execute synchronized methods or synchronized code blocks one at a time.
  2. Non-blocking synchronization based on CAS (Compare And Swap), which relies on CPU hardware support. A CAS operation involves three values:
    (a) The memory address (V)
    (b) The expected old value (A)
    (c) The new value to write (B)

The CAS algorithm supports lock-free concurrent updates, but it may suffer from the ABA problem and from long spinning under heavy contention.
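
A minimal sketch of the CAS idea using the JDK's AtomicInteger (the class name CasDemo is only illustrative); compareAndSet only succeeds when the current value still equals the expected value:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
	public static void main(String[] args) {
		AtomicInteger value = new AtomicInteger(5);
		//Expected value 5 matches the current value, so the update to 6 succeeds
		boolean ok1 = value.compareAndSet(5, 6);
		//Expected value 5 no longer matches (it is now 6), so this update fails
		boolean ok2 = value.compareAndSet(5, 7);
		System.out.println(ok1 + " " + ok2 + " " + value.get());//prints: true false 6
	}
}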

  3. Do not share: give each thread its own instance (for example, with ThreadLocal; see the sketch after this list).
    (a) Can a Connection be shared by multiple threads? (no, one per thread)
    (b) Can a SimpleDateFormat be shared by multiple threads? (no, one per thread)
    (c) Can a SqlSession be shared by multiple threads? (no, one per thread)
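
A minimal sketch of the "one instance per thread" idea with ThreadLocal, here wrapping the non-thread-safe SimpleDateFormat (the class name DateFormatHolder is only illustrative):

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatHolder {
	//Each thread gets its own SimpleDateFormat, so the non-thread-safe formatter is never shared
	private static final ThreadLocal<SimpleDateFormat> FORMAT =
			ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

	public static String format(Date date) {
		return FORMAT.get().format(date);
	}

	public static void main(String[] args) {
		new Thread(() -> System.out.println(format(new Date()))).start();
		new Thread(() -> System.out.println(format(new Date()))).start();
	}
}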

Explanation:

  • There are three main concerns about thread safety in Java: visibility, ordering and atomicity;
  • The Java Memory Model (JMM) solves the problems of visibility and ordering, while locks solve the problems of atomicity.

2.4 Application and principle analysis of the synchronized keyword

  1. Introduction to synchronized:
    1) synchronized is an implementation of an exclusive lock, and it supports reentrancy;
    2) Based on this mechanism, multiple threads can synchronize (mutual exclusion and cooperation) on shared data.
    2.1) Mutual exclusion: multiple threads take turns operating on the shared data.
    2.2) Cooperation: multiple threads coordinate their work on the shared data (communication).

Explanation:

  • Exclusive: if thread T1 already holds lock L, no thread other than T1 is allowed to hold lock L;
  • Reentrant: if thread T1 already holds lock L, thread T1 is allowed to acquire lock L again. In other words, after acquiring the lock once, it can enter code protected by the same lock multiple times.
  2. Application analysis of synchronized (see the sketch after this list):
    1) Modifying a method: a synchronized method (the lock is the current instance or the Class object)
    1.1) Modifying a static method: the lock is the Class object of the class in which the method is declared
    1.2) Modifying an instance method: the lock is the instance on which the method is called
    2) Modifying a code block: a synchronized code block (the lock is the object given in the parentheses of the block)

  3. Analysis of the synchronized principle: synchronization is implemented on top of a Monitor object.
    1) A synchronized code block is implemented explicitly with the monitorenter and monitorexit instructions.
    2) A synchronized method is implemented implicitly with the ACC_SYNCHRONIZED method flag.

  4. Synchronized lock optimization (at the JVM level):
    To reduce the performance cost of acquiring and releasing locks, since JDK 1.6 a lock can be in one of four states, from lowest to highest: the lock-free state, the biased lock state, the lightweight lock state and the heavyweight lock state. These states are upgraded gradually as contention increases.

Note: locks can be upgraded but not downgraded; for example, once a biased lock has been upgraded to a lightweight lock, it cannot be downgraded back to a biased lock. The purpose of this strategy is to improve the efficiency of acquiring and releasing locks.
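
Returning to the application forms in item 2 above, a minimal sketch of the three ways to use synchronized (the class name SyncForms is only illustrative):

public class SyncForms {
	private int count;

	//Instance method: the lock is the current instance (this)
	public synchronized void instanceIncrement() {
		count++;
	}

	//Static method: the lock is the Class object SyncForms.class
	public static synchronized void staticWork() {
		//...
	}

	//Code block: the lock is the object named in the parentheses
	public void blockIncrement() {
		synchronized (this) {
			count++;
		}
	}
}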

2.5 How to understand the application of the volatile keyword?

  1. Definition: volatile is generally used to modify member variables.
    1) It guarantees the visibility of the shared variable (especially on multi-core or multi-CPU machines).
    2) It forbids instruction reordering (for example, count++ actually consists of three steps at the bottom layer).
    3) It does not guarantee atomicity (for example, it cannot guarantee that one thread finishes all the steps of count++ before another thread starts its own).

  2. Application scenario analysis:
    1) Status flag (a boolean attribute)
    2) Safe publication (safely publishing the object in a thread-safe singleton with double-checked locking)
    3) One-write, many-read strategy (one thread writes while many threads read concurrently, similar to a read/write lock)

  3. Code implementation analysis:

(1) Status flag code example

class Looper{
	private volatile boolean isStop;//volatile makes the update in stop() visible to the looping thread
	public void loop() {
		for(;;) {
			if(isStop)break; 
		} 
	}
	public void stop() {
		isStop=true; 
	} 
}
public class TestVolatile01 {
	public static void main(String[] args)throws Exception {
		Looper looper=new Looper();
		Thread t1=new Thread() {
			public void run() {
				looper.loop();
			};
		};
		t1.start();
		t1.join(1000);
		looper.stop();
	} 
}
(2) Safe publication code example
class Singleton{
	private Singleton() {}
	private static volatile Singleton instance;//volatile prevents reordering of the three steps of new
	public static Singleton getSingleton() {//Large object, rarely used, so it is created lazily
		if(instance==null) {//First check: skip locking once the instance already exists
			synchronized (Singleton.class) {
				if(instance==null) {//Second check: only one thread may create the instance
					instance=new Singleton();//Space allocation, field initialization, reference assignment
				}
			}
		}
		return instance; 
	} 
}
(3) One-write, many-read (read/write lock style) code example
class Counter{
	private volatile int count;//volatile lets readers always see the latest value
	public int getCount() {//read: lock-free and concurrent
		return count; 
	}
	public synchronized void doCount() {//write: serialized by the lock
		count++;
	} 
}

2.6 How to understand the application of the happens-before principle?

In the JMM, if one operation A happens-before another operation B, then the result of operation A is guaranteed to be visible to operation B. This guarantee is called the happens-before principle. Its main rules include:

  1. Program order rule (single thread rule): within one thread, each operation happens-before every later operation in program order.

  2. Monitor lock rule: unlocking a lock happens-before every subsequent locking of the same lock.

  3. Volatile variable rule: a write to a volatile variable happens-before every subsequent read of that variable.

  4. Thread start rule: a call to Thread.start() happens-before every action in the started thread.

  5. Thread join rule: every action in a thread happens-before another thread successfully returns from join() on that thread.
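
A minimal sketch of the thread start and thread join rules (the class name HappensBeforeDemo is only illustrative):

public class HappensBeforeDemo {
	static int data;

	public static void main(String[] args) throws Exception {
		data = 1;//thread start rule: this write happens-before everything the new thread does
		Thread worker = new Thread(() -> {
			System.out.println("worker sees " + data);//guaranteed to print 1
			data = 2;
		});
		worker.start();
		worker.join();//thread join rule: the worker's writes happen-before join() returns
		System.out.println("main sees " + data);//guaranteed to print 2
	}
}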

Note: based on the happens-before rules, the JMM determines whether there is data contention, whether threads are safe, and whether variable values are visible in a multi-threaded environment.

2.7 How to understand pessimistic locks and optimistic locks in Java?

To ensure the safety of concurrent multi-threaded access, Java provides lock-based mechanisms, which can be divided into two categories: pessimistic locking and optimistic locking.

Definitions of pessimistic locking and optimistic locking:

1) Pessimistic locking: it assumes that concurrent conflicts will occur and blocks every operation that might violate data integrity, so only one thread can write at a time.
For example, in Java it can be implemented with synchronized, Lock, ReadWriteLock, etc.

2) Optimistic locking: it assumes that no conflict will occur and only checks whether data integrity has been violated when the operation is committed. Multiple threads may attempt write operations concurrently, but only one thread can write successfully at a time.
For example, in Java it can be implemented with the CAS (Compare And Swap) algorithm (which relies on CPU hardware support).

Application scenarios of pessimistic locking and optimistic locking:

1) Pessimistic locking is suitable for write-heavy scenarios; holding the lock guarantees that the data is correct during write operations.
2) Optimistic locking is suitable for read-heavy scenarios; being lock-free can greatly improve read performance.

Application cases of pessimistic locking and optimistic locking:

Counter implemented with a pessimistic lock:

Option 1:

class Counter{
	private int count;
	public synchronized int count() {
		count++;
		return count; 
	} 
}

Option 2:

class Counter{
	private int count;
	private Lock lock=new ReentrantLock();
	public int count() {
		lock.lock();
		try {
			count++;
			return count; 
		}finally {
			lock.unlock();
		} 
	} 
}

Counter implemented with an optimistic lock:

class Counter{
	private AtomicInteger at=new AtomicInteger();//CAS
	public int count() {
		return at.incrementAndGet();
	} 
}

AtomicInteger is implemented based on the CAS algorithm.
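
Conceptually, incrementAndGet() can be understood as a CAS retry (spin) loop. A minimal sketch of that idea (this is an illustration, not the actual JDK source):

import java.util.concurrent.atomic.AtomicInteger;

class CasCounter {
	private final AtomicInteger at = new AtomicInteger();

	public int count() {
		for (;;) {//spin until the CAS succeeds
			int current = at.get();
			int next = current + 1;
			if (at.compareAndSet(current, next)) {//only succeeds if no other thread changed the value in between
				return next;
			}
		}
	}
}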

2.8 How to understand thread context switching?

The time a thread can spend on the CPU is limited. When a thread has used up the CPU time allocated to it, the CPU switches to the next thread and executes it.

Before the switch, the thread must save its current state so that the next time it obtains a CPU time slice it can restore that state and continue with the remaining work. This switching process takes time and affects the execution efficiency of a multi-threaded program, so frequent thread switching should be reduced when using multiple threads. How can that be done?

The ways to reduce thread context switching are as follows:
1) Lock-free concurrent programming: lock contention leads to thread context switching
2) CAS algorithm: CAS can update data without blocking, achieving a lock-like effect
3) Use as few threads as necessary: avoid threads that merely wait
4) Use coroutines: schedule and switch between multiple tasks within a single thread, avoiding extra threads

2.9 How to understand and avoid deadlock?

A deadlock occurs when two or more threads each hold a lock and wait for a lock held by another, so that every one of them waits forever for the other side to release its lock.

Deadlock case analysis 1:

A case that may produce a deadlock:

class SyncTask01 implements Runnable {
	private Object obj1;
	private Object obj2;
	public SyncTask01(Object o1, Object o2) {
		this.obj1 = o1;
		this.obj2 = o2; 
	}
	@Override
	public void run() {
		synchronized (obj1) {//acquire the first lock
			work();
			synchronized (obj2) {//try to acquire the second lock while still holding the first
				work();
			} 
		} 
	}
	private void work() {
		try {
			Thread.sleep(30000);
		} catch (InterruptedException e) {
			e.printStackTrace();
		} 
	} 
}

Deadlock test

public class TestDeadLock01 {
	public static void main(String[] args) throws Exception {
		Object obj1 = new Object();
		Object obj2 = new Object();
		Thread t1 = new Thread(new SyncTask01(obj1, obj2), "t1");//locks obj1 first, then obj2
		Thread t2 = new Thread(new SyncTask01(obj2, obj1), "t2");//locks obj2 first, then obj1: opposite order, so deadlock is likely
		t1.start();
		t2.start();
	} 
}

Deadlock case analysis 2:

class SyncTask02 implements Runnable{
	private List<Integer> from;
	private List<Integer> to;
	private Integer target;
	public SyncTask02(List<Integer> from,List<Integer> to,Integer target) {
		this.from=from;
		this.to=to;
		this.target=target; 
	}
	@Override
	public void run() {
		moveListItem(from, to, target);
	}
	private static void moveListItem (List<Integer> from, List<Integer> to, Integer item) {
		log("attempting lock for list", from); 
		synchronized (from) {
			log("lock acquired for list", from);
			try {
				TimeUnit.SECONDS.sleep(1);
			} catch (InterruptedException e) {
				e.printStackTrace();
			}
			log("attempting lock for list ", to);
			synchronized (to) {
				log("lock acquired for list", to);
				if(from.remove(item)){
					to.add(item);
				}
				log("moved item to list ", to);
			}
		}
	} 
	private static void log (String msg, Object target) {
		System.out.println(Thread.currentThread().getName() +": " + msg + " " + System.identityHashCode(target));
	} 
}
public class TestDeadLock02 {
	public static void main(String[] args) {
		List<Integer> list1 = new ArrayList<>(Arrays.asList(2, 4, 6, 8, 10));
		List<Integer> list2 = new ArrayList<>(Arrays.asList(1, 3, 7, 9, 11));

		Thread thread1 = new Thread(new SyncTask02(list1, list2, 2));
		Thread thread2 = new Thread(new SyncTask02(list2, list1, 9));
		thread1.start();
		thread2.start();
	} 
}

How to avoid deadlock?

  1. Avoid having one thread acquire multiple locks at the same time;
  2. Avoid having a thread acquire additional locks while it already holds one;
  3. Consider using a timed lock instead of the built-in locking mechanism, for example Lock.tryLock(timeout, unit) (see the sketch after this list).
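
A minimal sketch of the timed-lock idea (the class names TryLockTask and TestTryLock are only illustrative): each thread tries to obtain both locks with a timeout and backs off instead of waiting forever.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class TryLockTask implements Runnable {
	private final ReentrantLock lock1;
	private final ReentrantLock lock2;

	TryLockTask(ReentrantLock lock1, ReentrantLock lock2) {
		this.lock1 = lock1;
		this.lock2 = lock2;
	}

	@Override
	public void run() {
		try {
			if (lock1.tryLock(1, TimeUnit.SECONDS)) {//give up after 1 second instead of blocking forever
				try {
					if (lock2.tryLock(1, TimeUnit.SECONDS)) {
						try {
							System.out.println(Thread.currentThread().getName() + " holds both locks");
						} finally {
							lock2.unlock();
						}
					}
				} finally {
					lock1.unlock();
				}
			}
		} catch (InterruptedException e) {
			Thread.currentThread().interrupt();
		}
	}
}

public class TestTryLock {
	public static void main(String[] args) {
		ReentrantLock lockA = new ReentrantLock();
		ReentrantLock lockB = new ReentrantLock();
		new Thread(new TryLockTask(lockA, lockB), "t1").start();
		new Thread(new TryLockTask(lockB, lockA), "t2").start();//opposite lock order, but tryLock prevents a permanent deadlock
	}
}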

3, Fundamentals of thread communication and process communication

3.1 How to understand process and thread communication?

  • Thread communication: communication between threads in Java is mainly based on shared memory (shared variables) and similar mechanisms.
  • Process communication: inter-process communication (IPC) in Java is mainly based on Socket, MQ, etc.

3.2 How to implement communication between threads within a process?

3.2.1 Implementation based on wait/notify/notifyAll

  1. Definitions of the wait()/notify()/notifyAll() methods:
    1) wait: blocks the thread that currently holds the monitor object and releases the monitor (the object lock) at the same time
    2) notify: wakes up one thread waiting on the monitor object, but does not release the monitor; the code that called this method keeps running and only releases the object lock when its synchronized block finishes
    3) notifyAll: wakes up all threads waiting on the monitor object, but does not release the monitor; the code that called this method keeps running and only releases the object lock when its synchronized block finishes

  2. How to apply the wait()/notify()/notifyAll() methods:
    1) These methods must be called inside a synchronized code block or a synchronized method
    2) These methods must be called on the monitor object (the object lock)

Note: wait/notify/notifyAll is generally used to avoid performance loss caused by polling.

  3. Application case of wait()/notify()/notifyAll():
    Manually implement a blocking queue and let threads communicate on the queue through the wait()/notifyAll() methods.
    Case: a producer/consumer model in which producer and consumer threads operate on a container object concurrently.


Code implementation: implement a thread-safe container class (a blocking queue based on an array).

/**
* Bounded message queue: used to access messages
* 1)Data structure: array (linear structure)
* 2)Specific algorithm: FIFO - First in First out
*/
public class BlockContainer<T> {//Class generics
	/**An array used to store data*/
	private Object[] array;
	/**Record the number of effective elements*/
	private int size;
	public BlockContainer () {
		this(16);//This (parameter list) means to call the constructor of the specified parameters of this class
	}
	public BlockContainer (int cap) {
		array=new Object[cap];//The default value of each element is null
	} 
}

Add a put method to the container to put the data.

 /**
 * The producer thread puts data into the container through the put method
 * New data is always stored at index size
 * Note: this inside an instance method always points to
 * the current object (the instance on which the method is called)
 * Note: there is no this in a static method; this can only be used
 * in instance methods, constructors, and instance initializer blocks
 */
 public synchronized void put(T t){//Synchronization lock: this
	 //1. Determine whether the container is full, and wait when it is full
	 while(size==array.length)
	 try{
	 	this.wait();
	 }catch(Exception e){}
	 //2. Release data
	 array[size]=t;
	 //3. Number of effective elements plus 1
	 size++;
	 //4. Inform consumers to retrieve data
	 this.notifyAll();
 }

Add a take method to the container class to fetch data from the container.

/**
 * The consumer thread fetches data through this method
 * The element at index 0 is always the one removed
 * @return the oldest element in the container
 */
@SuppressWarnings("unchecked")
public synchronized T take(){
	//1. Determine whether the container is empty, and wait if it is empty
	while(size==0)
	try{
		this.wait();
	}catch(Exception e){}
	//2. Fetch data
	Object obj=array[0];
	//3. Move elements
	System.arraycopy(
					array,//src: the source array
					1, //srcPos: copy starting from index 1
					array, //dest: the destination array (the same array)
					0, //destPos: paste starting at index 0
					size-1);//length: number of elements to copy
	//4. Number of effective elements minus 1
	size--;
	//5. Set the size position to null
	array[size]=null;
	//6. Inform the producer to release the data
	this.notifyAll();//Wakes up all threads waiting on this object's monitor (the same lock)
	return (T)obj;
}
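
A minimal usage sketch for this container (the class name TestBlockContainer is only illustrative): a producer thread puts numbers in while a consumer thread takes them out; when the container is full or empty, the corresponding thread blocks in wait().

public class TestBlockContainer {
	public static void main(String[] args) {
		BlockContainer<Integer> container = new BlockContainer<>(4);//small capacity so put() blocks quickly
		Thread producer = new Thread(() -> {
			for (int i = 1; i <= 10; i++) {
				container.put(i);
				System.out.println("produced " + i);
			}
		});
		Thread consumer = new Thread(() -> {
			for (int i = 1; i <= 10; i++) {
				System.out.println("consumed " + container.take());
			}
		});
		producer.start();
		consumer.start();
	}
}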

3.2.2 Condition-based implementation

  1. Definition of the Condition class:
    Condition is a utility for multi-threaded cooperation. With it, threads that hold a lock can conveniently be blocked, and blocked threads can be woken up. Its strength is that different conditions can be created for different groups of threads, and specific threads can be woken up through the signal()/signalAll() methods.

  2. How to apply the Condition class:
    1) Obtain a Condition object from a Lock object
    2) Use the Condition object's await()/signal()/signalAll() methods to block or wake up threads.

  3. Application case of Condition objects:
    Manually implement the blocking queue again, this time letting threads communicate on the queue through the await()/signalAll() methods.

/**
* Bounded message queue: used to access messages
* 1)Data structure: array (linear structure)
* 2)Specific algorithm: FIFO - First in First out
*/
public class BlockContainer<T> {//Class generics
	/**An array used to store data*/
	private Object[] array;
	/**Record the number of effective elements*/
	private int size;
	public BlockContainer() {
		this(16);//This (parameter list) means to call the constructor of the specified parameters of this class
	}
 	public BlockContainer(int cap) {
		array=new Object[cap];//The default value of each element is null
	}
 	//ReentrantLock, introduced in JDK 1.5 (more flexible than synchronized)
	private ReentrantLock lock=new ReentrantLock(true);//true means a fair lock; the default is a non-fair lock
	private Condition producerCondition=lock.newCondition();//condition the producer waits on when the container is full
	private Condition consumerCondition=lock.newCondition();//condition the consumer waits on when the container is empty
}

Add a put method to the container to put data into the container

/**
* The producer thread puts data into the container through the put method
* New data is always stored at index size
* Note: this inside an instance method always points to
* the current object (the instance on which the method is called)
* Note: there is no this in a static method; this can only be used
* in instance methods, constructors, and instance initializer blocks
*/
public void put(T t){//guarded by the ReentrantLock rather than by this
	System.out.println("put");
	lock.lock();
	try{
		//1. Determine whether the container is full, and wait when it is full
		while(size==array.length)
		//Equivalent to the wait method in the Object class
		try{
			producerCondition.await();
		}catch(Exception e){
			e.printStackTrace();
		}
		//2. Release data
		array[size]=t;
		//3. Number of effective elements plus 1
		size++;
		//4. Inform consumers to retrieve data
		consumerCondition.signalAll();//Equivalent to notifyAll() in the Object class; wakes up waiting consumers
	} finally {
		lock.unlock();
	}
}

Add the take method to the container class to fetch data from the container

/**
* The consumer thread fetches data through this method
* The element at index 0 is always the one removed
* @return the oldest element in the container
*/
@SuppressWarnings("unchecked")
public T take(){
	System.out.println("take");
	lock.lock();
	try{
		//1. Determine whether the container is empty, and wait if it is empty
		while(size==0)
		try{
			consumerCondition.await();
		}catch(Exception e){}
		//2. Fetch data
		Object obj=array[0];
		//3. Move elements
		System.arraycopy(
						array,//src: the source array
						1, //srcPos: copy starting from index 1
						array, //dest: the destination array (the same array)
						0, //destPos: paste starting at index 0
						size-1);//length: number of elements to copy
		//4. Number of effective elements minus 1
		size--;
		//5. Set the size position to null
		array[size]=null;
		//6. Inform the producer to release the data
		producerCondition.signalAll();//Wakes up producer threads waiting on this condition
		return (T)obj;
	} finally {
		lock.unlock();
	}
}
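
For comparison, the JDK already provides an equivalent ready-made structure, java.util.concurrent.ArrayBlockingQueue, whose put/take methods are likewise built on a ReentrantLock and two Condition objects. A minimal usage sketch (the class name TestArrayBlockingQueue is only illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TestArrayBlockingQueue {
	public static void main(String[] args) {
		BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4);//bounded, array-backed
		new Thread(() -> {
			try {
				for (int i = 1; i <= 10; i++) {
					queue.put(i);//blocks while the queue is full
				}
			} catch (InterruptedException e) {
				Thread.currentThread().interrupt();
			}
		}).start();
		new Thread(() -> {
			try {
				for (int i = 1; i <= 10; i++) {
					System.out.println("took " + queue.take());//blocks while the queue is empty
				}
			} catch (InterruptedException e) {
				Thread.currentThread().interrupt();
			}
		}).start();
	}
}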

3.3 How to implement inter-process communication (IPC)?

3.3.1 Implementing inter-process communication based on Socket

A simple server based on BIO (blocking I/O):

public class BioMainServer01 {
	private Logger log=LoggerFactory.getLogger(BioMainServer01.class);
	private ServerSocket server;
	private volatile boolean isStop=false;
	private int port;
	public BioMainServer01(int port) {
		this.port=port; 
	}
	public void doStart()throws Exception {
		server=new ServerSocket(port);
		while(!isStop) {
			Socket socket=server.accept();
			log.info("client connect");
			doService(socket);
		}
		server.close();
	}
	public void doService(Socket socket) throws Exception{
		InputStream in=socket.getInputStream();
		byte[] buf=new byte[1024];
		int len=-1;
		while((len=in.read(buf))!=-1) {
			String content=new String(buf,0,len);
			log.info("client say {}", content);
		}
		in.close();
		socket.close();
	}
	public void doStop() {
		isStop=true;//set the stop flag to true; volatile makes the change visible to the accept loop 
	}
	public static void main(String[] args)throws Exception {
		BioMainServer01 server=new BioMainServer01(9999);
		server.doStart();
	} 
}

Start the server, then access it from a browser or through the following client:

public class BioMainClient {
	public static void main(String[] args) throws Exception{
		Socket socket=new Socket();
		socket.connect(new InetSocketAddress("127.0.0.1", 9999));
		OutputStream out=socket.getOutputStream();
		Scanner sc=new Scanner(System.in);
		System.out.println("client input:");
		out.write(sc.nextLine().getBytes());
		out.close();
		sc.close();
		socket.close();
	} 
}
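
One more note: the server above serves clients one at a time, because doService() runs on the same thread that calls accept(). A minimal variation of doStart() (only a sketch; in practice a thread pool is usually preferred over one thread per connection) hands each connection to its own thread:

	public void doStart()throws Exception {
		server=new ServerSocket(port);
		while(!isStop) {
			Socket socket=server.accept();
			log.info("client connect");
			//Serve each client on its own thread so accept() can immediately wait for the next connection
			new Thread(() -> {
				try {
					doService(socket);
				} catch (Exception e) {
					e.printStackTrace();
				}
			}).start();
		}
		server.close();
	}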
