Database index and transaction


1, Why index? (advantages and disadvantages of index)

1. Meaning of index

An index is like a table's directory: you first look up the position in the directory, then jump straight to the content, so the queried data can be located quickly. Indexes are stored in additional files or structures alongside the table data.

An index is a data structure built specifically to help users query data quickly in a database. It is similar to the index of a dictionary: to look something up, you first find its storage location in the index and then fetch it directly.

Can we do without an index? Yes. The data can simply be laid out on disk, and each lookup scans the rows one by one until the result is found. But once an application holds tens of millions or even billions of rows, a program written without understanding indexes becomes unusable: a single lookup can take days of scanning. An index is needed so the required data can be found quickly.
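The gap between scanning and indexed lookup can be sketched in a few lines of Python (the table of ids and the target value are hypothetical): a full scan examines rows one by one, while an index-style binary search needs only about log2(N) steps.

```python
import math
from bisect import bisect_left

# Hypothetical table of one million sorted ids: a full scan touches rows one
# by one, while an index-style binary search needs only about log2(N) steps.
ids = list(range(1, 1_000_001))
target = 333_333

# Full "table scan": count rows examined until the match is found.
scan_steps = next(i + 1 for i, v in enumerate(ids) if v == target)

# "Index" lookup: binary search over the sorted ids.
pos = bisect_left(ids, target)

print(scan_steps)                       # 333333 rows examined
print(math.ceil(math.log2(len(ids))))   # ~20 steps for binary search
```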

2. Advantages and disadvantages of index


Advantages:

  • Greatly speeds up data retrieval and reduces the number of disk IOs
  • A unique index guarantees the uniqueness of rows in the table
  • Speeds up joins between tables
  • Significantly reduces the time spent on grouping and sorting in queries

Disadvantages:

  • The index occupies extra physical space
  • Every insert, update and delete on the table must also maintain the index dynamically, which slows down data modification

2, Indexed data structure

No data structure appears out of thin air; each one has a background and a use scenario. Let's state what we need this one to do. It is actually simple: keep the number of disk IOs per lookup small, ideally a small constant. A height-controlled multi-way search tree fits this requirement exactly, and that is how the B+ tree came into being.

The figure above shows a B+ tree (for the full definition of the B-tree, see B tree; only the key points are given here). Each light blue block is a disk block. Every disk block contains several data items (dark blue) and pointers (yellow). For example, disk block 1 contains the data items 17 and 35 and the pointers P1, P2 and P3: P1 points to the disk block with keys less than 17, P2 to the block with keys between 17 and 35, and P3 to the block with keys greater than 35. The real data lives only in the leaf nodes: 3, 5, 9, 10, 13, 15, 28, 29, 36, 60, 75, 79, 90 and 99. Non-leaf nodes store no real data, only the data items that guide the search; 17 and 35, for instance, need not actually exist in the data table.

1. Search process of the B+ tree

As shown in the figure, suppose we look for the data item 29. First, disk block 1 is loaded from disk into memory: one IO. A binary search in memory determines that 29 lies between 17 and 35, which selects the P2 pointer of disk block 1. The in-memory time is negligible compared with a disk IO. Following P2's disk address, disk block 3 is loaded into memory: the second IO. 29 lies between 26 and 30, which selects the P2 pointer of disk block 3; following it loads disk block 8: the third IO. A final binary search in memory finds 29 and the query ends, after three IOs in total. In practice a three-level B+ tree can represent millions of rows, so finding one row among millions costs only three IOs, a huge performance improvement. Without an index, each data item might cost one IO, for millions of IOs in total; the cost is obviously enormous.
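The walkthrough above can be sketched as a minimal Python model (the tree contents follow the figure; the exact internal keys are assumptions for illustration): each visited node counts as one disk block, i.e. one IO.

```python
from bisect import bisect_right

# A minimal B+ tree-style lookup modeled on the figure (internal key values
# are assumed). Each visited node stands for one disk block, i.e. one IO.
tree = {
    "keys": [17, 35],
    "children": [
        {"keys": [8, 12], "children": [[3, 5], [9, 10], [13, 15]]},
        {"keys": [26, 30], "children": [[18, 20], [28, 29], [33, 34]]},
        {"keys": [65, 87], "children": [[36, 60], [75, 79], [90, 99]]},
    ],
}

def search(node, key, ios=0):
    ios += 1                       # loading this block from disk = one IO
    if isinstance(node, list):     # leaf block: the real data lives here
        return key in node, ios
    # binary search in memory picks the child pointer (P1/P2/P3...)
    child = node["children"][bisect_right(node["keys"], key)]
    return search(child, key, ios)

found, ios = search(tree, 29)
print(found, ios)                  # True 3 -> three IOs, as in the walkthrough
```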

An animated diagram illustrates the search process.

2. B+ tree properties

1) The index field should be as small as possible. From the analysis above, the number of IOs depends on the height h of the B+ tree. If the table holds N rows and each disk block holds m data items, then h = log_(m+1) N. With N fixed, the larger m is, the smaller h is. Since m = (size of a disk block) / (size of a data item), and the disk block size, i.e. the size of one data page, is fixed, a smaller data item means more items per block and a lower tree. This is why each data item, i.e. the index field, should be as small as possible: an int takes 4 bytes, half of a bigint's 8 bytes. It is also why the B+ tree keeps the real data in the leaf nodes rather than the inner nodes: real data in the inner nodes would sharply cut the number of items per block and make the tree taller. In the extreme case of one item per block, the tree degenerates into a linear list.
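The formula h = log_(m+1) N can be tried out directly. The sketch below assumes a 16 KB page and a hypothetical 12-byte per-entry overhead for the pointer and record header; both numbers are illustrative, not InnoDB's exact layout.

```python
import math

# h = log_(m+1) N. Assume a 16 KB page and a hypothetical 12-byte per-entry
# overhead (pointer + record header); both numbers are illustrative only.
def entries_and_height(n_rows, key_bytes, page_bytes=16 * 1024, overhead=12):
    m = page_bytes // (key_bytes + overhead)       # data items per disk block
    return m, math.ceil(math.log(n_rows, m + 1))   # (entries per block, height)

print(entries_and_height(10_000_000, 4))    # int key:    (1024, 3)
print(entries_and_height(10_000_000, 8))    # bigint key: (819, 3)
print(entries_and_height(10_000_000, 500))  # fat key -> fewer entries, taller tree
```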

2) Leftmost matching property of the index (matching from left to right). When the data items of the B+ tree are composite, such as (name, age, sex), the tree is built comparing from left to right. To retrieve a row like (Zhang San, 20, F), the tree first compares name to decide the search direction; if names are equal it compares age and then sex in turn, until the row is found. But for a query like (20, F) with no name, the tree does not know which node to visit next, because name was the first comparison factor when the search tree was built: you must search by name first to know where to go. For a query like (Zhang San, F), the tree can use name to narrow the search direction, but since the next field, age, is missing, it can only find all rows with name = Zhang San and then filter those whose sex is F. This very important property is the leftmost matching property of the index.
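The leftmost-matching behaviour can be sketched with a composite index modeled as a sorted list of (name, age, sex) tuples (the rows are made up): a leftmost prefix can use binary search, anything that skips name cannot.

```python
from bisect import bisect_left

# Composite index on (name, age, sex) modeled as a sorted list of tuples.
rows = sorted([("Li Si", 25, "M"), ("Zhang San", 20, "F"), ("Zhang San", 22, "M")])

# Leftmost prefix (name, age): the tuple sort order supports binary search.
i = bisect_left(rows, ("Zhang San", 20))
print(rows[i])                # ('Zhang San', 20, 'F')

# (age, sex) without name: the sort order is useless; fall back to a full scan.
matches = [r for r in rows if r[1] == 20 and r[2] == "F"]
print(matches)                # [('Zhang San', 20, 'F')]

# (name, sex) skipping age: seek by name, then filter sex inside that range.
found = [r for r in rows if r[0] == "Zhang San" and r[2] == "F"]
print(found)                  # [('Zhang San', 20, 'F')]
```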

3, MySQL index classification and usage scenarios

1. Function

#1. The function of an index is to speed up searches
#2. primary key and unique in MySQL are also indexes. These indexes not only speed up searches but also act as constraints

2. Index classification and usage scenarios of MySQL

Index classification
1. General index: index            speeds up searches
2. Unique index
    Primary key index: primary key   speeds up searches + constraint (not null and unique)
    Unique index: unique             speeds up searches + constraint (unique)
3. Composite index
    - primary key(id,name): composite primary key index
    - unique(id,name):      composite unique index
    - index(id,name):       composite general index
4. Full-text index: fulltext       works best when searching long articles
5. Spatial index: spatial          just know it exists; almost never used
For example, suppose you are building a membership card system for a shopping mall.
The system has a member table with the following fields:
    Member number        INT
    Member name          VARCHAR(10)
    Member ID number     VARCHAR(18)
    Member telephone     VARCHAR(11)
    Member address       VARCHAR(50)
    Member notes         TEXT

The member number, as the primary key, uses PRIMARY KEY.
If the member name needs an index, an ordinary INDEX will do.
If the member ID number needs an index, choose UNIQUE (duplicates are not allowed).

#In addition, there is the full-text index, FULLTEXT.
If you need to search inside long text, choose full-text search;
it works best on long articles.
For short text of only a line or two, an ordinary INDEX is fine too.
In practice, though, full-text search is rarely done with MySQL's built-in index;
third-party software such as Sphinx, which specializes in full-text search, is used instead.
#Others, such as SPATIAL, you only need to know about; they are hardly ever used.

4, Two types of indexes: hash and btree

#When creating any of the above indexes, we can specify the index type, of which there are two classes

hash index:  single-value lookups are fast, range queries are slow
btree index: a B+ tree; each extra level multiplies the volume of data it can hold (btree is what is normally used, because InnoDB supports it by default)
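The point-vs-range trade-off can be sketched in Python, with a dict standing in for a hash index and a sorted list for a btree (the key set is hypothetical):

```python
from bisect import bisect_left, bisect_right

ids = list(range(0, 1000, 3))                   # sorted keys, "btree" style
hash_index = {v: i for i, v in enumerate(ids)}  # "hash" style

# hash: O(1) single-value lookup, but no ordering, so a range needs a full scan
print(hash_index[333])                          # fast equality lookup
range_by_hash = sorted(k for k in hash_index if 100 <= k <= 120)

# btree: point lookup is O(log n), and a range is one seek plus a short walk
lo, hi = bisect_left(ids, 100), bisect_right(ids, 120)
print(ids[lo:hi])                               # [102, 105, 108, 111, 114, 117, 120]
```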

#Different storage engines support different index types
InnoDB  supports transactions and row-level locking; supports B-tree and Full-text indexes, but not Hash indexes
MyISAM  does not support transactions; supports table-level locking, B-tree and Full-text indexes, but not Hash indexes
Memory  does not support transactions; supports table-level locking, B-tree and Hash indexes, but not Full-text indexes
NDB     supports transactions and row-level locking; supports Hash indexes, but not B-tree or Full-text indexes
Archive does not support transactions; table-level locking; supports none of the B-tree, Hash or Full-text indexes

5, Create / delete index syntax

 #Method 1: when creating a table
       CREATE TABLE table_name (
                 field_name1 data_type [integrity constraints],
                 field_name2 data_type [integrity constraints],
                 [UNIQUE | FULLTEXT | SPATIAL] INDEX | KEY
                 [index_name] (field_name[(length)] [ASC | DESC])
                 );
 #Method 2: CREATE an index on an existing table
         CREATE [UNIQUE | FULLTEXT | SPATIAL] INDEX index_name
                 ON table_name (field_name[(length)] [ASC | DESC]);

 #Method 3: ALTER TABLE creates an index on an existing table
         ALTER TABLE table_name ADD [UNIQUE | FULLTEXT | SPATIAL] INDEX
                 index_name (field_name[(length)] [ASC | DESC]);

Make good use of the help documents:
help create;
help create index;

1. Create index

    - When creating the table
    create table s1(
    id int,            #primary key could be added here. `id int index` is NOT valid: index by itself is only an index, not a constraint, so unlike primary key or unique it cannot be attached to the field definition
    name char(20),
    age int,
    email varchar(30),
    primary key(id)    #A primary key index can also be added here
    #index(id)         #Or an ordinary index can be added like this (choose one of the two)
    );

    - Create an index after the table exists (format: create index index_name on table_name(field_name);)

    create index name on s1(name);            #Add an ordinary index
    create unique index age on s1(age);       #Add a unique index
    alter table s1 add primary key(id);       #Add a primary key index, i.e. a primary key constraint on the id field
    create index idx_id_name on s1(id,name);  #Add an ordinary composite index

2. Delete index
    drop index id on s1;     #Delete the index named id
    drop index name on s1;   #Delete an ordinary index
    drop index age on s1;    #Deleting a unique index works the same way as an ordinary one; there is no need to write unique before index
    alter table s1 drop primary key;   #Delete the primary key (it was added with alter, so alter is also used to drop it)
3. View index
    show index from s1;

6, Index test

1. Prepare test data

#1. Prepare the table
create table s1(
id int,
name varchar(20),
gender char(6),
email varchar(50)
);

#2. Create a stored procedure to insert records in batches
mysql> delimiter //   #Declare // as the terminator for the stored procedure

mysql> create procedure auto_insert1()   #Create a stored procedure
    begin
    declare i int default 1;   #Define variable i with initial value 1
    while(i<3000000)do         #Loop while i < 3000000
        insert into s1 values(i,concat('cm',i),'f',concat('cm',i,'@oldboy'));  #Insert rows cm1..cm2999999 with gender f and email cm1@oldboy..cm2999999@oldboy
    set i=i+1;                 #After each insert, increment i by 1
    end while;
    END //

mysql> delimiter ;   #Restore the semicolon as the terminator (this ends and saves the declaration created above)

#3. View stored procedures
mysql> show create procedure auto_insert1\G 

#4. Call stored procedure
mysql> call auto_insert1();

2. Test query speed before index creation

#No index: the table is scanned from beginning to end, so the query is very slow
mysql> select * from s1 where id=333;
+------+-------+--------+--------------+
| id   | name  | gender | email        |
+------+-------+--------+--------------+
|  333 | cm333 | f      | cm333@oldboy |
+------+-------+--------+--------------+
1 row in set (1.33 sec)

mysql> explain select * from s1 where id=333;

mysql> select * from s1 where email='cm333@oldboy';
+------+-------+--------+--------------+
| id   | name  | gender | email        |
+------+-------+--------+--------------+
|  333 | cm333 | f      | cm333@oldboy |
+------+-------+--------+--------------+
1 row in set (1.50 sec)

3. Add index

#1. Create the index on the fields used in search conditions. For example, select * from s1 where id > 5; needs an index on id

#2. When the table already holds a large amount of data, building the index is very slow and occupies disk space; afterwards inserts, deletes and updates are slower, and only queries are faster.
For example, create index idx on s1(id); scans every row in the table, builds an index structure over the id values, and stores it on disk alongside the table.
Once built, queries become very fast.

mysql> create index idx on s1(id);

#3. Note that the index of an InnoDB table is stored in the s1.ibd file, while a MyISAM table keeps its index in a separate index file, table1.MYI

mysql> select * from s1 where id=333;
+------+-------+--------+--------------+
| id   | name  | gender | email        |
+------+-------+--------+--------------+
|  333 | cm333 | f      | cm333@oldboy |
+------+-------+--------+--------------+
1 row in set (0.01 sec)

7, InnoDB core feature: transactions

1. What is a transaction

A set of data-operation steps treated as a single unit of work:
1)Used to group multiple statements together
2)Useful when multiple clients concurrently access data in the same table

All steps succeed, or all fail:
1)If every step succeeds, the whole unit is applied
2)If any step fails or is incomplete, the whole unit is cancelled
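The "all steps succeed or all are cancelled" rule can be sketched in Python (a toy in-memory model, not how MySQL implements it): every step runs against a working copy, and the real data is replaced only if the whole group succeeds.

```python
# Every step runs against a private working copy; the real data is replaced
# only if the whole group of steps succeeds.
def run_transaction(db, steps):
    work = dict(db)            # private working copy
    try:
        for step in steps:
            step(work)         # any exception aborts the whole group
    except Exception:
        return db              # "rollback": the original data is untouched
    return work                # "commit": all changes appear at once

def pay(frm, to, amount):
    def step(db):
        if db[frm] < amount:
            raise ValueError("insufficient funds")
        db[frm] -= amount
        db[to] += amount
    return step

accounts = {"qiudao": 100, "oldboy": 200}
accounts = run_transaction(accounts, [pay("qiudao", "oldboy", 100)])
print(accounts)   # {'qiudao': 0, 'oldboy': 300}
accounts = run_transaction(accounts, [pay("qiudao", "oldboy", 100)])
print(accounts)   # unchanged: the failed transfer was cancelled as a whole
```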

2. Transaction presentation

#1. Create a table
mysql> create table jiaoyi(id int primary key auto_increment,name varchar(10),money int);

#2. Insert two pieces of data
mysql> insert into jiaoyi values(1,'qiudao',100);
Query OK, 1 row affected (0.01 sec)

mysql> insert into jiaoyi values(2,'oldboy',200);
Query OK, 1 row affected (0.00 sec)

#3. Open another window
mysql> select * from jiaoyi;
The query result is the same as in the original window

#4. Start a transaction (logical)
mysql> start transaction;
mysql> update jiaoyi set money=0 where name='qiudao';
mysql> select * from jiaoyi;
mysql> update jiaoyi set money=400 where name='oldboy';
mysql> select * from jiaoyi;
#Another window cannot see the data change before the commit is executed
mysql> commit;
#After the commit is executed, everyone else can see the data changes

#5. Start a transaction (violating business logic)
mysql> start transaction;
mysql> update jiaoyi set money=-100 where name='qiudao';
mysql> select * from jiaoyi;
+------+--------+-------+
| id   | name   | money |
+------+--------+-------+
|    1 | qiudao |  -100 |
|    2 | oldboy |   400 |
+------+--------+-------+
2 rows in set (0.00 sec)
#Since money must not be negative, business logic judges the operation a failure and rolls it back
mysql> rollback;
mysql> select * from jiaoyi;
+------+--------+-------+
| id   | name   | money |
+------+--------+-------+
|    1 | qiudao |     0 |
|    2 | oldboy |   400 |
+------+--------+-------+
2 rows in set (0.00 sec)

3. An intuitive understanding of transactions

#The database concept of a "transaction" grew out of real-world trade.

What do we understand as a "trade"?
1)Barter: exchanging goods for goods (ancient times)
2)Exchanging money or cash for physical goods (modern, stage 1)
3)Exchanging virtual currency for physical goods (modern, stage 2)
4)Exchanging virtual currency for virtual goods (modern, stage 3)

What, then, is a "transaction" in a database?
1)How do we guarantee that a "trade" completes consistently, with neither side losing out?

4. The life cycle of a successful transaction

start transaction;
DML1; DML2; DML3; ...
commit;

5. Life cycle of a failed transaction

start transaction;
DML1; DML2; DML3; ...
rollback;

6. Transaction characteristics: ACID

Atomicity: all statements in the unit are executed successfully, or all are cancelled.

Consistency: if the database is in a consistent state when the transaction begins, it remains in a consistent state while the transaction executes.

Isolation: transactions do not interfere with one another.

Durability: once a transaction completes successfully, all its changes are recorded permanently in the database and cannot be lost.

We can see that among the four characteristics, every operation ultimately reaches disk, so durability is the purpose of a transaction; atomicity is its foundation; isolation is the strategy and means for keeping data safe; and what is finally maintained is consistency, the most important property of all. Among the four, isolation is the means by which consistency is achieved.

Of the four ACID features, the hardest to understand is not consistency but isolation. Database experts have defined four transaction isolation levels, which exist to address the problems caused by highly concurrent access to data: dirty reads, non-repeatable reads and phantom reads.

7. Transaction control statements

1) Autocommit

#1. View autocommit
mysql> show variables like 'autocommit';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | ON    |
+---------------+-------+
1 row in set (0.01 sec)

#2. Disable temporarily
set autocommit=0;

#3. Disable permanently
[root@db01 ~]# vim /etc/my.cnf
[mysqld]
autocommit=0

2) Implicit commit of transactions

1)In current versions there is no need to start a transaction manually with start transaction; as soon as a DML statement is entered, a transaction is opened automatically.
2)In some situations a transaction is committed implicitly

#Implicit commit triggers
1.If a transaction has not been committed and a DDL or DCL statement is executed, the previous transaction is committed automatically
2.If a transaction has not been committed and begin is executed manually, the previous transaction is committed automatically
3.If a transaction has not been committed and tables are locked (lock tables) or unlocked (unlock tables), the previous transaction is committed automatically
4.load data infile (bulk data import) automatically commits the previous transaction
5.When autocommit=1, each statement's transaction is committed automatically

#Example: in the sequence below, the create table (DDL) implicitly commits everything before it
update ....
insert into ....
update ....
create table ....
delete ....

3) Control statements

START TRANSACTION (or BEGIN): explicitly start a new transaction         #Open a transaction
SAVEPOINT: mark a position within the transaction for later reference    #Temporary save point
COMMIT: permanently record the changes made by the current transaction   #Commit
ROLLBACK: cancel the changes made by the current transaction             #Roll back
ROLLBACK TO SAVEPOINT: cancel the changes made after the savepoint       #Return to the save point
RELEASE SAVEPOINT: delete the savepoint identifier                       #Delete the save point
SET AUTOCOMMIT: disable or enable the default autocommit mode for the current connection

8, Isolation level of transaction

Problems caused by transaction concurrency

We discussed transaction isolation earlier. To raise system throughput, multiple transactions should execute at the same time when there are multiple tasks to process; this is transaction concurrency. Once transactions run concurrently, conflicts over the same data inevitably arise. Let's look at the problems that can occur.

1. Lost update

Lost Update: two transactions update the same row, neither aware of the other, so one may overwrite the other's modification. It is like two people editing the same document at once: whoever saves last silently overwrites the earlier person's changes.
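A lost update can be sketched in a few lines of Python (the balances are made up). The second half shows a common application-layer remedy, an optimistic version check.

```python
# Lost update: both "transactions" read money=100, both write back a derived
# value, and the second write silently discards the first. Balances are made up.
row = {"money": 100}

a_read = row["money"]        # transaction A reads 100
b_read = row["money"]        # transaction B reads 100
row["money"] = a_read + 50   # A writes 150
row["money"] = b_read + 30   # B writes 130 -- A's +50 is lost
print(row["money"])          # 130, not the expected 180

# Optimistic version check: a stale writer is detected and must retry.
row = {"money": 100, "version": 0}
a_read, a_ver = row["money"], row["version"]
b_read, b_ver = row["money"], row["version"]
if row["version"] == a_ver:                          # A's check passes
    row.update(money=a_read + 50, version=a_ver + 1)
ok = row["version"] == b_ver                         # B's check fails
print(row["money"], ok)      # 150 False -> B must re-read and retry
```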

2. Dirty read

Dirty Reads: while one transaction is modifying a piece of data, another transaction reads that data and performs further operations based on it. Because the first transaction has not committed (and may roll back), processing based on the modified data can cause irreparable damage.

3. Non-repeatable read

Non-Repeatable Reads: again two transactions operate on the same data. One transaction reads a value at its start; another transaction then modifies it; when the first transaction reads the value again, it has changed. That is, the same read cannot be repeated with the same result.

4. Phantom read

Phantom Read: a similar scenario. At the start of a transaction, a query by some condition finds no rows; later, because of another transaction's insert, the same query does find rows, as if an illusion had appeared. This is called a phantom read.

1. Four isolation levels

First, lost updates must be handled at the application layer, because the database has no way to forbid users from updating a given row. The other three problems, however, can be solved by the database. If there is a solution, why define so many isolation levels?

As just discussed, raising performance and throughput has a price. To eliminate every side effect, simply queue the transactions and run them one at a time; dirty reads and phantom reads can then never occur, but the database becomes very slow. What can be done? All the vendors can do is offer several levels of strictness and let you pick the one that fits your business scenario. Hence, isolation levels.

Read uncommitted

Read uncommitted means data can be read before the writing transaction commits. Obviously this level lets you read other transactions' uncommitted data; if you act on such data and the other transaction later rolls back, the data becomes inconsistent and hard to trace. In short, the read-uncommitted level permits dirty reads.

Read committed

As the name suggests, data can only be read after the writing transaction commits. Suppose you pay with a bank card: before paying you see 2000 yuan on the card, but your wife, shopping on Taobao, completes her payment just ahead of you. When you pay, you are told the balance is insufficient, even though you clearly saw enough money on the card.

These are two transactions. Transaction A reads 2000 yuan at its start; transaction B then spends all the money on the card; when A finally re-checks the balance, the money is gone. Read committed thus solves the dirty-read problem, but it cannot solve the non-repeatable-read problem.

The default isolation level of SQL Server and Oracle is Read committed.

Repeatable read

The name says it: this level exists to solve the non-repeatable-read problem. Once transaction A has started, no matter how transaction B changes the data, A always reads the values as they were when A began. But a new problem appears: suppose transaction B changes the row with id 1 to id 2 and commits; transaction A does not know the id changed, and when A tries to insert a row with id 2, it finds that id 2 already exists. This is a phantom read.

MySQL's default isolation level is Repeatable read.

Serializable

This is the ultimate level: all transactions execute one after another, so there is no concurrency at all, and therefore no phantom reads, dirty reads or non-repeatable reads. But equally, concurrency becomes very poor. In the end the isolation level must be chosen for your business scenario; there is no best one, only the most suitable one.

The stricter the isolation level, the more computing resources it consumes and the lower the throughput. Usually a level that merely prevents non-repeatable reads solves most problems.

MySQL's default isolation level is REPEATABLE READ: reads are repeatable, but phantom reads can still occur.

2. Example demonstration of common problems of transaction concurrency

2.1 Dirty read

Dirty read: two transactions operate on the same database record in parallel, and one transaction can read the other's uncommitted data.

1) Modify the autocommit switch (temporarily effective)

MySQL transaction autocommit switch:
Enable autocommit:  set autocommit=1;
Disable autocommit: set autocommit=0;
View the switch:    show variables like '%autocommit%';

2) Modify the transaction isolation level of the database
-- Globally (shown here)
set global transaction isolation level read uncommitted;
-- For the current session
set session transaction isolation level read uncommitted;

3) Terminal 1 modifies the specified user's data (without committing the transaction)
---------- window A ------------------------------------------
-- Disable autocommit
set autocommit=0;
-- Query the specified user
select * from jiaoyi where id=1;
-- Modify the specified user's balance
update jiaoyi set money=500 where id=1;

4) Terminal 2 queries the user's information
---------- window B ------------------------------------------
-- Disable autocommit
set autocommit=0;
-- Query the specified user
select * from jiaoyi where id=1;
+------+--------+-------+
| id   | name   | money |
+------+--------+-------+
|    1 | qiudao |   500 |
+------+--------+-------+
1 row in set (0.00 sec)

Clearly, transaction A modified the data without committing, yet transaction B could already read A's uncommitted data. That is a dirty read. Data obtained this way cannot be trusted, so in practice no database is set to an isolation level that allows dirty reads.

One-sentence summary: a dirty read means one transaction reads data that another transaction has modified but not yet committed.
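The dirty-read scenario can be sketched as a toy row that serves either the committed value or a pending uncommitted one (a simplification, not InnoDB's actual mechanism):

```python
# A toy row that can serve either the committed value or a transaction's
# pending uncommitted value, depending on the isolation choice.
class Row:
    def __init__(self, value):
        self.committed = value
        self.uncommitted = None      # pending change of an open transaction

    def write(self, value):          # modify inside a transaction, no commit
        self.uncommitted = value

    def read(self, isolation):
        if isolation == "read uncommitted" and self.uncommitted is not None:
            return self.uncommitted  # dirty read: sees the pending change
        return self.committed        # read committed: sees committed data only

    def rollback(self):
        self.uncommitted = None

money = Row(1000)
money.write(500)                         # A updates but does not commit
print(money.read("read uncommitted"))    # 500  <- B sees A's dirty data
print(money.read("read committed"))      # 1000 <- B sees committed data only
money.rollback()                         # A rolls back: the 500 never existed
print(money.read("read uncommitted"))    # 1000
```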

2.2 Non-repeatable read

Having seen dirty reads, let's look at non-repeatable reads. For example, transaction A reads the same record several times within one transaction; meanwhile transaction B modifies the record A is reading and commits. A's later read returns B's committed data, so A's two reads of the same row are inconsistent.

1) Modify the transaction isolation level
-- Globally (shown here)
set global transaction isolation level read committed;
-- For the current session
set session transaction isolation level read committed;

2) Window 1 opens a transaction and queries the specified user
---------- window A ------------------------------------------
-- Disable autocommit
set autocommit=0;
-- Query the specified user
select * from jiaoyi where id=1;

3) Window 2 opens a transaction at the same time, modifies the data and commits
Window 1 first opens its transaction and queries the specified user; then window 2 opens a transaction, queries the same user, modifies the data and commits; finally window 1 queries the user again. Window 2's operations are as follows:

---------- window B ------------------------------------------
-- Disable autocommit
set autocommit=0;
-- Query the specified user
select * from jiaoyi where id=1;
-- Modify the specified user's balance
update jiaoyi set money=300 where id=1;
-- Commit the transaction
commit;

4) Window 1 queries the specified user again
---------- window A ------------------------------------------
select * from jiaoyi where id=1;

Transaction A's two queries return different data: the read is not repeatable. This happens under the read-committed level; MySQL's default repeatable-read level prevents it. One-sentence summary: a non-repeatable read means transaction A reads data that transaction B has modified and committed, so A's two reads return different results.

2.3 Phantom read

Above we learned what a non-repeatable read is; MySQL's default repeatable-read level prevents it. Now let's demonstrate the phantom read.

1) Modify the transaction isolation level
-- Globally (shown here)
set global transaction isolation level repeatable read;
-- For the current session
set session transaction isolation level repeatable read;

2) Window 1
---------- window A ------------------------------------------
-- Disable autocommit
set autocommit=0;

select * from jiaoyi where id=5;

###Window 1 finds no row with id 5. While it prepares the insert, window 2 inserts a row with id 5 and commits.

-- No result was found, so insert the row
insert into jiaoyi values(5,'cm',200);

-- Query id 5 again
select * from jiaoyi where id=5;

3) Window 2: transaction A queried for the record with id 5 and found nothing; before A inserts, transaction B inserts a record with id 5 and commits
---------- window B ------------------------------------------
-- Disable autocommit
set autocommit=0;
-- Insert a record
insert into jiaoyi values(5,'cm',200);
-- Commit the transaction
commit;

In transaction A above, inserting the row with id 5 keeps failing with a primary-key conflict, yet querying for id 5 again still returns nothing. This is a phantom read. It happens because, by design, queries inside a MySQL transaction read the pre-modification snapshot recorded in the undo log.

3. Modify isolation level

#View the isolation level
mysql> show variables like '%iso%';
+---------------+------------------+
| Variable_name | Value            |
+---------------+------------------+
| tx_isolation  | READ-UNCOMMITTED |
+---------------+------------------+
1 row in set (0.00 sec)

#Configure the isolation level permanently
[root@db01 ~]# vim /etc/my.cnf
[mysqld]
transaction-isolation = READ-UNCOMMITTED

9, Transaction logs

1. redo log

redo, as the name suggests, is the "redo log", one kind of transaction log.

1) Function

Within the ACID properties, the redo log implements "D", durability.
REDO records the change process of in-memory data pages.

Characteristic: WAL (Write-Ahead Logging), the log is written before the data.

2) REDO working process

#Execution steps of
update t1 set num=2 where num=1;
1)First, the data page holding the num=1 row of t1 is loaded into a memory buffer page
2)The MySQL instance changes num=1 to num=2 in the in-memory page
3)The change process from num=1 to num=2 is recorded in the redo memory area, i.e. the redo log buffer

#Steps when the transaction commits
1)The moment the commit command is issued, MySQL writes the redo log buffer to the redo log on disk
2)When the write succeeds, commit returns OK
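The WAL steps above can be sketched as a toy model in Python (dicts stand in for disk pages and the buffer pool; this is an illustration, not InnoDB's actual format):

```python
# Dicts stand in for disk pages, the buffer pool and the redo log file.
redo_log = []           # on-disk redo log (append-only)
disk_pages = {"t1.num": 1}

def update_page(buffer_pages, key, new):
    old = buffer_pages[key]
    redo_log.append((key, old, new))  # 1. record the change process first (WAL)
    buffer_pages[key] = new           # 2. then change the page in memory

buffer_pool = dict(disk_pages)        # page loaded into the buffer pool
update_page(buffer_pool, "t1.num", 2) # update t1 set num=2 where num=1;
# commit: the redo log is flushed to disk, then "ok" is returned.
# If the server crashes before the dirty page itself reaches disk,
# recovery replays the redo records over the old pages:

def recover(pages, log):
    for key, old, new in log:
        pages[key] = new
    return pages

print(recover(dict(disk_pages), redo_log))   # {'t1.num': 2}
```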

2. undo log

1) Function

undo, as the name suggests, is the "rollback log", another kind of transaction log.

Within the ACID properties, the undo log implements "A", atomicity. The C and I properties also rely partly on undo.

10, Locks in transactions

1. What is a lock? What is it for?

As the name suggests, a "lock" locks data during concurrent access.

Within the ACID properties, locks together with isolation levels implement "I", isolation.

2. Types of locks

Exclusive lock: guarantees data consistency when multiple transactions modify data (while one transaction is modifying a row, others cannot modify it)
Shared lock: guarantees that queries are not blocked while multiple transactions are working

Optimistic lock: multiple transactions may modify data at the same time; whoever commits first succeeds
Pessimistic lock: only one transaction at a time may modify the data

3. Multi-version concurrency control (MVCC)

1)Only modifying operations (exclusive locks) block each other; query operations (shared reads) are not blocked
2)An optimistic-lock style mechanism (whoever commits first wins)
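The MVCC idea can be sketched as a row that keeps a list of (txid, value) versions (a simplification; InnoDB actually reconstructs old versions from the undo log):

```python
# Each write appends a (txid, value) version; a reader sees the newest
# version that is not newer than its own snapshot, so reads never block writes.
class MVCCRow:
    def __init__(self, value):
        self.versions = [(0, value)]          # versions in ascending txid order

    def write(self, txid, value):
        self.versions.append((txid, value))   # writers never overwrite in place

    def read(self, snapshot_txid):
        visible = [v for t, v in self.versions if t <= snapshot_txid]
        return visible[-1]

row = MVCCRow(100)
row.write(txid=8, value=400)   # a later writer; it does not block readers

print(row.read(5))             # 100 -> a snapshot from txid 5 sees the old version
print(row.read(10))            # 400 -> a newer snapshot sees the update
```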

4. Lock granularity

MyISAM: table-level locks
InnoDB: row-level locks

Tags: Database

Posted by chandler on Thu, 14 Apr 2022 05:47:31 +0930