Phase VII module I Dubbo & zookeeper
Task 1: distributed technology Zookeeper
Course objectives:
1. Zookeeper overview
2. Zookeeper local mode installation
3. Zookeeper internal principle
4. Zookeeper in practice
1. Zookeeper overview
1.1 general
- Meituan, Ele.me, Taobao, 58.com and similar platforms are real-life versions of what zookeeper does
- Lao Sun opens a restaurant: how does everyone get to eat his food? He registers with Meituan, so that his restaurant appears in the Meituan app, where customers can place orders and complete a transaction
- Zookeeper is an open-source Apache project for distributed systems (many servers doing one job together) that provides coordination services for distributed applications
- In the big data ecosystem, zookeeper sits alongside Hadoop, Hive, Pig and other technologies
1.2 working mechanism
- Seen through design patterns, Zookeeper is a distributed service-management framework built on the observer pattern (one node does the work while the others watch it)
- It is responsible for storing and managing the data everyone cares about
○ it accepts registrations from observers; once that data changes,
○ Zookeeper responds by notifying the observers that registered for it
○ thereby realizing a Master/Slave-style management mode in the cluster
- Zookeeper = file system + notification mechanism
- Merchant onboarding: the restaurant registers on the platform
- Customers get the list of currently open restaurants
- A server (merchant) node goes offline
- Clients are notified of server nodes going online and offline
- Clients fetch the server list again and re-register the listener
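The register/notify flow above can be sketched as a plain observer pattern. This is a minimal toy model only; `ConfigRegistry` and its methods are hypothetical names, not ZooKeeper's API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy observer-pattern registry: it stores the data everyone cares about,
// accepts watcher registrations, and notifies registered watchers on change.
public class ConfigRegistry {
    private final Map<String, String> data = new HashMap<>();
    private final Map<String, List<Consumer<String>>> watchers = new HashMap<>();

    // Register an observer on a path
    public void watch(String path, Consumer<String> callback) {
        watchers.computeIfAbsent(path, k -> new ArrayList<>()).add(callback);
    }

    // Change the data and notify; like ZooKeeper, a watch fires only once
    public void set(String path, String value) {
        data.put(path, value);
        List<Consumer<String>> callbacks = watchers.remove(path);
        if (callbacks != null) callbacks.forEach(cb -> cb.accept(value));
    }

    public String get(String path) {
        return data.get(path);
    }

    public static void main(String[] args) {
        ConfigRegistry zk = new ConfigRegistry();
        zk.watch("/shops", v -> System.out.println("shop list changed: " + v));
        zk.set("/shops", "[laosun]"); // triggers the watcher exactly once
    }
}
```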
1.3 features
- What is the difference between "distributed" and "cluster"?
○ in both cases many machines work together; the specific difference:
○ example: my restaurant is getting more and more popular, so I have to hire more staff
▸ distributed: hire one chef, one waiter and one receptionist; the three do different jobs, but all of them work for the restaurant
▸ cluster: hire three waiters; the three do the same job
- Zookeeper is a cluster of one Leader and multiple Followers (like a pride of lions: one male and N females)
- As long as more than half of the nodes in the cluster survive, Zookeeper works normally (with 5 servers, 2 may fail; with 4 servers, it stops once 2 fail)
- Global data consistency: every server keeps an identical copy of the data, so whichever server a client connects to, the data is the same
- Atomic updates: a data update either succeeds completely or fails completely
- Near real time: within a bounded time window, clients can read the latest data
- Ordered updates: requests are executed one by one, in the order they were sent (1-2-3, never 3-2-1 or any other order)
1.4 data structure
- The ZooKeeper data model is structured much like a Linux file system: as a whole it is a tree, and each node on it is called a ZNode (ZooKeeper node)
- Each ZNode can store up to 1 MB of data by default, and every ZNode's path is unique
○ Metadata, also called "data about data", mainly describes the properties of data and supports functions such as indicating storage location, history, resource search and file records
1.5 application scenarios
- The services provided include: unified naming service, unified configuration management, unified cluster management, dynamic online and offline of server nodes, soft load balancing, etc
1.5.1 unified naming service
- In a distributed environment, applications or services usually need to be uniformly named for easy identification
- For example, the IP address of the server is not easy to remember, but the domain name is relatively easy to remember
1.5.2 unified configuration management
- In a distributed environment, keeping configuration files in sync is unavoidable
- With 1000 servers, propagating every configuration change by hand would drive the operations staff crazy; how can each change be synced quickly to every server?
- Leave configuration management to Zookeeper
1. Write the configuration information to a node of Zookeeper
2. Each client listens to this node
3. Once the data file in the node is modified, Zookeeper will notify each client server
1.5.3 dynamic online and offline of server nodes
- The client can get the changes of the server online and offline in real time
- On the Meituan app you can see in real time whether a merchant is open or has closed for the day
1.5.4 soft load balancing
- Zookeeper records how many requests each server has handled and lets the least-loaded server handle the newest client request (spreading the work evenly)
- The servers are all "its own children", so it must treat them impartially
1.6 download address
Image library address: http://archive.apache.org/dist/zookeeper/
- apache-zookeeper-3.6.0.tar.gz is the source release: you need to install Maven and then run mvn clean install and mvn javadoc:aggregate; the first command downloads and installs a great many jar packages and can take a long time
- apache-zookeeper-3.6.0-bin.tar.gz already ships with all the jar packages it needs
2. Zookeeper local mode installation
2.1 local mode installation
2.1.1 preparation before installation
- Install jdk
- Copy apache-zookeeper-3.6.0-bin.tar.gz to the /opt directory
- Unzip the installation package
[root@localhost opt]# tar -zxvf apache-zookeeper-3.6.0-bin.tar.gz
- rename
[root@localhost opt]# mv apache-zookeeper-3.6.0-bin zookeeper
2.1.2 configuration modification
- Create zkData and zkLog directories on the directory / opt/zookeeper /
[root@localhost zookeeper]# mkdir zkData
[root@localhost zookeeper]# mkdir zkLog
- Enter /opt/zookeeper/conf, copy the zoo_sample.cfg file and name the copy zoo.cfg
[root@localhost conf]# cp zoo_sample.cfg zoo.cfg
- Edit the zoo.cfg file and modify the dataDir path:
dataDir=/opt/zookeeper/zkData
dataLogDir=/opt/zookeeper/zkLog
2.1.3 operating Zookeeper
- Start Zookeeper
[root@localhost bin]# ./zkServer.sh start
- Check whether the process starts
[root@localhost bin]# jps
- QuorumPeerMain: it is the startup entry class of zookeeper cluster. It is used to load the configuration and start the QuorumPeer thread
- View status:
[root@localhost bin]# ./zkServer.sh status
- Start client
[root@localhost bin]# ./zkCli.sh
- Exit client
[zk: localhost:2181(CONNECTED) 0] quit
2.2 interpretation of configuration parameters
The parameters in Zookeeper's configuration file zoo.cfg are interpreted as follows:
- tickTime=2000: heartbeat interval between the Zookeeper server and its clients, in milliseconds
○ the basic time unit Zookeeper uses: servers (and clients and servers) exchange one heartbeat every tickTime milliseconds
- initLimit=10: initial Leader/Follower (LF) communication limit
○ the maximum number of heartbeat intervals tolerated between a Follower and the Leader while the connection is established at startup
○ 10 * 2000 ms (10 heartbeat intervals): if the Leader and a Follower exchange no heartbeat within this window, the connection is considered failed and the two are disconnected
- syncLimit=5: Leader/Follower (LF) sync communication limit
○ the maximum response time between the Leader and a Follower after the cluster has started; if a Follower does not respond within syncLimit * tickTime (here 10 seconds), the Leader considers it dead and removes it from the server list
- dataDir: data file directory (data persistence path)
○ where Zookeeper stores its data
- dataLogDir: log file directory
- clientPort=2181: client connection port
○ the port that listens for client connections
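Putting these parameters together, a minimal single-node zoo.cfg (using the paths from section 2.1.2) would look like:

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/zkData
dataLogDir=/opt/zookeeper/zkLog
clientPort=2181
```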
3. Internal principle of zookeeper
3.1 election mechanism (a frequent interview topic)
- Half mechanism: the cluster is usable as long as more than half of its machines survive; for this reason Zookeeper should be installed on an odd number of servers
- The configuration file designates no Master or Slave; at runtime one node becomes the Leader and the others become Followers, chosen on the fly by the internal election mechanism
- Suppose five servers start one after another:
○ Server1 votes first, for itself: 1 vote, far short of a majority, so it cannot become Leader; going with the flow, it transfers its vote to Server2, which has a larger id
○ Server2 also votes for itself; with Server1's vote it holds 2 votes, still not a majority; like Server1, it passes all its votes to Server3, which has a larger id
○ Server3 now holds the votes from Server1 and Server2 plus its own: 3 of 5, a majority, so it becomes the Leader
○ Server4 and Server5 vote for themselves, but they cannot overturn Server3's tally; they accept their fate and acknowledge Server3 as the Leader
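The vote counting in this walkthrough can be sketched as follows. This is a toy model of the startup rounds only, with hypothetical names; real ZooKeeper also compares zxids before server ids, and does not re-elect once a Leader exists:

```java
// Toy model of the startup election: servers 1..started (out of ensembleSize)
// are running; each first votes for itself, then every running server defers
// to the largest id seen so far, so that candidate holds `started` votes.
// A Leader emerges only with strictly more than half of ALL votes.
public class ElectionSim {
    // Returns the elected leader id, or -1 if there is no majority yet.
    public static int elect(int ensembleSize, int started) {
        int candidate = started;        // everyone defers to the largest started id
        int votesForCandidate = started; // it collects every vote cast so far
        return votesForCandidate > ensembleSize / 2 ? candidate : -1;
    }

    public static void main(String[] args) {
        System.out.println(elect(5, 2)); // -1: 2 of 5 votes, no majority yet
        System.out.println(elect(5, 3)); // 3: Server3 holds 3 of 5 votes
    }
}
```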
3.2 node type
- persistent:
○ PERSISTENT: the node still exists after the client disconnects from zookeeper
○ PERSISTENT_SEQUENTIAL: the node survives disconnection; when the znode is created, a sequence number is appended to its name. The sequence number is a monotonically increasing counter maintained by the parent node, e.g. znode0000000001, znode0000000002
- ephemeral:
○ EPHEMERAL: the node is deleted automatically once the client's session with the server ends
○ EPHEMERAL_SEQUENTIAL: the node is deleted on disconnection; when the znode is created, a sequence number is appended to its name, using the same parent-maintained counter
Note: the sequence number behaves like i++, similar to auto-increment in a database
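The suffix format can be illustrated with a small helper (hypothetical code, not ZooKeeper's implementation): the parent's counter is zero-padded to 10 digits and appended to the requested name, matching names like city0000000000 seen later in section 4.2:

```java
// Illustrative helper: ZooKeeper-style sequential names are the requested
// prefix plus the parent node's counter, zero-padded to 10 digits.
public class SeqName {
    public static String next(String prefix, int parentCounter) {
        return String.format("%s%010d", prefix, parentCounter);
    }

    public static void main(String[] args) {
        System.out.println(next("city", 0)); // city0000000000
        System.out.println(next("city", 2)); // city0000000002
    }
}
```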
3.3 listener principle (a frequent interview topic)
- When a Zookeeper client is created in the main method, two threads are started: one handles network communication, the other handles listening
- Watch registrations are sent to zookeeper over the network connection
- When zookeeper receives a registered watch, it immediately adds it to its watch list
- When zookeeper detects a data change or path change, it sends the event to the client's listener thread
○ common watches:
1. watch for node data changes: get path [watch]
2. watch for additions/removals of child nodes: ls path [watch]
- The listener thread then calls the process method internally (we implement the body of process)
3.4 data writing process
- If the Client wants to write data to ZooKeeper Server1, it must first send a write request
- If Server1 is not a Leader, Server1 will further forward the received request to the Leader.
- The Leader will broadcast the write request to each Server. After each Server writes successfully, the Leader will be notified.
- When the Leader receives more than half of the Server data and writes it successfully, it means that the data is written successfully.
- Then, the Leader will tell Server1 that the data has been written successfully.
- Server1 will notify the Client that the data has been written successfully and the whole process is over
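The "more than half" commit rule in this flow can be sketched as a one-line predicate (hypothetical helper, not ZooKeeper code); the Leader's own successful write counts as one of the acks:

```java
// Sketch of the quorum-commit rule: a write succeeds once a strict majority
// of the ensemble has acknowledged it.
public class QuorumWrite {
    public static boolean committed(int acks, int ensembleSize) {
        return acks > ensembleSize / 2;
    }

    public static void main(String[] args) {
        System.out.println(committed(2, 5)); // false: 2 of 5 is not a majority
        System.out.println(committed(3, 5)); // true: 3 of 5, the write commits
    }
}
```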
4. Zookeeper in practice (development focus)
4.1 Distributed installation and deployment
Cluster plan: set up one server first, then clone it twice to form a three-node cluster!
4.1.1 installation of zookeeper
Refer to section 2.1
4.1.2 configure server number
- Create myid file in / opt/zookeeper/zkData
[root@localhost zkData]# vim myid
- Add the number corresponding to the server in the file: 1
- The other two servers correspond to 2 and 3 respectively
4.1.3 configure the zoo.cfg file
- Open the zoo.cfg file and add the following configuration
#######################cluster##########################
server.1=192.168.204.141:2888:3888
server.2=192.168.204.142:2888:3888
server.3=192.168.204.143:2888:3888
- Configuration parameter format: server.A=B:C:D
○ A: a number indicating the server number (the data in the / opt/zookeeper/zkData/myid file configured in the cluster mode is the value of a)
○ B: ip address of the server
○ C: the port to exchange information with the Leader server in the cluster
○ D: the election port; if the cluster's Leader hangs, this port is used by the remaining servers to communicate with each other while electing a new Leader
4.1.4 configure the other two servers
- Under the virtual machine data directory vms, create zk02
- Copy this server's .vmx file and all of its .vmdk files into zk02
- Virtual machine - > file - > Open (select the. vmx file under zk02)
- Open this virtual machine, pop up a dialog box, and select "I have copied this virtual machine"
- After entering the system, modify the ip in linux and change the value in / opt/zookeeper/zkData/myid to 2
Repeat the above steps for the third server, zk03
4.1.5 cluster operation
- The firewall of each server must be turned off
[root@localhost bin]# systemctl stop firewalld.service
- Start the first one
[root@localhost bin]# ./zkServer.sh start
- View status
[root@localhost bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Error contacting service. It is probably not running.
Note: the cluster is not yet usable because more than half of the servers are not running (it will also fail if the firewall is not turned off)
- After starting the second server:
○ the status of the first server shows Mode: follower
○ the status of the second server shows Mode: leader
4.2 client command line operation
- Start client
[root@localhost bin]# ./zkCli.sh
- Show all operation commands
help
- View the content contained in the current znode
ls /
- View detailed data of current node
Older versions of zookeeper used ls2 /; it has been replaced by the newer command
ls -s /
○ cZxid: the transaction id that created the node
▸ every change to the ZooKeeper state receives a stamp in the form of a zxid (ZooKeeper transaction id)
▸ zxids impose a total order on all modifications in ZooKeeper
▸ each modification has a unique zxid; if zxid1 is less than zxid2, then zxid1 happened before zxid2
○ ctime: number of milliseconds created (since 1970)
○ mZxid: the last updated transaction zxid
○ mtime: the number of milliseconds last modified (since 1970)
○ pZxid: the last updated child node zxid
○ cversion: version number (modification count) of the child-node list
○ dataVersion: data change version number
○ aclVersion: permission version number
○ ephemeralOwner: if this is an ephemeral node, the session id of the znode's owner; 0 if it is not an ephemeral node
○ dataLength: data length
○ numChildren: number of child nodes
- Create 2 ordinary nodes respectively
○ create two nodes in the root directory, china and usa
create /china
create /usa
○ create a Russian node in the root directory and save "Putin" data to the node
create /ru "pujing"
○ multi level node creation
▸ under /japan, create a Tokyo node with the value "hot"
▸ /japan must be created in advance, otherwise the error "node does not exist" is reported
create /japan/Tokyo "hot"
- Get the value of the node
get /japan/Tokyo
- Create an ephemeral node: after creating it successfully, quit the client, reconnect, and the ephemeral node is gone
create -e /uk
ls /
quit
ls /
- Create node with sequence number
○ create three city nodes under /ru
create -s /ru/city    # execute three times
ls /ru
[city0000000000, city0000000001, city0000000002]
○ if there is no serial number node, the serial number will increase from 0.
○ if there are already 2 nodes under the original node, the sorting starts from 2, and so on
- Modify node data value
set /japan/Tokyo "too hot"
- Value change or child node change of listening node (path change)
1. Register and listen to the data changes of the / usa node on the server3 host
addWatch /usa
2. Modify the data of / usa on Server1 host
set /usa "telangpu"
3. Server3 will respond immediately
WatchedEvent state:SyncConnected type:NodeDataChanged path:/usa
4. Create a child node under /usa on the Server1 host
create /usa/NewYork
5. Server3 will respond immediately
WatchedEvent state:SyncConnected type:NodeCreated path:/usa/NewYork
- Delete node
delete /usa/NewYork
- Recursively delete a node (a non-empty node together with its child nodes)
deleteall /ru
this deletes not only /ru but also all child nodes under /ru
4.3 API Application
4.3.1 IDEA environment construction
- Create a Maven project
- Add the dependencies to the pom file
<dependencies>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.6.0</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
    </dependency>
</dependencies>
- Create log4j.properties under resources
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/zk.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
4.3.2 create ZooKeeper client
public class TestZK {
    // Cluster ip
    private String connStr = "192.168.249.81:2181,192.168.249.82:2181,192.168.249.83:2181";
    /* session timeout of 60 seconds: do not make it too small, because connecting to
       zookeeper and loading the cluster environment adds some latency; if the client
       is not fully created before you start operating on nodes, an error is thrown */
    private int sessionTimeout = 60000;

    @Test
    public void init() throws IOException {
        // Create the listener
        Watcher watcher = new Watcher() {
            public void process(WatchedEvent watchedEvent) {
            }
        };
        // Create the zookeeper client
        ZooKeeper zk = new ZooKeeper(connStr, sessionTimeout, watcher);
    }
}
4.3.3 create node
- An ACL object is a pair of Id and permission
○ it indicates which ids (Who), after passing which authentication scheme (How), are allowed to perform which operations (What): Who, How, What
○ permission (What) is a bit code represented by int, and each bit represents the allowable state of a corresponding operation.
○ similar to Linux file permissions, except there are five operations: CREATE, READ, WRITE, DELETE and ADMIN (ADMIN being the permission to change the ACL)
▸ OPEN_ACL_UNSAFE: a completely open node that allows any operation (the most commonly used; the other presets are used far less)
▸ READ_ACL_UNSAFE: create a read-only node
▸ CREATOR_ALL_ACL: only the creator has full permissions
@Before
public void init() throws IOException {
    // omitted: create the zKcli client as in 4.3.2
}

@Test
public void createNode() throws Exception {
    String nodeCreated = zKcli.create("/lagou", "laosun".getBytes(),
            Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    // Parameter 1: path of the node to create
    // Parameter 2: node data
    // Parameter 3: node permissions
    // Parameter 4: node type
    System.out.println("nodeCreated = " + nodeCreated);
}
4.3.4 value of query node
@Test
public void find() throws Exception {
    byte[] bs = zKcli.getData("/lagou", false, new Stat()); // throws if the path does not exist
    String data = new String(bs);
    System.out.println("Data found: " + data);
}
4.3.5 modifying node values
@Test
public void update() throws Exception {
    // check the node details first to obtain the current dataVersion = 0
    Stat stat = zKcli.setData("/lagou", "laosunA".getBytes(), 0);
    System.out.println(stat);
}
4.3.6 deleting nodes
@Test
public void delete() throws Exception {
    // check the node details first to obtain the current dataVersion = 1
    zKcli.delete("/lagou", 1);
    System.out.println("Delete succeeded!");
}
4.3.7 obtaining child nodes
@Test
public void getChildren() throws Exception {
    List<String> children = zKcli.getChildren("/", false); // false: do not listen
    for (String child : children) {
        System.out.println(child);
    }
}
4.3.8 monitor changes of child nodes
@Test
public void getChildren() throws Exception {
    List<String> children = zKcli.getChildren("/", true); // true: register the listener
    for (String child : children) {
        System.out.println(child);
    }
    // keep the thread alive and wait for the watch notification
    System.in.read();
}
- When the program is running, we create a node under linux
- The IDEA console will respond: NodeChildrenChanged –/
4.3.9 judge whether Znode exists
@Test
public void exist() throws Exception {
    Stat stat = zKcli.exists("/lagou", false);
    System.out.println(stat == null ? "does not exist" : "exists");
}
4.4 case - simulated meituan merchants online and offline
4.4.1 requirements
- Simulate the Meituan service platform: notify when a merchant opens for business and when it closes
- Create the / meituan node under the root node in advance
4.4.2 merchant services
public class ShopServer {
    private static String connectString = "192.168.204.141:2181,192.168.204.142:2181,192.168.204.143:2181";
    private static int sessionTimeout = 60000;
    private ZooKeeper zk = null;

    // Create a client connection to zk
    public void getConnect() throws IOException {
        zk = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
            public void process(WatchedEvent event) {
            }
        });
    }

    // Register with the cluster
    public void register(String shopName) throws Exception {
        // Only an EPHEMERAL_SEQUENTIAL node gets a shop number: Shop1, Shop2 ...
        String create = zk.create("/meituan/Shop", shopName.getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println("[" + shopName + "] Open! " + create);
    }

    // Business logic
    public void business(String shopName) throws Exception {
        System.out.println("[" + shopName + "] In business ...");
        System.in.read();
    }

    public static void main(String[] args) throws Exception {
        ShopServer shop = new ShopServer();
        // 1. Connect to the zookeeper cluster (contact Meituan)
        shop.getConnect();
        // 2. Register the server node (join Meituan)
        shop.register(args[0]);
        // 3. Business logic (do business)
        shop.business(args[0]);
    }
}
4.4.3 customers
public class Customers {
    private static String connectString = "192.168.204.141:2181,192.168.204.142:2181,192.168.204.143:2181";
    private static int sessionTimeout = 60000;
    private ZooKeeper zk = null;

    // Create a client connection to zk
    public void getConnect() throws IOException {
        zk = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
            public void process(WatchedEvent event) {
                // Fetch the merchant list again on every change
                try {
                    getShopList();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }

    // Get the server (merchant) list
    public void getShopList() throws Exception {
        // 1. get the child nodes and re-register the watch on the parent node
        List<String> shops = zk.getChildren("/meituan", true);
        // 2. list holding the server information
        ArrayList<String> shoplist = new ArrayList<>();
        // 3. traverse all child nodes and read each node's data
        for (String shop : shops) {
            byte[] data = zk.getData("/meituan/" + shop, false, new Stat());
            shoplist.add(new String(data));
        }
        // 4. print the server list
        System.out.println(shoplist);
    }

    // Business logic
    public void business() throws Exception {
        System.out.println("The customer is browsing the merchants ...");
        System.in.read();
    }

    public static void main(String[] args) throws Exception {
        // 1. Get the zk connection (customer opens Meituan)
        Customers client = new Customers();
        client.getConnect();
        // 2. Get the children of /meituan (fetch the merchant list from Meituan)
        client.getShopList();
        // 3. Business flow starts (compare merchants, place an order)
        client.business();
    }
}
- Run the customer class and you will get the list of merchants
- First, add a merchant in linux, and then observe the console output of the client (the latest merchant list will be updated immediately). If you add more, the merchant list will also be output in real time
create /meituan/KFC "KFC"
create /meituan/BKC "BurgerKing"
create /meituan/baozi "baozi"
- If you delete a merchant in linux, you will also see the latest list of merchants after the merchant is removed in real time on the console of the client
delete /meituan/baozi
- Run the merchant service class (run in the form of main method with parameters)
4.5 case - distributed lock - commodity spike
- Locks: we met them in multithreading; a lock keeps a resource from being accessed by others while it is in use
○ my diary must not be read by anyone else, so I lock it in the safe
○ once I open the lock and take the diary out, others can use the safe
- The "herd effect" of a naive lock on zookeeper: 1000 clients try to create the same node, only one succeeds, and the other 999 must wait, all waking at once whenever the lock is released
- Sheep are a scattered organization: usually they mill about aimlessly, but as soon as one sheep moves, the others charge after it without thinking, ignoring the wolf nearby and the better grass not far away. The "herd effect" is a metaphor for this follow-the-crowd mentality, which easily leads to blind obedience, and blind obedience often ends in scams or failure
- To avoid the herd effect, zookeeper's distributed lock works differently:
- Every request creates an ephemeral sequential node under /lock; rest assured, zookeeper numbers and orders them for you
- Each client then checks whether its node is the smallest one under /lock:
1. if yes, it holds the lock (creating the node == acquiring it)
2. if not, it watches only the node immediately ahead of its own
- The lock holder runs its business logic, then releases the lock by deleting its node; only the next node in line is notified (the one ahead of you "dies" and you become the smallest)
- Repeat step 2
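The core decision in these steps can be sketched as a pure function (hypothetical helper, not a full lock client): sort the sequential children, hold the lock if yours is smallest, otherwise watch exactly the one node ahead of yours, so a release wakes a single waiter instead of the whole herd:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Herd-free lock ordering: among the sequential children of /lock, the client
// with the smallest sequence number holds the lock; everyone else watches only
// its immediate predecessor.
public class LockOrder {
    // Returns null if `mine` is the smallest child (lock acquired),
    // otherwise the name of the single node this client should watch.
    public static String nodeToWatch(List<String> children, String mine) {
        List<String> sorted = new ArrayList<>(children);
        Collections.sort(sorted); // zero-padded suffixes sort lexicographically
        int i = sorted.indexOf(mine);
        return i <= 0 ? null : sorted.get(i - 1);
    }

    public static void main(String[] args) {
        List<String> kids = Arrays.asList("seq-0000000003", "seq-0000000001", "seq-0000000002");
        System.out.println(nodeToWatch(kids, "seq-0000000001")); // null -> holds the lock
        System.out.println(nodeToWatch(kids, "seq-0000000003")); // seq-0000000002
    }
}
```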
Implementation steps
1. Initialize the database
Create the database zkproduct and use the default character set utf8
-- product table
create table product(
    id int primary key auto_increment, -- product id
    product_name varchar(20) not null, -- product name
    stock int not null,                -- stock
    version int not null               -- version
);
insert into product (product_name,stock,version) values('lucky charm - empty the cart - grand prize',5,0);

-- order table
create table `order`(
    id varchar(100) primary key, -- order id
    pid int not null,            -- product id
    userid int not null          -- user id
);
2. Build the project
Set up an SSM (Spring + SpringMVC + MyBatis) project in which each purchase decrements the stock in the product table by 1 and adds one row to the order table
<packaging>war</packaging>
<properties>
    <spring.version>5.2.7.RELEASE</spring.version>
</properties>
<dependencies>
    <!-- Spring -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-beans</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-jdbc</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <!-- Mybatis -->
    <dependency>
        <groupId>org.mybatis</groupId>
        <artifactId>mybatis</artifactId>
        <version>3.5.5</version>
    </dependency>
    <dependency>
        <groupId>org.mybatis</groupId>
        <artifactId>mybatis-spring</artifactId>
        <version>2.0.5</version>
    </dependency>
    <!-- Connection pool -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid</artifactId>
        <version>1.1.10</version>
    </dependency>
    <!-- Database -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>8.0.20</version>
    </dependency>
    <!-- junit -->
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
        <scope>test</scope>
    </dependency>
</dependencies>
<build>
    <plugins>
        <!-- maven embedded tomcat plugin -->
        <plugin>
            <groupId>org.apache.tomcat.maven</groupId>
            <!-- apache currently provides only tomcat6 and tomcat7 plugins -->
            <artifactId>tomcat7-maven-plugin</artifactId>
            <configuration>
                <port>8001</port>
                <path>/</path>
            </configuration>
            <executions>
                <execution>
                    <!-- run the service after packaging -->
                    <phase>package</phase>
                    <goals>
                        <goal>run</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:mvc="http://www.springframework.org/schema/mvc"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/context
       http://www.springframework.org/schema/context/spring-context.xsd
       http://www.springframework.org/schema/tx
       http://www.springframework.org/schema/tx/spring-tx.xsd">
    <!-- 1. Scan packages for annotations -->
    <context:component-scan base-package="controller,service,mapper"/>
    <!-- 2. Create the connection-pool data source -->
    <bean id="dataSource" class="com.alibaba.druid.pool.DruidDataSource" destroy-method="close">
        <property name="url" value="jdbc:mysql://192.168.204.131:3306/zkproduct?serverTimezone=GMT" />
        <property name="driverClassName" value="com.mysql.jdbc.Driver" />
        <property name="username" value="root" />
        <property name="password" value="123123" />
        <property name="maxActive" value="10" />
        <property name="minIdle" value="5" />
    </bean>
    <!-- 3. Create the SqlSessionFactory and wire in the data source -->
    <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
        <property name="dataSource" ref="dataSource"></property>
        <property name="configLocation" value="classpath:mybatis/mybatisconfig.xml"></property>
    </bean>
    <!-- 4. Tell the spring container which files hold the database statements -->
    <!-- the mapper.xDao interfaces correspond to resources/mapper/xDao.xml -->
    <bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
        <property name="basePackage" value="mapper"></property>
    </bean>
    <!-- 5. Associate the data source with the transaction manager -->
    <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="dataSource"></property>
    </bean>
    <!-- 6. Enable annotation-driven transactions -->
    <tx:annotation-driven/>
</beans>
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
         http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">
    <servlet>
        <servlet-name>springMVC</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>classpath:spring/spring.xml</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
        <async-supported>true</async-supported>
    </servlet>
    <servlet-mapping>
        <servlet-name>springMVC</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>
</web-app>
@Mapper
@Component
public interface OrderMapper {
    // Generate an order
    @Insert("insert into `order` (id,pid,userid) values (#{id},#{pid},#{userid})")
    int insert(Order order);
}
@Mapper
@Component
public interface ProductMapper {
    // Query a product (to read its stock)
    @Select("select * from product where id = #{id}")
    Product getProduct(@Param("id") int id);

    // Decrement the stock
    @Update("update product set stock = stock-1 where id = #{id}")
    int reduceStock(@Param("id") int id);
}
@Service
public class OrderServiceImpl implements OrderService {
    @Autowired
    ProductMapper productMapper;
    @Autowired
    OrderMapper orderMapper;

    @Override
    public void reduceStock(int id) throws Exception {
        // 1. Obtain the inventory
        Product product = productMapper.getProduct(id);
        // Simulate network delay
        Thread.sleep(1000);
        if (product.getStock() <= 0)
            throw new RuntimeException("It's all gone!");
        // 2. Reduce the inventory
        int i = productMapper.reduceStock(id);
        if (i == 1) {
            Order order = new Order();
            order.setId(UUID.randomUUID().toString());
            order.setPid(id);
            order.setUserid(101);
            orderMapper.insert(order);
        } else {
            throw new RuntimeException("Inventory reduction failed, please try again!");
        }
    }
}
@Controller
public class ProductAction {
    @Autowired
    private OrderService orderService;

    @GetMapping("/product/reduce")
    @ResponseBody
    public Object reduceStock(int id) throws Exception {
        orderService.reduceStock(id);
        return "ok";
    }
}
3. Start up test
- Start two projects with port numbers 8001 and 8002 respectively
- Using nginx for load balancing
upstream sga {
    server 192.168.204.1:8001;
    server 192.168.204.1:8002;
}
server {
    listen       80;
    server_name  localhost;
    #charset koi8-r;
    #access_log  logs/host.access.log  main;
    location / {
        proxy_pass http://sga;
        root  html;
        index index.html index.htm;
    }
}
- Use JMeter to simulate 10 http requests within 1 second
- Download address: http://jmeter.apache.org/download_jmeter.cgi
- Check the test results and all 10 requests are successful
- Check the database: the stock has become -5 (a data error caused by concurrency)
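The negative stock above is a classic check-then-act race: both requests read the stock, both pass the `stock > 0` check, and both decrement. A minimal sketch (not the course project's code; the latch is only there to force the unlucky interleaving deterministically):

```java
import java.util.concurrent.CountDownLatch;

public class OversellDemo {
    static int stock;
    static final Object dbLock = new Object();

    // Returns the final stock after two concurrent buyers check-then-decrement one item.
    static int run() throws InterruptedException {
        stock = 1;                                   // one item left in the product table
        CountDownLatch bothRead = new CountDownLatch(2);
        Runnable buyer = () -> {
            int seen = stock;                        // 1. read the stock (like getProduct)
            bothRead.countDown();
            try {
                bothRead.await();                    // simulate the 1s delay: both read before either writes
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            if (seen > 0) {                          // 2. both threads saw stock == 1, so both pass the check
                synchronized (dbLock) {              // the UPDATE itself is atomic, but the check used stale data
                    stock = stock - 1;               // 3. both decrement: the stock goes negative
                }
            }
        };
        Thread t1 = new Thread(buyer);
        Thread t2 = new Thread(buyer);
        t1.start(); t2.start();
        t1.join();  t2.join();
        return stock;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("final stock = " + run()); // final stock = -1
    }
}
```

A JVM-local `synchronized` cannot fix this across two tomcat instances, which is why a distributed lock (below) is needed.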
4. zookeeper client provided by apache
Implementing a distributed lock against zookeeper's native API is very troublesome, so we use a zookeeper client provided by apache instead:
Curator: http://curator.apache.org/
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.2.0</version> <!-- the version most recommended by the community -->
</dependency>
curator-recipes is the all-in-one curator artifact: it pulls in the zookeeper client and the curator framework modules.
5. Add the logic code of distributed lock in the control layer
@Controller
public class ProductAction {
    @Autowired
    private ProductService productService;

    private static String connectString =
            "192.168.204.141:2181,192.168.204.142:2181,192.168.204.143:2181";

    @GetMapping("/product/reduce")
    @ResponseBody
    public Object reduce(int id) throws Exception {
        // Retry strategy (wait 1000 ms between tries, up to 3 tries)
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        // 1. Create the curator client object
        CuratorFramework client = CuratorFrameworkFactory.newClient(connectString, retryPolicy);
        client.start();
        // 2. Create an inter-process mutex based on the client
        InterProcessMutex lock = new InterProcessMutex(client, "/product_" + id);
        try {
            // 3. Acquire the lock
            lock.acquire();
            productService.reduceStock(id);
        } catch (Exception e) {
            if (e instanceof RuntimeException) {
                throw e;
            }
        } finally {
            // 4. Release the lock
            lock.release();
        }
        return "ok";
    }
}
6. Test again and solve the concurrency problem!
Task 2: distributed system architecture solution - Dubbo
Course objectives:
1. dubbo overview
2. Quick start
3. Monitoring center
4. Comprehensive actual combat
1. dubbo overview
1.1 what is a distributed system?
- Definition of distributed system principle and model:
○ "distributed system is a collection of several independent computers, which are like a single related system to users"
○ distributed system is a software system based on network.
○ simply put: multiple (different responsibilities) people work together to accomplish one thing!
○ no single server can handle the data throughput of Taobao's Double 11; it takes many servers working together
- The saying "three cobblers with their wits combined surpass Zhuge Liang" is a true portrayal of a distributed system
1.1.1 single application architecture
- When the website traffic is very small, only one application is needed to deploy all functions together (all businesses are put in one tomcat), so as to reduce the deployment nodes and costs;
- At this time, the data access framework (ORM) used to simplify the workload of addition, deletion, modification and query is the key;
- For example, the cashier system of a supermarket and the employee management system of a company
ORM: Object Relational Mapping
- advantage
○ fast development of small projects and low cost
○ simple structure
○ easy to test
○ easy to deploy
- Disadvantages
○ large projects are hard to develop and maintain, and maintenance costs are high
○ adding new business is difficult
○ core business and edge business are mixed together, and problems affect each other
1.1.2 vertical application architecture
- As traffic gradually grows, the speedup gained by adding machines to a single application gets smaller and smaller. Split the application into several unrelated applications to improve efficiency;
- The large module is divided into several unrelated small modules according to the mvc layered mode, and each small module has an independent server
- At this time, the web framework (MVC) used to accelerate the development of front-end pages is the key; Because each app has its own page
MVC: Model View Controller
- Disadvantages:
○ it is impossible to have no intersection between modules, and the common modules cannot be reused, which is a waste of development
1.1.3 distributed service architecture
- When there are more and more vertical applications, the interaction between applications is inevitable. Extract the core business as an independent business, and gradually form a stable service center, so that the front-end application can respond to the changing market demand more quickly;
- At this time, the remote call of distributed service framework (RPC) for users to improve business reuse and integration is the key;
RPC: independent application servers call each other via RPC (Remote Procedure Call)
- Logistics service is not busy, with 100 servers; Goods and services are particularly busy, and there are also 100 servers;
○ how to optimize the allocation of resources? ↓
1.1.4 flow computing architecture
- When there are more and more services, problems such as capacity evaluation and waste of small service resources gradually appear. At this time, it is necessary to add a dispatching center to manage the cluster capacity in real time based on the access pressure and improve the cluster utilization rate;
- At this time, the resource scheduling and Governance Center (SOA) for improving machine utilization is the key;
SOA: Service Oriented Architecture, which is simply understood as "service governance", such as "dispatcher" of bus station
1.2 introduction to Dubbo
- Dubbo is a distributed service framework and an open source project of Alibaba. It is now handed over to apache for maintenance
- Dubbo is committed to improving the performance and transparency of RPC remote service invocation scheme and SOA Service governance scheme
- In short, dubbo is a service framework. If there are no distributed requirements, it doesn't need to be used
1.2.1 RPC
- RPC [Remote Procedure Call] refers to Remote Procedure Call, which is a way of inter process communication
- Basic communication principle of RPC
1. Serialize objects on the client side
2. The underlying communication framework uses netty (socket based on tcp protocol) to send serialized objects to the service provider
3. After the service provider obtains the data file through socket, it deserializes to obtain the object to be operated
4. After the object data operation is completed, the new object is serialized and returned to the client through the socket of the service provider
5. The client obtains the serialized data and then deserializes it to obtain the latest data object. So far, a request is completed
- RPC has two core modules: Communication (socket) and serialization.
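The serialization half of RPC can be sketched with the JDK's built-in object serialization (a stand-in for whatever codec the framework actually uses; the `Order` payload and method names here are illustrative, not from the course project). The bytes produced by `serialize` are what netty would put on the tcp socket, and `deserialize` is what the provider does in step 3:

```java
import java.io.*;

public class RpcSerializationDemo {
    // Hypothetical payload object; real dubbo services ship their own DTOs
    static class Order implements Serializable {
        final String id;
        final int pid;
        Order(String id, int pid) { this.id = id; this.pid = pid; }
    }

    // Client side: serialize the object into bytes (step 1)
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    // Server side: deserialize the bytes back into an object (step 3)
    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Order sent = new Order("o-1", 42);
        byte[] wire = serialize(sent);               // bytes on the wire
        Order received = (Order) deserialize(wire);  // provider's reconstructed copy
        System.out.println(received.id + "/" + received.pid);
    }
}
```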
1.2.2 node role
node | Role description |
---|---|
Provider | Service provider (bath center) |
Consumer | Service consumer (guest) |
Registry | Registration Center for service registration and discovery (convenience service center, all hotels and entertainment places have been registered in this center) |
Monitor | Statistics Center of monitoring service (count the number of times the service is called) |
Container | Service operation container (barbecue street, bath Street) |
1.2.3 calling relationship
- The service container is responsible for starting, loading and running the service provider;
- When the service provider starts, it registers its own services with the registry;
- When the service consumer starts, he subscribes to the service he needs from the registration center;
- The registry returns the service provider address list to the consumer; if there is any change, the registry pushes the change data to the consumer over the long connection;
- The service consumer selects one provider from the provider address list based on the soft load balancing algorithm to call. If the call fails, select another one to call;
- Service consumers and providers accumulate call counts and call durations in memory, and send the statistics to the monitoring center once a minute;
2. Quick start
2.1 Registration Center
2.1.1 Zookeeper
- It is officially recommended to use the zookeeper registration center;
- The registration center is responsible for the registration and search of service address, which is equivalent to directory service;
- Service providers and consumers only interact with the registry at startup; the registry does not forward requests, so its load is small;
- Zookeeper is a sub project of apache hadoop. It is a tree-structured directory service that supports change push, which makes it well suited as dubbo's service registry. It is industrial-strength and can be used in production environments;
dubbo is not only the job seeker, but also the recruitment unit, and zookeeper is the talent market / recruitment website;
2.1.2 installation
1. Install jdk
2. Copy apache-zookeeper-3.6.0-bin.tar.gz to the /opt directory
3. Unzip the installation package
[root@localhost opt]# tar -zxvf apache-zookeeper-3.6.0-bin.tar.gz
4. Rename
[root@localhost opt]# mv apache-zookeeper-3.6.0-bin zookeeper
5. Create zkData and zkLog directories on the directory / opt/zookeeper /
[root@localhost zookeeper]# mkdir zkData [root@localhost zookeeper]# mkdir zkLog
6. Enter /opt/zookeeper/conf, copy the zoo_sample.cfg file, and name the copy zoo.cfg
[root@localhost conf]# cp zoo_sample.cfg zoo.cfg
7. Edit the zoo.cfg file and modify the dataDir path:
dataDir=/opt/zookeeper/zkData dataLogDir=/opt/zookeeper/zkLog
8. Start Zookeeper
[root@localhost bin]# ./zkServer.sh start
9. View status:
[root@localhost bin]# ./zkServer.sh status
2.2 service provider
1. An empty maven project
2. Just provide a service interface
2.2.1 service provider's pom.xml
Please use exactly the following versions for all dependencies
<packaging>war</packaging>

<properties>
    <spring.version>5.0.6.RELEASE</spring.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-beans</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context-support</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-tx</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <!-- dubbo -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>dubbo</artifactId>
        <version>2.5.7</version>
    </dependency>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.4.6</version>
    </dependency>
    <dependency>
        <groupId>com.github.sgroschupf</groupId>
        <artifactId>zkclient</artifactId>
        <version>0.1</version>
    </dependency>
    <dependency>
        <groupId>javassist</groupId>
        <artifactId>javassist</artifactId>
        <version>3.11.0.GA</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.tomcat.maven</groupId>
            <artifactId>tomcat7-maven-plugin</artifactId>
            <configuration>
                <port>8001</port>
                <path>/</path>
            </configuration>
            <executions>
                <execution>
                    <!-- After packaging, run the service -->
                    <phase>package</phase>
                    <goals>
                        <goal>run</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
2.2.2 service party interface
public interface HelloService {
    String sayHello(String name);
}
2.2.3 realization of service provider
@com.alibaba.dubbo.config.annotation.Service
public class HelloServiceImpl implements HelloService {
    @Override
    public String sayHello(String name) {
        return "Hello," + name + "!!!";
    }
}
2.2.4 service provider's spring.xml configuration file
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:dubbo="http://code.alibabatech.com/schema/dubbo"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://code.alibabatech.com/schema/dubbo
                           http://code.alibabatech.com/schema/dubbo/dubbo.xsd">
    <!-- 1. The service provider's alias in zookeeper -->
    <dubbo:application name="dubbo-server"/>
    <!-- 2. Address of the registry -->
    <dubbo:registry address="zookeeper://192.168.204.141:2181"/>
    <!-- 3. Scan the package whose classes will be published as service providers -->
    <dubbo:annotation package="service.impl"/>
</beans>
2.2.5 service provider's web.xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         id="WebApp_ID" version="3.1">
    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:spring/spring.xml</param-value>
    </context-param>
</web-app>
2.3 service consumers
2.3.1 consumer's pom.xml
Consistent with the service provider, you only need to modify the port of tomcat to 8002
2.3.2 consumer Controller
@RestController
public class HelloAction {
    @com.alibaba.dubbo.config.annotation.Reference
    private HelloService hs;

    @RequestMapping("hello")
    @ResponseBody
    public String hello(String name) {
        return hs.sayHello(name);
    }
}
2.3.3 interface of consumer
Note:
The controller depends on HelloService, so we create the interface on the consumer side too;
We do not need to implement it here, because the service side provides the implementation for us!
public interface HelloService {
    String sayHello(String name);
}
2.3.4 consumer's web.xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         id="WebApp_ID" version="3.1">
    <servlet>
        <servlet-name>springmvc</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>classpath:spring/spring.xml</param-value>
        </init-param>
    </servlet>
    <servlet-mapping>
        <servlet-name>springmvc</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>
</web-app>
2.3.5 consumer's springmvc.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:dubbo="http://code.alibabatech.com/schema/dubbo"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://code.alibabatech.com/schema/dubbo
                           http://code.alibabatech.com/schema/dubbo/dubbo.xsd">
    <!-- The name of the dubbo application, usually the project name -->
    <dubbo:application name="dubbo-consumer"/>
    <!-- Configure the dubbo registry address -->
    <dubbo:registry address="zookeeper://192.168.204.141:2181"/>
    <!-- Configure the package whose classes dubbo scans and publishes as consumers -->
    <dubbo:annotation package="controller"/>
</beans>
2.4 start service test
First start the service side, and then start the consumer side.
visit: http://localhost:8002/hello?name=james
3. Monitoring center
When developing, we need to know which services are registered in the registry so that we can develop and test them.
Graphically displays the list of services in the registry
We can implement this by deploying a web application management center.
3.1 service management end
3.1.1 installation management end
- Unzip dubbo-admin-master.zip
- Modify profile
- Return to the project root directory and use maven package: mvn clean package
- In a dos window, run the jar file in the target directory: java -jar dubbo-admin-0.0.1-SNAPSHOT.jar
- Open the browser and visit: http://localhost:7001/ ;
On the first visit you need to log in; the account and password are both root
3.1.2 use of management end
- Start the service provider and register the service with zookeeper
- After starting the Dubbo server service, refresh the management end, and the service is registered successfully, but there are no consumers
- Click the service name to enter the service provider page
- Run the consumer, refresh the service, and the display is normal
- View consumer
3.2 monitoring and Statistics Center
Monitor: Statistics Center, which records how many times the service is called, etc
- Unzip dubbo-monitor-simple-2.5.3.zip
- Modify dubbo-monitor-simple-2.5.3\conf\dubbo.properties
- Double-click to run dubbo-monitor-simple-2.5.3\bin\start.bat
- Modify the spring.xml of dubbo-server and dubbo-consumer respectively, adding the following tag
<!-- Let the monitor find services automatically via the registry -->
<dubbo:monitor protocol="registry"/>
4. Comprehensive actual combat
4.1 configuration description
4.1.1 inspection during startup
- When starting, check whether the dependent services are available in the registry. If not, an exception will be thrown
- Write a main method on the consumer side to initialize the container and start it (with the tomcat startup mode, you must access an action once before spring is initialized)
public class Test {
    public static void main(String[] args) throws IOException {
        ClassPathXmlApplicationContext context =
                new ClassPathXmlApplicationContext("classpath:spring/spring.xml");
        System.in.read();
    }
}
<!-- Default is true: throw an exception; false: do not throw an exception -->
<dubbo:consumer check="false"/>
- System-level logs need log4j to be output. Add log4j.properties under resources, as follows:
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %m%n
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=dubbo.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %l %m%n
log4j.rootLogger=error, stdout,file
4.1.2 timeout
- Because the network or server is unreliable, it will lead to uncertain blocking state (timeout) in the calling process
- In order to prevent the timeout from causing the client resource (thread) to hang and run out, the timeout time must be set
- Add the following configuration to the service provider:
<!-- Set the timeout to 2 seconds; the default is 1 second -->
<dubbo:provider timeout="2000"/>
- Add a simulated network delay to HelloServiceImpl.java on the service side to test:
@Service
public class HelloServiceImpl implements HelloService {
    public String sayHello(String name) {
        try {
            Thread.sleep(3000);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return "Hello," + name + "!!!!!";
    }
}
- The timeout setting is 2 seconds, while the simulated network delay is 3 seconds. If the time limit is exceeded, an error will be reported!
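The consumer-side timeout behavior can be imitated with a plain `Future.get` budget (a sketch of the idea only; dubbo's actual implementation differs, and the millisecond values here just mirror the 3s delay vs 2s budget above):

```java
import java.util.concurrent.*;

public class TimeoutDemo {
    // Calls a slow "provider" with a client-side time budget; returns true if the call timed out.
    static boolean timedOut(long providerMillis, long budgetMillis) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> call = pool.submit(() -> {
            Thread.sleep(providerMillis);            // simulated slow provider, like Thread.sleep(3000)
            return "Hello!";
        });
        try {
            call.get(budgetMillis, TimeUnit.MILLISECONDS);
            return false;                            // the reply arrived within the budget
        } catch (TimeoutException e) {
            return true;                             // what dubbo reports as an invocation timeout
        } catch (ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();                      // free the blocked thread instead of leaking it
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(timedOut(3000, 2000));    // true: 3s provider vs 2s budget
        System.out.println(timedOut(100, 2000));     // false: fast call fits in the budget
    }
}
```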
- Configuration principle:
○ dubbo recommends configuring as many Consumer attributes as possible on the Provider:
1. As a service provider, it knows more about service performance parameters than the service user, such as call timeout, reasonable retry times, and so on
2. After the Provider is configured, if the Consumer does not configure, the configuration value of the Provider will be used, that is, the Provider configuration can be used as the default value of consumers.
4.1.3 retry times
- In case of failure, dubbo automatically switches to another server and retries. The default retry count is 2, and it can be changed
- Configure in the provider:
<!-- The consumer's first call does not count; retry 3 more times, 4 tries in total -->
<dubbo:provider timeout="2000" retries="3"/>
@Service
public class HelloServiceImpl implements HelloService {
    public String sayHello(String name) {
        System.out.println("=============Called once===============");
        try {
            Thread.sleep(3000);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return "Hello," + name + "!!!!!";
    }
}
- Not all methods are suitable for setting the number of retries
○ idempotent method: suitable (when the parameters are the same, no matter how many times they are executed, the results are the same, such as query and modification)
○ non-idempotent methods: not suitable (with the same parameters, repeated execution gives different results, e.g. delete, add)
- Setting retries for a single method
1. Add sayNo() method to the provider interface and implement it
public interface HelloService {
    String sayHello(String name);
    String sayNo();
}
@Service
public class HelloServiceImpl implements HelloService {
    public String sayHello(String name) {
        System.out.println("=============hello Called once===============");
        try {
            Thread.sleep(3000);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return "Hello," + name + "!!!!!";
    }

    public String sayNo() {
        System.out.println("-------no Called once-------");
        return "no!";
    }
}
2. Add sayNo() method declaration to the consumer interface
public interface HelloService {
    String sayHello(String name);
    String sayNo();
}
3. Consumer controller
@Controller
public class HelloAction {
    // @Reference is replaced by <dubbo:reference> in the xml file, so plain autowiring works
    @Autowired
    private HelloService helloService;

    @GetMapping("hello")
    @ResponseBody
    public String sayHi(String name) {
        return helloService.sayHello(name);
    }

    @GetMapping("no")
    @ResponseBody
    public String no() {
        return helloService.sayNo();
    }
}
4. Retry times of consumer configuration method
<dubbo:reference interface="service.HelloService" id="helloService">
    <dubbo:method name="sayHello" retries="3"/>
    <dubbo:method name="sayNo" retries="0"/> <!-- Do not retry -->
</dubbo:reference>
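The "1 try plus N retries" semantics configured above can be sketched as a generic helper (a plain illustration of the retry loop, not dubbo's internal code; only idempotent calls should be wrapped this way):

```java
import java.util.concurrent.Callable;

public class RetryDemo {
    // Retry an idempotent call up to `retries` extra times after the first failure.
    static <T> T callWithRetries(Callable<T> call, int retries) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= retries; attempt++) {   // 1 try + `retries` retries
            try {
                return call.call();
            } catch (Exception e) {
                last = e;                                        // remember the failure, try again
            }
        }
        throw last;                                              // every attempt failed
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice, succeeds on the third attempt: fine for an idempotent query
        String result = callWithRetries(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new RuntimeException("provider timed out");
            return "ok";
        }, 3);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```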
4.1.4 multi version
- With versions, one interface can have multiple implementation classes (one per version)
- Define two implementation classes for the HelloService interface, and the provider modifies the configuration:
<dubbo:service interface="service.HelloService" class="service.impl.HelloServiceImpl01" version="1.0.0"/>
<dubbo:service interface="service.HelloService" class="service.impl.HelloServiceImpl02" version="2.0.0"/>
- Consumers can choose the specific service version according to the version
<dubbo:reference interface="service.HelloService" id="helloService" version="2.0.0">
    <dubbo:method name="sayHello" retries="3"/>
    <dubbo:method name="sayNo" retries="0"/>
</dubbo:reference>
- Note: the consumer's control layer should switch to plain autowiring, because the @Reference annotation conflicts with dubbo:reference here
@Controller
public class HelloAction {
    @Autowired
    private HelloService helloService;
}
- When the consumer's version is changed to version = "*", the service provider's version will be called randomly
-------1.0 Called once-------
-------2.0 Called once-------
-------1.0 Called once-------
-------1.0 Called once-------
-------1.0 Called once-------
-------2.0 Called once-------
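The version-matching behavior above can be sketched as a tiny routing table (an illustration only; the map of version strings to implementation names mirrors the two `<dubbo:service>` entries, and the random pick models `version="*"`):

```java
import java.util.*;

public class VersionRouterDemo {
    // Hypothetical registry of implementations per version
    static final Map<String, String> providers = new LinkedHashMap<>();
    static {
        providers.put("1.0.0", "HelloServiceImpl01");
        providers.put("2.0.0", "HelloServiceImpl02");
    }
    static final Random rnd = new Random();

    static String route(String version) {
        if ("*".equals(version)) {                   // any version: pick one at random
            List<String> versions = new ArrayList<>(providers.keySet());
            return providers.get(versions.get(rnd.nextInt(versions.size())));
        }
        return providers.get(version);               // exact version match
    }

    public static void main(String[] args) {
        System.out.println(route("2.0.0"));          // HelloServiceImpl02
        System.out.println(route("*"));              // either implementation
    }
}
```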
4.1.5 local stub
- At present, a serious problem in building our distributed architecture is that all operations are initiated by consumers and executed by service providers
- If consumers only talk and never do anything, providers get very tired. Simple parameter validation, for example, is something the consumer is fully capable of; sending only legal parameters to the provider is more efficient and lightens the provider's load
- For example: when you go to the real estate bureau to transfer a house, bring your own certificates and materials. If you bring nothing, the transfer is very troublesome: the bureau must first investigate what loans you have, whether the house is mortgaged, whether the deed is yours, copy materials, and so on. It cannot be finished in one day, so come again tomorrow. If you prepare these things in advance, one hour is enough for the transfer. That is why real estate agents are so efficient
- Not much to say, the process of first processing some business logic in the consumer and then calling the provider is the "local stub"
- The code must live on the consumer side: create a HelloServiceStub class and implement the HelloService interface
- Note: it must be injected in the way of construction method
public class HelloServiceStub implements HelloService {
    private HelloService helloService;

    // Inject HelloService via the constructor
    public HelloServiceStub(HelloService helloService) {
        this.helloService = helloService;
    }

    public String sayHello(String name) {
        System.out.println("Local stub data validation...");
        if (!StringUtils.isEmpty(name)) {
            return helloService.sayHello(name);
        }
        return "i am sorry!";
    }

    public String sayNo() {
        return helloService.sayNo();
    }
}
- Modify consumer configuration:
<dubbo:reference interface="service.HelloService" id="helloService" version="1.0.0"
                 stub="service.impl.HelloServiceStub">
    <dubbo:method name="sayHello" retries="3"/>
    <dubbo:method name="sayNo" retries="0"/>
</dubbo:reference>
4.2 load balancing strategy
- Load Balance is actually to allocate requests to multiple operating units for execution, so as to complete work tasks together.
- In short, when there are many servers, one server should not do all the work; the load should be spread evenly across them
- dubbo provides a total of four strategies, and the default is random allocation and call
- Modify the provider configuration and start three providers for consumers to access
○ tomcat ports 8001, 8002, 8003
○ provider ports 20881, 20882, 20883
<dubbo:provider timeout="2000" retries="3" port="20881"/>
○ in the HelloServiceImpl01 class, distinguish server 1, server 2 and server 3 in the output
public String sayNo() {
    System.out.println("----Server 1---1.0 Called once-------");
    return "no!";
}
- Start the consumer for testing
- The consumer sets the load balancing strategy
<dubbo:reference loadbalance="roundrobin" interface="service.HelloService" id="helloService"
                 version="2.0.0" stub="stub.HelloServiceStub">
    <dubbo:method name="sayHello" retries="3"/>
    <dubbo:method name="sayNo" retries="0"/>
</dubbo:reference>
- It is best to use the management side to modify the weight
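The `roundrobin` strategy configured above can be sketched in a few lines (an illustration of the idea, not dubbo's implementation; the three addresses are assumed to match the provider ports used above):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinDemo {
    // Assumed provider addresses matching the three ports above
    static final String[] providers = {
        "192.168.204.1:20881", "192.168.204.1:20882", "192.168.204.1:20883"
    };
    static final AtomicInteger next = new AtomicInteger();

    static String select() {
        // Cycle through the providers in order; floorMod keeps the index valid if the counter overflows
        return providers[Math.floorMod(next.getAndIncrement(), providers.length)];
    }

    public static void main(String[] args) {
        for (int i = 0; i < 6; i++) {
            System.out.println(select());            // each provider gets every third request
        }
    }
}
```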
4.3 high availability
4.3.1 zookeeper downtime
- The zookeeper registry is down, and you can also consume dubbo exposed services
○ the downtime of the monitoring center will not affect the use, but only lose some sampling data
○ after the database goes down, the registry can still query the service list through the cache, but can not register new services
○ the peer-to-peer cluster of the registry will automatically switch to another one after any one goes down
○ service providers and service consumers can still communicate through local cache after the registry is completely down
○ the service provider is stateless, and the use will not be affected after any one goes down
○ after all service providers go down, the service consumer application cannot be used and will reconnect indefinitely, waiting for a provider to recover
- Test:
○ normal request
○ close zookeeper: ./zkServer.sh stop
○ consumers can still consume normally
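The "consume via local cache after the registry dies" behavior can be sketched as a lookup with fallback (a simplified illustration with hypothetical names; it only models the idea that the last pushed provider list is kept and reused):

```java
import java.util.*;

public class RegistryCacheDemo {
    // Hypothetical registry lookup; returns null when zookeeper is unreachable
    interface Registry {
        List<String> lookup(String service);
    }

    static List<String> cache = Collections.emptyList();     // last list pushed by the registry

    static List<String> providersFor(String service, Registry registry) {
        List<String> fresh = registry.lookup(service);
        if (fresh != null) {
            cache = fresh;                                   // registry alive: refresh the cache
            return fresh;
        }
        return cache;                                        // registry down: fall back to the cache
    }

    public static void main(String[] args) {
        Registry up   = s -> Arrays.asList("20881", "20882");
        Registry down = s -> null;                           // simulated zookeeper outage
        System.out.println(providersFor("HelloService", up));   // [20881, 20882]
        System.out.println(providersFor("HelloService", down)); // still [20881, 20882]
    }
}
```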
4.4 service degradation
- Gecko will automatically fall off its tail in case of danger. The purpose is to lose unimportant things and keep important things
- Service degradation is to stop or deal with some services in a simple way according to the actual situation and traffic, so as to release the resources of the server to ensure the normal operation of the core business
4.4.1 why service degradation
- Why use service degradation? This is to prevent the avalanche effect of distributed services
- What is an avalanche? Like the butterfly effect: one request times out waiting for a service to respond, and under high concurrency many requests pile up waiting until the service's resources are exhausted and it goes down. Other distributed services that call the downed service then hang and go down too, until the whole distributed system is paralyzed. That is an avalanche.
4.4.2 implementation mode of service degradation
- Configuring service degradation in the management console: masking and fault tolerance
- Masking: mock=force:return+null means that the consumer's method calls to this service directly return null and no remote call is made. It is used to shield the caller from the impact of unimportant services that are unavailable.
- Fault tolerance: mock=fail:return+null means that when the consumer's method call to the service fails, it returns null instead of throwing an exception. It is used to tolerate the impact of unimportant services that are unstable.
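The fail-mock behavior reduces to a try/catch around the real call (a sketch of the degradation idea only, with hypothetical in-memory implementations; real dubbo generates the mock wrapper itself):

```java
public class MockFallbackDemo {
    interface HelloService {
        String sayHello(String name);
    }

    // mock=fail:return+null: call the real service, return null on failure instead of throwing
    static String sayHelloWithMock(HelloService real, String name) {
        try {
            return real.sayHello(name);
        } catch (RuntimeException e) {
            return null;   // fault tolerance: swallow the failure, degrade to null
        }
    }

    public static void main(String[] args) {
        HelloService healthy = n -> "Hello," + n;
        HelloService broken  = n -> { throw new RuntimeException("provider down"); };
        System.out.println(sayHelloWithMock(healthy, "james")); // Hello,james
        System.out.println(sayHelloWithMock(broken, "james"));  // null
    }
}
```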
4.5 integrate MyBatis to realize user registration
4.5.1 initialize database
CREATE DATABASE smd;
USE smd;
CREATE TABLE users(
    uid INT(11) AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(50) NOT NULL,
    PASSWORD VARCHAR(50) NOT NULL,
    phone VARCHAR(50) NOT NULL,
    createtime VARCHAR(50) NOT NULL
);
4.5.2 create aggregation project - Project modularization
- Lagou Dubbo (project directory)
- Lagou Dubbo parent (parent project, aggregate project: defines the dependent version used by all modules)
<modelVersion>4.0.0</modelVersion>
<groupId>com.sunguoan</groupId>
<artifactId>sun-parent</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>pom</packaging>

<properties>
    <spring.version>5.0.6.RELEASE</spring.version>
</properties>

<dependencies>
    <!-- JSP related -->
    <dependency>
        <groupId>jstl</groupId>
        <artifactId>jstl</artifactId>
        <version>1.2</version>
    </dependency>
    <dependency>
        <groupId>javax.servlet</groupId>
        <artifactId>servlet-api</artifactId>
        <scope>provided</scope>
        <version>2.5</version>
    </dependency>
    <dependency>
        <groupId>javax.servlet</groupId>
        <artifactId>jsp-api</artifactId>
        <scope>provided</scope>
        <version>2.0</version>
    </dependency>
    <!-- Spring -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-beans</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-jdbc</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-aspects</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <!-- Mybatis -->
    <dependency>
        <groupId>org.mybatis</groupId>
        <artifactId>mybatis</artifactId>
        <version>3.2.8</version>
    </dependency>
    <dependency>
        <groupId>org.mybatis</groupId>
        <artifactId>mybatis-spring</artifactId>
        <version>1.2.2</version>
    </dependency>
    <!-- Connection pool -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid</artifactId>
        <version>1.0.9</version>
    </dependency>
    <!-- Database -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.32</version>
    </dependency>
    <!-- dubbo -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>dubbo</artifactId>
        <version>2.5.7</version>
    </dependency>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.4.6</version>
    </dependency>
    <dependency>
        <groupId>com.github.sgroschupf</groupId>
        <artifactId>zkclient</artifactId>
        <version>0.1</version>
    </dependency>
    <dependency>
        <groupId>javassist</groupId>
        <artifactId>javassist</artifactId>
        <version>3.11.0.GA</version>
    </dependency>
    <!-- fastjson -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.47</version>
    </dependency>
    <!-- junit -->
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-test</artifactId>
        <version>${spring.version}</version>
        <scope>test</scope>
    </dependency>
</dependencies>
- Lagou Dubbo entity (entity/POJO project, jar packaging)
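Because entity objects travel over the network between the Dubbo provider and consumer, every entity class must implement `Serializable`. A minimal sketch (the `User` class and its fields are hypothetical examples, not the course's actual entities):

```java
import java.io.Serializable;

// Hypothetical entity: Dubbo serializes method parameters and results,
// so any object crossing the provider/consumer boundary must be Serializable.
public class User implements Serializable {
    private static final long serialVersionUID = 1L;

    private Integer id;
    private String name;

    public User() {
    }

    public User(Integer id, String name) {
        this.id = id;
        this.name = name;
    }

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @Override
    public String toString() {
        return "User{id=" + id + ", name='" + name + "'}";
    }
}
```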
- Lagou Dubbo Dao (data access layer project, jar project)
- Lagou Dubbo interface (service interface definition project, jar project)
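The interface module holds only the shared service contracts; both the provider (which implements them) and the consumer (which calls them through a Dubbo proxy) depend on this jar. A minimal sketch, with a hypothetical service name and method:

```java
import java.util.List;

// Hypothetical shared contract. The provider module implements this
// interface; the consumer only ever sees the interface and receives
// a remote proxy for it from Dubbo at runtime.
public interface UserService {
    List<String> findAllUserNames();
}
```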
- Lagou Dubbo service (provider project, exposes the services; war project)
```xml
<parent>
    <groupId>com.sunguoan</groupId>
    <artifactId>sun-parent</artifactId>
    <version>1.0-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>sun-service</artifactId>
<packaging>war</packaging>

<dependencies>
    <dependency>
        <groupId>com.sunguoan</groupId>
        <artifactId>sun-interface</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>com.sunguoan</groupId>
        <artifactId>sun-dao</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.tomcat.maven</groupId>
            <artifactId>tomcat7-maven-plugin</artifactId>
            <configuration>
                <port>8001</port>
                <path>/</path>
            </configuration>
            <executions>
                <execution>
                    <!-- After packaging, run the service -->
                    <phase>package</phase>
                    <goals>
                        <goal>run</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```
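Besides the pom, the provider needs a Spring configuration that registers its implementation with ZooKeeper so consumers can discover it. A minimal sketch, assuming a local ZooKeeper on the default port 2181; the file path, bean class, and interface name are illustrative, not the course's exact files:

```xml
<!-- e.g. spring/spring-service.xml (illustrative path) -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:dubbo="http://code.alibabatech.com/schema/dubbo"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://code.alibabatech.com/schema/dubbo
                           http://code.alibabatech.com/schema/dubbo/dubbo.xsd">

    <!-- Unique application name in the registry -->
    <dubbo:application name="sun-service"/>
    <!-- Register services with a local ZooKeeper -->
    <dubbo:registry address="zookeeper://127.0.0.1:2181"/>
    <!-- Expose services over the dubbo protocol (20880 is the default port) -->
    <dubbo:protocol name="dubbo" port="20880"/>

    <!-- Implementation bean and the exported interface (hypothetical names) -->
    <bean id="userService" class="com.sunguoan.service.impl.UserServiceImpl"/>
    <dubbo:service interface="com.sunguoan.service.UserService" ref="userService"/>
</beans>
```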
- Lagou Dubbo web (consumer project, calls the remote services; war project)
```xml
<!-- Fix POST request garbled characters -->
<filter>
    <filter-name>charset</filter-name>
    <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
    <init-param>
        <param-name>encoding</param-name>
        <param-value>utf-8</param-value>
    </init-param>
    <init-param>
        <param-name>forceEncoding</param-name>
        <param-value>true</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>charset</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

<servlet>
    <servlet-name>springMVC</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:spring/spring-mvc.xml</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>springMVC</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>
```
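On the consumer side, the referenced `spring-mvc.xml` does not define the service bean locally; it asks the registry for a remote proxy via `<dubbo:reference>`. A minimal sketch of the Dubbo-related part (interface name hypothetical, ZooKeeper assumed local; the `dubbo` namespace must be declared on the `<beans>` root as in the provider config):

```xml
<!-- Inside spring-mvc.xml, alongside the usual Spring MVC setup -->
<dubbo:application name="sun-web"/>
<dubbo:registry address="zookeeper://127.0.0.1:2181"/>
<!-- Injects a proxy of the remote service into the Spring context;
     controllers can then @Autowired it like a local bean -->
<dubbo:reference id="userService" interface="com.sunguoan.service.UserService"/>
```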
4.5.3 Start-up test
- First run `install` on the parent (aggregate) project so every module's jar is installed into the local Maven repository
- Start the service provider project
- Start the consumer web project
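Once both sides are up, a consumer controller simply calls the injected interface and the Dubbo proxy performs the remote call. A sketch using plain constructor injection so the class can be exercised without a running registry (controller and service names are hypothetical; in the real project Spring would inject the `<dubbo:reference>` proxy):

```java
import java.util.List;

// Hypothetical consumer-side controller. In the real web project the
// UserService field would be the Dubbo proxy injected by Spring; here
// it is passed in explicitly, which also makes the class easy to test.
public class UserController {
    private final UserService userService;

    public UserController(UserService userService) {
        this.userService = userService;
    }

    // Joins the names returned by the (possibly remote) service.
    public String listUsers() {
        List<String> names = userService.findAllUserNames();
        return String.join(",", names);
    }
}

// Local copy of the shared contract so this sketch is self-contained.
interface UserService {
    List<String> findAllUserNames();
}
```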