Docker image building and registry setup



1. Building a Docker image

For various reasons, such as outdated versions, the official and user-published Docker images contain many vulnerabilities; according to statistics, more than 30% of the official images on Docker Hub contain high-severity vulnerabilities. In addition, network conditions can make downloading images with docker pull very slow. For these reasons, we can build customized Docker system images ourselves. There are two ways to build an image:

  • Use the docker commit command
  • Use docker build with a Dockerfile

Method 1: docker commit

Step 1: Pull a base image (essentially a minimal OS)

docker pull centos

Step 2: Create an interactive container

docker run -it --name mycentos centos /bin/bash

If the ll command is not found here, it is likely just missing from this image version; this does not affect anything, and it can be fixed by defining an alias yourself (alias ll='ls -l').

Step 3: Prepare the required resources on the host

Note!!!: this example builds a Tomcat image, so the Tomcat and JDK archives below are needed

Step 4: Copy the resources on the host to the container

docker cp apache-tomcat-7.0.64.tar.gz mycentos:/root

docker cp jdk-8u181-linux-x64.tar.gz mycentos:/root

Step 5: Install jdk in the container, decompress and configure

# Unzip jdk to the /usr/local/ directory
tar -zxvf jdk-8u181-linux-x64.tar.gz -C /usr/local/

# Open the profile file and configure the environment
vi /etc/profile

This is the same as installing a JDK on any Linux machine. After editing /etc/profile, run source /etc/profile to apply the changes.
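The /etc/profile lines themselves are not shown above; a minimal sketch looks like this (the extraction directory jdk1.8.0_181 is an assumption based on the archive name):

```shell
# Appended to /etc/profile; install path assumed from jdk-8u181-linux-x64.tar.gz
export JAVA_HOME=/usr/local/jdk1.8.0_181
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
```

After sourcing the file, java -version should print the JDK version.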

Step 6: Install tomcat in the container, decompress and configure

# Unzip tomcat to the /usr/local/ directory
tar -zxvf apache-tomcat-7.0.64.tar.gz -C /usr/local/

# Edit the Tomcat script under bin/
vi /usr/local/apache-tomcat-7.0.64/bin/
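The exact file name edited above is cut off in the original. If it is bin/setenv.sh (an assumption; Tomcat sources this script on startup when it exists), the needed lines are just:

```shell
# Hypothetical bin/setenv.sh for the container; paths assumed from the archives above
export JAVA_HOME=/usr/local/jdk1.8.0_181
export CATALINA_HOME=/usr/local/apache-tomcat-7.0.64
```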

Step 7: Commit the running mycentos container as a new image named mytomcat

docker commit mycentos mytomcat

At this point, our image has been created

  • Port Mapping

Create a container based on the mytomcat image:

docker run -itd --name t1 -p 8888:8080 mytomcat /bin/bash

# Execute the startup script inside the container
docker exec t1 /usr/local/apache-tomcat-7.0.64/bin/

The setup works if Tomcat's welcome page appears when you visit http://host-ip:8888

  • Container/image packaging
# 1. Package (save) the image
docker save -o /root/tomcat7.tar mytomcat

# 2. Copy the packaged image to another server
scp /root/tomcat7.tar root@<other-server-ip>:/root

# 1. Package (export) the container
docker export -o /root/t1.tar t1
# 2. Import the exported container as an image
docker import t1.tar mytomcat:latest

The packaging is complete; now load the image on the target server. Note that images saved with docker save are restored with docker load, while containers exported with docker export are imported with docker import (the latter discards the image history and metadata):

docker load -i /root/tomcat7.tar

At this point, Method 1 is complete

Method 2: docker build

A Dockerfile describes an image with instructions in a simple DSL; the docker build command then builds a new image from the instructions in the Dockerfile

  • DSL syntax

1) FROM (specifies the base image)
A build instruction; it is required and must come before all other instructions in the Dockerfile, since every subsequent instruction depends on the image it names. The base image specified by FROM may live in the official remote registry or in the local image store.
FROM tells Docker which image (and which release of it) our build is based on; the first instruction must be FROM. Moreover, if one Dockerfile creates several images, multiple FROM instructions may be used.

The instruction has two formats:
FROM <image> 
uses the latest version of <image>, or:

FROM <image>:<tag> 
uses the version of <image> with the given tag.

RUN is followed by the command to execute; for example, to install vim in the image, write RUN yum install -y vim in the Dockerfile.
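Putting FROM and RUN together, a minimal Dockerfile for the vim example above might look like this (the centos:7 tag is an assumption):

```dockerfile
# Minimal sketch: a CentOS-based image with vim installed
FROM centos:7
RUN yum install -y vim
```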

2) MAINTAINER (specifies the image creator)
A build instruction that writes creator information into the image; when docker inspect is run on the image, a field in the output records this information.


3) RUN (for installing software)
A build instruction. RUN can run any command the base image supports; if the chosen base image is ubuntu, for example, software can only be managed with ubuntu's package commands.

The instruction has two formats:
RUN <command>  
RUN ["executable", "param1", "param2"]  

4) CMD (sets the action performed at container startup)
A setup instruction specifying the action taken when the container starts: executing a custom script or a system command. Only one CMD per Dockerfile takes effect; if several are present, only the last one is executed.

The instruction has three formats:
CMD ["executable","param1","param2"]
CMD command param1 param2

When the Dockerfile specifies an ENTRYPOINT, the following form is used:
CMD ["param1","param2"]

ENTRYPOINT specifies the path of an executable script or program, which is started with param1 and param2 as its arguments.
So if CMD is used in that last form, the Dockerfile must contain a matching ENTRYPOINT.

5) ENTRYPOINT (sets the action performed at container startup)
Setup instruction: specifies the command to be executed when the container starts. It can be set multiple times, but only the last one is valid.

Two formats:
ENTRYPOINT ["executable", "param1", "param2"]
ENTRYPOINT command param1 param2

The instruction is used in two ways: alone, or together with the CMD instruction.
When used alone, if a CMD is also given and that CMD is a complete executable command, then CMD and ENTRYPOINT override each other; only the last CMD or ENTRYPOINT takes effect.

# The CMD instruction is not executed; only the later ENTRYPOINT instruction runs
CMD echo "Hello, World!" 
ENTRYPOINT echo "Hello from the ENTRYPOINT instruction"

In the other usage, CMD supplies the default arguments for ENTRYPOINT: the CMD instruction is then not a complete executable command, only the argument part, and ENTRYPOINT must be given in JSON (exec) form, specifying the executable but not its arguments.

FROM ubuntu 
CMD ["-l"] 
ENTRYPOINT ["/usr/bin/ls"] 

6) USER (sets the container's user)
A setup instruction that sets the user that starts the container; the default is the root user.

# Specify the user that memcached runs as 
ENTRYPOINT ["memcached"] 
USER daemon 

# equivalent to:
ENTRYPOINT ["memcached", "-u", "daemon"]

7) EXPOSE (specifies the ports the container maps to the host machine)
A setup instruction that maps a port in the container to a port on the host machine, so that the container can be reached through the host machine's IP address and the mapped port rather than the container's own IP address.
The whole operation takes two steps: first use EXPOSE in the Dockerfile to declare the container port to be mapped, then pass the -p option with that port when running the container; the EXPOSEd port is then mapped to a random port number on the host machine.
You can also specify the host port to map to explicitly; in that case make sure that port on the host machine is not already in use. EXPOSE can declare several ports at once, and the -p option can be given multiple times when running the container.
EXPOSE <port> [<port>...]

# Map one port 
EXPOSE port1 

# The corresponding command to run the container 
docker run -p port1 image 

# Map multiple ports 
EXPOSE port1 port2 port3 
# The corresponding command to run the container 
docker run -p port1 -p port2 -p port3 image 
# You can also specify the host port each container port maps to 
docker run -p host_port1:port1 -p host_port2:port2 -p host_port3:port3 image

Port mapping is an important Docker feature, because each time a container runs, its IP address cannot be chosen; it is assigned at random within the address range of the bridged NIC, while the host machine's IP address is fixed. Mapping a container port to a host port saves us from looking up the container's IP address every time we access a service inside it.
For a running container, docker port followed by the container's ID and the container port shows which host port that port is mapped to.

8) ENV (sets environment variables)
Mainly used to set environment variables available when the container runs.

ENV <key> <value> 

Once set, the variable can be used by subsequent RUN commands; after the container starts, it can be viewed with docker inspect, and set or overridden with docker run --env key=value.

If a Java program is installed and JAVA_HOME needs to be set, write in the Dockerfile:
ENV JAVA_HOME /path/to/java

9) ADD (copies files from <src> into the container at <dest>)
Mainly used to add files from the host into the image.
A build instruction. Files and folders copied into the container get permissions 0755 with uid and gid 0. If <src> is a directory, all files inside it are added to the container, but not the directory itself. If <src> is a local archive in a recognized compression format, Docker unpacks it automatically (mind the compression format). If <src> is a file and <dest> does not end with a slash, <dest> is treated as a file and <src> is written to <dest>. If <src> is a file and <dest> ends with a slash, <src> is copied into the directory <dest>.

ADD <src> <dest> 

<src> is a path relative to the build context, either a file or a directory, or a remote file url;
<dest> is an absolute path inside the container
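A sketch illustrating the behaviors above (the file names are assumptions):

```dockerfile
FROM centos:7
# A recognized local tar.gz archive is unpacked automatically into /usr/local/
ADD apache-tomcat-7.0.64.tar.gz /usr/local/
# <dest> ends with a slash, so start.sh is copied into the /root/ directory
ADD start.sh /root/
# <dest> has no trailing slash, so the source is written to the file /root/run.sh
ADD start.sh /root/run.sh
```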

10) VOLUME (specifies a mount point)
A setup instruction marking a directory in the container for persistent data storage; the directory can be used by the container itself or shared with other containers. Containers use a union file system (AUFS), which does not persist data: when the container is removed, all changes are lost. When an application in the container needs to persist data, use this instruction in the Dockerfile:

VOLUME ["<mountpoint>"]
VOLUME ["/tmp/data"]

After running a container from the image generated by this Dockerfile, the data in the /tmp/data directory still exists after the container is shut down. If, for example, another container also needs to persist data and wants to share the /tmp/data directory of the container above, it can be started with:
docker run -t -i --rm --volumes-from container1 image2 bash

where container1 is the ID of the first container and image2 is the name of the image the second container runs.

11) WORKDIR (switches the working directory)
A setup instruction; it may be used multiple times (equivalent to the cd command) and takes effect for RUN, CMD and ENTRYPOINT.

WORKDIR /path/to/workdir 

# vim a.txt is executed under /p1/p2 
WORKDIR /p1 
WORKDIR p2 
RUN vim a.txt 

12) ONBUILD (executed in child images)

ONBUILD <Dockerfile instruction> 

A command specified with ONBUILD is not executed when the image itself is built; it is executed when a child image is built from it
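For example (image names are assumptions), an ONBUILD step declared in a parent image runs only when a child image is built from it:

```dockerfile
# Dockerfile of the parent image (built as, say, myqxin/base)
FROM centos:7
ONBUILD RUN echo "this runs while building the child image"

# Dockerfile of a child image; building it triggers the ONBUILD RUN above:
# FROM myqxin/base
```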

Building an image from a Dockerfile:

Step 1: Create a directory

mkdir rw-test

Step 2: Edit the Dockerfile (note the capital D)

vim Dockerfile

The edited content is as follows:

# pull down the centos base image
FROM centos

# install nginx and its build dependencies
RUN yum install -y pcre pcre-devel openssl openssl-devel gcc gcc+ wget vim net-tools
RUN useradd www -M -s /sbin/nologin
# (the nginx download URL is missing in the original)
RUN cd /usr/local/src && wget && tar -zxvf nginx-1.8.0.tar.gz
RUN cd /usr/local/src/nginx-1.8.0 && ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_stub_status_module --with-http_ssl_module && make && make install

ENTRYPOINT /usr/local/nginx/sbin/nginx && tail -f /usr/local/nginx/logs/access.log

Step 3: Build the image in the rw-test directory:

docker build -t rw_nginx --rm=true .

-t	specify the user name, repository name and tag
--rm=true	remove the intermediate containers created during the image build
 NOTE: do not omit the trailing . in the build command above; it means "build with the Dockerfile in the current directory"

Each instruction in the Dockerfile is executed in turn during the build

Step 4: Test

docker run -ti -d --name test_nginx -p 8899:80 rw_nginx
docker exec -it test_nginx /bin/bash
Access via browser: http://ip:8899

2. Building a Docker registry

A Docker registry (repository) is similar to a code repository: it is the place where Docker image files are centrally stored.

  • Docker Hub

1. Open https://hub.docker.com
2. Register an account: omitted
3. Create a repository (Create Repository): omitted
4. Tag the image
docker tag local-image:tagname new-repo:tagname
Example: docker tag hello-world:latest myqxin/test-hello-world:v1
5. Log in to Docker Hub
docker login (then enter the account and password)
6. Push the image
docker push new-repo:tagname
Example: docker push myqxin/test-hello-world:v1

Note: I later changed test-hello-world to test-hell-world, so pay attention to the name

  • Alibaba Cloud

Details omitted: refer to the official documentation

1. Create an Alibaba Cloud account
2. Create a namespace
3. Create an image repository
4. Operation guide
$ sudo docker login --username=[account name]
$ sudo docker tag [ImageId] [image version number]
$ sudo docker push [image version number]

  • Building a private registry

1. Start a Docker Registry, using the official registry image provided by Docker to set up a local private image registry. The specific instructions are as follows.

docker run -d \
-p 5000:5000 \
--restart=always \
--name registry \
-v /mnt/registry:/var/lib/registry \
registry:2

Command parameter description:
-d: run the container in the background
-p 5000:5000: map port 5000, the private registry container's default exposed port, to port 5000 of the host
--restart=always: restart the private registry container automatically whenever it stops or the Docker daemon starts
--name registry: name the generated container registry
-v /mnt/registry:/var/lib/registry: mount the container's default storage location /var/lib/registry onto the host directory /mnt/registry, so that when the container is destroyed, the data under /var/lib/registry in the container is preserved in the specified host directory
Docker Registry currently has two versions, v1 and v2. The v2 version is not a simple upgrade of v1: many functions were redesigned and optimized. v1 was developed in Python, while v2 is written in Go. In v1 the local registry container mounts its data at /tmp/registry by default, while in v2 the default mount point is /var/lib/registry

Rename the image: earlier pushes went to the remote registry by default, but this time the image is pushed to the local private registry. An image pushed to a local private registry must be named in the form "registry IP:port/image name", so the image needs to be re-tagged accordingly. The specific instructions are as follows.

docker tag hello-world:latest localhost:5000/myhellodocker

Push the image: once the local private registry is built and started and the image to push is tagged, the image can be pushed to the local private registry. The specific instructions are as follows

docker push localhost:5000/myhellodocker

View the images in the local registry:
http://localhost:5000/v2/myhellodocker/tags/list (note: adjust the image name in this address)
Since the data directory is mounted, the image can also be found in the corresponding local directory.
So far, pushing requires no account or password, which is unsafe.

  • Configuring private registry authentication

1. Check the address of the server hosting the Docker Registry private registry: ifconfig
For example: the server address is:
2. Generate a self-signed certificate (execute the following instructions in the home directory)
To keep the Docker Registry local registry secure, a security certificate is also needed so that other Docker machines cannot access the local registry on this machine at will. Create a self-signed certificate on the Docker host where the registry is built (if you have purchased a certificate, there is no need to generate one). The specific instructions are as follows:

mkdir registry && cd registry && mkdir certs && cd certs

openssl req -x509 -days 3650 -subj '/CN=' \    
 -nodes -newkey rsa:2048 -keyout domain.key -out domain.crt

Command parameter description:
-x509: output a self-signed certificate
-newkey rsa:2048: generate a new 2048-bit RSA key
domain.key and domain.crt: the generated key and certificate files
3. Generate a username and password
After generating the self-signed certificate on the registry host, connection credentials are also needed, so that other Docker users can connect to the Docker Registry local registry only after logging in with a username and password:

cd .. && mkdir auth

docker run --entrypoint htpasswd registry:2 -Bbn ruanwen 123456 > auth/htpasswd

4. Start the Docker Registry local registry service (delete the previously created registry container first)

docker run -d \
 -p 5000:5000 \
 --restart=always \
 --name registry \
 -v /mnt/registry:/var/lib/registry \
 -v `pwd`/auth:/auth \
 -e "REGISTRY_AUTH=htpasswd" \
 -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
 -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
 -v `pwd`/certs:/certs \
 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
 -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
 registry:2

5. Configure the Docker Registry access certificate
After starting the Docker Registry local registry service, the certificate must be made available to the Docker machines that will access the registry. Create the certificate directory and copy the certificate into it. The specific instructions are as follows:

sudo mkdir -p /etc/docker/certs.d/
sudo cp certs/domain.crt /etc/docker/certs.d/

6. Register the Docker Registry private registry with Docker
On the Docker machine, edit the daemon file with sudo vim /etc/docker/daemon.json and add the registry entry to the file
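The JSON to add is not shown in the original. When the registry is reached by IP without a trusted certificate, a common entry is the following (an assumption; adjust the address to your registry server):

```json
{
  "insecure-registries": ["<registry-server-ip>:5000"]
}
```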
7. Restart and load the docker configuration file
sudo /etc/init.d/docker restart
Verification test:
1. Prepare (tag) the image
docker tag hello-world:latest
2. Push the image
docker push
An error occurs during the push with the message: no basic auth credentials (that is, authentication has not passed), so the push is rejected, which shows that the authentication configuration is effective. To push successfully, log in first.
3. Log in to the Docker Registry image registry
docker login
4. Push again
docker push
5. Verify the result


Tags: Kubernetes

Posted by bios on Mon, 05 Dec 2022 17:57:57 +1030