Build an automated-deployment website with front-end/back-end separation (gitlab CI/CD + docker + vue + django) — packed with practical detail!


A few words up front

I'm building a website with friends, both as a place to try out techniques and as a personal blog. For a front-end developer two years out of school, covering the whole chain — front end, back end, operations — is a long road, but having a goal brings motivation. So when I took on the project's operations work I started studying like mad, and after dozens of video tutorials and countless searches I finally built a reasonably stable, automatically deployed project with separated front end and back end!

Because front-end automation is relatively simple, I'll use it as the example first, and at the end explain how the docker + django + uwsgi + nginx back-end automated deployment works.

I recommend following along with the accompanying video commentary to speed up understanding!

The knowledge points used in this article are:

  • gitlab
  • gitlab-cicd
  • gitlab-runner
  • docker
  • vue
  • django
  • nginx
  • uwsgi

Don't worry if some of the points above are unfamiliar or entirely new to you — this article gives a brief introduction to each!

Project overall structure

Don't worry if the picture above doesn't make sense yet — take in as much as you can, and let's keep going.

The initial idea: the code lives in the official gitlab.com repository, and day-to-day code management works just like ordinary git. For the front end, docker creates a container on the server in which nginx serves the front-end project as a proxy. For the back end, docker creates another container in which nginx also acts as the proxy: static files are served by nginx directly, while dynamic requests are proxied on to uWSGI, which handles them.

What I want to achieve: push my local code, and have the corresponding project on the server update automatically — in other words, automated deployment.


Because this deployment to the server is docker-based, here is a brief introduction to docker first.

To put it simply, docker is a tool for creating something like virtual machines — except what it creates is not a full machine. It builds a complete environment with minimal overhead. For example, if you want a node environment, you only need to run

```shell
docker run -ti --name=my-node node
```

and you are inside a brand-new node environment, isolated from your host — equivalent to running a virtual machine that can only use node.

Docker has three core concepts: the image, the container, and the Dockerfile. How to understand them? Let me make an analogy:

We know that the Windows installation file is called an image (an ISO). We install that image onto a computer, and then we can operate the computer through Windows; the system we boot into is the equivalent of a container. So we reach this conclusion: the image itself is immutable, and only once it has been installed and become a container can we operate inside it.

With that, let's look back at the instruction

```shell
docker run -ti --name=my-node node
```

piece by piece:

`docker run` is the equivalent of the act of installing Windows.

`-ti` is the equivalent of booting up and landing on the Windows desktop (it attaches an interactive terminal).

`--name=my-node` is the equivalent of naming our Windows machine "my-node".

`node` is the equivalent of the Windows installation file — here docker fetches the node image from the dockerhub library.

Now suppose we have performed some operations inside the current Windows — installed a few programs, say — and I want to repackage it as a new image for others to install, so that the Windows they install comes with my software. In docker, you achieve this by writing a Dockerfile. This file performs some operations on top of a base image; you then use `docker build` to package the result into a new image, and others can `docker run` that image to get a new container.
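That build-and-reuse workflow can be sketched concretely. Everything below is illustrative, not from the article's project: the tag `my-node-app` and the package installed are made up.

```dockerfile
# Hypothetical Dockerfile: perform some operations on top of the node base image
FROM node
# the "software we installed in Windows": one extra global package
RUN npm install -g http-server
# what the container does when run: serve the current directory on port 8080
CMD ["http-server", "-p", "8080"]
```

Then `docker build -t my-node-app .` packages this into a new image, and anyone can `docker run -d -p 8080:8080 my-node-app` to get a fresh container with the software already baked in.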

That's the brief version. If you want to learn more about docker, I also have a document here for reference.

gitlab code base

I am using the official gitlab.com code repository:

In fact, gitlab can also be self-hosted as a local code repository, but running it demands fairly high server specs — apparently at least 2 cores and 4 GB of RAM — and the two servers I bought are 1 core / 2 GB each, so I didn't deploy a local repository.

Since this part works exactly like ordinary git, I won't describe pulling/pushing local code to the gitlab repository here.


What is CI/CD?

Let's first look at the concept:

  • CI: Continuous Integration

    The process of automatically detecting, pulling, building, and (in most cases) unit testing after source code changes.

  • CD: Continuous Delivery

    Continuous delivery (CD) usually refers to the entire pipeline: it automatically monitors source-code changes and runs them through build, test, packaging, and related operations to produce a deployable version, with essentially no human intervention.

CI/CD of gitlab

After reading the concept, let's take a closer look at the implementation of gitlab's CI/CD.

The picture above shows the CI/CD submenu in gitlab's left-hand navigation. We mainly care about two entries:

  • Pipelines

    A pipeline is a group of tasks triggered every time we change the code (or in other situations we configure). The group may contain several tasks, each doing something different: installing the environment, packaging, deploying, and so on.

  • Jobs

    As the name implies, the jobs here are the tasks mentioned above. One pipeline can contain multiple jobs — as many as our needs dictate.
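Concretely, the pipeline/job relationship is just nesting within one YAML file. Here is a minimal sketch — the stage and job names are invented for illustration, not taken from the article's project:

```yaml
# a pipeline with two stages; each job below belongs to one stage
stages:
  - build
  - deploy

build_job:          # job 1 of the pipeline
  stage: build
  script:
    - echo "build steps go here"

deploy_job:         # job 2; runs after the build stage succeeds
  stage: deploy
  script:
    - echo "deploy steps go here"
```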

Create CI/CD pipeline

Now we know that every code change triggers our own pipeline on gitlab, and that the pipeline contains one or more jobs.

So how to create a pipeline?


1. We need to create a file called `.gitlab-ci.yml` in the project.

Here I'll simply paste one that is already written, and then interpret it.

```yaml
# The docker image for the whole pipeline: everything below runs in docker's
# node:alpine container.
# If you don't know docker, just understand it as: all the jobs below are done
# in a virtual machine that has a node environment, whose default directory is
# our current project.
image: node:alpine

# Our custom pipeline stages
stages:
  - install
  - build
  - deploy

# Each job starts clean, so files generated in one job are gone in the next.
# Some we want to keep, so we cache the paths defined here across jobs.
cache:
  key: modules-cache
  paths:
    - node_modules
    - dist

# Our first job, named job_install (the name is free-form; Chinese works too)
job_install:
  stage: install     # this job belongs to the install stage
  tags:
    - vue3           # the job's tag; it matches the tag we define later in gitlab-runner
  script:            # every job must have a script: the statements to execute
    - npm install    # as said before, this runs npm install in the project
                     # directory of that node "virtual machine"

# Our second job; same logic as the first, so no extra commentary
job_build:
  stage: build
  tags:
    - vue3
  script:
    - npm run build

# Our third job: the project has been packaged, so we create a new container
# with docker to deploy it
job_deploy:
  stage: deploy
  image: docker      # we need docker instructions here, so switch the
                     # environment from node to docker
  tags:
    - vue3
  script:
    # create a new image from the Dockerfile in the project root
    # (if that means nothing to you: think "package it into an installer")
    - docker build -t rainbow-admin .
    # check whether a project container is already running (or left over from
    # a previous run) on this server, and remove it if so
    - if [ $(docker ps -aq --filter name=rainbow-admin-main) ]; then docker rm -f rainbow-admin-main; fi
    # run the image we just created ("install the installer we just packaged"):
    # the project runs in nginx and the outside world can reach it on port 80
    - docker run -d -p 80:80 --name=rainbow-admin-main rainbow-admin
```

The pipeline above mentions a Dockerfile — the file docker uses to create an image. It's a very simple file; let's look at it here too.

```dockerfile
# Our base image is nginx. As above, if you don't follow: simply understand it
# as "all the steps below happen in a virtual machine with an nginx environment"
FROM nginx
# nginx's default serving directory is /usr/share/nginx/html,
# so we just copy the packaged dist into that directory
COPY dist /usr/share/nginx/html
```

Okay — looking back at the project's overall structure from earlier, the one piece not yet covered should be gitlab-runner, so let's continue!

In fact there are many more rules for writing the `.gitlab-ci.yml` file, plus other CI/CD operating modes and so on; I introduce them in this document.


We now know that every code change we make on gitlab triggers the CI/CD pipeline — but where does this pipeline actually execute?

That's where gitlab-runner comes in: a process running on the target server, associated with a particular project through a token. Whenever that project's pipeline is triggered, the runner picks it up and executes the pipeline's tasks on the current server.

How to create gitlab-runner

First of all, I'm running gitlab-runner in a docker environment this time. There are of course many other ways to create a runner; I won't go into them here.

Then we need a server with a docker environment. If you ask me how to install docker on a centos server — I happen to have a document for you to refer to; just follow its commands one by one.

Good — with the docker environment in place, we can officially start deploying gitlab-runner.

The first step is to pull and run a gitlab-runner image.

```shell
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
```

Once the pull completes and the container is running, we can use

```shell
docker ps
```

to check whether something like the following figure appears.
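For reference, a running gitlab-runner container shows up in `docker ps` roughly like this — the ID, uptime, and ports are illustrative, not real output from the article's server:

```
CONTAINER ID   IMAGE                         COMMAND                  STATUS         NAMES
1a2b3c4d5e6f   gitlab/gitlab-runner:latest   "/usr/bin/dumb-init …"   Up 2 minutes   gitlab-runner
```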

If so, congratulations — you can continue. If not, then unfortunately I don't know why either, but you can search the error message.

So far we have only installed and started gitlab-runner; it is not yet associated with our code base, so we continue with the following command.

```shell
docker exec -it gitlab-runner gitlab-runner register
```

After executing this command, we will be asked to enter a few things (the order may vary — watch the prompts):

  1. The first is the URL — the domain where your code base lives; if you don't know it, see the picture below
  2. The second is your project's token, found in the position shown in the figure below
  3. The third is a description of the current runner (write whatever you like)
  4. The fourth adds tags to the current runner; the tags are what the .gitlab-ci.yml file refers to (also fairly free-form)
  5. The fifth is the executor the runner runs in — here we enter docker
  6. Because docker is used, a base image must be specified here; since this is the front end, I specified a node image
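For the record, those interactive answers can also be supplied as flags in one shot via `register`'s non-interactive mode. This is only a sketch with placeholder values — substitute your own URL, token, and tag:

```shell
# 1. URL of the code base   2. project token        3. description
# 4. tag (matches .gitlab-ci.yml)   5. executor     6. base image for jobs
docker exec -it gitlab-runner gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_PROJECT_TOKEN" \
  --description "my-vue-runner" \
  --tag-list "vue3" \
  --executor "docker" \
  --docker-image "node:alpine"
```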

After successful registration it looks as shown below.

Now refresh the token page under gitlab's Settings → CI/CD and see whether a new runner has appeared.

Okay, so far we have completed the entire process of front-end automated deployment!

At this point we can modify the code and push it, then watch: does the pipeline trigger, does the server complete the pipeline's tasks, and does visiting the server show the front-end page?

Common problems

Of course, don't worry if it fails. I attempted the deployment more than a hundred times before it fully succeeded, with assorted small problems cropping up endlessly along the way. I'll cover the two most common ones here.

Cancelling the pipeline's email notifications

These email reminders can get genuinely annoying. You can cancel them in the personal settings at the top right, configured as shown below.

The first pipeline triggered after creating the runner fails with a permissions error

This problem is rather strange. It took a long time of searching online to find the cause: shared runners are enabled by default, which led to the authentication problem.

Here I chose to simply turn it off — the switch is on the same page where you get the token.

Django environment construction

First of all, we need to build the Django environment. We need python, and depending on the project possibly mysql, nginx, and so on.

Here I used python 3.8.8, nginx, and uwsgi.

Unlike the front end, the back end needs a lot of environment, so we must first build a docker image dedicated to our project.

As for why it is set up this way, I have explained that in this document.

Create a docker image of the operating environment

Here I'll post my Dockerfile directly and explain my thinking.

```dockerfile
# My base image: the environment is created inside centos
FROM centos
# Author information; you can ignore it
MAINTAINER "JyQAQ"
# Install the libraries python needs to build -- there are quite a few.
# The last one, nginx, I install here while I'm at it.
RUN yum install -y zlib-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gcc make wget libffi-devel mysql-devel nginx
# Download the desired python version from python's official site
RUN wget https://www.python.org/ftp/python/3.8.8/Python-3.8.8.tgz
# Unzip into /usr/local/src/
RUN tar -xzvf ./Python-3.8.8.tgz -C /usr/local/src/
# Create the project directory -- this is where the project files will later go
RUN mkdir -p /opt/app/django_platform
# Set the working directory: the commands below all execute in this directory
WORKDIR /usr/local/src/Python-3.8.8
# Configure the build
RUN ./configure --prefix=/usr/local/python3
# Perform the installation
RUN make && make install
# Put python3 on the PATH
ENV PATH /usr/local/python3/bin:$PATH
# Reset the working directory: this is where centos's nginx keeps its configuration
WORKDIR /etc/nginx
# Overwrite the default nginx configuration with my own
COPY django_platform.conf nginx.conf
# Set the working directory to our project's root directory
WORKDIR /opt/app/django_platform
# We only need to set up the environment here, not ship the project files yet,
# so the next two lines stay commented out
#COPY . .
#RUN pip3 install -r requirements.txt
# Mount the project directory (don't worry if you don't follow this)
VOLUME /opt/app/django_platform
# Expose port 80, since nginx uses it
EXPOSE 80
# Command executed when the container runs: start a bash shell by default
CMD ["/bin/bash"]
```

The Dockerfile above copies in an nginx configuration, `django_platform.conf`, shown below. I won't go over nginx configuration in general here — only the handful of changes I made.

```nginx
user nginx;
worker_processes auto;
error_log /opt/app/django_platform/nginx-error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name _;  # your domain here
        root /usr/share/nginx/html;

        # The default location handles Django's dynamic requests,
        # which we need to proxy on to uwsgi
        location / {
            # configuration that uwsgi will use
            include uwsgi_params;
            uwsgi_connect_timeout 30;
            # connect to the project's uwsgi socket
            uwsgi_pass unix:/opt/app/django_platform/uwsgi.sock;
        }

        # Static files are served by nginx directly
        location /static/ {
            # points at the django project's static file directory
            alias /opt/app/django_platform/static_all/;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
```

So now we run

```shell
docker build -t jyqaq/rainbow-django .
```

to package this into a new image called jyqaq/rainbow-django. I then pushed it to dockerhub, so that later a Dockerfile can simply start with

```dockerfile
FROM jyqaq/rainbow-django
```

to use this image as the base environment.

Writing Django's .gitlab-ci.yml file

The environment is set up, so the rest is very simple. Without further chatter, straight to the code.

```yaml
# The base environment is docker
image: docker

stages:
  - install
  - clear

# This is the same as the front-end deployment, so I won't go into it
deploy_environment:
  stage: install
  tags:
    - django
  script:
    - docker build -t django_platform .
    - if [ $(docker ps -aq --filter name=django_rainbow) ]; then docker rm -f django_rainbow; fi
    - docker run -d -p 80:80 --name=django_rainbow django_platform

# If the pipeline fails it leaves abandoned, useless images behind;
# this job cleans them up. I won't go into the commands below in detail.
clean_docker:
  stage: clear
  tags:
    - django
  script:
    - docker ps -a | grep "Exited" | awk '{print $1}' | xargs docker stop
    - docker ps -a | grep "Exited" | awk '{print $1}' | xargs docker rm
    - docker images | grep none | awk '{print $3}' | xargs docker rmi
```
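The cleanup job above leans on a grep/awk/xargs idiom: filter `docker ps -a` output down to exited containers, keep only the first column (the container ID), then hand those IDs to a docker command. The filtering itself can be demonstrated on fake input — the container IDs below are invented for illustration:

```shell
# Simulated `docker ps -a` output: ID then status, one container per line
sample='abc123 Up
def456 Exited
ghi789 Exited'

# grep keeps the Exited rows; awk '{print $1}' keeps the ID column.
# In the pipeline, these IDs are then piped on to `xargs docker rm`.
printf '%s\n' "$sample" | grep "Exited" | awk '{print $1}'
# prints: def456 then ghi789, one per line
```

The same pattern with `grep none` and `awk '{print $3}'` extracts dangling image IDs from `docker images` output.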

Be careful

In particular, the base environment above is docker, which means we are now running docker inside docker. That's not what we want: the goal is to use the docker on the server itself to create containers. So we need to modify gitlab-runner's configuration file. Recall the command we used to create gitlab-runner:

```shell
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
```

You can see that the runner's configuration is mounted to the `/srv/gitlab-runner/config` directory on the server, so we go there and edit it with `vi config.toml`.

```toml
[runners.docker]
  # find the volumes entry under runners.docker and add the two docker mounts
  volumes = ["/cache", "/usr/bin/docker:/usr/bin/docker", "/var/run/docker.sock:/var/run/docker.sock"]
```

The deployment above uses a Dockerfile; here it is.

```dockerfile
FROM jyqaq/rainbow-django
MAINTAINER "JyQAQ"
# Copy the project files into the container
COPY . .
# Install the django dependencies
RUN pip3 install -r requirements.txt
# Start uwsgi and nginx when the container runs;
# nginx -g "daemon off;" keeps the container alive in the foreground
ENTRYPOINT uwsgi --ini /opt/app/django_platform/uwsgi.ini && nginx -g "daemon off;"
```
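The startup command above points at a `uwsgi.ini` that the article never shows. Here is a minimal sketch of what such a file could contain, assuming the paths from the nginx configuration; the module name `django_platform.wsgi` is a guess based on the project name, not taken from the article:

```ini
[uwsgi]
; project root inside the container
chdir = /opt/app/django_platform
; WSGI entry point -- module name assumed from the project name
module = django_platform.wsgi:application
; the socket that nginx's uwsgi_pass points at
socket = /opt/app/django_platform/uwsgi.sock
; run a master process with a couple of workers
master = true
processes = 2
; remove the socket file on exit
vacuum = true
```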

One last reminder: the `.gitlab-ci.yml` and `Dockerfile` used throughout this article all live in the project's root directory. Put them anywhere else and they won't be found.

Bingo — with that, we have completed automated deployment for both the front end and the back end!