In the past, every time I got a new server I had to reinstall all the software by hand. Then I discovered Docker and liked it a lot. I read some tutorials, picked up the basic concepts, and tried building my own container from the Debian image: I installed all kinds of software inside it and then committed it. The resulting image was 4GB. At first I didn't think much of it, but later I realized this is the wrong way to use Docker (even though many blog posts teach exactly this commit-based workflow). I should have been building the image with a Dockerfile instead.
I've also seen people say that each piece of software should be separated into its own container. Suppose my project is written in Python and uses nginx, MongoDB, Redis, and so on — should I run one service per container and have them talk to each other over ports?
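For what it's worth, the usual way to express "one service per container" is a Compose file. A minimal sketch, assuming a hypothetical Python app behind nginx (the service names, ports, and image tags here are illustrative, not from any real project):

```yaml
# docker-compose.yml -- hypothetical layout: each service in its own container
version: "3"
services:
  app:
    build: .              # the Python app, built from the project's own Dockerfile
    expose:
      - "8000"            # reached by nginx over the internal Compose network
  nginx:
    image: nginx:1.25
    ports:
      - "80:80"           # only nginx is published to the host
    depends_on:
      - app
  mongo:
    image: mongo:7
  redis:
    image: redis:7
```

With this layout, `docker compose up -d` starts all four containers, and each one can be stopped, upgraded, or replaced independently.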
I'd like to know how to use Docker properly once you decide to use it in a real project.
If each piece of software runs in its own container, should every image be built from a Dockerfile? When the software needs an upgrade, I rebuild and push a new image, which also lets me roll back to the old one. And when a configuration file changes, do I modify the container and commit it again? That's about all I currently understand of the difference between Dockerfile and commit and how each is meant to be used.
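To make the contrast concrete: with a Dockerfile, the build is reproducible, so upgrading means editing the file and rebuilding rather than committing a hand-modified container. A minimal sketch for a hypothetical Python service (package and file names are assumptions):

```dockerfile
# Dockerfile -- hypothetical Python service image
FROM python:3.11-slim            # pinned base image instead of a hand-built Debian container
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```

Then `docker build -t myapp:2.0 .` produces the upgraded image, and rolling back is just running the previous tag (e.g. `myapp:1.0`); `docker commit` never enters the workflow.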
In practice, each service can run in its own container, and the containers can be connected with `--link`. I personally think packaging everything together is a bad idea: a "non-pluggable" all-in-one image is no different from a big virtual machine, and it's hard to maintain.
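As a sketch of the linking this answer mentions: `--link` is the legacy syntax, and user-defined networks are its modern replacement (the `myapp` image name below is hypothetical):

```shell
# connect two single-service containers over a user-defined network
docker network create appnet
docker run -d --name redis --network appnet redis:7
docker run -d --name app --network appnet myapp   # can reach Redis at hostname "redis"
```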
For example, when I was learning, I ran nginx in one container and MySQL in another, so each could be started and stopped independently. Each service's configuration files were also mapped to folders on the host, so I could edit the configuration without even starting Docker. It's true that many tutorials bundle everything into one image; I don't know why.
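The host-mapped configuration described above can be done with bind mounts on `docker run`. A sketch with illustrative host paths (adjust them to your own layout):

```shell
# nginx in its own container, config bind-mounted read-only from the host
docker run -d --name web \
  -p 80:80 \
  -v /srv/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx:1.25

# mysql in a separate container, with config and data kept on the host
docker run -d --name db \
  -v /srv/mysql/conf.d:/etc/mysql/conf.d:ro \
  -v /srv/mysql/data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=example \
  mysql:8
```

Because the data directory lives on the host, the MySQL container can be deleted and recreated (for instance to upgrade the image) without losing the database.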