- Docker makes it really easy to install and run software without worrying about setup or dependencies.
- Docker Ecosystem:
  - Docker Client
  - Docker Hub
  - Docker Machine
  - Docker Image
  - Docker Server
  - Docker Compose
- Docker gets the images from Docker Hub.
- A Docker image is a single file with all the dependencies and configurations required to run a program.
- A Docker container is an instance of an image; it runs a program. There can be multiple containers created from one image. It is a program with its own isolated set of hardware resources (memory, networking, hard drive, etc.).
- A Docker installation includes the "Docker CLI" & the "Docker Server/Daemon".
- Output of `docker run hello-world`:
  ```
  Hello from Docker!
  This message shows that your installation appears to be working correctly.
  To generate this message, Docker took the following steps:
  1. The Docker client contacted the Docker daemon.
  2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (amd64)
  3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
  4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
  ```
- Namespacing: let's say program A runs on Py2 while program B runs on Py3. With namespacing, the system calls that A & B send to the kernel can be redirected to different partitions/segments of the hard drive, where Py2 and Py3 are installed separately. (i.e. namespacing: isolating resources per process or group of processes)
- Control groups (cgroups): limit the amount of resources used per process.
- Namespacing & cgroups are Linux-only features! We can still run Docker on macOS or Windows because Docker runs a Linux virtual machine to provide a Linux kernel. Run `docker version` on Windows and the terminal will tell you that Docker is running on `OS/Arch: linux/amd64`.
- Container: A process or set of processes that have a grouping of resources specifically assigned to it, i.e. a portion of the hard drive, network, RAM, CPU, etc. made available to a process.
- Image: Snapshot of the file system along with very specific startup commands. (i.e. FS Snapshot + Startup command)
- `docker run <image name>`
  - `docker`: References the Docker Client.
  - `run`: Try to create and run a container.
  - `<image name>`: Name of the image to use for this container.
- `docker run <image name> some_command`
  - `some_command` overrides the default command!
- `docker run busybox ls`
  - Lists the default folders in busybox such as bin, dev, etc, home, proc, root. However, for the hello-world image, the `ls` command throws an error since `ls` doesn't exist in its file system image.
- `docker ps`
  - Lists all the running containers.
- `docker ps --all`
  - Lists all the containers we have ever created.
- `docker run` = `docker create` + `docker start`
  - `docker run`: Used for creating and running a container from an image.
- `docker start -a abcd1234`
  - `-a` attaches to the container, watches for output coming from it, and shows it on the terminal!
- By default, `docker run` will show all the logs etc., but `docker start` won't show you what is coming from the container.
- If you run a container via `docker start -a abcd1234`, it will run the container's default command! For example, the busybox image comes with the default command `sh`, but let's say we overrode it with `echo hi there` and created a container out of it. When we start that container again, it runs `echo hi there`, and we cannot pass an extra command to it like `docker start -a abcd1234 echo hello world`. That throws an error.
- Output of `docker container prune` (deletes all stopped containers):
  ```
  WARNING! This will remove all stopped containers.
  Are you sure you want to continue? [y/N]
  ```
- Output of `docker system prune` (deletes stopped containers and more):
  ```
  WARNING! This will remove:
    - all stopped containers
    - all networks not used by at least one container
    - all dangling images
    - all dangling build cache
  Are you sure you want to continue? [y/N]
  ```
- `docker logs <container id>`
  - It doesn't restart or rerun the container; it just gets the logs of that container.
- `docker stop <container id>`
  - Sends a `SIGTERM` (terminate signal) message. It gives the process a bit of time to clean up: finish a job, print a message, etc. If the process hasn't stopped after 10 seconds, Docker kills it anyway!
- `docker kill <container id>`
  - Sends a `SIGKILL` (kill signal) message. Immediately shuts down the process, with no grace period.
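- The difference can be seen without Docker at all. A small plain-shell sketch (my own illustration, not from the course): a process that traps `SIGTERM` gets a chance to clean up, while `SIGKILL` cannot be trapped at all.
  ```shell
  # Start a process that traps SIGTERM and cleans up before exiting.
  # (`sleep 60 & wait` instead of a plain `sleep 60` so the trap can
  # run as soon as the signal arrives, not after the sleep finishes.)
  sh -c 'trap "echo cleaning up; exit 0" TERM; sleep 60 & wait' &
  pid=$!
  sleep 1
  kill -TERM "$pid"   # what `docker stop` sends first
  wait "$pid"
  echo "graceful exit status: $?"
  # SIGKILL (what `docker kill` sends) cannot be trapped, so no
  # cleanup message would ever be printed in that case.
  ```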
- Without Docker:
  - we start `redis-server` (with this command, the in-memory store is ready to hold data),
  - then open a `redis-cli` to issue commands to the redis server.
- When we run `docker run redis`, we can't run `redis-cli` in another terminal because the server runs inside the container! We need to get inside the container to enter this command!
- `docker exec -it <container id> <command>`
  - Executes an additional command in a container. (Check the `help` output for the flags.)
- `docker exec -it 2e532f124ba8 redis-cli`
- When you are running Docker on macOS or Windows, every single container you are running is running inside of a VM running Linux.
- Every process we create in a running Linux environment has 3 communication channels attached to it:
  1. STDIN: Stuff you type.
  2. & 3. STDOUT & STDERR: Stuff that shows up on the screen.
- `-i, --interactive`: Attaches our terminal to the container's STDIN.
- `-t, --tty`: Basically prettifies/formats the output shown on our screen (actually, it does a bit more than that: "Allocate a pseudo-TTY [terminal]", as stated in the help file).
- We'll open up a shell so we don't have to use `docker exec` over and over again.
  - `docker exec -it 2e532f124ba8 sh`: Opens up a shell. Then comes:
    - `# echo "hi there"`
    - `# export b=5`
    - `# echo $b` --> Outputs `5`
    - `# redis-cli`
    - `# apt list`
    - ...and then `exit` or Ctrl+D and so on...
- What is `sh`? Well, bash, powershell, zsh, and sh are command processors/shells.
- `docker run -it busybox sh`
  - Directly starts with a shell, but most probably you won't be running any other process. So, it's more common to start your container and then attach to it later by running the `docker exec` command.
- Two containers don't automatically share their file systems.
  - For example, open two different terminals and run the same command: `docker run -it busybox sh`. Create a file in one of them and type `ls` in the other one. Yep, it's not there. They use two different, isolated file systems. They have two distinct CONTAINER IDs anyway.
- `Dockerfile`: Configuration to define how our container should behave.
  - `Dockerfile` --> `Docker Client (cli)` --> `Docker Server` (takes the Dockerfile and builds a usable image) --> Usable Image
- Dockerfile flow:
  1. Specify a base image (`FROM` command)
  2. Run some commands to install programs (`RUN` command)
  3. Specify a startup command (`CMD` command)
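- The three steps above map directly onto the redis-on-alpine Dockerfile used later in these notes:
  ```dockerfile
  # 1. Specify a base image
  FROM alpine
  # 2. Run some commands to install programs
  RUN apk add --update redis
  # 3. Specify a startup command
  CMD ["redis-server"]
  ```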
- Note: Students who have the most recent versions of Docker will have `Buildkit` enabled by default. If so, you will notice a slightly different output in your terminal when building from a Dockerfile.
  - To see the legacy outputs/logs, type: `docker build --no-cache --progress=plain .`
  - To match the course output, you can disable `Buildkit`:
    - Click the Docker icon in the system tray (Windows) or menu bar (macOS)
    - Select Settings
    - Select Docker Engine
    - Change `buildkit` from `true` to `false`. Then apply & restart:
      ```
      {
        ...
        "features": {
          "buildkit": false
        },
        "experimental": false
        ...
      }
      ```
- Mert: I won't disable it. I liked the `buildkit` output more.
- `docker build .`
  - This uses the Dockerfile in that folder.
- `FROM`: Specifies the docker image we want to use as a base.
- `RUN`: Used to execute a command while building the image.
- `CMD`: Specifies what should be executed when our image starts up a brand new container.
- So, overall the structure is like this: `INSTRUCTION argument`
  - e.g. `FROM alpine`, `RUN apk add --update redis`, `CMD ["redis-server"]`
- A base image like `alpine` contains a useful set of programs. It's like an OS.
- The `.` in `docker build .` is the build context. It's the set of files and folders that belong to our project which we want to encapsulate or wrap in the container.
is the build context. It's the set of files and folders that belong to our project which we want to encapsulate or wrap in the container. - Every command in the Dockerfile can be observed as
STEP 1/3: FROM alpine
etc. - After the first line of code (e.g.
FROM alpine
), following commands create an intermediate container (e.g. from the terminal log:---> Running in 30as98d7fh
) to run the command on it. Then, the intermediate container gets shut down (e.g. from the terminal log:Removing intermediate container 30as98d7fh
). - Each command gets the image from the previous step and creates a container out of it to build on top of it. When the commands end, the last container modifies the file system and the newest image is presented to us as our final image!
- Assume we added another `RUN` command (e.g. `RUN apk add --update gcc`) to the Dockerfile and build it once more. This time, Docker will use the cache up to the changed line and won't create new intermediate containers to fetch and install the previously installed packages. See the `---> Using cache` lines in the terminal log.
  - This makes Docker builds much faster.
  - The order of the operations matters even if the packages stay the same! If you change the order, Docker won't use the cached versions.
- It is easier to use an image tag instead of its image ID. So, you can use the `-t` tag flag to give your image a name.
- `docker build -t mert/mydockerproject:latest .`
  - This follows a naming convention: `your_docker_id/repo_or_project_name:version`
  - It's better to use a version number instead of `latest`.
  - Then you use: `docker run mert/mydockerproject` (if you don't specify a version, it'll use `latest` by default).
- Quick note for Windows users: In the upcoming lecture, we will create a new image using `docker commit` with this command:
  - `docker commit -c 'CMD ["redis-server"]' CONTAINERID`
  - If you are a Windows user, you may get an error like `/bin/sh: [redis-server]: not found` or `No Such Container`. Instead, try running the command like this:
  - `docker commit -c "CMD 'redis-server'" CONTAINERID`
- We are not going to use this method much, or at all, but it's fun to learn.
- So what's the deal here? We create and run a container with an `alpine` base, modify it, and `commit` it to create a new image out of that container! Fun, huh?
  - `docker run -it alpine sh`
  - `> apk add --update redis` (we are in that container's shell!)
  - --- Switch to another terminal ---
  - `docker ps` (get the running container ID)
  - `docker commit -c "CMD ['redis-server']" 42d67b4a3bcc`
  - Output: `sha256:598729347yuıh54jb43k25hj4b6........` (the new image ID)
  - `docker run 598729347yu` (Docker resolves the rest of the ID; you don't need to write it all.)
- Remember the legacy builder command for more verbose output: `docker build --no-cache --progress=plain .`
- `npm install`: Installs the dependencies for Node JS apps.
- `npm start`: Starts up the server.
- npm: Node Package Manager
- You can look for specific tags of images on the Hub. An `alpine` version/tag means that you are using the most stripped-down version of that image.
- The previous container doesn't have your local build files! Thus, you need to copy them into that container.
- In the command `COPY ./ ./`:
  - the first `./` is the path of the folder to copy from on your machine, relative to the build context (which comes from `docker build .`),
  - the second `./` is the place to copy the files to inside the container.
- `docker run -p 8080:8080 <image id or name>`: Routes incoming requests on the first port on localhost to the second port inside the container.
  - The ports don't need to be identical. Our local port could be 5000 (e.g. `-p 5000:8080`).
  - A container can reach out to the outer world (like the internet during `npm install`), but the local computer cannot get inside the docker container by default! This is why a port mapping needs to be specified.
- Copying local project folders and files into the container's root directory (with the `COPY` command in the `Dockerfile`) may cause problems.
  - For example, there could be a folder named `lib` in your project, and it might overwrite the `lib` folder inside the container!
  - Hence, it is a better idea to specify a working directory in the container so you can work in a safe place (which you specified) without any conflict.
- The command for it is `WORKDIR`, e.g. `WORKDIR /usr/app`.
- So, we changed the `Dockerfile` and performed these steps to confirm we are in the working directory:
  - `docker build --no-cache --progress=plain -t mert/simpleweb .`
  - [in a second terminal] `docker ps`: To get the running container ID.
  - [in the second terminal] `docker exec -it c41e0a33dee3 sh`
  - Here we are at `/usr/app #`, waiting for a command in the terminal.
  - All of the files we copied, including the dependencies installed with `npm install`, are now safely in `/usr/app`.
- If you change the `index.js` file, you have to build the image once again...
  - ...and install all the dependencies. This is tiresome!
- We split the `COPY` command [see the latest version of the Dockerfile in the `simpleweb` folder] so that unless the dependencies change, the rebuild process will use their cached versions and only copy the updated `index.js` into the image. The rebuild process will be much faster.
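- A sketch of what the split-`COPY` version of the `Dockerfile` likely looks like (the exact file lives in the `simpleweb` folder):
  ```dockerfile
  FROM node:alpine

  WORKDIR /usr/app

  # Copy only package.json first: `npm install` is re-run only
  # when the dependency list changes...
  COPY ./package.json ./
  RUN npm install

  # ...while editing index.js only invalidates the cache from
  # this step onward.
  COPY ./ ./

  CMD ["npm", "start"]
  ```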
- Let's say we have a simple website with a Node app + Redis server, where Redis server counts the number of visits to the page. We can gather these two in a single container, but when we need to scale things up, creating multiple containers with these two might cause us trouble since the containers count the visits separately on their own. Thus, it's better to scale Node apps up and then connect these multiple containers to our Redis container.
- The `redis` dependency is a JS client library for connecting to the Redis server to pull/update information.
- In the JavaScript code we wrote, I couldn't grasp what the `.set` method does, either the last time I studied the course or now. Then I searched about it and the part I was struggling with dawned on me: I thought that constant was a string. No! The Redis client stores key & value pairs, like a map object. `client.set('visits', 0);` ---> Here, we set the `visits` key to a value, which is zero. I thought it was related to a URL or something on the server we were building, but no! It's a simple `key: value` pair (as in Python).
- Also, I changed a line to use a formatted js string. BUT REMEMBER: To format a string in js, you must use backticks (`` ` ``), not ' or ".
- When we tried to run the node app container, it threw an error about failing to connect to the Redis server. We ran a Redis server in another terminal, but it still threw the error. Why? Because these two containers don't form any communication automatically. They are two isolated processes, i.e. separate containers! To connect them, we have 2 options:
  - Docker CLI's Networking Features
    - It's a pain in the arse! You need to handle a bunch of commands and rerun them every single time!
    - The teacher said that he had never seen a person do this in industry.
  - Docker Compose
    - It's a separate CLI tool.
    - Used to start up multiple containers at the same time.
    - Automates some of the long-winded arguments we were passing to `docker run`.
- About docker-compose file versioning: "If only the major version is given (`version: '3'`), the latest minor version is used by default." - From the Docker docs
- The `docker-compose.yml` file contains all the options we'd normally pass to the `docker-cli`. We're gonna tell that file:
  - Here are the containers I want created:
    - redis-server
      - Make it using the `redis` image.
    - node-app
      - Make it using the Dockerfile in the current directory.
      - Map port 8081 to 8081.
- Inside the `yml` file, under `services`, we use `build: .` to build from the Dockerfile inside our directory.
  - `services` are like containers. We list the containers under it, but a service is not exactly the same thing as a container!
  - A dash (`-`) in a `yml` file specifies an array element. So we can map as many ports as we want.
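- Putting the pieces above together, the `docker-compose.yml` for this lesson is roughly (a sketch; see the actual file in the project folder):
  ```yml
  version: '3'
  services:
    redis-server:
      image: 'redis'
    node-app:
      build: .
      ports:
        - '8081:8081'
  ```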
- About the line `host: 'redis-server'` in `index.js`: Docker will see this host name in the connection request and understand that it is looking for the container/service named `redis-server`. If it were a normal node app, the value would be a regular URL.
  - Also, we specified `port: 6379` in the same options object, under `host`. It is the default port for a redis server, but we added it anyway.
- Then docker-compose will automatically connect those containers/services.
- `docker-compose up`:
  - The equivalent of `docker run myimage`.
  - If the images are not built already, docker-compose builds them.
  - We don't specify an image name; docker-compose finds everything in the docker-compose file.
- `docker-compose up --build`: `docker build .` + `docker run myimage`
  - `--build`: Rebuilds the services if something has changed. Without it, the already built services will run and the changes will be ignored.
- Launch in the background: `docker-compose up -d`.
  - `-d` or `--detach`: Runs the containers in the background (detached from the terminal).
- Since we have multiple containers running in our docker-compose setup, it would be a pain to stop them all one by one with `docker stop container_id`. So, we have: `docker-compose down`
- We changed `index.js` to make our server crash (see the log: `visits_node-app_1 exited with code 0`).
  - `code 0` means that we exited and everything is OK. 1, 2, 3, etc. mean that we exited because something went wrong!
- However, only the node-app has crashed and the redis server is still online. How about we restart the container?
- Restart Policies:
  - "no": Never attempt to restart this container if it stops or crashes.
    - Note the quotes around `no`! In a `yml` file, `no` without quotes is a boolean value. Thus, we make it a string, `"no"`, with single or double quotes.
  - always: If the container stops for any reason, always attempt to restart it.
  - on-failure: Only restart if the container stops with an error code.
    - Hence, it won't restart on `process.exit(0)` since `code 0` means we exited and everything is OK. It'll restart on anything other than `code 0`.
    - We can use it with workers doing a scheduled job, like in Pubtimer. When everything finishes successfully, it stops the container. Otherwise, it restarts the job/worker. It's pretty useful when I think about the database issues in Pubtimer. I have to restart them manually when things get messed up!
  - unless-stopped: Always restart unless we (the developers) forcibly stop it.
- These policies are defined in the `yml` file.
  - We added `restart: on-failure` under the `node-app` service.
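- So the `node-app` service entry now looks something like this sketch:
  ```yml
  node-app:
    restart: on-failure
    build: .
    ports:
      - '8081:8081'
  ```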
- `docker-compose ps` is the equivalent of `docker ps`, BUT it only works where your related `yml` file is. Thus, it doesn't work globally like `docker ps`. It throws an error if it can't find the file in the working directory.
  - I found this repo for docker-compose syntax and keywords.
- `npm run start`: Starts up a development server. For development use only!
- `npm run build`: Builds a production version of the application.
- `npm run test`: Tests our app; here we are only concerned with whether all the tests pass or not.
========================================================
============OLD NOTES BELOW. WILL BE REVISED.===========
========================================================
- We created `Dockerfile.dev` for development purposes. For prod, the good old `Dockerfile` is sufficient.
- To build with a custom Dockerfile name, use `docker build -f Dockerfile.dev .`
- We deleted the local `node_modules` folder because of duplicate dependencies. The `build` became much faster.
- Instead of copying local files into our Docker image, we can reference them so that when we change something in our local files (like the React app in the tutorial), the change is also reflected in the container. So, we don't have to build the image again and again.
- Using the `volume` feature is a bit of a pain considering the syntax.
- `docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app mert/frontend`
  - The first `-v` puts a bookmark on the `node_modules` folder.
    - Without a `:`, we say "Don't try to map it against anything. It's a placeholder and we don't want to accidentally overwrite this directory."
  - The second `-v` maps the `pwd` into the `/app` folder.
    - The `:` maps the folder outside the container (`$(pwd)`) to the one inside the container (`/app`).
- We can wrap things up (see the previous lesson) in a
docker-compose.yml
.
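- A sketch of such a `docker-compose.yml`, mirroring the long `docker run -v ...` command above (the service name `web` is my own choice, not from the course):
  ```yml
  version: '3'
  services:
    web:
      build:
        context: .
        dockerfile: Dockerfile.dev
      ports:
        - '3000:3000'
      volumes:
        - /app/node_modules   # bookmark: don't map node_modules to anything
        - .:/app              # map the project folder into /app
  ```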
- `docker run 61a63368e07c npm run test` performed the test, but we weren't connected to STDIN, so we couldn't give any command afterwards. Thus, please remember to add the `-it` flags!
- Every process inside the container has its own instances of STDIN, STDOUT, STDERR.
- `docker attach 38agsas8g45`: Attaches our terminal's STDIN, STDOUT and STDERR to the primary process in that container. However, this doesn't work as we expected here. We see nothing.
- We write `docker exec -it 38ags.. sh` to connect to the container and list the running processes with `ps` in the `sh`. We see that the primary process is `npm`, which STARTS the `run test` command as a child process. Thus, `docker attach` attaches to the `npm` process instead of the `run test` process, which is the one that would actually communicate with STDIN. That's why we cannot interact with STDIN from the terminal.
- `npm run build`: Builds a production version of the application. It takes all the js files, processes them together, puts them all into a single file, etc.
- `nginx`: Pronounced "engine x". It takes incoming traffic and routes it, or responds to it with static files.
- We are going to create another Dockerfile, for production this time!
- So, these are the steps for building our production image:
  1. Use `node:alpine`
  2. Copy the `package.json` file
  3. Install dependencies
  4. Run `npm run build`
  5. Start `nginx`
- However, there are problems with this type of build. First, the dependencies needed just to create the `main.js` files fill up the disk, because we don't need them after we build the app. Second, how do we start `nginx`, for God's sake!? We need another base image for that!
for God's sake!? We need another base image for that! - Thus, we separate the phases as Build Phase and Run Phase. There will be two different builds:
- Build Phase
- Same as the steps 1 to 4.
- Run Phase
- Use
nginx
- Copy over the result of
npm run build
- Start
nginx
- Use
- Build Phase
- [See the Dockerfile] Except the
/app/build
folder, everything on thebuilder
container will be dumped. This saves some space for us. - You can read the comments in the
Dockerfile
as well.
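- A sketch of the multi-stage `Dockerfile` described above (the `builder` tag is just a label we choose; `/usr/share/nginx/html` is nginx's default static-file directory):
  ```dockerfile
  # Build phase: the node image compiles the React app
  FROM node:alpine as builder
  WORKDIR /app
  COPY package.json .
  RUN npm install
  COPY . .
  RUN npm run build

  # Run phase: only /app/build is copied over; everything else
  # in the builder container is discarded
  FROM nginx
  COPY --from=builder /app/build /usr/share/nginx/html
  ```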
- We build the image (`docker build .`) without specifying the file with the `-f` flag because we have created a regular `Dockerfile`.
- `nginx` uses 80 as its default port.
  - `docker run -p 8080:80 51hj2g3k1j`
- Note from the lecturer: Travis CI is not totally free anymore. On the website, select Monthly Plans and then the free Trial Plan. This will give you 10,000 free credits to use within 30 days. If you run out of credits, are unable to register for a Travis account, or simply do not wish to use Travis, we have provided guidance to use Github Actions instead. Click here to get to the featured question.
Note from Mert: I am going to use Travis CI to learn about the tool until the credits expire. Then, I might switch back to the Github Actions later.
- Remember the leading dot in the name of the `.travis.yml` file.
- In the `yml` file, we used the Docker username under the `before_install` section, but it's not really essential. Only Travis is going to use it. It's just a good naming convention.
- We added the property `language: generic` to the `.travis.yml` file to include the necessary services and languages. Click here to get more information about it.
- The `-e` flag under the `script` section of the `yml` file sets environment variables. See the "ENV (environment variables)" section in the Docker documents.
  - We use it instead of the `--coverage` flag now and set an env var `CI=true`, because the test command will then force Jest to run in CI mode, and tests will only run once instead of launching the watcher. See here for more information.
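- Putting these notes together, the `.travis.yml` is roughly the following sketch (the image tag matches the repo name below; adjust it to your own Docker ID):
  ```yml
  language: generic
  sudo: required
  services:
    - docker

  before_install:
    - docker build -t gulmert89/docker-react -f Dockerfile.dev .

  script:
    - docker run -e CI=true gulmert89/docker-react npm run test
  ```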
- AWS configuration has been changed since the last recording of the lectures. Thus, check the cheat sheet on this specific lesson if required.
- We didn't need the `EXPOSE` instruction to map the default port to the environment.
- The `Dockerfile.dev` is for development; the regular one is for the production stage.
- The repo for the build can be found here: gulmert89/docker-react