Docker Concepts
If you need multiple instances of different ROS or Ubuntu versions, one solution is a virtual machine, which puts a lot of overhead on the hardware and makes everything very slow. A good alternative is Docker. Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server [1].
Some analogies to OOP:
Images are similar to classes. Images are made of layers, conceptually stacked on top of each other; each layer can be added, changed, or removed. Images can share layers. An image is read-only.
Layers are similar to inheritance.
Containers are similar to instances. A container is a runnable copy of an image.
Basically, you create images and spin off containers from those images that you can work with.
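As a quick sketch of this analogy (ubuntu:20.04 is just an example image, and instance1/instance2 are placeholder container names):

# pull an image (the "class")
docker pull ubuntu:20.04
# list its read-only layers
docker history ubuntu:20.04
# spin off two independent containers (the "instances") from the same image
# (exit the first before starting the second)
docker run --name instance1 -it ubuntu:20.04
docker run --name instance2 -it ubuntu:20.04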
Installation
Here, I have installed the Docker community edition on Ubuntu 18.04 (Bionic Beaver) [2].
1) First, make sure you don’t have any old version of docker on your system (docker, docker.io, or docker-engine):
sudo apt-get remove docker docker-engine docker.io containerd runc
2) Install the prerequisite packages:
sudo apt-get update && sudo apt-get install ca-certificates curl gnupg lsb-release
3) Add Docker’s official GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
4) Add the repository:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
5) Install Docker:
sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
6) Now you are ready to start, but first fix the permission problem so you can run Docker without sudo (log out and log back in afterwards for the group change to take effect):
sudo usermod -a -G docker $USER
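To verify the installation, a common smoke test is to run the tiny hello-world image:

docker run hello-world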
Docker image
The run command will start a container from a given image:
docker run --name <name_for_container> -it <image_name:tag_name>
-i: interactive (keep STDIN open)
-t: allocate a pseudo-TTY
If you don’t provide a container name with --name, the daemon generates a random string name for your container, such as “loving_tereshkova”.
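For example, to start an interactive Ubuntu container and give it the name myubuntu (an arbitrary name, reused in the start example further below):

docker run --name myubuntu -it ubuntu:20.04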
If the image doesn’t exist locally, Docker will pull it from hub.docker.com. Basically, it runs the pull command beforehand:
docker pull <image_name:tag_name>
For instance, to pull Ubuntu 20.04:
docker pull ubuntu:20.04
To check the Ubuntu version inside the container, just run:
lsb_release -a
To see available images on your machine:
docker images
or
docker image ls
You can search for images directly from the terminal:
docker search <image_name>
If you need to expose ports so your container is accessible from outside:
docker run -p 6379:6379 --name redis-server -it redis
You can type exit to leave the container.
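To double-check the mapping from the host, you can ask Docker which host ports the container's ports are bound to (redis-server is the container name from the example above):

# prints something like: 6379/tcp -> 0.0.0.0:6379
docker port redis-server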
To remove all dangling images:
docker image prune
To remove all unused images, not just dangling ones:
docker image prune -a
Docker container
To see the running containers:
docker ps
To see all containers, including stopped ones:
docker ps -a
The following command has the same output as docker ps, but it is the newer form, so it’s better to use this one:
docker container ls
To list all containers (by default, only running ones are shown):
docker container ls -a
To terminate a container:
docker kill <container_id>
docker stop <container_id>
kill terminates the container immediately, while stop sends SIGTERM first and gives the process a chance to shut down cleanly.
Next time you want to run your container, you can use the name you gave it:
# find the name of the stopped container first with:
docker container ls -a
docker start -i myubuntu
Now, if we install some packages inside our container, we can use docker diff to see the changes we made to it:
docker diff <container_id>
To get into a running container:
docker exec -it <container_id> bash
To find where the images and containers are stored:
docker info
On Ubuntu, they are under:
Docker Root Dir: /var/lib/docker
To remove a container:
docker rm <container_id>
docker rm <given-name>
To remove all containers:
docker rm $(docker ps -aq)
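Since docker rm refuses to delete running containers, a full cleanup sketch would stop everything first (destructive, so only run it if you really want all containers gone):

# stop every running container, then remove all containers
docker stop $(docker ps -q)
docker rm $(docker ps -aq)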
To remove all the unused containers at once:
docker container prune
To remove an image:
docker rmi <your_image_tag_name>
If a container has been spun off from an image, you need to delete the container before removing the image, or you can use the force option:
docker rmi -f <your_image_tag_name>
To list dangling images:
docker images -f dangling=true
To remove dangling images:
docker rmi $(docker images -f dangling=true -q)
Creating Images
We can use either of the following to create a new image:
- commit
- build
Commit
commit will create an image from your container:
docker commit <container_id> <image_name>
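For example, assuming you installed some packages inside the myubuntu container from earlier, you could snapshot it under a new (hypothetical) tag; the -m option attaches a commit message:

docker commit -m "installed cmake" myubuntu myubuntu-cmake:v1
# the new image now shows up next to the others
docker images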
Build
What build does is pull the base image, run an intermediate container from it, execute your commands in it, and finally commit the result as a new image; it is basically an automated commit.
Example 1
First, search hub.docker.com for an image, e.g. “Ubuntu”, to find the right name; then create a file, name it “Dockerfile”, and add the following to it:
# ubuntu is the base image
FROM ubuntu:20.04

# this is for making the installation non-interactive
ENV DEBIAN_FRONTEND=noninteractive

# this is for timezone config
ENV TZ=Europe/Berlin
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

RUN apt-get update

# -y is for accepting yes when the system asks us about installing the package;
# git and build-essential are also needed for the clone and compile steps below
RUN apt-get install -y cmake git build-essential

RUN git clone https://github.com/gflags/gflags
RUN mkdir -p gflags/build
WORKDIR "gflags/build"
RUN cmake ../ && make -j8 all install
Now you can build your custom image:
docker build -t <my_image_tag_name> <path_to_directory_containing_Dockerfile>
The default file that Docker looks for is named Dockerfile, without any extension. After building your image, you can run a container from it with:
docker run --name <container_name> -it <image_name>
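As a quick sanity check that the gflags build and install succeeded, you can list the installed headers; I am assuming CMake's default install prefix /usr/local here:

docker run --rm -it <image_name> ls /usr/local/include/gflags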
Example 2
First, search hub.docker.com for PHP and pick the Apache variant from the list; at the time of writing, the latest stable tag is 7.1.11-apache. Just like in the previous example, create a file, name it Dockerfile, and add the following lines to it:
FROM php:7.1.11-apache
COPY src/ /var/www/html
EXPOSE 80
Create a src directory, add an index.php, and simply put a hello world message in it:
<?php echo "hello world";?>
Now we are ready to build it:
docker build -t <your_image_tag_name> .
After a successful build, list all your images, and you should see the image that you just created:
docker images -a
In our case, since we need to expose port 80, we run the container with the -p option:
docker run -p 80:80 <your_image_tag_name>
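You can then test the container from the host; assuming nothing else occupies port 80, curl should return the message from index.php:

curl http://localhost:80
# expected output: hello world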
Now you should be able to see your running container:
docker container ls
To stop your container:
docker container stop <Container_ID>
If you make changes to your project, you need to rebuild it and create another image, which might be time-consuming. To avoid this, you can share directories between the host and the container, which are called volumes:
docker run -v <path_to_source_directory>:<path_to_directory_in_container> <your_image_tag_name>
which, in my case:
docker run -p 80:80 -v /home/behnam/php_demo/src/:/var/www/html/ php_demo
Now any changes to the src directory on the host will be seen immediately in the running container.
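A quick sketch of this workflow, reusing the paths from my example above (no rebuild between the two requests):

# edit the mounted file on the host...
echo '<?php echo "hello again";?>' > /home/behnam/php_demo/src/index.php
# ...and the running container serves the new content immediately
curl http://localhost:80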
Transferring Files From/Into Containers
docker cp <OPTIONS> <CONTAINER>:<SRC_PATH> <DEST_PATH_ON_HOST>
docker cp <OPTIONS> <SRC_PATH_ON_HOST> <CONTAINER>:<DEST_PATH>
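For instance, with the myubuntu container from earlier (notes.txt is a made-up file name):

# container -> host: copy a file into the current host directory
docker cp myubuntu:/etc/hostname .
# host -> container
docker cp notes.txt myubuntu:/tmp/notes.txt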
We can also use volumes.
Volume
If you delete your container, the data stored in it will be lost. You can use a volume to store your data outside of containers. Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
We can map a directory on the host machine into a directory inside the container; this is called a volume. If you delete the container, the volume won’t be deleted. To get information about the command, just run:
docker volume
To get a list of existing volumes:
docker volume ls
To display detailed information on a volume:
docker volume inspect <volume-name>
To create a new volume:
docker volume create <volume-name>
To attach a volume to a container:
docker run --name ubuntu-container1 -v my-volume:/home -it ubuntu
Now if we create some files in the home directory of “ubuntu-container1” and run a second instance with the same volume, the data will be shared between the two instances:
docker run --name ubuntu-container2 -v my-volume:/home -it ubuntu
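A minimal sketch of the sharing (shared.txt is a made-up file name):

# inside ubuntu-container1:
echo "hello from container1" > /home/shared.txt
# inside ubuntu-container2, the file is immediately visible:
cat /home/shared.txt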
Access docker volume data from host machine
Volume drivers let you store volumes on remote hosts or cloud providers, encrypt the contents of volumes, or add other functionality.
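With the default local driver, you can also locate a volume's data directly on the host (a sketch; the Mountpoint sits under Docker's root directory and usually requires root access to browse):

# print where my-volume is stored on the host
docker volume inspect --format '{{ .Mountpoint }}' my-volume
# typically /var/lib/docker/volumes/my-volume/_data
sudo ls /var/lib/docker/volumes/my-volume/_data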
Add a volume to an existing Docker container
OK, this is a little hacky. First, stop the container, then find the corresponding file config.v2.json, located at /var/lib/docker/containers/<container-id>/config.v2.json. Find the MountPoints section, copy the proper contents from another container with proper settings, and restart the Docker service:
service docker restart
Refs: [1]
Bind mount
When you use a bind mount, a file or directory on the host machine is mounted into a container. Bind mounts have limited functionality compared to volumes. If you bind-mount into a non-empty directory on the container, the directory’s existing contents are obscured by the bind mount. For instance, binding the container’s /usr/ directory with the /tmp directory on the host machine would result in a non-functioning container.
cd ~
mkdir dir-pointing-container-home
docker run --name ubuntu-container -v /home/behnam/dir-pointing-container-home:/home -it ubuntu
Difference between bind mounts and volumes
When you use a volume, a new directory is created within Docker’s storage directory on the host machine. Bind mounts are dependent on the directory structure and OS of the host machine, whereas volumes are completely managed by Docker. A volume does not increase the size of the containers using it, and its contents exist outside the lifecycle of a given container.
Developing docker with GUI (X Window System)
To do that, you should mount your host’s X Server socket into the Docker container. This allows your container to use the X Server you already have. GUI applications running in the container would then appear on your existing desktop.
Just some more information for geeks: X Server sockets are Unix domain sockets, or IPC (inter-process communication) sockets, which are used for exchanging data between processes executing on the same host operating system.
Unix domain sockets are similar to Internet sockets, but rather than using an underlying network protocol, all communication occurs entirely within the operating system kernel.
Refs: [1]
Providing a Docker container with access to your host’s X socket is a straightforward procedure. The X socket can be found in /tmp/.X11-unix on your host. The contents of this directory should be mounted into a Docker volume assigned to the container. You’ll need to use the host networking mode for this to work.
You must also provide the container with a DISPLAY environment variable. This instructs X clients – your graphical programs – which X server to connect to. Set DISPLAY in the container to the value of $DISPLAY on your host.
Refs: [1]
OK, first let’s create our container:
docker run -it --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" ubuntu:latest
Now, inside the container, install xeyes:
apt update && apt install x11-apps -y
Then run the following command on the host:
export containerId=$(docker ps -l -q)
Hint:
-l, --latest: Show the latest created container (includes all states)
-q, --quiet: Only display container IDs
Now, on the host, we open up xhost only to the specific system that we want (for instance, with the container’s ID stored in the shell variable containerId):
xhost +local:`docker inspect --format='{{ .Config.Hostname }}' $containerId`
Now if you run xeyes in the container, it will be forwarded to the host:
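That is, inside the container:

# the eyes window should appear on the host's desktop
xeyes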
Refs: [1]
Developing your project on Docker
I have used VSCode for this purpose; first, install the following extensions:
- Remote Containers
- Remote explorer
- Docker Explorer
Now run your container, and in VSCode click on the remote explorer:
Now right-click on the running container and click on attach to container:
If this is the first time you are doing this, it will initialize a new instance of VSCode; in the new VSCode, at the bottom, you can see the connected container:
Now you should reinstall your extensions, such as C/C++, CMake Tools, and CMake Integration, in the container:
Now press ctrl+shift+p to open the command palette, and type: cmconf
Now, by clicking on the CMake icon on the left, we can build/debug our applications.
Docker Compose
Docker Compose Vs Dockerfile
Docker Compose is a tool for defining and running multi-container Docker applications, whereas a Dockerfile is a simple text file that provides the commands a user could use to create an image.
Docker Compose allows you to specify the services that make up your app in a docker-compose.yml file so that they can run in an isolated environment. By simply running docker-compose up, you can get an app up and running in one command. If a service has a build entry, Docker Compose will build its image from the corresponding Dockerfile.
Example, Dockerfile
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y x11-apps build-essential
ADD hello.cpp /home/hello.cpp
WORKDIR /home
Example, docker-compose.yml
version: '3'
services:
  web:
    build: .
    ports:
      - '5000:5000'
  redis-server:
    image: 'redis'
Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use Compose to assemble the images using the build command.
In the above example, version: '3' tells Docker Compose which version of the docker-compose file format you want to use. You can specify the path to an individual Dockerfile using build: /path/to/dockerfiles/blah, where /path/to/dockerfiles/blah is where blah’s Dockerfile lives. In our example, you will have two separate containers: one is redis-server and the second one is web.
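To try the example, a typical session looks like this (assuming the docker-compose.yml and the Dockerfile sit in the current directory):

# build (if needed) and start all services
docker-compose up
# or start in the background and tear everything down later
docker-compose up -d
docker-compose down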