DevOps Tips

Creating a Dockerfile? These are 7 things you should not forget!

This checklist will help you prevent certain deployment errors, or even outages, in the future when working with Docker and Dockerfiles. To help you get started, we have also included a template.

Written by Albert Heinle


Starting FROM the top

The benefit of Docker is that images can be built on top of each other, expanding the functionality of the container.

Your first line usually states `FROM <IMAGE NAME>`, where `<IMAGE NAME>` is an image name as found in a Docker registry (e.g. Docker Hub).

It is crucial to either add a tag (i.e. `FROM <IMAGE NAME>:<TAG>`) or a digest (i.e. `FROM <IMAGE NAME>@<DIGEST>`). Otherwise, the default `latest` tag is applied, and breaking changes in the base image may silently make it into your builds. Note that the digest pins the exact image, whereas a tag may still be updated to point at a different image.
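For illustration, the two pinning styles could look like the following; the image name, tag, and digest below are placeholders rather than recommendations:

```dockerfile
# Pin to a specific tag (tags can still be re-pushed upstream)
FROM alpine:3.19

# Or pin to an immutable digest (use the digest you actually verified)
# FROM alpine@sha256:<DIGEST>
```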

If you download it, you need to verify the signature

In some cases, data and even executables are downloaded in a Dockerfile via either wget or curl. The risk in this approach is that the external resource may change or get compromised. This is why you should keep a local signature or checksum file (small in size) for the file you intend to download, and verify that the downloaded file(s) match it. This ensures stability in future builds.

If the files are signed by the third party, you can use `gpg --verify` to check their integrity. If the author does not provide a signed package, at least keep the hash you expect and compare it against the file you just downloaded.
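As a rough sketch of the checksum variant, a download-and-verify step could look like this; the URL, file name, and checksum are placeholders you would replace with your own pinned values:

```dockerfile
# Download the artifact and compare it against a checksum kept in your repository.
# <ARTIFACT_URL> and <EXPECTED_SHA256> are placeholders.
RUN curl -fsSL -o /tmp/artifact.tar.gz "<ARTIFACT_URL>" \
    && echo "<EXPECTED_SHA256>  /tmp/artifact.tar.gz" | sha256sum -c - \
    && tar -xzf /tmp/artifact.tar.gz -C /usr/local/ \
    && rm /tmp/artifact.tar.gz
```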

“Keep the image small” strategies

Small distributions

While you may be tempted to choose your Linux distribution based on your desktop Linux experience (e.g. Ubuntu or Fedora), it is advisable to use a smaller base image. Here are some of the smallest images out there:

  • alpine
  • busybox

Most other distribution base images (e.g. Ubuntu or Debian) are at least ten times larger.
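For example, on an Alpine base the package manager can skip its index cache entirely; the tag and package name below are only illustrative:

```dockerfile
FROM alpine:3.19
# --no-cache fetches the package index on the fly instead of storing it in the image
RUN apk add --no-cache curl
```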

Package manager cleanups

For non-trivial containers, you will not get around running something like

`apt update && apt install <SOME LIBRARY>`.

This creates a package cache and index lists that unnecessarily remain in the image and take up space.

For the apt package manager, the files in the folder `/var/lib/apt/lists` can be removed at the end.
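Because every Dockerfile instruction creates its own layer, the cleanup only reduces the image size if it happens in the same RUN instruction as the install. A minimal sketch (the package name is illustrative):

```dockerfile
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
```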

Do multistage builds

One of the uses of Docker images is to build a software project in a reproducible environment. The requirements to build are usually much larger than the requirements to run the result. Hence, use multi-stage builds to ensure that the final image does not carry any extra packages that are only needed at build time.

This also fits in the category of “keeping the images small”, but its implications go beyond size. Remember that every additional package in your image is one more potential security risk.
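As a sketch of the idea, assuming a Go service (the module layout, binary name, and base images are hypothetical):

```dockerfile
# Build stage: contains the compiler and build dependencies
FROM golang:1.22-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Final stage: only the compiled binary is copied over
FROM alpine:3.19
COPY --from=builder /out/app /usr/local/bin/app
CMD ["app"]
```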

Setting a non-root user as default user when running the image

By default, the root user executes all commands (including the CMD directive) in a Docker container. Running as root may make sense while installing packages, but by the end of the Dockerfile, the USER directive should switch to a different, unprivileged user on the system.

This should be done even if the parent image already sets a non-root user. By always having an explicit USER directive at the end of your Dockerfile, you remove the reliance on upstream settings for this security-critical configuration.
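A minimal sketch on a Debian/Ubuntu-based image; the user and group names and IDs are placeholders:

```dockerfile
# Create an unprivileged user and group (IDs and names are illustrative)
RUN groupadd -g 10001 appgroup \
    && useradd -u 10001 -g appgroup appuser

# ... package installation and configuration happen as root above ...

USER appuser
```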

Have a HEALTHCHECK directive

Understanding what it means for your application to run and be in a “healthy” state ensures that errors are caught early. We recommend using the HEALTHCHECK directive of Dockerfiles to assist your monitoring system in assessing the overall health of your microservice-driven application. The only thing to keep in mind is to keep those checks computationally cheap, as they are regularly executed.
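For instance, for a service that exposes an HTTP health endpoint (the port and path are assumptions about your application):

```dockerfile
# Keep the check cheap: a single local HTTP request
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
    CMD curl -fsS http://localhost:8080/health || exit 1
```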

Mount /var/log as a volume

The usual place where logs are stored on a Linux-based system is /var/log. Declaring it as a volume keeps the logs out of the container's writable layer and makes them easy to capture with your monitoring and analysis systems, where they provide important information for security analysis.
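In the Dockerfile this is a single directive:

```dockerfile
# Expose the log directory as a volume so a host path or log shipper can be attached to it
VOLUME ["/var/log"]
```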

Scan existing Dockerfiles and configurations

Already created a Dockerfile and are not sure if it has been configured correctly? Check out and use our open source CLI tool:

https://github.com/coguardio/coguard-cli

Final Template suggestion

Feel free to use this template when you create your Dockerfiles.


# This is a Dockerfile template suggestion by the coguard.io team
FROM <IMAGE NAME>:<TAG>
# Use <IMAGE NAME>@<DIGEST> in order to specify the concrete image
# version. CoGuard (coguard.io) will accept either
# of these strategies.

# Either create a new user and group here using e.g. useradd,
# or assume that you are getting one from the parent image.
# Here is an example (on Alpine, use addgroup/adduser instead):
# RUN groupadd -g <GROUP ID> <GROUP NAME>
# RUN useradd -u <USER ID> -g <GROUP ID> -s /bin/bash <USER NAME>

# If you installed anything via apt, run the following command
# (see the above suggestion to keep images small)
# RUN rm -rf /var/lib/apt/lists

HEALTHCHECK CMD # YOUR COMMAND HERE.
VOLUME ["/var/log"]
# Although it may already be set in the parent image, it is important
# to ensure the user is set downstream. The CoGuard CLI will check
# for this property.
USER <USER NAME>

Visit CoGuard's blog for more great articles: https://www.coguard.io/blog

Email us anytime: info@coguard.io

