Multi-Container Docker with Flask, Redis & Celery

Mahedi Hasan Jisan
5 min read · Nov 12, 2021

→ What is Flask?

Flask is a micro web framework that can be used to build web applications. It provides the core tools and libraries you need to get an application running. In my experience, Flask is especially useful for API-based applications and products composed of multiple microservices. Big companies such as Netflix and Uber use Flask in their products.

How to create a virtual environment? https://mahedihasanjisan.medium.com/django-in-virtual-environment-python-54b91ca0f0e9

What does “app = Flask(__name__)” actually do?

“__name__ is just a convenient way to get the import name of the place the app is defined. Flask uses the import name to know where to look up resources, templates, static files, instance folders, etc. When using a package, if you define your app in __init__.py then the __name__ will still point at the “correct” place relative to where the resources are. However, if you define it elsewhere, such as mypackage/app.py, then using __name__ would tell Flask to look for resources relative to mypackage.app instead of mypackage.”
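
To make that concrete, here is a minimal sketch of a Flask app. The module and route names are illustrative assumptions, not the exact code from the repo:

# app.py - a minimal Flask application (illustrative sketch)
from flask import Flask

# __name__ lets Flask locate templates, static files, etc. relative to this module
app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Flask!"

if __name__ == "__main__":
    # For local development only; in the container, Gunicorn serves the app
    app.run(host="0.0.0.0", port=5000)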

→ What is Celery?

“Celery is a task queue implementation for Python web applications used to asynchronously execute work outside the HTTP request-response cycle. Task queues are used as a mechanism to distribute work across threads or machines. A task queue’s input is a unit of work called a task. Dedicated worker processes constantly monitor task queues for new work to perform.”

Overall, if you want to process something behind the scenes in your application, Celery can come in handy. An example would be handling large file uploads to your application without blocking the request.
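
As a quick illustration, here is a sketch of a Celery task module. The docker-compose file later in this post runs “celery -A tasks worker”, so the module name tasks.py matches; the add task itself is an assumption for this example:

# tasks.py - defines the Celery app and a sample task (illustrative sketch)
import os
from celery import Celery

# Broker/backend URLs match the ENV values set in the Dockerfiles below
BROKER = os.environ.get("CELERY_BROKER_URL", "redis://redis:6379/0")
BACKEND = os.environ.get("CELERY_RESULT_BACKEND", "redis://redis:6379/0")

celery = Celery("tasks", broker=BROKER, backend=BACKEND)

@celery.task(name="tasks.add")
def add(x, y):
    # Stand-in for slow work, e.g. processing a large uploaded file
    return x + y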

The first container runs the Flask application; our main focus here is its Dockerfile.

Dockerfile:

# Base image
FROM python:3.9-alpine
# Setting the Celery broker, which is Redis (default config)
ENV CELERY_BROKER_URL redis://redis:6379/0
ENV CELERY_RESULT_BACKEND redis://redis:6379/0
ENV C_FORCE_ROOT true
ENV HOST 0.0.0.0
ENV PORT 5000
ENV DEBUG true
# Copying everything to the container's /app folder
COPY . /app
WORKDIR /app
# Installing all the dependencies in the container
RUN pip install -U setuptools pip
RUN pip install -r requirements.txt
RUN pip install gunicorn
# Exposing the app on port 5000
EXPOSE 5000
# Startup command for this app
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "3", "app:app"]

→ Why does Celery need Redis?

“Celery communicates via messages, usually using a broker to mediate between clients and workers. To initiate a task the client adds a message to the queue, the broker then delivers that message to a worker. Redis is used to store messages produced by the application code describing the work to be done in the Celery task queue. Redis also serves as storage of results coming off the celery queues which are then retrieved by consumers of the queue.”
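
In code, that flow looks roughly like this, building on the hypothetical tasks.add sketch above:

# Client side: enqueue work and fetch the result later (illustrative sketch)
from tasks import add

result = add.delay(2, 3)       # publishes a message to Redis and returns immediately
print(result.ready())          # False until a worker has picked it up and finished
print(result.get(timeout=10))  # blocks until the result backend has the value -> 5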

→ What is Gunicorn?

“Gunicorn is a WSGI server: it communicates with multiple web servers, reacts to lots of web requests at once, distributes the load, and keeps multiple processes of the web application running.”
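
The Dockerfile above passes Gunicorn’s settings on the command line; with a recent Gunicorn, the same options can live in a config file instead. A minimal sketch (gunicorn.conf.py is not part of this article’s repo, just an equivalent way to express the CMD flags):

# gunicorn.conf.py - equivalent to the CMD flags in the Dockerfile above
bind = "0.0.0.0:5000"   # address and port to listen on
workers = 3             # number of worker processes handling requests in parallel
wsgi_app = "app:app"    # module:variable pointing at the Flask application object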

The second container runs the Celery worker.

Dockerfile:

# Base image
FROM python:3.9-alpine
# Setting the broker for Celery
ENV CELERY_BROKER_URL redis://redis:6379/0
ENV CELERY_RESULT_BACKEND redis://redis:6379/0
ENV C_FORCE_ROOT true
# Copying all files to the container's /queue folder
COPY . /queue
WORKDIR /queue
# Installing the dependencies
RUN pip install -U setuptools pip
RUN pip install -r requirements.txt

Pretty straightforward, right? Notice that this Dockerfile has no CMD: docker-compose supplies the worker’s startup command. Now let’s look at the docker-compose file that runs the multiple containers together.

docker-compose.yml:

version: "3.7"services:
web:
build: context: ./app
dockerfile: Dockerfile
restart: always
ports:
- "5000:5000"
depends_on:
- redis
volumes: ['./app:/app']
worker:
build: context: ./celery-queue
dockerfile: Dockerfile
command: celery -A tasks worker -l info -E
environment:
CELERY_BROKER_URL: redis://redis
CELERY_RESULT_BACKEND: redis://redis
depends_on:
- redis
volumes: ['./celery-queue:/queue']
monitor:
build: context: ./celery-queue
dockerfile: Dockerfile
ports:
- "5555:5555
command: ['celery', '-A', 'tasks', 'worker']
environment:
CELERY_BROKER_URL: redis://redis:6379/0
CELERY_RESULT_BACKEND: redis://redis:6379/0
depends_on:
- redis
- worker
volumes: ['./celery-queue:/queue']
redis:
image: redis:alpine
ports:
- "6379:6379"

Let’s walk through the docker-compose file. It defines four services: web (the Flask application), worker (the Celery worker), monitor (which watches the tasks the worker executes), and redis (the message broker).

  • We pull the redis image from Docker Hub because Celery needs it as a broker. By default, Redis listens on port 6379, so ports maps 6379 (host) to 6379 (container).
  • In the web service, the build context points at the folder containing the app’s code, “./app”. We also name the Dockerfile explicitly, although that is optional: docker-compose can find it by itself as long as it sits in the root of the build context. Ports are mapped 5000 (host) to 5000 (container). This service depends on redis, because the app uses Celery, which needs Redis as its message broker (see the sketch after this list).
  • Volumes: “In order to be able to save (persist) data and also to share data between containers, Docker came up with the concept of volumes. Quite simply, volumes are directories (or files) that are outside of the default Union File System and exist as normal directories and files on the host filesystem.”
  • The worker service runs the Celery worker. The same “build → context” idea applies. The command starts the Celery worker, which executes tasks asynchronously, and the broker and result backend are set in the environment section. The worker also depends on redis.
  • The monitor service watches the Celery worker to validate that tasks are being processed. Its configuration is basically the same as the worker’s, except that it runs the monitoring command and exposes port 5555 for the web UI.
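
Here is the sketch referenced above: a hypothetical Flask route in the web container that hands work to the worker container through Redis. It uses send_task with the task’s registered name, so the web image never needs to import the worker’s code:

# In app.py (illustrative): enqueue a task from a request handler
import os
from flask import Flask, request
from celery import Celery

app = Flask(__name__)
client = Celery(
    "tasks",
    broker=os.environ.get("CELERY_BROKER_URL", "redis://redis:6379/0"),
    backend=os.environ.get("CELERY_RESULT_BACKEND", "redis://redis:6379/0"),
)

@app.route("/add")
def enqueue_add():
    x = int(request.args.get("x", 1))
    y = int(request.args.get("y", 2))
    task = client.send_task("tasks.add", args=[x, y])  # message goes to Redis; the worker picks it up
    return {"task_id": task.id}  # use this id to check the result backend later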

Download the whole application and start it:

docker-compose -f docker-compose.yml up --build

This command builds the images and starts the whole application. To validate that all the containers are running: docker ps -a

Tip: to open a shell inside a running container, use docker exec -it <container-id> /bin/sh (the Alpine-based images ship with sh rather than bash).


Debug the project and do the rest! 🙌 That’s it for today! Cheers! 😃
