Configuring an Httpd Server and Setting Up a Python Interpreter in Docker Containers

Dipaditya Das · Published in Geek Culture · Mar 14, 2021

Applications are getting more complex. Demand to develop faster is ever-increasing. This puts stress on your infrastructure, IT teams, and processes. Linux® containers help you alleviate issues and iterate faster — across multiple environments.

What are Linux containers?

Linux containers are technologies that allow you to package and isolate applications with their entire runtime environment — all of the files necessary to run. This makes it easy to move the contained application between environments (dev, test, production, etc.) while retaining full functionality. Containers are also an important part of IT security. By building security into the container pipeline and defending your infrastructure, you can make sure your containers are reliable, scalable, and trusted.

Why use Linux containers?

Linux containers help reduce conflicts between your development and operations teams by separating areas of responsibility. Developers can focus on their apps and operations teams can focus on the infrastructure. And, because Linux containers are based on open source technology, you get the latest and greatest advancements as soon as they’re available. Container technologies — including CRI-O, Kubernetes, and Docker — help your team simplify, speed up, and orchestrate application development and deployment.

What can you do with containers?

You can deploy containers for a number of workloads and use cases, big to small. Containers give your team the underlying technology needed for a cloud-native development style, so you can get started with DevOps, CI/CD (continuous integration and continuous deployment), and even go serverless.

Container-based applications can work across highly distributed cloud architectures. Application runtimes and middleware provide the tools to support a unified environment for development, delivery, integration, and automation.

You can also deploy integration technologies in containers, so you can easily scale how you connect apps and data, like real-time data streaming through Apache Kafka. If you’re building a microservices architecture, containers are the ideal deployment unit for each microservice and the service mesh network that connects them.

When your business needs the ultimate portability across multiple environments, using containers might be the easiest decision ever.


🐳 Docker

The Docker technology uses the Linux kernel and features of the kernel, like Cgroups and namespaces, to segregate processes so they can run independently. This independence is the intention of containers — the ability to run multiple processes and apps separately from one another to make better use of your infrastructure while retaining the security you would have with separate systems.

Container tools, including Docker, provide an image-based deployment model. This makes it easy to share an application, or set of services, with all of their dependencies across multiple environments. Docker also automates deploying the application (or combined sets of processes that make up an app) inside this container environment.

These tools built on top of Linux containers — what makes Docker user-friendly and unique — give users unprecedented access to apps, the ability to deploy rapidly, and control over versions and version distribution.

👨‍💻 Docker vs. Linux containers: Is there a difference?

Although sometimes confused, Docker is not the same as a traditional Linux container. Docker technology was initially built on top of the LXC technology — which most people associate with “traditional” Linux containers — though it’s since moved away from that dependency. LXC was useful as lightweight virtualization, but it didn’t have a great developer or user experience. The Docker technology brings more than the ability to run containers — it also eases the process of creating and building containers, shipping images, and versioning images, among other things.

Traditional Linux containers use an init system that can manage multiple processes. This means entire applications can run as one. The Docker technology encourages applications to be broken down into their separate processes and provides the tools to do that. This granular approach has its advantages.

🚀 Advantages of Docker containers 🚀

  1. Modularity
  2. Layers and image version control
  3. Rollback
  4. Rapid deployment

💡 So, Docker technology is a more granular, controllable, microservices-based approach that places greater value on efficiency.

⚡ Installation Of Docker Engine

In this practical, I am going to use Red Hat Enterprise Linux 8.3 as my operating system, but there is no restriction on the choice of operating system.

After installing Red Hat Enterprise Linux, we have to configure a yum/dnf repository so that we can install Docker Community Edition.

Inside the docker-ce.repo file we will add the official repository URL as baseurl, and to make the process a bit faster we will skip package signature verification by setting gpgcheck to 0.
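A minimal sketch of what that repo file could look like (the exact file name under /etc/yum.repos.d/ and the CentOS stable baseurl are assumptions based on Docker's public repository layout; adjust the release path to match your setup):

```
# Create the repo file described above; baseurl points at Docker's public
# CentOS stable repository, and gpgcheck=0 skips signature verification.
cat > /etc/yum.repos.d/docker-ce.repo << 'EOF'
[docker-ce]
name=Docker CE Stable
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable/
gpgcheck=0
EOF
```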

Now, using the yum package manager, we will install Docker Community Edition. We will use the --nobest long option so that yum installs a package set whose dependencies can actually be satisfied instead of failing on the newest one.
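The install command would be roughly the following (assuming the repo configured above):

```
# --nobest lets yum/dnf settle for an installable package set instead of
# failing when the very latest dependencies conflict with RHEL 8 packages
yum install docker-ce --nobest -y
```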

After the installation completes successfully, we can quickly verify that the Docker packages are in place.
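A couple of quick checks (illustrative, not taken from the original screenshots):

```
# Verify the client binary and the installed package version
docker --version
rpm -q docker-ce
```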

Now, we have to start the docker engine to launch our required containers.
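Something along these lines, using systemd:

```
# Start the Docker daemon now, enable it across reboots, and inspect its state
systemctl start docker
systemctl enable docker
systemctl status docker
```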

As we can see, our Docker Application Container Engine is up and running. 😃

Necessary Changes in Red Hat Enterprise Linux 8

Before launching containers on top of the Docker Engine, we have to make sure that ingress and egress traffic is enabled for the containers.

In order to do that, we need to enable masquerading. Focusing on firewalling, I realized that disabling firewalld seemed to do the trick, but I would prefer not to do that. While inspecting the network rules with iptables, I realized that the switch to nftables in RHEL 8 means that iptables is now an abstraction layer that only shows a small part of the nftables rules. That means most, if not all, of the firewalld configuration is applied outside the scope of iptables.

Long story short: for this to work, I had to enable masquerading. It looked like dockerd already did this through iptables, but apparently it needs to be explicitly enabled on the firewall zone for the masquerading to actually work:
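The commands I used looked roughly like this (the public zone is an assumption; use whichever zone your interface is bound to):

```
# Permanently enable masquerading on the zone and reload the firewall
firewall-cmd --zone=public --add-masquerade --permanent
firewall-cmd --reload
# Restart docker so it rebuilds its NAT rules on top of the new zone settings
systemctl restart docker
```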

Configuring an Httpd (Apache2) Server in a Docker Container

First, we will download an O.S. image from Docker Hub, which is a service provided by Docker for finding and sharing container images with your team. It is the world’s largest repository of container images, with an array of content sources including community developers, open-source projects, and independent software vendors (ISVs) building and distributing their code in containers.

We are going to pull the official centos image. By default, it will download the latest tag.
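The pull, plus a quick listing of the local copy, would look like:

```
# Pull the official centos image (defaults to the :latest tag) and list it
docker pull centos
docker images centos
```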

Now, with the help of the docker ps command, we check which containers are in the running state. Then we will launch a container in interactive terminal mode and name it “webserver”.

Since Docker containers are isolated from the host machine, we publish a port: port 8080 is on the host machine and port 80 is inside the container. So we can reach our web server by forwarding requests from port 8080 on the host machine to port 80 inside the container.
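A sketch of the two commands (the flags are reconstructed from the description above, since the original screenshots are not reproduced here):

```
# Show running containers, then launch an interactive container named
# "webserver", publishing host port 8080 to container port 80
docker ps
docker run -it --name webserver -p 8080:80 centos:latest
```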

Inside the container, we first install the httpd package (the base centos image does not ship it) and then start the httpd daemon by running the following command: /usr/sbin/httpd -k start.
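Inside the container, that step might look like this (the yum install of httpd is assumed, since the original screenshots are not shown):

```
# Install Apache httpd inside the container, then start the daemon
yum install httpd -y
/usr/sbin/httpd -k start
```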

Then we create a hello.html file inside /var/www/html. To find the IP address of the container, we install net-tools and run ifconfig.
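For example (the page content here is purely illustrative):

```
# Create a test page, then install net-tools to discover the container's IP
echo "Hello from the webserver container" > /var/www/html/hello.html
yum install net-tools -y
ifconfig
```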

To check whether the web server is running correctly, we open a new terminal on the host O.S. and, with the help of curl, request the page from the web server running in the container.
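From the host, either the container's IP or the published port works (172.17.0.2 below is just the typical first address on Docker's default bridge, not necessarily yours):

```
# Request the page directly from the container, and via the 8080 -> 80 mapping
curl http://172.17.0.2/hello.html
curl http://localhost:8080/hello.html
```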

Even if we open the same URL in the Firefox browser, it will show the same output.

Setting Up Python Environment in Docker Container

Just like above, we will launch a container from the centos image that we pulled earlier, this time named pythondemo. Then we will install the Python 3.8 language packages.
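Roughly like this (python38 is the AppStream package name on RHEL/CentOS 8; treat it as an assumption for other base images):

```
# Launch a second interactive container, then install Python 3.8 inside it
docker run -it --name pythondemo centos:latest
yum install python38 -y      # run inside the container's shell
python3.8 --version
```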

Now we will write a Python program, for example one that prints a heart using special symbols, and then run it.
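A small sketch of such a program, written and run inside the container (the exact pattern is just an example):

```
# Write a script that prints a heart made of '*' characters, then run it
cat > heart.py << 'EOF'
n = 6
# two lobes across the top
for i in range(n // 2, n + 1, 2):
    print(" " * ((n - i) // 2) + "*" * i + " " * (n - i) + "*" * i)
# the tapering bottom half
for i in range(n, 0, -1):
    print(" " * (n - i) + "*" * (2 * i - 1))
EOF
python3.8 heart.py
```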

Thanks to Vimal Daga Sir for giving me the opportunity to research this topic and to spread the knowledge that really matters.
