Top 4 Benefits of Docker for APIs and Microservices

Docker rocks!

Container technology, namely Docker, has swept across the technology industry. Since Docker's first release just over three years ago, it has permeated even the most conservative enterprises. It is so compelling because it addresses multiple long-standing issues that have plagued software developers and operations engineers. Of those issues, four stand out as especially important. Let's examine them in a bit more detail.

Packaging

A piece of software usually consists of a number of components. The components may be bundled together or distributed separately, and are usually themselves composed of various files: executables, libraries, documentation, etc. In order to make software distribution convenient, there must be some way of assembling all of these pieces together into one cohesive whole.

Traditionally, this is done using one of a number of mechanisms:

  • Zip files / tarballs
  • Deb or RPM packages
  • Language-specific packages, for example: NPM, Python Wheels, Ruby Gems

These all have their own trade-offs, but they have one thing in common: they do not solve the entire problem. Zip files need an agreed-upon format for their contents; Deb and RPM packages are OS-specific and notoriously difficult to work with; and the rest are language-specific systems that do a poor job of handling application dependencies outside the scope of their language.

Docker solves this problem by providing a consistent, OS-independent (within the realm of Linux, at least), metadata-rich application packaging format. It's a polyglot world, and API services are being developed across a variety of platforms and languages. Through its packaging format, Docker makes fine-grained, portable microservices achievable.
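
As a sketch, a minimal Dockerfile for a hypothetical Node.js API service (the file names and base image here are illustrative) bundles the runtime, the dependencies, and the code into a single image:

    # Start from a well-known base image with the runtime included
    FROM node:6
    # Copy the service's manifest and source into the image
    COPY package.json server.js /app/
    WORKDIR /app
    # Install language-level dependencies at build time
    RUN npm install
    # Everything the service needs now travels as one artifact
    CMD ["node", "server.js"]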

Distribution

In order to share software with others, or to deploy it somewhere, the software package needs to be tracked in a central place, and accessible across the organization. Docker solves this problem by providing a standardized API for a registry service, as well as an open-source implementation.
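
For instance, publishing an image to an organization-wide registry is just a matter of tagging and pushing it (the registry host and image name below are placeholders):

    # Tag the locally built image for the shared registry
    docker tag my-api registry.example.com/team/my-api:1.0.0
    # Publish it so the rest of the organization can find it
    docker push registry.example.com/team/my-api:1.0.0
    # Any authorized host can now pull the exact same artifact
    docker pull registry.example.com/team/my-api:1.0.0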

For now, the standard Docker registry is not quite user-friendly and lacks some features available in other systems, such as Yum and Apt. However, this will improve with time.

Runtime Isolation

If you worry about efficient resource utilization, you’re likely tempted to run multiple microservices per host. This is definitely doable, but can present some challenges.

One such challenge is dependency management. For example, if Service 1 requires a library with version 1.x, while Service 2 requires the same library, but with an incompatible version 2.x, you will likely run into a problem if you are using a package manager that installs packages system-wide, such as APT or Yum. Some ecosystems provide a way to isolate the application’s environment into a specific directory. One example of this is Python’s virtualenv. Node.js NPM also does this by default. However, the isolation offered by these systems is far from perfect – any dependency on libraries or executables outside of the language’s sandbox can still result in conflicts.
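
To illustrate, each service can bake its own version of the conflicting library into its own image (the library name and versions are hypothetical):

    # Dockerfile for Service 1 -- depends on the 1.x line
    FROM python:2.7
    RUN pip install "somelib>=1,<2"

    # Dockerfile for Service 2 -- depends on the incompatible 2.x line
    FROM python:2.7
    RUN pip install "somelib>=2,<3"

Both containers can then run on the same host without either installation ever seeing the other.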

Another issue that comes up when running multiple microservices on the same host is protection against rogue processes that use more than their fair share of memory or CPU. Prior to container technology, the main way to achieve this would have been to run on multiple VMs.

Docker solves the dependency isolation problem by packaging the entire OS image, along with all of the dependencies. It solves the resource isolation problem by providing and enforcing explicit CPU and memory constraints.
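
The resource constraints are passed as flags at container start time (the image name here is illustrative):

    # Cap the container at 512 MB of RAM and give it a relative CPU weight
    docker run -d --memory=512m --cpu-shares=512 my-api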

Similarly, Docker’s runtime isolation allows you to run multiple API microservices side by side on the same host with greater ease and maintainability. Different teams working on different microservices will not affect the runtime of others. For example – if a revision of an API is needed, the API microservice running in a Docker container can be redeployed without affecting other APIs.
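
As a rough sketch, redeploying one API leaves its neighbors untouched (the container and image names are hypothetical):

    # Pull the revised image and swap out only the orders API container
    docker pull registry.example.com/team/orders-api:1.0.1
    docker stop orders-api && docker rm orders-api
    docker run -d --name orders-api registry.example.com/team/orders-api:1.0.1
    # Other API containers on the host keep serving traffic throughout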

Installation Process

The last major benefit of Docker is how it simplifies the installation process. This is a big one if you are a DevOps engineer, or a software developer who interfaces with the operations team.

The state-of-the-art approach to installing software on production systems prior to Docker was to use a configuration management system, such as Puppet or Chef. Compared to the hand-written, manually executed scripts that came before them, these systems were a boon for operations. Their task is to take a host that is in an unknown state and bring it into some well-defined state. As it turns out, this is very difficult, especially for complex systems. There is always something you didn't think of that could be affecting the host in an adverse way. Most templates are not written in a way that allows multiple services on the same host. Cleanup and removal of resources from hosts is often an afterthought.

The more complex the service, and the more dependencies it has, the more complex the installation process is. In many cases, the installation code is spread across scripts embedded in OS packages and the code in the config management system.

In many organizations, the group of people who deploy the software are distinct from the people who write it. Since a large part of the installation process happens during deployment, this means it is the deployers who are faced with the prospect of debugging the install automation that the developers wrote. This can create stressful situations (or even outages) and cross-departmental strife.

Docker solves this problem by moving the entire installation process from an API’s (or application’s) deployment phase into its build phase. Installation always occurs on top of a well-known base image, and is therefore repeatable and can be tested offline – for example, in a CI pipeline. This means that simple shell scripts can be quite sufficient as there is no need for complex decision making based on the current state of the system. The deployment process, on the other hand, becomes a single step: “docker run”.
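
A rough sketch of that split (the image name and test script are illustrative):

    # Build phase: installation runs on top of a well-known base image,
    # so it is repeatable and can be exercised offline in CI
    docker build -t registry.example.com/team/my-api:1.0.0 .
    docker run --rm registry.example.com/team/my-api:1.0.0 ./run-tests.sh
    # Deploy phase: a single step on the production host
    docker run -d registry.example.com/team/my-api:1.0.0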


Conclusion: An Opportunity for Container Orchestration

The above four major features of Docker are all centered around standardization: of the packaging, installation, distribution, and resource management of applications. APIs benefit from this standardization: their functionality is bundled in packaging that moves through the lifecycle with greater ease and independence, and runs with greater reliability and manageability. With a standard like this in place, there is a great opportunity to automate the deployment and runtime management of applications and APIs across many machines. This is something that previously would have been incredibly complex due to the myriad edge cases that would have to be covered. As a result, in many organizations this has been (and continues to be) a mostly manual process.

We will cover this kind of automation (and its emerging name “container orchestration”) in a future post.


Come see how Docker can help you compose and manage an API runtime at LunchBadger…

  • Check out the demo to see how LunchBadger provides a Docker container-based runtime for APIs that works natively in your cloud and also harnesses the simplicity of a serverless experience.
  • Read about the features, modeled after the API lifecycle, that form an end-to-end solution on the same Docker container runtime.
  • Register and join the FREE private beta to become an early participant and realize the simplicity and speed of having APIs work for you and your business.
  • For more information, please contact us at [email protected].