In today's blog post, we would like to introduce you to the software stack of HashiCorp. This post only covers the basics and concepts of their systems.

At the beginning of Minecraft Legend's development, we mapped the hostnames and login credentials of databases and all other external and internal services into every container we launched via Docker environment variables. As our infrastructure grew rapidly, more variables were added every week and we quickly lost track of them. We therefore needed something to manage and reduce these environment variables.

Consul

Microservices, or services in general, should be coupled to each other as loosely as technically possible.

In Consul, services (databases, message brokers, ...) can be registered and also deregistered.
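
To give a rough idea of what such a registration looks like, here is a minimal sketch using HashiCorp's official Consul Go client; the service name, address, port and check interval are made-up example values, and in practice the registration can also happen automatically (for example through Nomad, as described later in this post).

    package main

    import (
        "log"
        "time"

        consul "github.com/hashicorp/consul/api"
    )

    func main() {
        // Connect to the local Consul agent (default address 127.0.0.1:8500).
        client, err := consul.NewClient(consul.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }

        // Register a Redis instance, including a simple TCP health check
        // that Consul runs every ten seconds.
        registration := &consul.AgentServiceRegistration{
            ID:      "redis-1",
            Name:    "redis",
            Address: "10.0.0.5",
            Port:    6379,
            Check: &consul.AgentServiceCheck{
                TCP:      "10.0.0.5:6379",
                Interval: (10 * time.Second).String(),
            },
        }
        if err := client.Agent().ServiceRegister(registration); err != nil {
            log.Fatal(err)
        }
    }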

If, for example, a new Minecraft server is started and wants to write something to a registered Redis database after startup, the server only has to connect to Consul and request the address of a running and healthy Redis instance. With the returned host and port, the Minecraft server can then connect to the Redis instance and write its data.

Now, in our small scenario, one Redis instance is no longer sufficient because too much data is read and written every second, so another Redis instance must be started. The new instance simply registers itself in Consul and becomes accessible to all services alongside the existing one. If another Minecraft server now wants to write something to Redis, Consul returns all running Redis instances and the Minecraft server randomly connects to one of the two. This makes it easy to distribute the load across multiple instances of a service.
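
A minimal sketch of such a lookup with the Consul Go client could look like this; it asks Consul for all healthy "redis" instances and picks one of them at random (the service name matches the registration sketch above and is otherwise an assumption):

    package main

    import (
        "fmt"
        "log"
        "math/rand"

        consul "github.com/hashicorp/consul/api"
    )

    func main() {
        client, err := consul.NewClient(consul.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }

        // Ask Consul for all "redis" instances that currently pass their health checks.
        entries, _, err := client.Health().Service("redis", "", true, nil)
        if err != nil {
            log.Fatal(err)
        }
        if len(entries) == 0 {
            log.Fatal("no healthy redis instance registered")
        }

        // Distribute the load by simply picking a random healthy instance.
        instance := entries[rand.Intn(len(entries))].Service
        addr := fmt.Sprintf("%s:%d", instance.Address, instance.Port)
        fmt.Println("connecting to", addr)
        // ... connect the Redis client to addr and write the data ...
    }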

Consul offers us the following advantages:

  • Loose coupling of the services with each other
  • Dynamic addition of further instances in case of bottlenecks
  • Health checks, which help us identify stopped or unhealthy instances of services

Below you can find a screenshot of the Consul web interface. There you can see which services are currently registered in Consul.

Consul web interface

With Consul, we were able to reduce the number of environment variables somewhat, since all hosts and ports of services were eliminated. A large part of the remaining variables, however, were credentials. How we were able to remove those is described in the following section.

Vault

Like the hostnames of services, the login data was initially also simply mapped into the containers via environment variables. This led to problems when, for example, a database password was changed, because the password had to be updated manually in all Docker images. Additionally, all images had to be rebuilt and redeployed after the change.

Through Vault, we were able to solve this problem elegantly.

Each Docker container now holds only a single Vault token, which it uses to authenticate and authorize itself against our Vault cluster. The container can then load the credentials it needs from Vault on its own.
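
As a minimal sketch, authenticating with that single token via HashiCorp's official Vault Go client could look like this (VAULT_ADDR and VAULT_TOKEN are the client's standard environment variables; the token lookup only verifies that the token is valid):

    package main

    import (
        "log"
        "os"

        vault "github.com/hashicorp/vault/api"
    )

    func main() {
        // VAULT_ADDR points to our Vault cluster; the container's single
        // Vault token is passed in via the VAULT_TOKEN environment variable.
        client, err := vault.NewClient(vault.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }
        client.SetToken(os.Getenv("VAULT_TOKEN"))

        // Verify that the token is valid by looking it up; afterwards the
        // client can read exactly the secrets this container is allowed to see.
        if _, err := client.Auth().Token().LookupSelf(); err != nil {
            log.Fatal("vault token invalid: ", err)
        }
        log.Println("authenticated against Vault")
    }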

We store two different types of credentials in Vault: dynamic and static.

Dynamic credentials are regenerated each time a service makes an authorized request. This means that Vault, for example, creates a new user in our MongoDB cluster for the requesting service and returns its credentials. Dynamic credentials can also be valid for a limited time only.

Static credentials are stored as fixed values in Vault. This includes, for example, the password for the Redis cluster, since Redis does not offer sophisticated user management like MongoDB does.
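
The following sketch illustrates both credential types with the Vault Go client; the role name "minecraft", the KV path "secret/data/redis" and the field names are made-up examples and depend on how the secrets engines are configured:

    package main

    import (
        "fmt"
        "log"
        "os"

        vault "github.com/hashicorp/vault/api"
    )

    // fetchMongoCredentials asks Vault's database secrets engine for a fresh,
    // short-lived MongoDB user (dynamic credentials).
    func fetchMongoCredentials(client *vault.Client) (string, string, error) {
        secret, err := client.Logical().Read("database/creds/minecraft")
        if err != nil {
            return "", "", err
        }
        if secret == nil {
            return "", "", fmt.Errorf("no credentials returned")
        }
        // LeaseDuration tells us how many seconds these credentials remain valid.
        log.Printf("MongoDB credentials valid for %d seconds", secret.LeaseDuration)
        return secret.Data["username"].(string), secret.Data["password"].(string), nil
    }

    // fetchRedisPassword reads a fixed password from Vault's KV (version 2)
    // secrets engine (static credentials).
    func fetchRedisPassword(client *vault.Client) (string, error) {
        secret, err := client.Logical().Read("secret/data/redis")
        if err != nil {
            return "", err
        }
        if secret == nil {
            return "", fmt.Errorf("secret not found")
        }
        // In KV v2, the actual key/value pairs are nested under "data".
        data := secret.Data["data"].(map[string]interface{})
        return data["password"].(string), nil
    }

    func main() {
        // Same client setup as in the sketch above.
        client, err := vault.NewClient(vault.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }
        client.SetToken(os.Getenv("VAULT_TOKEN"))

        user, _, err := fetchMongoCredentials(client)
        if err != nil {
            log.Fatal(err)
        }
        log.Println("dynamic MongoDB user:", user)

        if _, err := fetchRedisPassword(client); err != nil {
            log.Fatal(err)
        }
        log.Println("static Redis password loaded")
    }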

Vault offers us the following advantages:

  • The environment variables of a Docker container are reduced to a minimum.
  • If we change the credentials of static services like Redis or Kafka, we can update them directly in Vault. The next time a Minecraft server restarts, it reads the latest credentials directly from Vault.
  • The credentials for our MongoDB are only valid for a limited time. After a fixed period, they expire and can no longer be used.

Nomad

For us, it has always been clear that we would not start our Minecraft servers and other services via screen or something similar. Unfortunately, this approach is very common in the Minecraft scene. In the age of Docker and Kubernetes, however, one should rely on modern solutions.

Therefore, most of our services run in Docker. Initially, we used Docker Swarm to build a Docker network across multiple machines and started our services via docker-compose. With this approach, however, we had to start and stop all services manually whenever we needed another Minecraft server.

Accordingly, we wanted to switch to a scalable and easy-to-use solution as quickly as possible. Since Kubernetes was too complex for our use case and we already used Consul and Vault from HashiCorp, we finally decided on Nomad. With the exception of our MongoDB cluster, we currently run all services via Nomad. This allows us to scale the required services up or down dynamically. The services register themselves automatically in Consul and are also restarted automatically if their Consul health check fails.
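
As a small illustration of this dynamic scaling, the following sketch uses Nomad's official Go API client to bump the instance count of a hypothetical "lobby" job and re-register it; the job name, task group index and count are made-up examples, and Nomad then takes care of scheduling the additional containers.

    package main

    import (
        "log"

        nomad "github.com/hashicorp/nomad/api"
    )

    func main() {
        // NOMAD_ADDR points to one of our Nomad servers.
        client, err := nomad.NewClient(nomad.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }

        // Fetch the current definition of the (hypothetical) "lobby" job.
        job, _, err := client.Jobs().Info("lobby", nil)
        if err != nil {
            log.Fatal(err)
        }

        // Raise the instance count of the first task group and re-register
        // the job; Nomad then schedules the additional containers.
        newCount := 5
        job.TaskGroups[0].Count = &newCount
        if _, _, err := client.Jobs().Register(job, nil); err != nil {
            log.Fatal(err)
        }
    }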

Nomad also makes it easy to add more machines to the existing cluster. For example, we can book additional game instances under high player load and automatically include them in the cluster.

Nomad web interface