Using Docker to instantiate local databases

A minimalist guide on the Docker way of setting up local databases

From tutorials to enterprise-level software, knowing how to set up a local database instance is a mandatory skill. You’ve probably had to go through tons of installers to instantiate databases as services on your machine. However, this can come with a few drawbacks.

First of all, the service constantly consumes your machine’s resources because it never stops running. Versioning is particularly annoying: have you ever installed software only to be hit with an incompatibility issue? Not to mention how painful it is when a service refuses to start; I once wasted hours troubleshooting a MongoDB server service that wouldn’t come up.

But have you ever wondered if there was a quicker way of instantiating databases, say, as easy as running a single command? If your answer is yes, this post is just for you! By using Docker, we can achieve just that. And with a bonus advantage: it works on everyone’s machines!


Prerequisites

To start, you’ll need Docker and Docker Compose installed on your machine, plus a way to test the database connection, such as a CLI client or a database management tool.

For our example, we’ll make use of the general SQL database manager Beekeeper Studio.

Instantiating databases with the Docker CLI

You’ll frequently find yourself needing a database running so you can run tests, explore commands, follow a tutorial, or work on a project. We can use the Docker CLI for those cases. We can start MySQL, Postgres, MongoDB, Redis, and more. So let’s dive right into it.

Let’s start with MySQL. The command is going to look like this:

$ docker run --name local-mysql -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 -d mysql:latest

Let’s break this command down to understand better what it’s doing:

  • docker run is used to instantiate a new container from a specified image.
  • --name local-mysql is used to give our container a custom name. This makes management easier for us.
  • -e MYSQL_ROOT_PASSWORD=secret sets an environment variable inside the container. MySQL requires a root password; otherwise, the container won’t start*.
  • -p 3306:3306 maps a port on our machine to a port inside the container. If unspecified, we won’t be able to access the running container from localhost.
  • -d, short for detached mode, runs the container in the background so it doesn’t hold your terminal open.
  • mysql:latest specifies the image our container is built from and, after the colon, its version tag. If no tag is specified, latest is the default.

*Many database images require some environment variables to be instantiated. You can check which variables are required or optional in Docker Hub.
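Once the container is up, it’s worth verifying it and knowing how to shut it down. A quick sketch, assuming the local-mysql container created above:

```shell
# List running containers; local-mysql should show a status of "Up"
docker ps

# Open an interactive MySQL shell inside the container,
# authenticating with the password set via MYSQL_ROOT_PASSWORD
docker exec -it local-mysql mysql -uroot -psecret

# Stop the container, and remove it if you no longer need it
docker stop local-mysql
docker rm local-mysql
```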

And just like that, you have an entire instance to work with on your machine! You can even pin the version of the image, so you’ll have no compatibility issues with your project.
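For instance, if your project targets MySQL 8.0, you can pin that tag instead of latest (the container name and host port here are just illustrative choices):

```shell
# Pin a specific MySQL version; host port 3307 avoids clashing
# with the earlier container bound to 3306
docker run --name local-mysql-8 -e MYSQL_ROOT_PASSWORD=secret -p 3307:3306 -d mysql:8.0
```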

Using Docker CLI for various instances

Keep in mind that once you understand how to set up one instance, you can repeat this process for different database instances. Let’s take a brief look at what setting up MongoDB, Postgres, and Redis looks like:

# For MongoDB
$ docker run --name local-mongo -p 27017:27017 -d mongo:latest

# For Postgres
$ docker run --name local-pg -e POSTGRES_PASSWORD=secret -p 5432:5432 -d postgres:latest

# For Redis
$ docker run --name local-redis -p 6379:6379 -d redis:latest

I like to follow a naming pattern for my containers, such as local-mysql, local-pg, and local-mongo. A consistent prefix makes them easier to search for and manage.
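A consistent prefix also pays off on the command line, since docker ps can filter by name. For example:

```shell
# List every container, running or stopped, whose name starts with "local-"
docker ps -a --filter "name=local-"

# Stop all matching running containers in one go
docker stop $(docker ps -q --filter "name=local-")
```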

Instantiating databases with Docker Compose

Let’s say you’re running compatibility tests against ten different database versions, or you’re working on multiple projects at once… Is there a way to manage several containers or environments while keeping the simplicity of a single command? Fortunately, yes! This is where Docker Compose comes into play.

Docker Compose is a tool built to manage multi-container applications. Let’s take a look at it.

Defining a Compose file

First, at the root of your project, create a file called docker-compose.yaml and add the following content:

version: '3'

services:
  db:
    image: 'mysql'
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: 'secret'

You’ll notice that the structure is very similar to the CLI commands, but let’s break it down for better understanding:

  • version specifies which Compose file version we’re using. You can check the Compose file version reference here.
  • services holds the specification of how our containers will be built. That simple!
  • db is the name of our service. If you’re running multiple instances, you can name them db and cache, or mysql and redis; it’s up to the programmer/team to decide.
  • image refers to the image our container will be built from. You can also use : to pin a version.
  • ports works just like in our previous Docker command: it maps an external (your machine) port to an internal (container) port.
  • environment specifies which environment variables your container will have.

You might notice that an option to name our container is missing. I prefer not to use it since Docker Compose names the container using the directory and service name.

After you finish writing your compose file, you can execute it with the following command:

$ docker-compose up -d

The up command will pull the images and start the containers with the given configurations. The -d flag, as we saw before, runs the containers in the background.
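A few companion commands cover the rest of the lifecycle (run them from the directory containing the Compose file):

```shell
# Show the state of the services defined in docker-compose.yaml
docker-compose ps

# Stop the containers without removing them
docker-compose stop

# Stop and remove the containers and the network Compose created
docker-compose down
```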

Defining multiple instances

Here’s what makes Docker Compose so great. Adding a new instance to your project is as simple as defining it under services:

version: '3'

services:
  db:
    image: 'mysql'
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: 'secret'
  cache:
    image: 'redis'
    ports:
      - 6379:6379
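
Since image accepts the same : tag syntax as the CLI, you can pin versions in the Compose file too. A sketch, assuming a project that targets MySQL 8.0 and Redis 7:

```yaml
version: '3'

services:
  db:
    image: 'mysql:8.0'
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: 'secret'
  cache:
    image: 'redis:7'
    ports:
      - 6379:6379
```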

And now you have MySQL and Redis instances running! But how do we check that everything works as expected? Let’s see how we can use Beekeeper Studio to test our database.

Testing the connection

To check if everything is working, we’ll use a database management tool to test our connection and run some queries. There are plenty of database-specific and general-purpose management tools; in our example, we’ll use Beekeeper Studio.

When you first open Beekeeper Studio, you’ll see this screen:

[Screenshot: Beekeeper Studio welcome screen]

Let’s create a new connection by selecting MySQL in the connection type box.

Using the previous MySQL example, let’s connect to the "root" user with "secret" as the password:

Then, click "Test" to test if everything is working with your connection. You’ll see a small message popping up like so:

If you get an error message, don’t panic. You can use docker logs or docker-compose logs with your container or service name to check the instance’s health. Logs are always the way to go for troubleshooting, since you can quickly see whether an environment variable is missing, the connection failed for some other reason, or the container is still booting up. Although they can be verbose, it won’t take long to figure out what happened.
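For example, to inspect the instances created earlier (the container and service names match the ones used above):

```shell
# Logs of a container started with docker run
docker logs local-mysql

# Follow a Compose service's logs live, keeping only the last 50 lines
docker-compose logs --tail=50 -f db
```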

Finally, if you want to go even further and run SQL queries, click "Connect" to have your ready-to-go workspace.

Conclusion

Maintaining local database instances doesn’t need to be a hard, let alone painful, experience. Docker is a simple and effective solution, while Docker Compose helps you define multiple database instances for a project.

If you’d like to see an example of a simple docker-compose.yaml file implemented in a project, please refer to this repository on GitHub.

We want to work with you. Check out our "What We Do" section!