You’ve probably heard about serverless architecture by now, and you’re probably wondering what all the fuss is about. In this post, we’ll dig into what serverless really means for developers and system operators.
To frame the exploration, let’s take a quick peek back at Docker’s canonical example-voting-app, which you may have used to try out some of the new features in Compose or Swarm. The architecture for this application is simple: a Python web application with two voting options, a Redis queue that collects votes, a .NET worker that consumes the votes and stores them in a Docker-volume-backed PostgreSQL database, and a Node.js web application that displays the results.
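As a rough sketch, the stack’s docker-compose.yml might look something like the following. Service names, images, and ports here are illustrative; check the example-voting-app repository for the real file.

```yaml
version: "2"

services:
  vote:              # Python web app where votes are cast
    build: ./vote
    ports:
      - "5000:80"
  redis:             # queue that buffers incoming votes
    image: redis:alpine
  worker:            # .NET worker that drains the queue into Postgres
    build: ./worker
  db:                # PostgreSQL, backed by a named Docker volume
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
  result:            # Node.js app that displays the tally
    build: ./result
    ports:
      - "5001:80"

volumes:
  db-data:
```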
Here’s a handy visual.
With the original model, all five of these pieces run on persistent containers. Let’s fire up the cluster and take a look.
In this case, let’s start the Compose stack in one window so we can watch the process while checking on the various infrastructure pieces.
$ docker-compose up
The voting application runs on port 5000, while the results app lives on port 5001. Let’s open both via a separate terminal window.
$ open http://localhost:5000 && open http://localhost:5001
Great! What pieces are currently running? Run the ‘docker-compose ps’ command to take a look at the five containers we described above.
Try out the voting mechanism to get a sense of how a vote moves through the containers, via the Docker Compose logs. Here, the logs show the Python container recording the vote, the .NET worker processing it, and the result being stored on the PostgreSQL container.
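Conceptually, the path a vote takes can be sketched in a few lines of Python. This is a toy stand-in for the real containers, with an in-process queue playing the part of Redis and a plain dict playing the part of PostgreSQL:

```python
import queue

# Stand-ins for the real services: Redis becomes an in-process queue,
# PostgreSQL becomes a plain dict of vote tallies.
votes = queue.Queue()
db = {"cats": 0, "dogs": 0}

def cast_vote(option):
    """The Python web app: push the raw vote onto the queue."""
    votes.put(option)

def worker_drain():
    """The .NET worker: consume queued votes and persist the tally."""
    while not votes.empty():
        option = votes.get()
        db[option] = db.get(option, 0) + 1

def results():
    """The Node.js app: read the current tally from the store."""
    return dict(db)

cast_vote("cats")
cast_vote("dogs")
cast_vote("cats")
worker_drain()
print(results())  # {'cats': 2, 'dogs': 1}
```

The point of the indirection is the same as in the real stack: the web tier never touches the database directly, so each piece can be scaled or swapped independently.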
You can get a sense of what containers are required to run this stack by checking out what the Docker engine is running, too.
That’s five containers running for five services, all of which are persistent. At least, for now.
Let’s see what we can do about those persistent containers with a serverless code update. Here’s what we’re looking to accomplish, with persistent containers in red, and ephemeral Docker functions as serverless components in green.
In this design the green blocks are Docker containers that run on-demand, when a particular function is called. This means that there are fewer long-running services to debug, and we’re also able to leverage the scaling capabilities of a Swarm. Let’s fire up an example.
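The on-demand pattern boils down to “spawn a short-lived worker per request and let it exit when done,” which is what ‘docker run --rm’ gives you at the container level. Here’s a hypothetical sketch of that dispatch loop, with a Python subprocess standing in for the ephemeral container:

```python
import subprocess
import sys

def run_function(code, payload):
    """Launch a short-lived worker process for a single invocation,
    much like `docker run --rm` launches a one-shot container."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        input=payload, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # the worker has already exited here

# A trivial "function image": read a vote from stdin, echo an ack.
RECORD_VOTE = "import sys; print('recorded: ' + sys.stdin.read().strip())"

print(run_function(RECORD_VOTE, "cats"))  # recorded: cats
```

Nothing persists between invocations; all state has to live in the backing services (Redis and PostgreSQL in our case), which is exactly why those two stay as long-running containers.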
There’s a lot going on here, so let’s look at the output first, and then we’ll build the stack ourselves. I’m also going to recommend that you run the instantiation of this program backwards, so you can more easily see what’s going on. We’ll break it down starting with the completed product, work our way back to the start, and then you’ll run the Makefile out of sequence.
Trust me, you’ll see!
So, the output.
Shortly after firing up the stack via ‘make’, you’ll see the first container being created:
This container is merely the vehicle through which the initial entrypoint container runs; it references a Golang image compiled from an ‘onbuild’ base. You can see this in the Makefile as the “build” step, which is called as the first step of the “run” step, the default target.
Run on the command line, this would look like so:
This creates an image for the stack to use:
After that image is created, the Makefile runs the standard docker-compose.yml file via ‘docker-compose up --build’, which takes all of the serverless code and deploys it for us. Now we’ll see the two persistent containers that form the foundation of the serverless architecture.
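Assuming target names along the lines described above (the actual Makefile may differ), the build/run relationship might look like:

```makefile
# "run" is the default target and depends on "build", so a bare `make`
# compiles the entrypoint image first, then brings up the stack.
run: build
	docker-compose up --build

build:
	docker-compose -f docker-compose.build.yml build
```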
We should expect to see a few things. If we go to the voting page, a container should blip in and out of existence as our vote is recorded. First, we see that in the logs:
And then in checking the last created container:
The ‘/vote’ endpoint is a fairly simple Python Flask application, so it doesn’t stick around too long, and runs the record-vote-task.
The ‘/results’ endpoint is slightly more complex: it runs a Perl application that keeps spinning up and tearing down containers for as long as you have a webpage open to http://localhost/result/. It should look something like the following.
First, you have your container creation.
Then the container finishes.
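In spirit, that create/finish cycle is just a loop: each refresh of the results page spawns a one-shot worker that emits the current tally and exits. A toy sketch, with a Python subprocess again standing in for the ephemeral container and a hardcoded tally standing in for the real database read:

```python
import subprocess
import sys

# A hypothetical one-shot "tally container": print the count and exit.
TALLY = "print('cats: 2, dogs: 1')"

def poll_results(refreshes):
    """Spawn one ephemeral worker per page refresh, collecting output."""
    outputs = []
    for _ in range(refreshes):
        out = subprocess.run(
            [sys.executable, "-c", TALLY],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        outputs.append(out)  # the worker has already exited by now
    return outputs

print(poll_results(3))
```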
To see this more clearly in action, I recommend running the Makefile manually, and in reverse. This lets you see the serverless pieces of the software get built first, via this command:
$ docker-compose -f docker-compose.build.yml build
Once that completes, you can run the ‘make run’ command locally to see the containers spin up.
$ docker-compose up --build
Great, now you’re finished! If you’d like to read more about how Docker is thinking about serverless applications, I recommend checking out Ben Firshman’s Funker, which offers a Funker-specific implementation of the application above, simplifying the stack further by removing the entrypoint.