API Series Part 3 - Adding VS2017 Docker Support

So far we have a relatively simple ASP.NET Core 2.0 Web API that runs directly on the Windows operating system. But the next few posts in the series are going to need Docker. We'll be using Linux containers and looking at configuration, secrets management, and adding a second API. We'll cover Docker secrets, Docker configuration files, HashiCorp Vault, and creating our first Swarm with two services. So in this post we'll look at the built-in Docker support in Visual Studio 2017, the various files that get added, and what they do.

The Visual Studio 2017 Docker support allows us to run our application in a container during debugging, which makes the development process much easier. You can validate that your application works correctly in a container before deployment, which can help you catch some of the little gotchas of running in a Linux container: for example, file paths are different, and connection strings cannot reference localhost anymore. You can also include other services in your development Swarm so that when you hit F5 it also starts up the other microservices you need. Docker support is primarily there for development. It does include a release version of the Docker artefacts, but you'll likely use a completely different set of tools for actually deploying the whole suite of services to an environment.

The current docs are a little out of date, as the docker-compose.yml files that are created do not match the documentation, but the basics are more or less the same. We'll dive into what files get created in your solution when you add Docker support, as well as a quick intro to what containers and container images are. I am using VS2017 version 15.3.5.

Images, Containers, and the Various Docker Files

A container is basically a process or set of processes and files packaged up into a single image. The Docker image is a standard format that all the various tools and orchestrators agree on and can work with. You package up your application and all its dependencies into this single image, which makes containers portable and easily deployable. You don't need to install .NET, Python, etc. on your servers; you just need to install what Docker itself needs to run on the host, as each container carries all its other dependencies and is self-contained.

Images vs Containers

Images are immutable files that are created via the build command and when they are run via the run command they produce a container. Containers are basically instantiated, running images.
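As a quick illustrative sketch of that build/run relationship (the image name my-api and the tag are placeholders, not from this project):

```shell
# Build an immutable image from the Dockerfile in the current directory
docker build -t my-api:1.0 .

# Instantiate a container from that image; each run creates a new container
docker run --name my-api-instance my-api:1.0

# The image itself is unchanged; list containers to see the running instances
docker ps
```

These commands need a local Docker daemon, so they are for orientation only here.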


Dockerfiles are basically like shell scripts that are used to create a container image. When we add Docker Support to our WebApi project, a Dockerfile gets created in that project. This is because a Docker image will be generated from our WebApi project so it needs its own Dockerfile. Let's look at the one created by Visual Studio in our Govrnanza.Registry.WebApi project.

FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "Govrnanza.Registry.WebApi.dll"]

Container images are layered. You can take an existing image and add your own layers on top, and the result is your own image. In the above Dockerfile, we take the microsoft/aspnetcore:2.0 base image, which basically contains all of ASP.NET Core 2.0, then we layer our own application on top. Each instruction in the Dockerfile creates a layer; see the Docker docs for a more in-depth explanation of this layering.
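You can inspect those layers yourself. A sketch, assuming you have the base image available locally:

```shell
# Pull the base image, then list its layers, most recent first
docker pull microsoft/aspnetcore:2.0
docker history microsoft/aspnetcore:2.0
```

Each row in the history output corresponds to an instruction in some Dockerfile up the chain.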

Let's go over each instruction individually:

  • FROM microsoft/aspnetcore:2.0 -> Use this base image. The image naming format is username/image-name:tag. So the username is microsoft, the image name is aspnetcore and the tag is 2.0

  • ARG source -> Declares the argument "source". The argument is supplied by Visual Studio depending on whether you have Debug or Release configuration set and will be the path to the compilation output.

  • WORKDIR /app -> Sets the working directory of the image to /app. This is where all the application files will be placed and run from.

  • EXPOSE 80 -> Tells Docker that the container will listen on port 80. It does not make port 80 accessible to the host as you need to publish the port for that. You'll see more about that later.

  • COPY ${source:-obj/Docker/publish} . -> Copy the files from the source (argument) directory to the working directory in the image. If no source argument is supplied, copy from obj/Docker/publish instead. Using obj/Docker/publish as a fallback location can make your build system simpler: if you dotnet publish to the obj/Docker/publish directory, you don't need to supply a source argument.

  • ENTRYPOINT ["dotnet", "Govrnanza.Registry.WebApi.dll"] -> Upon creating a container from this image, this command will be executed, starting up our application.

The Dockerfile reference page is very informative and goes into much greater detail about the various commands and constraints.
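To see the Dockerfile in action outside Visual Studio, you could publish and build the image by hand. A sketch, assuming you run it from the WebApi project directory and that host port 8080 is free (both are arbitrary choices here):

```shell
# Publish to the fallback location the Dockerfile's COPY expects
dotnet publish -c Release -o obj/Docker/publish

# Build the image; no source build-arg, so the fallback path is used
docker build -t govrnanza.registry.webapi .

# Run it, publishing the exposed container port 80 to host port 8080
docker run -d -p 8080:80 govrnanza.registry.webapi
```

This is roughly what Visual Studio automates for you behind F5.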

Docker Compose Files

You can start up and tear down containers individually using Docker commands if you want. But this makes knowing the current state of your containers more difficult and makes the general management of containers more complex. Instead we can take a declarative approach to managing containers, describing all the containers, networks and the like in a single YAML file. This means we can just look at our YAML file to know what containers we have, and a single Docker command will set it all up for us. These files are generally named docker-compose.yml.
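The declarative workflow boils down to a handful of commands, sketched here for orientation:

```shell
# Create and start everything described in docker-compose.yml
docker-compose up -d

# See the current state of every service in the file
docker-compose ps

# Tear it all down again: containers, networks, the lot
docker-compose down
```

One file, one command, and the whole environment comes up or goes away together.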

When using Docker Compose, we stop talking about containers and start talking about services. A service is basically a potentially replicated set of containers, all created from the same Docker image. With Docker Compose, we can say we want to take our image and run X number of containers, and each container behaves in the same way. Let's look at the Docker Compose files located at the solution level, created by the Docker support.

  • docker-compose.ci.build.yml

  • docker-compose.yml

  • docker-compose.override.yml

The first one allows us to build our application from inside a container. We get all the advantages of containers in the build system: you don't need to install different versions of SDKs, runtimes, etc. on the build server, as the build container contains all the dependencies required to build the application. We'll come back to this yml file when we look at VSTS.

The other two get merged into a single compose file by Visual Studio; when you press F5 it builds a new image and runs it using the docker-compose up command.
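That merging is standard Docker Compose behaviour: later -f files override matching settings in earlier ones. A rough sketch of the equivalent command line (Visual Studio actually adds further generated override files, so this is illustrative only):

```shell
# Settings in docker-compose.override.yml win over docker-compose.yml
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d --build
```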


docker-compose.yml:

version: '3'

services:
  govrnanza.registry.webapi:
    image: govrnanza.registry.webapi
    build:
      context: ./Govrnanza.Registry.WebApi
      dockerfile: Dockerfile

docker-compose.override.yml:

version: '3'

services:
  govrnanza.registry.webapi:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"

The docker-compose.yml basically tells Docker how to create a service called "govrnanza.registry.webapi":

  1. build a container image called "govrnanza.registry.webapi"

  2. in order to build the image, use the Govrnanza.Registry.WebApi directory as its context. The context is where all the files are that Docker needs and Docker cannot escape that context and go to other parts of the file system.

  3. use the dockerfile in the Govrnanza.Registry.WebApi directory to build the image

The docker-compose.override.yml tells Docker to configure the govrnanza.registry.webapi service as follows:

  1. Set the environment variable ASPNETCORE_ENVIRONMENT to Development

  2. Publish port 80 so our ASP.NET Core application can be accessed on port 80 from outside the container.
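Note that "80" on its own publishes container port 80 to a random available host port; to pin the host port you would use the host:container form. A hedged YAML sketch (host port 5000 is an arbitrary choice, not what Visual Studio generates):

```yaml
ports:
  - "5000:80"   # host port 5000 -> container port 80, instead of a random host port
```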

This is about as basic as our Docker Compose setup can get. There is a myriad of extra configuration we can set for our service, in addition to creating networks, volumes, other services, etc. For now we'll leave it there and add more as and when we need it.

As we left it in the last post, the Govrnanza.Registry.WebApi application is not compatible with Linux or containers! We need to fix a couple of things.

Fixing Govrnanza to Run in a Linux Container

File Paths

When loading the markdown file, I used a Windows file path. This will fail when we run the application in Docker.


So we need to use Path.Combine instead:

File.ReadAllText(Path.Combine("Docs", "ApiVersion1Description.md"));

Connection String to Localhost

I am not putting SQL Server in Docker, though I could easily do that. In the future when it gets hosted in the cloud or on premise, we'll be using either a database-as-a-service like RDS, or a database installation carefully managed by an ops team and DBA. During development, I am happy to use my local SQL Server 2016 Developer Edition. That said, using a containerised database has its advantages during development and testing, so I might change my mind later. In production, though, it will remain outside of Docker for sure.

So back to localhost. Localhost inside a container refers to the container itself, not the host operating system. So you can't put "." or "(local)" in your connection string. Instead you need to put the IP address of your PC. Run ipconfig /all, get the IP address and put it in your connection string. Your local SQL Server may not be set up to accept connections via TCP/IP, so check your configuration; I explain how in this post.
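As a purely hypothetical example (the IP address, database name, connection string key, and credentials are all placeholders for your own values), the appsettings entry would point at the host's LAN address rather than localhost:

```json
{
  "ConnectionStrings": {
    "GovrnanzaRegistry": "Server=192.168.1.20;Database=GovrnanzaRegistry;User Id=devuser;Password=<your-password>;"
  }
}
```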

Run it with Docker

Let's run the application from Docker. There is a project file for Docker called "docker-compose". Make sure it is set as the startup project of the solution.


Now press F5 and see your default browser open a new tab for the application. But we see that it loads the wrong URL:


We need to change it to load our Swagger UI page instead. But, you're thinking, "I have a launchSettings.json file!" When you run the application in Docker, this file is no longer used.

The launch URL is now set in the docker-compose.dcproj file. Let's have a look at it:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" Sdk="Microsoft.Docker.Sdk">
  <PropertyGroup Label="Globals">
    <ProjectVersion>2.0</ProjectVersion>
    <DockerLaunchBrowser>True</DockerLaunchBrowser>
    <DockerServiceUrl>http://localhost:{ServicePort}</DockerServiceUrl>
    <DockerServiceName>govrnanza.registry.webapi</DockerServiceName>
  </PropertyGroup>
  <ItemGroup>
    <None Include="docker-compose.ci.build.yml" />
    <None Include="docker-compose.override.yml" />
    <None Include="docker-compose.yml" />
  </ItemGroup>
</Project>

We need to change the value of the DockerServiceUrl element to http://localhost:{ServicePort}/api-docs. We can edit the XML of the dcproj file or open the properties window.


So we change the URL there and press F5 again.

Now a browser tab opens to our Swagger UI page.

Next Steps

In the next posts we are going to look at configuration and secrets. Docker offers some nice functionality for both, but we'll also look at Consul and Vault. I love Vault as it offers extra capabilities on top of just storing secrets such as managing the creation of database users and providing powerful revocation features that can allow you to react fast when an intrusion occurs. These capabilities are not offered by Docker. But the combination of both provides a real sweet spot for security.