How to configure Amazon WorkSpaces with Windows and Docker
Customers are increasingly adopting container technologies, and Docker is one of the most popular. With containers, developers can ensure that application modules are self-contained, agnostic of their runtime environment, and that external dependencies are well documented. This is especially important as more enterprises adopt DevOps principles and deploy microservices.
Developers are looking for speed, flexibility, and remote access to their development environments. Many use AWS End User Computing services such as Amazon WorkSpaces and Amazon AppStream 2.0.
Customers often ask how they can set up software development environments with Amazon WorkSpaces and Docker. The recommendation is to use a remote Docker daemon, with an Amazon EC2 instance acting as the Docker host. This setup is also common in non-Desktop-as-a-Service (DaaS) deployments (laptops or desktop computers). With Amazon WorkSpaces, customers can improve flexibility, performance, and remote collaboration.
Overview
In this blog, I describe the architecture, and how it can be built on AWS. The solution is based on the following components and configurations:
- An Amazon WorkSpace running Windows, with the Docker CLI installed.
- An Amazon EC2 instance running the Docker daemon, listening for external traffic on a specific network interface, or on all network interfaces (0.0.0.0). This can be a Windows or Linux host, depending on your application requirements. It's recommended to use tags to pair the Docker daemon EC2 instance with its corresponding WorkSpace ID. Tags help with documentation and auditing, and can be used for automation, as shown later in this blog.
- A DOCKER_HOST environment variable on the Amazon WorkSpace, which configures the Docker CLI to connect to the remote Docker daemon running on the EC2 instance. A value similar to tcp://<EC2_ADDRESS>:<PORT> is required for TCP connectivity.
At this stage, you have a minimal working configuration. To extend Docker functionality for enterprise workloads, and to optimize costs, consider the following additional components.
- A Route 53 private hosted zone with a DNS entry for the EC2 daemon host. In this example, docker.internal is used as the hosted zone, and DNS entries are formed from the WorkSpace computer name: <MY-WORKSPACE-NAME>.docker.internal. You can adjust this naming at your discretion. The %COMPUTERNAME% environment variable is available on Amazon WorkSpaces Windows instances, and is a suitable candidate for your dynamic DNS name.
- An Active Directory Group Policy Object (GPO) with a DOCKER_HOST environment variable, applied to the parent OU containing your Amazon WorkSpaces. In the example, the variable is dynamic, as that better suits large deployments:
DOCKER_HOST = tcp://%COMPUTERNAME%.docker.internal:2375
- Two Amazon EventBridge rules, and corresponding Lambda functions, for the following:
  - Starting the EC2 instance running the daemon once the assigned user logs in to their Amazon WorkSpace. The login event contains the WorkspaceId; based on this value, you can use tags to find the paired EC2 instance.
  - Stopping the EC2 instance when the Amazon WorkSpace is STOPPED. A scheduled rule checks each hour for stopped Amazon WorkSpaces (either manually stopped, or configured for AUTO_STOP behavior). Once confirmed, this function stops the corresponding EC2 instance to reduce costs.
Walkthrough
Prerequisites:
- Two subnets that support WorkSpaces, with sufficient IP availability for the number of WorkSpaces you intend to provision.
- An AD Connector or AWS Managed Microsoft AD registered with your Amazon WorkSpaces deployment, and which references the assigned subnets for WorkSpaces.
- A Microsoft Active Directory account with permissions to change Group Policies.
- An Amazon WorkSpace with the Remote Server Administration Tools (RSAT) for Windows installed.
- An active Windows WorkSpace configured for AUTO_STOP, for testing.
- Network connectivity with DNS resolution between the WorkSpaces deployment's parent VPC and your Active Directory. For more information, review Amazon Route 53 Resolver for hybrid clouds.
Step 1: Set the environment variable using a Group Policy.
- From an environment with Remote Server Administration Tools installed, open the Group Policy Management console. Create a new Group Policy Object (GPO).
- In the GPO, configure a new Environment Variable under User Configuration → Preferences → Windows Settings → Environment.
Action: Replace
Type: User Variable
Name: DOCKER_HOST
Value: tcp://%COMPUTERNAME%.docker.internal:2375
- Link the newly created GPO to the Organizational Unit in which your WorkSpaces are provisioned. After the policy replicates, run gpupdate /force on a WorkSpace (or reboot it) to apply the new variable.
Step 2: Provision the Docker EC2 instance
- Create a new EC2 Linux instance and configure Docker so that the Docker daemon will listen on all network interfaces.
The following User Data script installs the Docker daemon, configures the listener, and enables the service so that the daemon starts automatically.

```bash
#!/bin/bash -xe
sudo yum update -y
# Install Docker on Amazon Linux 2 (-y added for a non-interactive install)
sudo amazon-linux-extras install -y docker
# Remove the '-H fd://' flag from the unit file so the hosts list in
# /etc/docker/daemon.json takes effect
sed -i 's/dockerd\ \-H\ fd\:\/\//dockerd/g' /lib/systemd/system/docker.service
# Listen on TCP port 2375 on all interfaces, and keep the local Unix socket
echo "{\"hosts\": [\"tcp://0.0.0.0:2375\", \"unix:///var/run/docker.sock\"]}" > /etc/docker/daemon.json
sudo systemctl daemon-reload
sudo service docker restart
sudo usermod -a -G docker ec2-user
```
- Attach a tag "Workspaces-instance" to the EC2 instance, with the ID of the paired WorkSpace as the value (a CLI sketch follows this list). This is necessary to start and stop the EC2 instance automatically to optimize costs.
- Configure the Security Group assigned to the EC2 instance to allow communication with the Docker daemon. By default, Docker uses port 2375 for unencrypted traffic and port 2376 for TLS.
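If you prefer the AWS CLI, the following is a minimal sketch of the tagging and Security Group steps. The instance ID, Security Group ID, WorkSpace ID, and CIDR range are placeholder values to replace with your own.

```bash
# Pair the Docker host with its WorkSpace through the Workspaces-instance tag
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Workspaces-instance,Value=ws-0123456789

# Allow the WorkSpaces subnets to reach the unencrypted Docker port (2375);
# use 2376 instead if you enable TLS on the daemon
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 2375 \
  --cidr 10.0.0.0/16
```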
Step 3: Configure Route 53
In this walkthrough, you have configured the environment variable with docker.internal as the domain. Create a Route 53 private hosted zone for docker.internal, and configure it to resolve to the private IP address of the EC2 instance.
- Open the AWS Management Console and navigate to Route 53 → Hosted Zones → Create Hosted Zone.
Domain name: docker.internal
Type: Private hosted zone
Region: <YOUR-REGION>
VPC ID: the VPC where your WorkSpace instance is deployed.
- Create a record pointing to the EC2 instance's private IP. For the record name, use the computer name of the WorkSpace. To find the computer name, open a command prompt on your test WorkSpace and type echo %COMPUTERNAME%.
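If you script your environments, the following sketch shows equivalent AWS CLI calls. The region, VPC ID, hosted zone ID, record name, and IP address are illustrative placeholders.

```bash
# Create the private hosted zone, associated with the WorkSpaces VPC
aws route53 create-hosted-zone \
  --name docker.internal \
  --caller-reference docker-internal-$(date +%s) \
  --vpc VPCRegion=eu-west-1,VPCId=vpc-0123456789abcdef0

# Add an A record for the WorkSpace computer name, pointing at the
# Docker host's private IP
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "WSAMZN-EXAMPLE.docker.internal",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "10.0.1.25" }]
      }
    }]
  }'
```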
Step 4: Configure EventBridge and Lambda.
- For the Lambda functions, an IAM role is required. For this demonstration, name the role workspaces-lambda-role and grant the following permissions (a CLI sketch for creating the role follows this step):
  - ec2:StartInstances
  - ec2:StopInstances
  - ec2:DescribeInstances
  - workspaces:DescribeWorkspaces
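The following is a minimal sketch of creating that role with the AWS CLI. The inline policy name is arbitrary, and attaching the AWS-managed AWSLambdaBasicExecutionRole policy (so the functions can write CloudWatch logs) is an assumption beyond the list above.

```bash
# Trust policy allowing Lambda to assume the role
aws iam create-role \
  --role-name workspaces-lambda-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Inline policy with the four permissions listed above
aws iam put-role-policy \
  --role-name workspaces-lambda-role \
  --policy-name workspaces-docker-host \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:DescribeInstances",
        "workspaces:DescribeWorkspaces"
      ],
      "Resource": "*"
    }]
  }'

# CloudWatch Logs permissions for the functions (assumed, for logging)
aws iam attach-role-policy \
  --role-name workspaces-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
```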
- Create a Node.js Lambda function workspaces-stopped using the following code snippet, and assign it the IAM role workspaces-lambda-role. This code requires that the EC2 Docker instance is tagged with Workspaces-instance with a value of the WorkspaceId.

```javascript
const AWS = require('aws-sdk');
const EC2 = new AWS.EC2();
const WORKSPACES = new AWS.WorkSpaces();

exports.handler = async function (event, context) {
    console.log(JSON.stringify(event));
    // Find all running EC2 instances paired with a WorkSpace
    let params = {
        Filters: [
            { Name: 'instance-state-name', Values: ['running'] },
            { Name: 'tag:Workspaces-instance', Values: ['*'] },
        ],
    };
    let { Reservations } = await EC2.describeInstances(params).promise();
    // Build a map of instance IDs to their paired WorkSpace IDs from the tags
    let instanceWorkspaceMap = Reservations.flatMap(r => r.Instances).map(i => ({
        InstanceId: i.InstanceId,
        WorkspaceId: i.Tags.filter(t => t.Key === 'Workspaces-instance').map(t => t.Value)[0],
    }));
    if (instanceWorkspaceMap.length != 0) {
        // Look up the state of each paired WorkSpace
        let { Workspaces } = await WORKSPACES.describeWorkspaces({
            WorkspaceIds: instanceWorkspaceMap.map(iw => iw.WorkspaceId),
        }).promise();
        // WorkSpaces that are not AVAILABLE (for example, STOPPED) no longer
        // need their Docker host running
        let workspacesNotAvailable = Workspaces.filter(w => w.State !== 'AVAILABLE').map(w => w.WorkspaceId);
        let instancesToStop = instanceWorkspaceMap
            .filter(iw => workspacesNotAvailable.includes(iw.WorkspaceId))
            .map(iw => iw.InstanceId);
        if (instancesToStop.length != 0) {
            console.log('Following instances will be stopped: ' + JSON.stringify(instancesToStop));
            return EC2.stopInstances({ InstanceIds: instancesToStop }).promise();
        }
    }
    console.log('No instances to stop.');
    return;
};
```
- Create a Node.js Lambda function called workspaces-access, assign it the IAM role workspaces-lambda-role, and use the following code:

```javascript
const AWS = require('aws-sdk');
const EC2 = new AWS.EC2();

exports.handler = async function (event, context) {
    // The WorkSpaces Access event carries the WorkSpace ID in event.detail
    let params = {
        Filters: [
            { Name: 'instance-state-name', Values: ['stopped'] },
            { Name: 'tag:Workspaces-instance', Values: [event.detail.workspaceId] },
        ],
    };
    let { Reservations } = await EC2.describeInstances(params).promise();
    if (Reservations.length != 0) {
        // Start the stopped Docker host paired with this WorkSpace
        return EC2.startInstances({ InstanceIds: [Reservations[0].Instances[0].InstanceId] }).promise();
    }
    console.log('No stopped instances to start.');
    return;
};
```
- Navigate to the EventBridge console and create an hourly scheduled rule. Attach the Lambda function workspaces-stopped as the target.
- Create another EventBridge rule based on an event pattern with source aws.workspaces and detail-type WorkSpaces Access. Attach the Lambda function workspaces-access as the target.
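Both rules can also be created from the AWS CLI, sketched below. The rule names are illustrative, the <REGION> and <ACCOUNT_ID> placeholders must be filled in, and the add-permission call must be repeated for the workspaces-access function.

```bash
# Hourly rule triggering the workspaces-stopped function
aws events put-rule \
  --name workspaces-stopped-hourly \
  --schedule-expression "rate(1 hour)"
aws events put-targets \
  --rule workspaces-stopped-hourly \
  --targets Id=1,Arn=arn:aws:lambda:<REGION>:<ACCOUNT_ID>:function:workspaces-stopped

# Rule matching WorkSpaces login events, targeting workspaces-access
aws events put-rule \
  --name workspaces-access \
  --event-pattern '{"source":["aws.workspaces"],"detail-type":["WorkSpaces Access"]}'
aws events put-targets \
  --rule workspaces-access \
  --targets Id=1,Arn=arn:aws:lambda:<REGION>:<ACCOUNT_ID>:function:workspaces-access

# Allow EventBridge to invoke the function (repeat for workspaces-access)
aws lambda add-permission \
  --function-name workspaces-stopped \
  --statement-id eventbridge-invoke \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:<REGION>:<ACCOUNT_ID>:rule/workspaces-stopped-hourly
```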
Step 5: Test
- Log on to the test WorkSpace.
- Install the Docker CLI to test the configuration.
- Reboot the test WorkSpace.
- At the command line, type echo %DOCKER_HOST% and press Enter.
- At the command line, type docker ps and press Enter.
- Log out of the WorkSpace and wait one to two hours. Once the scheduled event fires, the Lambda function will stop the EC2 instance. Confirm that the EC2 instance is stopped by navigating to the EC2 console and validating that instance status shows as “Stopped”.
- Log on to the test WorkSpace again. The EC2 instance will start automatically.
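If docker ps fails, a quick way to isolate the problem is to address the daemon directly, bypassing DNS and the environment variable. This sketch assumes a host with the Docker CLI and network access to the instance; replace the address with your EC2 instance's private IP.

```bash
# Talk to the remote daemon explicitly; a populated "Server" section
# in the output confirms TCP connectivity on port 2375
docker -H tcp://10.0.1.25:2375 version
```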
Final remarks
If you need Docker to encrypt data in transit and enforce client authentication, you can configure the Docker daemon on your EC2 instance to verify incoming connections for certificates signed by a common Certificate Authority. For more details, refer to the Docker documentation. Keys and certificates can be configured centrally and distributed across Amazon WorkSpaces using Group Policies.
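As a rough sketch of that setup, the daemon options below move the listener to port 2376 and require client certificates. The certificate paths are assumptions; the CA, server, and client keys must be issued as described in the Docker documentation.

```bash
# /etc/docker/daemon.json for a TLS-protected daemon (paths illustrative)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem"
}
EOF
sudo systemctl restart docker
```

On the WorkSpaces side, the Docker CLI would then need DOCKER_TLS_VERIFY set to 1, DOCKER_CERT_PATH pointing to the distributed client certificates, and DOCKER_HOST updated to port 2376.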
All configuration done on the EC2 instances can be automated with AWS Systems Manager, Chef, or Puppet. You can also use an AWS CloudFormation template to provision the environment.
Cleanup
To remove the resources created in this blog:
- Navigate to the EC2 console and terminate the EC2 instance running the Docker daemon. Navigate to the Amazon WorkSpaces console and delete the Windows WorkSpaces.
- Navigate to the Route 53 console, locate the docker.internal private hosted zone, and delete it. You might need to delete the hosted zone's DNS records first.
- Navigate to the Amazon EventBridge console, and then to Rules. Remove the rule for events of type WorkSpaces Access and the scheduled rule used to stop the Docker EC2 instances.
- Delete the workspaces-stopped and workspaces-access Lambda functions, and the workspaces-lambda-role IAM role.
- Remove any prerequisites (AD Connector, Amazon WorkSpace with RSAT) if they are no longer needed elsewhere.
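For reference, the same cleanup sketched with the AWS CLI; the IDs and rule names are placeholders matching the earlier examples.

```bash
# Terminate the Docker host
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

# Remove EventBridge targets, then the rules themselves
aws events remove-targets --rule workspaces-stopped-hourly --ids 1
aws events delete-rule --name workspaces-stopped-hourly
aws events remove-targets --rule workspaces-access --ids 1
aws events delete-rule --name workspaces-access

# Delete the Lambda functions
aws lambda delete-function --function-name workspaces-stopped
aws lambda delete-function --function-name workspaces-access

# Delete the A record (via change-resource-record-sets with Action DELETE),
# then the private hosted zone
aws route53 delete-hosted-zone --id Z0123456789EXAMPLE
```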
Conclusion
In this post, I've demonstrated how Docker and Amazon WorkSpaces can be used together to provide a flexible and performant development environment for containerized applications, with the following principles in mind:
- Keeping operational costs to a minimum, through stopping and starting both WorkSpaces and EC2 instances automatically.
- Using techniques to enable deployments at scale with minimum operational effort.