Assumptions
This tutorial assumes that:
- you have some knowledge of:
  - AWS ECS with a Fargate deployment configuration
  - the AWS CDK
- you have an AWS account
- you have the AWS CLI installed and configured with permissions adequate to deploy AWS resources such as S3, ECS, and IAM
The Need
As every developer knows, using SSH to connect to instances or containers deployed in the cloud is a bad practice. It widens your attack surface (open inbound ports, long-lived keys to manage) and exposes you to a range of potential issues.
Nevertheless, it is sometimes necessary, especially during the development phase or in a staging environment, to access an instance or a container for a variety of reasons. For example:
- to figure out why the IAM policy is not giving you the permissions you want
- to check the network connectivity or the access to other services
- to verify that a storage system is properly mounted
This also applies if you are deploying containers by using AWS ECS.
The Problem
If you use ECS with an EC2 deployment configuration, then achieving such a connection is relatively easy. You have two options:
- the old-fashioned way: configure your EC2 instance to accept SSH connections, log in to it, and use docker exec to access a container running on it and debug from there
- the safer, modern way: use the AWS Systems Manager (SSM) Session Manager feature to get into the instance, then use docker exec
Unfortunately, if you use ECS with Fargate, this is not possible, since there are no instances to access.
The Solution
To allow access to ECS Fargate containers (and to EC2-based ECS containers), AWS provides the ECS Exec feature. This AWS blog post provides all the details you need to implement such a solution via the CLI. However, it does not cover how to implement it via infrastructure as code (CloudFormation, CDK).
The purpose of the next section is to demonstrate how to configure ECS Exec when deploying an ECS Fargate cluster with the CDK.
Configuration via CDK
The code mentioned in this article can be found on GitHub in this repository. The simplest way to try it is to clone the repository and deploy the sample ECS cluster.
We will start by creating a basic ECS Fargate cluster, with a single task and service, deploying an Apache Server.
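As a rough sketch of that starting point (assuming CDK v2 in TypeScript; the construct IDs, sizing, and httpd image tag are illustrative and may differ from the linked repository):

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';

export class EcsExecDemoStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // A small VPC and an ECS cluster to host the service
    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });
    const cluster = new ecs.Cluster(this, 'Cluster', { vpc });

    // Fargate task definition running a public Apache (httpd) image
    const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef', {
      cpu: 256,
      memoryLimitMiB: 512,
    });
    taskDefinition.addContainer('apache', {
      image: ecs.ContainerImage.fromRegistry('httpd:2.4'),
      portMappings: [{ containerPort: 80 }],
    });

    // A single-task Fargate service
    new ecs.FargateService(this, 'Service', {
      cluster,
      taskDefinition,
      desiredCount: 1,
    });
  }
}
```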
Once we have that, adding the configuration required to allow connection via SSM Sessions is pretty simple.
First, create a new KMS key (or use an existing one) and give it to the ECS cluster. It will be used when establishing the tunnel necessary for a secure SSM session.
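A sketch of that step, replacing the cluster definition from the snippet above (the key and its construct ID are illustrative; the cluster's executeCommandConfiguration property is what wires the key to ECS Exec):

```ts
import * as kms from 'aws-cdk-lib/aws-kms';

// KMS key used to encrypt the data channel of ECS Exec sessions
const kmsKey = new kms.Key(this, 'EcsExecKmsKey');

// Hand the key to the cluster via its ECS Exec (execute command) configuration
const cluster = new ecs.Cluster(this, 'Cluster', {
  vpc,
  executeCommandConfiguration: {
    kmsKey,
  },
});
```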
Then add the Task permissions that will enable the connection via SSM and allow the Task to read the KMS key.
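For example, reusing the taskDefinition and kmsKey from the previous snippets, this could look like the following (the SSM Messages actions listed are the ones ECS Exec relies on):

```ts
import * as iam from 'aws-cdk-lib/aws-iam';

// Allow the task to open the SSM channels used by ECS Exec
taskDefinition.taskRole.addToPrincipalPolicy(
  new iam.PolicyStatement({
    actions: [
      'ssmmessages:CreateControlChannel',
      'ssmmessages:CreateDataChannel',
      'ssmmessages:OpenControlChannel',
      'ssmmessages:OpenDataChannel',
    ],
    resources: ['*'],
  }),
);

// Allow the task to use the KMS key protecting the session
kmsKey.grantDecrypt(taskDefinition.taskRole);
```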
And finally, explicitly enable the ExecuteCommand (or ECS Exec) feature on the Service.
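With the CDK this is a single property on the service; the snippet below is the service definition from the first sketch with that flag set:

```ts
new ecs.FargateService(this, 'Service', {
  cluster,
  taskDefinition,
  desiredCount: 1,
  // Turn on ECS Exec (the ExecuteCommand feature) for this service
  enableExecuteCommand: true,
});
```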
Connection to the container via the AWS SSM CLI plugin
Once you have deployed this configuration via cdk deploy, all that remains is to open a session inside the container.
You will first need to install the Session Manager plugin for the AWS CLI.
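The installation procedure depends on your operating system (the AWS documentation covers each one); as one example, on macOS the plugin can be installed with Homebrew and then checked from the command line:

```bash
# macOS example; other operating systems have their own installers (see the AWS docs)
brew install --cask session-manager-plugin

# Running the plugin with no arguments confirms that it is installed correctly
session-manager-plugin
```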
You will also need to find out your ECS cluster ARN, as well as your Task ARN. You can do this by using the following commands.
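One way to look these up with the AWS CLI (substitute your own cluster ARN or name in the second command):

```bash
# List the ECS cluster ARNs in the current account/region
aws ecs list-clusters

# List the task ARNs running in a given cluster
aws ecs list-tasks --cluster <your-cluster-arn>
```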
Then use the SSM plugin to open a session inside your container.
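The session is opened through aws ecs execute-command, which drives the Session Manager plugin under the hood. Substitute your own ARNs, and note that the container name ("apache" here) must match the one declared in the task definition:

```bash
aws ecs execute-command \
  --cluster <your-cluster-arn> \
  --task <your-task-arn> \
  --container apache \
  --interactive \
  --command "/bin/sh"
```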
Conclusion
Hopefully this tutorial gave you all the information you need to configure ECS Exec via the CDK and establish a secure connection via the SSM Session Manager. The changes required are in fact minimal, and they give you a powerful debugging and administration tool. And since the configuration is done via the CDK, you can easily replicate it or even make it part of your standard staging deployment setup.
If you found this article useful, do share it with your friends! And if you know of other solutions from the AWS ecosystem that can be applied to this use case, let me know. I'd be happy to learn more about them.