AWS Cloud Operations Blog

Use the power of script steps in your Systems Manager Automation runbooks

Customers have been using AWS Systems Manager Automation documents for years to define a sequence of actions to take on their AWS infrastructure, such as invoking an AWS Lambda function or copying an Amazon Machine Image (AMI). These documents, now referred to as runbooks, are simple to use, yet powerful. The aws:executeScript action allows you to embed Python and PowerShell scripts directly into your runbooks; a minimal example of such a script appears after the list below.

The aws:executeScript action can:

  • Save you the step of standing up resources such as an Amazon Elastic Compute Cloud (Amazon EC2) instance just to house your logic, which could additionally require network, AWS Identity and Access Management (IAM) and security configurations.
  • Provide the full power of programming logic such as looping, string, JSON, and error handling.
  • Allow for SDK and PowerShell cmdlet calls using syntax that developers might already be familiar with.
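At its core, the script you embed in an aws:executeScript step is a handler function: it receives the step's InputPayload as its events argument and returns a JSON-serializable dictionary, which the step's outputs can select from. Here is a minimal, illustrative Python sketch (the name input is hypothetical):

def script_handler(events, context):
    # 'events' holds the key/value pairs declared in the step's InputPayload
    name = events.get('name', 'world')
    # The returned dictionary becomes the step's Payload, so an output can select $.Payload.message
    return {'message': 'Hello, {}!'.format(name)}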

In this post, I will show you some examples of how to use this action as part of an AWS Config automatic remediation. First, we use aws:executeScript to retrieve information about a resource, and then we use the action again to send that information to a Slack channel.

Solution overview

In this post, I use the encrypted-volumes rule in AWS Config, which monitors Amazon Elastic Block Store (Amazon EBS) volumes to find those that are unencrypted. When one is found, the rule triggers an automatic remediation, which invokes an Automation runbook that I created to gather information on the volume, and then sends the information on to a Slack channel. In this scenario, the Slack channel would be monitored by an Operations administrator or a team of developers, who can then take action on the offending resource.

Note: I have created separate steps to keep the code reusable. Feel free to collapse both steps into a single step if you prefer.

AWS Config monitors EBS volumes, invokes Systems Manager Automation documents on those that are not encrypted, and then sends relevant information to a Slack channel.

Figure 1: Solution flow

Here’s the process:

  1. AWS Config runs the encrypted-volumes rule to find EBS volumes where the encryption flag is not set.
  2. For each unencrypted EBS volume, AWS Config invokes an automatic remediation that executes a Systems Manager Automation runbook.
  3. The runbook uses aws:executeScript to retrieve information about the EBS volume.
  4. The runbook uses aws:executeScript to:
    1. Retrieve an AWS Secrets Manager secret that contains the Slack URL and channel information.
    2. Post the information to the Slack channel.

 

Prerequisites

To complete the tutorial in this post, you need the following:

Slack integration

For the integration with Slack, follow these instructions to add a Slack Incoming Webhook. Choose a Slack channel to send information to; you will receive a URL with a prefix of https://hooks.slack.com/workflows/… for posting the information. Save the Slack channel name and the URL for later use.
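If you want to confirm that the webhook works before wiring it into the runbook, you can post a test payload to it from Python. This is only a sketch: the channel and Content keys are assumptions that must match the variables you defined in your Slack workflow, and the URL is a placeholder for the one you received.

import json
import urllib.request

webhook_url = 'https://hooks.slack.com/workflows/REPLACE_ME'      # the URL you received from Slack
payload = {'channel': '#your-channel', 'Content': 'Webhook test'}  # keys must match your workflow variables

req = urllib.request.Request(webhook_url,
                             data=json.dumps(payload).encode('utf-8'),
                             headers={'Content-Type': 'application/json'})
print(urllib.request.urlopen(req).status)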

AWS Config configuration

Enable AWS Config in your AWS account. Under Resource types to record, make sure you are either monitoring all resources or, at a minimum, have included EC2:Instance and EC2:Volume under Specific types. These settings are required for the encrypted-volumes rule to work. For more information, see Getting Started with AWS Config in the AWS Config Developer Guide.
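If you prefer to verify the recorder settings programmatically rather than in the console, a small boto3 check along these lines can help (it assumes at least one configuration recorder already exists in the Region):

import boto3

config = boto3.client('config')
for recorder in config.describe_configuration_recorders()['ConfigurationRecorders']:
    group = recorder['recordingGroup']
    # Recording all supported resources, or at least EC2 instances and volumes, satisfies the rule
    required = {'AWS::EC2::Instance', 'AWS::EC2::Volume'}
    ok = group.get('allSupported') or required <= set(group.get('resourceTypes', []))
    print(recorder['name'], 'OK' if ok else 'missing EC2 instance/volume recording')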

Create a secret in AWS Secrets Manager

Because the Slack URL you created as a prerequisite is considered sensitive information, store it in AWS Secrets Manager.

  1. Sign in to the AWS Secrets Manager console and choose Store a new secret.
  2. For the secret type, choose Other type of secret.
  3. On the Plaintext tab, paste the following:
{
  "URL": "TheSlackUrl",
  "channel": "TheSlackChannel"
}

Make sure to replace TheSlackUrl and TheSlackChannel with the values you configured in Prerequisites, and then choose Next.

  4. For Secret name, enter SlackInfo.
  5. Accept all defaults on the other pages.
  6. You will use the unique identifier (ARN) for the secret later, so on the Secrets page, choose the name of the secret and copy the value under Secret ARN.
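If you would rather create the secret with the AWS SDK instead of the console, a minimal boto3 sketch (reusing the SlackInfo name and the same placeholder values) looks like this:

import json
import boto3

secretsmanager = boto3.client('secretsmanager')
response = secretsmanager.create_secret(
    Name='SlackInfo',
    SecretString=json.dumps({'URL': 'TheSlackUrl', 'channel': 'TheSlackChannel'})  # replace the placeholders
)
print(response['ARN'])  # keep this ARN handy for the CloudFormation parameter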

Use the CloudFormation template to create a stack

Next, download the CloudFormation template from EncryptedVolsToSlack.yaml and save it to a file on your computer.

  1. In the AWS CloudFormation console, choose Stacks, choose Create stack, and then choose With new resources (standard).
  2. On the Create stack page, choose Upload a template file, choose the YAML file you saved, and then choose Next.
  3. In the Specify stack details page:
    • For Stack name, enter UnencryptedVolToSlackStack.
    • In Parameters, under SlackSecretARN, enter the ARN of the SlackInfo secret that you copied earlier.
    • If you already have an IAM role, enter its name in ExistingRoleName. Otherwise, leave this field blank.
  4. Choose Next.
  5. On the next page, choose Next.
  6. On the Review page, select the I acknowledge check box, and then choose Create stack.

After a few minutes, refresh the page. The status displayed for your stack should be CREATE_COMPLETE. AWS Config will run the newly created encrypted-volumes rule on the stack, which might take several minutes.

Note: The IAM role used in the CloudFormation template allows the ec2:DescribeVolumes and ec2:DescribeInstances actions on all resources (Resource: ‘*’) in the account. It is an example only. You might want to restrict it further according to your organization’s security policies.

Review your new AWS Config rule

  1. In the AWS Config console, choose Rules.
  2. Select the rule prefixed with UnencryptedVolToSlackStack*. This is the rule configured in the CloudFormation stack.
  3. On the page for the encrypted-volumes rule, under Resources in scope, choose Noncompliant to see a list of unencrypted EBS volumes.

The details page for the AWS Config rule includes sections for rule details, parameters, remediation action, and resources in scope. You can use the dropdown to filter for all noncompliant resources.

Figure 2: Unencrypted EBS volumes

If the list is populated, then after the automatic remediation runs (that is, after the Automation runbook created by the stack executes), Action executed successfully appears under Status. If the remediations aren't yet complete, keep refreshing the page.
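You can also check remediation progress with the AWS Config API instead of refreshing the console. A quick sketch (the rule name and volume ID are placeholders; use the rule name created by the stack and a volume ID from the noncompliant list):

import boto3

config = boto3.client('config')
response = config.describe_remediation_execution_status(
    ConfigRuleName='your-encrypted-volumes-rule-name',          # placeholder
    ResourceKeys=[{'resourceType': 'AWS::EC2::Volume',
                   'resourceId': 'vol-0123456789abcdef0'}]      # placeholder
)
for status in response['RemediationExecutionStatuses']:
    print(status['ResourceKey']['resourceId'], status['State'])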

Check out your new AWS Systems Manager Automation runbook

Now let’s see what resources were created in Systems Manager.

  1. In the AWS Systems Manager console, choose Documents.
  2. Choose the Owned by me tab and choose the Automation runbook prefixed with UnencryptedVolToSlackStack*. This runbook was created by the CloudFormation template.
  3. On the Content tab, you can see the content of the Automation document, including the two steps that invoke an aws:executeScript action. Let’s examine these in detail:

Retrieve information about the EBS volume:

- name: extractInfo
  action: 'aws:executeScript'
  outputs:
    - Name: ebsInfoMsg
      Selector: $.Payload.message
      Type: String
  inputs:
    Runtime: python3.6
    Handler: script_handler
    Script: |-
        import json
        import boto3

        def script_handler(events, context):
            ec2 = boto3.client('ec2')

            response = ec2.describe_volumes(
                Filters=[
                    {
                        'Name': 'volume-id',
                        'Values': [
                            events['ebsVolumeId']
                        ],
                    }
                ],
            )

            state = response['Volumes'][0]['State']
            iops = 'N/A'
            if 'Iops' in response['Volumes'][0]:
                iops = response['Volumes'][0]['Iops']
            volumeType = response['Volumes'][0]['VolumeType']
            attachments = response['Volumes'][0]['Attachments']
            ec2Attached = "No EC2 attached"
            if attachments:
                ec2Attached = attachments[0]['InstanceId']

            theMsg = 'Account {} - Unencrypted Volume {} found! ' \
                     '[type:{} state:{} Iops:{} EC2-attached:{}]' \
                     .format(events['accountId'], events['ebsVolumeId'], volumeType, state, iops, ec2Attached)
            return {'message': theMsg}
    InputPayload:
      ebsVolumeId: '{{ResourceId}}'
      accountId: '{{global:ACCOUNT_ID}}'
  description: Gather information for the resource

Three parts of the action are worth noting:

  1. The inputs section specifies the Python script to run, which executes the ec2.describe_volumes call to retrieve information about the EBS volume.
  2. The InputPayload section declares variables for use in the Python script. The variables are then accessible inside the Python script through the AWS Lambda function handler’s events argument.
    1. ebsVolumeId is initialized with the argument ResourceId, which is passed into the Automation document by the AWS Config rule.
    2. accountId is initialized with the global Automation system variable, ACCOUNT_ID, which is available during runtime.
  3. The outputs section declares the variable ebsInfoMsg and initializes it with $.Payload.message. The Payload variable is populated with the JSON object the Python script returns (that is, return {'message': theMsg}). Therefore, ebsInfoMsg will be populated with the value of theMsg, which you can verify with the local test sketch below.
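Because the handler is plain Python, you can sanity-check this behavior outside of Automation by invoking it locally with a sample payload. In this hypothetical sketch, the module name, volume ID, and account ID are assumptions, and AWS credentials must be available locally:

# Hypothetical local test: assumes the script above is saved as extract_info.py
from extract_info import script_handler

sample_events = {'ebsVolumeId': 'vol-0123456789abcdef0', 'accountId': '111122223333'}
result = script_handler(sample_events, None)
print(result['message'])   # this is the value that $.Payload.message selects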

Publish the message to a Slack channel:

- name: publishToSlack
  action: 'aws:executeScript'
  inputs:
    Runtime: python3.6
    Handler: script_handler
    Script: |-
        import json
        import urllib.request
        from urllib.error import HTTPError, URLError
        from botocore.exceptions import ClientError
        import boto3

        def script_handler(events, context):
            slack_secret_arn = events['SlackSecretARN']
            slack_info = json.loads(get_slack_secret(slack_secret_arn))

            slack_message = {
                'channel': slack_info["channel"],
                'Content': events['theMsgToSend']
            }
            data = json.dumps(slack_message).encode('utf-8')
            req = urllib.request.Request(slack_info["URL"], data)

            try:
                response = urllib.request.urlopen(req)
                the_page = response.read()
            except HTTPError as e:
                print('Request failed: {} {}'.format(e.code, e.reason))
            except URLError as e:
                print('Server connection failed: {}'.format(e.reason))

            return {"msg": 'Message posted to Slack Channel {}'.format(slack_message['channel'])}

        def get_slack_secret(secret_arn):
            ………
    InputPayload:
      theMsgToSend: '{{extractInfo.ebsInfoMsg}}'
      SlackSecretARN: !Ref SlackSecretARN
  description: Send message to Slack channel

This second action has two important sections:

  1. The inputs section holds the Python script to run. In this case, it sends a notification to a Slack channel by posting a JSON payload that contains the Slack channel and message to the Slack URL.
  2. The InputPayload section initializes two variables to pass into the Python script. The variables will then be accessible inside the Python script through the AWS Lambda function handler’s events argument:
    1. theMsgToSend will hold the value of the ebsInfoMsg variable, which was declared in the previous step, extractInfo.
    2. The parameter SlackSecretARN declared in the CloudFormation template will be used to retrieve the Slack channel and URL to send the notification to.
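The body of get_slack_secret is elided in the excerpt above. As a rough illustration only, a helper of this shape, built on the Secrets Manager GetSecretValue API, would satisfy the call in the script (it relies on the boto3 and ClientError imports already present in the script):

def get_slack_secret(secret_arn):
    # Returns the raw JSON string stored in the secret; the caller parses it with json.loads
    secretsmanager = boto3.client('secretsmanager')
    try:
        response = secretsmanager.get_secret_value(SecretId=secret_arn)
    except ClientError as e:
        print('Unable to retrieve secret: {}'.format(e))
        raise
    return response['SecretString']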

Check the results of the Automation executions

If there is an error during an automation execution, you can check the results of the executed steps to troubleshoot it.

  1. In the AWS Systems Manager console, choose Automation to find a list of all the Automation runbooks that have been executed.
  2. Your runbook’s name starts with UnencryptedVolToSlackStack*. Choose one of its executions to view details.

The execution details page for the Automation document includes execution status (in this example, Success) and executed steps.

Figure 3: Execution detail

On the details page for the execution, you can see if the two steps were executed successfully. If one step resulted in an error, choose the step ID of the failed execution to see details.
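If you prefer to inspect executions programmatically, a short boto3 sketch that filters by the runbook's name prefix could look like this:

import boto3

ssm = boto3.client('ssm')
response = ssm.describe_automation_executions(
    Filters=[{'Key': 'DocumentNamePrefix', 'Values': ['UnencryptedVolToSlackStack']}]
)
for execution in response['AutomationExecutionMetadataList']:
    print(execution['DocumentName'], execution['AutomationExecutionStatus'])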

Cleanup

To delete the AWS resources you created in this tutorial, in the CloudFormation console, choose Stacks. Choose the stack you created (for example, UnencryptedVolToSlackStack), choose Delete, and then choose Delete stack to confirm.

If you deployed the template in several AWS Regions, keep in mind that any IAM role created by a stack is deleted only by the stack that created it. For this reason, delete the stack that created the IAM role last, to minimize the risk of leaving incomplete implementations of this solution.

Conclusion

In this post, I showed you two ways to use aws:executeScript in a Systems Manager Automation runbook to execute blocks of Python code. I used familiar AWS SDK for Python commands to gather information about noncompliant resources and standard Python logic to integrate with third-party tools, all without provisioning extra resources, such as EC2 instances.

In this case, the scripts ran in AWS Config, but you can also run Automation runbooks through other AWS services, such as AWS Service Catalog and Amazon EventBridge. You can also run standalone Automation runbooks. We set up the infrastructure easily through a CloudFormation template, with an example IAM role that includes policies that allow access to EC2 and EBS resources. You can tailor this role according to your organization’s needs.

Systems Manager and AWS Config are available in all major AWS Regions. For more information, see Creating runbooks that run scripts in the AWS Systems Manager User Guide and Remediating Noncompliant AWS Resources by AWS Config Rules in the AWS Config Developer Guide.

About the author

Melina Schweizer

Melina Schweizer is a Senior Specialist Solutions Architect Builder at AWS. She works on creating simple and effective solutions to facilitate business outcomes for customers. In her spare time, Melina enjoys playing the piano, gardening, and vacationing with her family.