AWS Cloud Operations Blog

Use AWS CloudFormation Macros to create multiple resources from a single resource definition

AWS CloudFormation macros perform custom processing on your templates. They bring the features of imperative programming, which are not natively available when you write CloudFormation templates. In this blog post, I show you how to create and deploy a CloudFormation macro that provisions identical resources iteratively and uses a unique resource property to differentiate them. If you’re new to CloudFormation macros, be sure to start with the first post in this two-part series, Deep dive on AWS CloudFormation macros to transform your templates.

When you write a macro and reference it in your template, you are asking the AWS Lambda function that backs the macro to take the template, look for a keyword, and apply the transformation rules. In this post, I use the ReplicateWith keyword, which is not a reserved keyword in CloudFormation. This keyword holds the resource property that is unique across the duplicated resources.

Here is the syntax:

ReplicateWith: ResourcePropertyName: ParameterName

Here is an example:

ReplicateWith: InstanceType: pMLInstanceTypes

ResourcePropertyName can be any property name supported by the CloudFormation resource type you selected for your resource (for example, the InstanceType property of the Amazon EC2 instance resource type).

ParameterName refers to a template parameter whose value is one or more comma-separated strings. The macro parses the list of values and creates a unique resource for each value. For example, if the list contains four different EC2 instance types, then four EC2 instances with the provided instance types are created.
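For illustration, here is a minimal sketch of how the expansion works. The logical ID, AMI ID, and parameter name are hypothetical; the SageMaker example used in the rest of this post follows later.

Parameters:
  pInstanceTypes:
    Type: String
    Default: "t3.micro, t3.small"
Resources:
  rMyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-1234567890abcdef0   # placeholder AMI ID
      ReplicateWith:
        InstanceType: pInstanceTypes

# After the macro runs, rMyInstance is replaced by one resource per value,
# each with the ReplicateWith block removed:
#   rMyInstance1 with InstanceType: t3.micro
#   rMyInstance2 with InstanceType: t3.small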

The macro code looks for the ReplicateWith keyword and applies the replication rules to that resource. Figure 1 shows how the macro (Lambda function) processes the template and replicates the resources based on the comma-separated values passed by the user through the Parameters section.

Solution overview

In this solution, I create an AWS::SageMaker::NotebookInstance CloudFormation resource. A list of comma-delimited instance types is provided as an input parameter to the template, and a call to the macro is made to process them.

The solution involves two steps:

Step 1 – Create the macro definition: The first template contains the macro definition, which refers to the Lambda function. The permissions required for Lambda to run the code are defined in this template.

Step 2 – Reference the macro: The second template has an Amazon SageMaker notebook instance resource with a ReplicateWith keyword and a reference to the macro defined in step 1. When CloudFormation runs this template, a call to the macro is made.

In this solution, the macro is called from the Transform section to send the entire template to the macro. In some use cases, you can send only a snippet of the template to the macro for processing.
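For reference, the snippet-level alternative uses the Fn::Transform intrinsic function inside the section you want processed, instead of the template-level Transform section used in this post. The macro name, resource, and parameter below are illustrative only.

Resources:
  rExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Only this snippet, not the whole template, is sent to the macro
      Fn::Transform:
        Name: MySnippetMacro          # hypothetical macro name
        Parameters:
          SomeKey: SomeValue          # optional parameters passed to the macro in event['params']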

Step 1 defines the macro. Step 2 references the macro in a template. When the template is run, the macro is invoked. It transforms the template and returns the processed template to CloudFormation.

Figure 1: Steps involved in the macro workflow

Deployment walk-through

This walk-through shows how to deploy an end-to-end solution and how AWS CloudFormation interacts with the Lambda function (macro) to transform a template.

  • In the AWS Management Console, search for CloudFormation and then click on it to open the CloudFormation console.

The AWS Management Console provides a search field you can use to find AWS services.

Figure 2: Finding CloudFormation in the AWS Management Console

  • Create the macro definition (step 1 in the solution overview) using the following template and the Create stack wizard.

rTransform is the macro definition.

rTransformFunction is the Lambda function code behind the macro.

rTransformExecutionRole is the execution role for the Lambda function.

rTransformFunctionPermissions are the invoke permissions for the Lambda function.

# Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# https://thinkwithwp.com/agreement
# SPDX-License-Identifier: MIT-0
AWSTemplateFormatVersion: 2010-09-09
Resources:
  rTransformExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: [lambda.amazonaws.com]
            Action: ['sts:AssumeRole']
      Path: /
      Policies:
        - PolicyName: root-global
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                Resource:
                  - !Sub 'arn:aws:logs:*:${AWS::AccountId}:log-group:/aws/lambda/replication-macro'
              - Effect: Allow
                Action:
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource:
                  - !Sub 'arn:aws:logs:*:${AWS::AccountId}:log-group:*'
  rTransformFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: replication-macro
      Code:
        ZipFile: |
          import boto3
          import copy
          import json
          import logging

          log = logging.getLogger()
          log.setLevel(logging.INFO)

          DEFAULT_KEYWORD = 'ReplicateWith'

          def process_template(event):
              """
              Creates identical resources iteratively with one unique resource property
              """
              try:
                fragment = event['fragment']
                final_fragment = copy.deepcopy(fragment)
                parameters = event['templateParameterValues']
                resources = fragment['Resources']
                for resource_name, resource_values in resources.items():
                    current_resource = resources[resource_name]
                    log.info('Current Resource:: ' + str(current_resource))
                    if DEFAULT_KEYWORD in resource_values['Properties']:
                        log.info('Resource_name:: ' + str(resource_name))
                        log.info('Resource_value:: ' + str(resource_values))

                        # Move to a different block
                        for key, value in resource_values['Properties'][DEFAULT_KEYWORD].items():
                            log.debug('Replicate Key:: ' + key)
                            log.debug('Replicate Value:: ' + value)

                            #Split the comma separated properties
                            resource_properties = parameters[value].split(',')
                            log.debug('New Properties:: ' + str(resource_properties))

                            #Pop the DEFAULT_KEYWORD
                            resource_values['Properties'].pop(DEFAULT_KEYWORD)
                            length_resource_properties = len(resource_properties)
                            log.debug('length of properties:: ' + str(length_resource_properties))

                            # Duplicating resources with unique property values
                            if length_resource_properties > 0:
                                for x in range(0, length_resource_properties):
                                    final_fragment['Resources'][resource_name + str(x+1)] = copy.deepcopy(current_resource)
                                    final_fragment['Resources'][resource_name + str(x+1)]['Properties'][key] = resource_properties[x].strip()
                                final_fragment['Resources'].pop(resource_name)
                return final_fragment
              except Exception as e:
                log.error('Error occurred:: ' + str(e))


          def handler(event, context):
              """
              Returns processed template back to CloudFormation
              """
              log.info(json.dumps(event))
              processed_template=process_template(event)
              log.info('Processed template' + json.dumps(processed_template))

              r = {}
              r['requestId'] = event['requestId']
              r['status'] = 'SUCCESS'
              r['fragment'] = processed_template

              return r

      Handler: index.handler
      Runtime: python3.7
      Role: !GetAtt rTransformExecutionRole.Arn
  rTransformFunctionPermissions:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !GetAtt rTransformFunction.Arn
      Principal: 'cloudformation.amazonaws.com'
  rTransform:
    Type: AWS::CloudFormation::Macro
    Properties:
      Name: 'Replication-Macro'
      Description: Replicates the resources based on parameters provided
      FunctionName: !GetAtt rTransformFunction.Arn

  • Deploy the second template, which references the macro (step 2 in the solution overview), to create the replicated resources. Use the Parameters section shown in Figure 3.

In Parameters, under Machine Learning Instance Types, enter comma-separated values (for example, ml.t2.medium, ml.t3.medium, ml.c4.4xlarge).

Figure 3: Parameters supplied to the CloudFormation template

# Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# https://thinkwithwp.com/agreement
# SPDX-License-Identifier: MIT-0
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Creates SageMaker Notebook Instance Lifecycle Configuration'

Transform:
  - Replication-Macro
Metadata: 
  AWS::CloudFormation::Interface: 
    ParameterGroups: 
    - Label: 
        default: ""
      Parameters: 
        - pMLInstanceTypes
    ParameterLabels: 
      pMLInstanceTypes:
        default: "Machine Learning Instance Types"
Parameters:
  pMLInstanceTypes:
    Type: String
    Description: "Comma separated machine learning instance type names - Ex: ml.t2.medium, ml.t3.medium"
Resources:
  rNotebookInstance:
    Type: "AWS::SageMaker::NotebookInstance"
    Properties:
      # InstanceType: "ml.t2.medium"
      ReplicateWith:
          InstanceType: pMLInstanceTypes
      RoleArn: !GetAtt rExecutionRole.Arn
      LifecycleConfigName: !GetAtt rNotebookInstanceLifecycleConfig.NotebookInstanceLifecycleConfigName
  rNotebookInstanceLifecycleConfig:
    Type: "AWS::SageMaker::NotebookInstanceLifecycleConfig"
    Properties:
      OnStart:
        - Content:
            Fn::Base64: "echo 'hello from OnStart'"
  rExecutionRole: 
    Type: "AWS::IAM::Role"
    Properties: 
      AssumeRolePolicyDocument: 
        Version: "2012-10-17"
        Statement: 
          - 
            Effect: "Allow"
            Principal: 
              Service: 
                - "sagemaker.amazonaws.com"
            Action: 
              - "sts:AssumeRole"
      Path: "/"
      Policies: 
        - 
          PolicyName: "root"
          PolicyDocument: 
            Version: "2012-10-17"
            Statement: 
              - 
                Effect: "Allow"
                Action: "sagemaker:ListAlgorithms"
                Resource: "*"
  • On the final page of the CloudFormation console, under Capabilities and transforms, select the check boxes, and then choose Create change set. Choose Create change set again to confirm. This is when a call to the macro is made.

Note: Creating a change set is not a mandatory step. It is used to verify that the resources that will be created by the processed template returned by the macro are valid. If you don’t need that verification, you can skip the change set and choose Create stack instead; in that case, the call to the macro is made when you choose Create stack.

Each of the three checkboxes is selected. The first says, "I acknowledge that AWS CloudFormation might create IAM resources." The second says, "I acknowledge that AWS CloudFormation might create IAM resources with custom names." The third says, "I acknowledge that AWS CloudFormation might require the following capability: CAPABILITY_AUTO_EXPAND."

Figure 4: Capabilities and transforms

  • The original template is transformed when the macro is called. Look at the request sent to the Lambda function behind the macro. Because the macro is referenced in the Transform section, the entire original CloudFormation template is sent to the Lambda function for processing.
{
    "accountId": "111122223333",
    "fragment": {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Creates SageMaker Notebook Instance Lifecycle Configuration",
        "Parameters": {
            "pMLInstanceTypes": {
                "Type": "String",
                "Description": "Comma separated machine learning instance type names - Ex: ml.t2.medium, ml.t3.medium"
            }
        },
        "Metadata": {
            "AWS::CloudFormation::Interface": {
                "ParameterGroups": [
                    {
                        "Label": {
                            "default": ""
                        },
                        "Parameters": [
                            "pMLInstanceTypes"
                        ]
                    }
                ],
                "ParameterLabels": {
                    "pMLInstanceTypes": {
                        "default": "Machine Learning Instance Types"
                    }
                }
            }
        },
        "Resources": {
            "rNotebookInstance": {
                "Type": "AWS::SageMaker::NotebookInstance",
                "Properties": {
                    "ReplicateWith": {
                        "InstanceType": "pMLInstanceTypes"
                    },
                    "RoleArn": {
                        "Fn::GetAtt": "rExecutionRole.Arn"
                    },
                    "LifecycleConfigName": {
                        "Fn::GetAtt": "rNotebookInstanceLifecycleConfig.NotebookInstanceLifecycleConfigName"
                    }
                }
            },
            "rNotebookInstanceLifecycleConfig": {
                "Type": "AWS::SageMaker::NotebookInstanceLifecycleConfig",
                "Properties": {
                    "OnStart": [
                        {
                            "Content": {
                                "Fn::Base64": "echo 'hello from OnStart'"
                            }
                        }
                    ]
                }
            },
            "rExecutionRole": {
                "Type": "AWS::IAM::Role",
                "Properties": {
                    "AssumeRolePolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [
                            {
                                "Effect": "Allow",
                                "Principal": {
                                    "Service": [
                                        "sagemaker.amazonaws.com"
                                    ]
                                },
                                "Action": [
                                    "sts:AssumeRole"
                                ]
                            }
                        ]
                    },
                    "Path": "/",
                    "Policies": [
                        {
                            "PolicyName": "root",
                            "PolicyDocument": {
                                "Version": "2012-10-17",
                                "Statement": [
                                    {
                                        "Effect": "Allow",
                                        "Action": "sagemaker:ListAlgorithms",
                                        "Resource": "*"
                                    }
                                ]
                            }
                        }
                    ]
                }
            }
        }
    },
    "transformId": "111122223333::Replication-Macro",
    "requestId": "ea2aace-4cca-4bg4-a051-9d6608d611jh",
    "region": "us-east-1",
    "params": {},
    "templateParameterValues": {
        "pMLInstanceTypes": "ml.t2.medium, ml.t3.medium, ml.t2.large"
    }
}
  • After the Lambda function runs, a response is sent to CloudFormation. In Figure 5, you’ll find that three Amazon SageMaker notebook instances will be created. That’s because you provided three comma-separated instance types in the Parameters section (Figure 3), even though the original template has a resource definition for only one notebook instance.

There are five resources under Changes: the execution role, three notebook instances, and rNotebookInstanceLifecycleConfig.

Figure 5: Change set result after macro execution
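For reference, the Resources section of the processed fragment that the macro returns looks similar to the following (shown as YAML for readability; abbreviated and illustrative). The original rNotebookInstance definition is removed, and the role and lifecycle configuration resources are returned unchanged.

Resources:
  rNotebookInstance1:
    Type: AWS::SageMaker::NotebookInstance
    Properties:
      InstanceType: ml.t2.medium     # first value from pMLInstanceTypes
      RoleArn: !GetAtt rExecutionRole.Arn
      LifecycleConfigName: !GetAtt rNotebookInstanceLifecycleConfig.NotebookInstanceLifecycleConfigName
  rNotebookInstance2:
    Type: AWS::SageMaker::NotebookInstance
    Properties:
      InstanceType: ml.t3.medium     # second value
      RoleArn: !GetAtt rExecutionRole.Arn
      LifecycleConfigName: !GetAtt rNotebookInstanceLifecycleConfig.NotebookInstanceLifecycleConfigName
  rNotebookInstance3:
    Type: AWS::SageMaker::NotebookInstance
    Properties:
      InstanceType: ml.t2.large      # third value
      RoleArn: !GetAtt rExecutionRole.Arn
      LifecycleConfigName: !GetAtt rNotebookInstanceLifecycleConfig.NotebookInstanceLifecycleConfigName
  # rExecutionRole and rNotebookInstanceLifecycleConfig are unchanged and omitted here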

  • Choose Execute to create the resources in the processed template.

When clicked, the Execute button executes the change set that contains the three machine learning notebook instance types.

Figure 6: Running change set

  • Under Notebook instances, check the Status column for each notebook instance.

In the SageMaker console, the three notebook instances created by the transformed template are displayed. The status of each is InService.

Figure 7: Notebook instances in the Amazon SageMaker console

  • To troubleshoot the macro, you can refer to the Amazon CloudWatch Logs created by the Lambda function. Search for Lambda in the AWS Management Console and then click on it to open the Lambda console (Figure 8). In the Lambda console, click Functions, and then click replication-macro. Switch to the Monitor tab, and then click View logs in CloudWatch (Figure 9). This opens the CloudWatch log group created by the replication-macro Lambda function.

The AWS Management Console provides a search field you can use to find AWS services.

Figure 8: Finding Lambda in the AWS Management Console

You can navigate to the CloudWatch log group associated with a Lambda function from the Lambda console.

Figure 9: Navigating to Amazon CloudWatch Logs from Lambda Console

 

Macro best practices

When you’re managing resources through AWS CloudFormation, macros add another layer, so it is important to follow best practices when you write macros.

  • Set up an authoring environment.

You can create and maintain all your AWS Lambda functions in one account and share them across your organization. This way, you can isolate and abstract the macros and have a separate CI/CD process.

  • Determine if your macro should be a snippet or template-wide.

For efficient processing, the Lambda function code has to know if the request is for the entire CloudFormation template or just a snippet.

  • Macros should do one thing well.

Macros should be used for a specific use case. For example, a single macro should handle either string manipulations or resource replication, not both.

  • Break larger macros into smaller ones.

Because you can reference multiple macros in a CloudFormation template, it’s a best practice to break large macros into smaller ones and reference all the smaller macros instead of one huge macro with hundreds of lines of code (see the Transform snippet after this list).

  • Keep the functions in a macro small.

Macros are backed by AWS Lambda functions, so test them thoroughly and follow the principle of least privilege for the Lambda execution role.
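As noted in the point about breaking larger macros into smaller ones, a template can reference more than one macro in its Transform section; the macros are processed in the order they are listed. The second macro name below is hypothetical.

Transform:
  - Replication-Macro
  - String-Manipulation-Macro     # hypothetical second macro, applied after the first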

Cleanup

Remove the resources created by this solution by deleting the AWS CloudFormation stacks to avoid incurring costs.

Conclusion

In this blog post, I discussed the process to build and deploy an AWS CloudFormation macro. I also provided the best practices to follow when you create and manage a macro within your account. Use the solution in this blog post as the basis for more complex macros. For example, you might want to use AWS CloudFormation macros to validate that the resources created in your organization comply with your security standards. You can do this by requiring users to always reference the compliance macros when they use an AWS CloudFormation template to create resources.

You can write a macro to do the following:

  • Check if Amazon RDS resources and Amazon DynamoDB tables have encryption enabled.
  • Check if an Amazon S3 bucket definition is configured to be private (see the sketch after this list).
  • Apply specific resource policies on S3 buckets or Amazon SNS topics based on which AWS Region or environment they are being deployed to.
  • Standardize resource properties across your organization. For example, if an S3 bucket is being deployed in the production environment, you can use a macro to apply production S3 bucket settings to it (for example, which AWS KMS encryption key to use for encrypting objects, whether to enable Cross-Region Replication, how to manage lifecycle configuration on the bucket, and whether logging should be enabled).
  • Apply your organization’s default values to a resource rather than allowing CloudFormation to set its own default values when a resource property is not specified by the user. For example, whether an EC2 instance in a subnet should have a public IP address by default.
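For example, here is a minimal sketch of what a compliance macro for the S3 bucket case might do. The logical ID and the exact set of injected properties are illustrative, not a prescribed standard.

# Before: bucket definition as written by the template author
Resources:
  rDataBucket:
    Type: AWS::S3::Bucket

# After: properties such a macro could inject into the fragment it returns
Resources:
  rDataBucket:
    Type: AWS::S3::Bucket
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms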

Additional resources

There are other ways to extend CloudFormation’s functionality. Private resource types and custom resources allow you to call custom code written in imperative languages such as Python, Java, and so on. With a private resource type, you can create your own CloudFormation resource type and register it in the AWS CloudFormation registry. At re:Invent 2020, CloudFormation announced the release of modules, which help you package resource configurations that can be reused across AWS CloudFormation stacks.
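As a rough illustration of the modules feature, a registered module is consumed in a template like a resource type with a ::MODULE suffix. The organization name, module name, and properties below are hypothetical.

Resources:
  rStandardBucket:
    Type: MyOrg::S3::StandardBucket::MODULE   # hypothetical registered module
    Properties:
      BucketName: my-example-bucket            # module parameters are passed as properties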

About the Author

Wilson Dayakar Puvvula is a Cloud Applications Architect at AWS. He helps customers adopt AWS services and offers guidance for AWS automation and cloud-native application implementations. Learning new things and building solutions at AWS scale is what excites him. Outside of work, he watches Liverpool F.C. and enjoys the team anthem, You’ll Never Walk Alone. You can find him on Twitter at @PuvvulaWilson.