AWS Cloud Operations Blog

How to peer an AWS Migration Hub Refactor Spaces orchestrated AWS Transit Gateway to your existing enterprise AWS Transit Gateway

AWS Migration Hub Refactor Spaces helps customers incrementally refactor applications using the strangler fig pattern while shielding end users from infrastructure changes. This enables customers to refactor their legacy applications into a series of microservices while continuing to operate the existing application in production.

Refactor Spaces achieves this by orchestrating a number of underlying services – in particular AWS Transit Gateway, Amazon API Gateway, and Network Load Balancer. Many customers have already implemented solutions on AWS, with a network fabric in place, before they begin refactoring their applications. As a result, they often ask, “How can we integrate Refactor Spaces infrastructure with our existing enterprise network architecture?”

In this blog, we address how to peer an existing ‘hub-and-spoke’ Transit Gateway architecture with the Transit Gateway provisioned by Refactor Spaces. The following sample architecture diagram shows a ‘hub-and-spoke’ model. In this model, virtual private clouds (VPCs) and applications communicate through a central enterprise AWS Transit Gateway. Refer to the Transit Gateway documentation to learn how to implement the hub-and-spoke pattern.

In the next section, we provide a brief overview of the networking components of a Refactor Spaces environment in a typical multi-account setup. We also explain how to peer this with your existing enterprise account setup.

Figure 1. Architecture diagram depicting an enterprise Transit Gateway with multiple VPCs attached

Networking in Refactor Spaces environment

A Refactor Spaces environment contains Refactor Spaces applications and services. It provides a multi-account network fabric of bridged VPCs, so that the resources within it can communicate over private IP addresses. The environment provides a unified view of networking, applications, and services across multiple AWS accounts.

First, you create a Refactor Spaces environment in the account chosen as the environment owner. Then, you share the environment with the other two accounts using AWS Resource Access Manager (the Refactor Spaces console does this for you).

After you share the environment with another account, Refactor Spaces automatically shares the resources that it creates within the environment with the other accounts. It does so by orchestrating AWS Identity and Access Management (IAM) resource-based policies.
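As a sketch, the environment creation and sharing can also be done from the AWS CLI. The names, Region, account IDs, and environment ARN below are illustrative placeholders, not values from this walkthrough:

```shell
# Create a Refactor Spaces environment from the environment-owner
# account (account A). TRANSIT_GATEWAY provisions the network fabric.
aws migration-hub-refactor-spaces create-environment \
  --name refactor-env \
  --network-fabric-type TRANSIT_GATEWAY \
  --description "Refactor Spaces environment for the monolith refactor"

# Share the environment with accounts B and C. The Refactor Spaces
# console does this for you; from the CLI you would create an
# AWS RAM resource share instead.
aws ram create-resource-share \
  --name refactor-spaces-env-share \
  --resource-arns arn:aws:refactor-spaces:eu-west-2:111111111111:environment/env-1234567890 \
  --principals 222222222222 333333333333
```

The resource share is what allows Refactor Spaces to orchestrate the cross-account IAM resource-based policies described above.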

Now, let’s have a look at our Refactor Spaces environment.

When you create a Refactor Spaces environment (in account A), it automatically provisions the Amazon API Gateway, Network Load Balancer, and AWS Transit Gateway. These resources are then shared with the other accounts (accounts B and C) to configure the required connectivity and routing.

Figure 2. Architecture diagram showing the resources deployed by Migration Hub Refactor Spaces, and the connectivity to applications in other accounts.

At this stage, our monolith application can be accessed through the API Gateway URL. If you open API Gateway in your Refactor Spaces account, you will see that the REST API has been created. Note that, in this scenario, we used a private API Gateway; if necessary, you can also select a public one.

You can read more about private API Gateway integrations and best practices here.

Figure 3. The API Gateway console showing the API deployed by Refactor Spaces.

Select VPC Links. You should see the VPC link created by Refactor Spaces. VPC links provide your API Gateway with access to resources inside a VPC. In this case, you will notice that the target is set to a Network Load Balancer (NLB).

Figure 4. The VPC link for the API showing the target load balancer.

Inside your API settings, you should see that an ‘execute-api’ endpoint URL has been created; you can use this URL to access your API. Later, we will use this URL to test our connection.
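For example, once connectivity is in place, a request to the invoke URL might look like the following. The API ID, Region, and stage are placeholders; because this is a private API, the URL only resolves from within a VPC with the appropriate interface VPC endpoint:

```shell
# Placeholder invoke URL: replace the API ID, Region, and stage
# with the values shown in your own API Gateway console.
curl -s https://a1b2c3d4e5.execute-api.eu-west-2.amazonaws.com/prod/
```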

Figure 5. The API Gateway console showing the invoke URL.

Note: To access your API from your existing enterprise applications deployed in your enterprise accounts, you must set up a VPC endpoint. We cover the required steps in the ‘Private API access through VPC endpoint’ section.

If you check the VPC route tables in accounts B and C, you should see the routes created by Refactor Spaces directing traffic towards the Transit Gateway. Refactor Spaces has already handled these routing configurations in both accounts for us.

Figure 6. The VPC route tables in accounts B and C that Refactor Spaces has configured.
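If you prefer to inspect these routes from the CLI rather than the console, you could run something like the following (the VPC ID is a placeholder):

```shell
# List the routes in a VPC that point at a Transit Gateway.
aws ec2 describe-route-tables \
  --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
  --query 'RouteTables[].Routes[?TransitGatewayId]'
```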

The following diagram shows the routing between our accounts:

Figure 7. An architecture diagram showing the routing between accounts B and C and the Refactor Spaces account.

Preparing for integration

Now, it is time to look at our enterprise account where we want to build connectivity with our Refactor Spaces environment.

To provide end-to-end connectivity, we must complete three main steps:

  1. Transit Gateway peering between Refactor Spaces’ Transit Gateway and our existing enterprise Transit Gateway
  2. VPC and Transit Gateway routing configuration
  3. Private API access through interface VPC endpoints

Figure 8. The architecture diagram shows the desired connectivity between the enterprise account and the Refactor Spaces monolith, and microservices accounts.

1. Transit Gateway peering

The high-level steps for peering two Transit Gateways are as follows:

  1. From either a shared services account or the Refactor Spaces account, navigate to the VPC console, and select Transit Gateway attachments.
  2. Create a new Transit Gateway attachment, select the Transit Gateway to peer, and select Peering connection from the Attachment type dropdown.
  3. In Peering connection attachment, select Other account and add the target account ID, target Region, and the ID of the other Transit Gateway as the accepter. Add any relevant tags and then click Create Transit Gateway Attachment.

Figure 9. The Transit Gateway console showing how to configure the peering between the two Transit Gateways.

  4. In the other account, navigate to the Transit gateway attachments section in the VPC console. We can see an attachment that is in the Pending Acceptance state. Select it, click Actions, and then Accept Transit Gateway Attachment.

Figure 10. The Transit gateway attachments console, showing how to accept the peering connection from the second account.

The two Transit Gateways are now peered. The full steps for how to achieve the peering can be found in the Transit Gateway documentation.
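The console steps above map to two CLI calls, sketched below. All Transit Gateway IDs, the attachment ID, the account ID, and the Region are placeholders:

```shell
# From the Refactor Spaces (or shared services) account: request the
# peering attachment towards the enterprise Transit Gateway.
aws ec2 create-transit-gateway-peering-attachment \
  --transit-gateway-id tgw-0refactorspaces00 \
  --peer-transit-gateway-id tgw-0enterprise00000 \
  --peer-account-id 999999999999 \
  --peer-region eu-west-2

# From the enterprise account: accept the pending attachment.
aws ec2 accept-transit-gateway-peering-attachment \
  --transit-gateway-attachment-id tgw-attach-0123456789abcdef0
```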

2. VPC and Transit Gateway routing configuration

Once Transit Gateway peering is in place, we must add static routes on both Transit Gateways to make sure that the corresponding VPCs are reachable. We must also add these routes to all of the necessary VPC route tables.

From the enterprise AWS account:

  • All the VPCs that must communicate with either the monolith or microservices accounts must be attached to the Transit Gateway. For details on how to do this, refer to the documentation.

Figure 11. Screenshot creating the transit gateway attachments for the VPCs.

  • Add static routes to the Transit Gateway route table for both the monolith and microservices VPC CIDR blocks to be routed via the Transit Gateway peering attachment. To see how to create a static route in a Transit Gateway route table, check the section “Create a static route” in our documentation – https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html

Figure 12. Creating the static route in the Transit Gateway route table.

  • Add routes to the VPC route tables so that requests to the monolith and microservices VPC CIDR blocks are forwarded via the Transit Gateway. Depending on the setup of the VPCs, we may repeat this process for all the route tables associated with different subnets in the VPC.

Figure 13. VPC Route table configuration for the Enterprise VPC.
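Putting the enterprise-side routing together, the two route types can be sketched as follows. All IDs and the 10.1.0.0/16 CIDR block are placeholders for your own values:

```shell
# In the enterprise Transit Gateway route table: send traffic for the
# monolith/microservices CIDR blocks down the peering attachment.
aws ec2 create-transit-gateway-route \
  --transit-gateway-route-table-id tgw-rtb-0enterprise00000 \
  --destination-cidr-block 10.1.0.0/16 \
  --transit-gateway-attachment-id tgw-attach-0peering000000

# In each enterprise VPC route table: forward the same CIDR blocks
# to the enterprise Transit Gateway.
aws ec2 create-route \
  --route-table-id rtb-0enterprisevpc000 \
  --destination-cidr-block 10.1.0.0/16 \
  --transit-gateway-id tgw-0enterprise00000
```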

From the Refactor Spaces AWS account:

  • For all the VPC CIDR blocks that both the monolith and microservices VPCs must communicate with, add static routes for those CIDR blocks to the Transit Gateway route tables, forwarding the requests via the Transit Gateway peering attachment.

Figure 14. Adding the static routes to the transit gateway route table to direct traffic to the peering attachment.

From the monolith and microservices accounts:

  • For all VPC CIDR blocks that both the monolith and microservices VPCs must communicate with, add routes to the VPC route tables forwarding the requests to the Transit Gateway in the Refactor Spaces account. Alternatively, you can use a default route and let the Transit Gateway handle the routing. Note: Refactor Spaces has already managed the resource sharing of the Transit Gateway, so you do not need to configure anything else.

Figure 15. Add a default route to the Refactor Spaces Transit gateway for the monolith/microservices VPCs.
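The Refactor Spaces-side configuration can be sketched similarly. Again, all IDs and CIDR blocks are placeholders:

```shell
# In the Refactor Spaces account: send enterprise-bound traffic
# (10.2.0.0/16 here) down the peering attachment.
aws ec2 create-transit-gateway-route \
  --transit-gateway-route-table-id tgw-rtb-0refactorspaces00 \
  --destination-cidr-block 10.2.0.0/16 \
  --transit-gateway-attachment-id tgw-attach-0peering000000

# In the monolith/microservices VPC route tables: a default route to
# the shared Transit Gateway lets it handle all cross-VPC routing.
aws ec2 create-route \
  --route-table-id rtb-0monolithvpc00000 \
  --destination-cidr-block 0.0.0.0/0 \
  --transit-gateway-id tgw-0refactorspaces00
```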

3. Private API access through VPC endpoint

To reach the private API deployment from all accounts that want to use the new microservices, we must create an interface VPC endpoint. This lets the API Gateway endpoints be invoked. For this, we must verify that the VPC security groups allow inbound access on port 443 from the VPC CIDR. In addition, we must make sure that the API Gateway resource policy allows the other accounts to invoke the endpoints.
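Creating the interface VPC endpoint can be sketched as follows. The VPC, subnet, and security group IDs, and the Region in the service name, are placeholders:

```shell
# Create an interface VPC endpoint for API Gateway (execute-api) in the
# consuming VPC. The security group must allow inbound 443 from the VPC CIDR.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0enterprise000000 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.eu-west-2.execute-api \
  --subnet-ids subnet-0aaa00000000000 subnet-0bbb00000000000 \
  --security-group-ids sg-0allow443inbound00 \
  --private-dns-enabled
```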

Note:  if your workloads are located in the same VPC as the interface VPC endpoint you are going to use for your private API access, you must configure Private DNS in your endpoints. When enabled, the setting creates an AWS managed Route 53 private hosted zone (PHZ).  This PHZ enables the resolution of the public AWS service endpoint to the private IP of the interface endpoint.

The managed PHZ only works within the VPC where the endpoint is located. If those endpoints must be accessed from many VPCs (for example, from a central shared services VPC), you must enable DNS resolution in a ‘Hub-and-Spoke’ architecture:

  1. Turn off private DNS on the interface endpoint
  2. Manually create a Route 53 PHZ
  3. Add an alias record with the full AWS service endpoint name pointing to the interface endpoint.

The full steps for how to build this solution can be found in the PHZ documentation.
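As a hedged sketch of those manual steps, the PHZ and alias record might be created as follows. The hosted zone IDs, API ID, Region, and the endpoint DNS name (which you would look up with `aws ec2 describe-vpc-endpoints`) are all placeholders:

```shell
# Manually create a PHZ matching the execute-api service domain,
# associated with the shared services VPC.
aws route53 create-hosted-zone \
  --name execute-api.eu-west-2.amazonaws.com \
  --vpc VPCRegion=eu-west-2,VPCId=vpc-0shared000000000 \
  --caller-reference phz-refactor-spaces-example

# Add an alias record for the full API endpoint name pointing at the
# interface endpoint's DNS name and hosted zone.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0PHZEXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "a1b2c3d4e5.execute-api.eu-west-2.amazonaws.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z0ENDPOINTZONE",
          "DNSName": "vpce-0123456789abcdef0-abcd1234.execute-api.eu-west-2.vpce.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```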

Figure 16. The full architecture diagram showing networking routing between all accounts.

For more information regarding private API operations, see the best practices whitepaper.

Validating network access

To validate bidirectional access between the Monolith or Microservices accounts and the enterprise account, test the reachability of endpoints on both ends of the network:

  • Test that the monolith/microservices are accessible from the enterprise network. Try making requests to the API Gateway endpoint and validate that the expected response is received. If the request fails to resolve the hostname, make sure the VPC endpoint DNS configuration is set up properly (as described earlier). If the request times out, make sure the traffic is not restricted by security groups or NACLs, and that the Transit Gateway route tables and VPC route tables are configured properly (as described earlier). Also, try pinging the IP addresses directly instead of using the API Gateway URL to validate the access.
  • Test that the enterprise applications are accessible from the monolith/microservices.  Ping an instance in the enterprise network from either the monolith or the microservice VPCs. Again, if it fails, make sure to check all the routing has been implemented as described earlier. Use VPC Reachability Analyzer to help troubleshoot any issues inside the VPC.  Use AWS Network Manager Route Analyzer to check the reachability between Transit Gateways.
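These validation steps can be sketched with a few commands run from instances on each side of the network (the API ID, Region, stage, and IP address are placeholders):

```shell
# From an instance in the enterprise network: check DNS resolution of
# the private API, then invoke it through the interface endpoint.
nslookup a1b2c3d4e5.execute-api.eu-west-2.amazonaws.com
curl -sv https://a1b2c3d4e5.execute-api.eu-west-2.amazonaws.com/prod/

# From an instance in the monolith or microservices VPCs: check
# reachability of a host in the enterprise network.
ping -c 3 10.2.0.10
```

A DNS failure points at the endpoint/PHZ configuration; a timeout points at security groups, NACLs, or the Transit Gateway and VPC route tables.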

Conclusion

In this blog, we explained how you can implement Transit Gateway peering between an existing Transit Gateway in an enterprise environment, and one provisioned by Refactor Spaces. This lets customers incrementally refactor enterprise applications while they communicate with other enterprise applications.

To learn more about Refactor Spaces, try the workshop. Alternatively, speak to your AWS account team for guidance on how to implement this in your own environment.

About the authors:

Semih Duru

Semih is a Senior Migration & Modernization Specialist Solutions Architect based in London. Semih helps AWS customers across all industries transform their businesses by solving complex technical problems and taking advantage of cloud-native technologies to achieve their business goals. Semih is passionate about application modernization and a big fan of serverless technologies.

Jake Walker

Jake is an Associate Solutions Architect based in London with a software development background, and joined AWS on a graduate scheme in September 2020. He specializes in migrations and modernization and enjoys helping customers modernize to take advantage of cloud native technologies.

Pablo Sánchez Carmona

Pablo is a Networking Solutions Architect at AWS, where he helps customers design secure, resilient, and cost-effective networks. When not talking about networking, Pablo can be found playing basketball or video games. He holds an MSc in Electrical Engineering from the Royal Institute of Technology (KTH), and a Master’s degree in Telecommunications Engineering from the Polytechnic University of Catalonia (UPC).