Amazon AWS Official Blog

Use Cloud Foundations to plan and design a multi-regional hub-spoke network topology on the cloud and one-click deploy separated or combined east-west and south-north traffic inspection

The Chinese version [9] of this blog post was originally published on November 2, 2023. We updated the network definitions based on the latest specifications when translating and republishing it in English.

Multi-regional network structure planning and design on the cloud is an essential part of building an effective cloud operating environment. There are several major difficulties with this work.

First, there is the high degree of customization. Unlike other basic infrastructure services, network planning is closely tied to business requirements, and every user's actual situation differs: from the connectivity relationships of Amazon Transit Gateways (hereinafter referred to as TGW) across regions to the CIDR capacities of Amazon VPCs (hereinafter referred to as VPC) and their subnets, the possibilities are endless. By contrast, the configuration of foundational services such as AWS CloudTrail varies little between customers.

Second, there are multiple regions and accounts. Modern cloud operating environments are generally built on multi-account organizations, and slightly more complex architectures span multiple availability zones (hereinafter referred to as AZ) in multiple Amazon Web Services (hereinafter referred to as AWS) regions at the same time. Network resources on the cloud must therefore be interconnected and continuously deployed under this "three multiples" context. Our blog post published in February 2023 [1] covered how to build a network in a single-region environment; this article introduces the enhanced automated network provisioning features for multi-regional environments.

Third, there is traffic inspection. Among all aspects of network security, traffic inspection requires holistic consideration and must be factored in early in network planning. AWS provides services such as AWS Network Firewall (hereinafter referred to as NFW) and Gateway Load Balancer (hereinafter referred to as GWLB) to help configure inspection. Configuring traffic inspection by availability zone involves many network resources, and manual configuration is time-consuming, laborious, and error-prone. This article focuses on enhancements to automatic traffic-inspection configuration.

Fourth, there is automated deployment. Once network planning and design are complete, deploying the various network resources across multiple regions quickly and in an orderly manner is another major challenge. In multi-regional environments in particular, cross-regional TGW peering connections must be established and accepted in sequence. Complicated tasks, such as sending the flow logs of subnets, VPCs, and TGWs in the Network Account to the corresponding prefixes of the Amazon S3 network-logs bucket in the Log Account, encrypted with the Amazon KMS network customer managed key (hereinafter referred to as CMK) in the Security Account, are all configured properly without you having to worry about them.

Cloud Foundations provides VPC-sharing and TGW-sharing network connectivity products to meet your customized network building needs. Infrastructure as code and automated DevOps are the core strengths of Cloud Foundations. Recently, we have optimized these products and increased their customization flexibility, further easing the four major difficulties above to help you efficiently create a secure, stable, and scalable cloud network environment.

New features of network products

To meet customers' broader, deeper, and more flexible network planning and design needs, and to lighten and simplify the network building process, Cloud Foundations has recently optimized and enhanced the following functions, focusing on the four major pain points above:

| Feature | Previously | Recent update |
| --- | --- | --- |
| Number of private subnets | 3 | Unlimited |
| Multi-regional deployment | Unsupported | Supported |
| TGW peering connections | Unsupported | Supported |
| TGW route table routes | Supported, predefined | Supported, customizable |
| Centralized Internet egress management | Supported, no inspection | Supported, with inspection |
| Traffic inspection routes | Unsupported | Supported, separated or combined east-west and south-north inspection |
| Traffic inspection appliances | Unsupported | Supported, NFW or GWLB |
| Custom security groups | Unsupported | Supported |
| Custom network access control lists (ACLs) | Unsupported | Supported |

The following sections explain the new functions one by one, following common network structure definition conventions. Among them, the custom security group and network ACL functions are provided by Cloud Foundations' Product Factory; they complement the network products to help build a cloud network environment that closely matches your actual business needs. (For an overview of Product Factory and its products, see our blog post published in September 2023 [2].)

Hub-spoke topology

The hub-spoke topology [3], as the name suggests, is a star-shaped structure radiating from a center: every spoke node connects only to the hub node. The hub-spoke structure described in this article spans two dimensions:

  1. Between VPCs connected through a TGW: one VPC is the hub VPC and the others are spoke VPCs;
  2. Between regions connected through TGW peering connections: the main region is the hub region and the others are spoke regions.
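The scaling advantage behind this preference is easy to quantify: a full mesh of n nodes needs n(n-1)/2 links, while a hub-spoke structure needs only n-1. A quick illustrative calculation (the function names are ours):

```python
def full_mesh_links(n: int) -> int:
    """Connections needed to link every pair of n nodes directly."""
    return n * (n - 1) // 2

def hub_spoke_links(n: int) -> int:
    """Connections needed when every spoke links only to the hub."""
    return n - 1

for n in (4, 10, 50):
    print(f"{n} nodes: mesh={full_mesh_links(n)}, hub-spoke={hub_spoke_links(n)}")
```

At 10 VPCs the mesh already needs 45 connections versus 9 TGW attachments, which is why hub-spoke dominates at scale.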

Optimized network structure definition

Based on customer feedback and actual project deliveries, we have optimized and adjusted the network structure definition to make networks easier to define and to effectively support the new features, including:

  1. Added property: tgw.enabled to support the case where only VPCs are created;
  2. Added properties: nfw.enabled to support configuring AWS Network Firewall, and gwlb.enabled to support configuring Gateway Load Balancer;
  3. Added property: peers to support configuring VPC peering connections;
  4. Renamed property: amazon_side_asn to asn, with a default value of 64512;
  5. Renamed property: create_igw to enable_igw, to use the enable keyword uniformly;
  6. Changed property content: tgw.cidr is now the common network CIDR covering all spoke VPCs; it must not be the open CIDR 0.0.0.0/0;
  7. Changed property types: vpcs and tgw.tables are changed from arrays to maps, making it easy to reorder items without destroying and re-creating resources;
  8. Deleted properties: endpoint_vpc_index and nat_vpc_index. With the VPC array changed to a map, there is no need to locate VPCs by index;
  9. Changed property restrictions: the length of the first dimension of the subnets property is no longer limited, so any number of private subnets is supported. The first entry defines the intra subnets that hold the TGW's elastic network interfaces, the second the public subnets, and the third onward the private subnets;
  10. Added property: routes to support flexible definition of TGW route table routes.
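The subnets notation used throughout the examples in this post can be expanded mechanically. A minimal sketch in Python, assuming each [new_bits, net_num] pair follows the cidrsubnet-style convention (our assumption, though it is consistent with the CIDRs in the examples below): new_bits extends the VPC prefix and net_num selects which subnet of that size to take.

```python
import ipaddress

def cidr_subnet(prefix: str, new_bits: int, net_num: int) -> str:
    """Carve subnet number net_num, of size prefix length + new_bits, out of
    prefix (cidrsubnet-style; the convention is our assumption)."""
    net = ipaddress.ip_network(prefix)
    new_len = net.prefixlen + new_bits
    step = 1 << (net.max_prefixlen - new_len)
    base = int(net.network_address) + net_num * step
    return f"{ipaddress.ip_address(base)}/{new_len}"

# Expand a hub VPC definition: intra, public, then private subnet tiers
subnets = [[[12, 0], [12, 1]], [[8, 1], [8, 2]], [[8, 3], [8, 4]]]
tiers = ["intra", "public"] + [f"private-{i}" for i in range(1, len(subnets) - 1)]
for tier, specs in zip(tiers, subnets):
    for az, (bits, num) in enumerate(specs):
        print(tier, f"az-{az}:", cidr_subnet("192.168.0.0/16", bits, num))
```

Under this reading, a hub VPC of 192.168.0.0/16 with intra subnets [[12, 0], [12, 1]] gets two /28s (192.168.0.0/28 and 192.168.0.16/28), leaving most of the space for the larger public and private tiers.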

If we borrow set-inclusion notation, the relationship between a spoke VPC segment, the common network segment, and the open network segment can be expressed as: spoke VPC segment ⊆ common network segment ⊂ open network segment; that is, the common network segment must not equal the open network segment. The routes property of a TGW route table is a mapping, and the ranges of its keys and values are:

| Key | Key meaning | Value | Value meaning |
| --- | --- | --- | --- |
| CIDR | Network segment (CIDR) | blackhole | Blackhole route |
| * | Open network segment 0.0.0.0/0 | peer | TGW peering attachment |
| VPC name | Corresponding VPC CIDR | VPC name | Corresponding VPC attachment |
| tgw | Common network segment of this region | | |
| main | Common network segment of the main region | | |
| Region name | Common network segment of that region | | |
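Resolving these keys requires context: the VPC map, the common segments of this and other regions. A hypothetical resolver sketch (the function and its parameter names are ours; the real product resolves this context internally):

```python
import ipaddress

def resolve_route_key(key, vpcs, tgw_cidr, region_cidrs, main_cidr):
    """Map a TGW route-table key to the network segment it denotes,
    following the key column of the table above. Illustrative only."""
    if key == "*":
        return "0.0.0.0/0"                    # open network segment
    if key == "tgw":
        return tgw_cidr                       # common segment of this region
    if key == "main":
        return main_cidr                      # common segment of the main region
    if key in vpcs:
        return vpcs[key]["cidr"]              # the named VPC's CIDR
    if key in region_cidrs:
        return region_cidrs[key]              # common segment of that region
    return str(ipaddress.ip_network(key))     # otherwise a literal CIDR

vpcs = {"hub": {"cidr": "192.168.0.0/16"}, "dev": {"cidr": "10.0.0.0/16"}}
regions = {"us-east-1": "10.1.0.0/16"}
for key in ("*", "tgw", "dev", "us-east-1"):
    print(key, "->", resolve_route_key(key, vpcs, "10.0.0.0/8", regions, "10.0.0.0/16"))
```

This mirrors how a route such as { "*": "hub", "tgw": "blackhole" } in the examples below expands to concrete destination CIDRs.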

In addition to the above property changes, another major update enhances the hub VPC's ability to inspect network traffic. By turning on the appropriate network appliances, different combinations of centralized Internet egress management and centralized inspection of south-north and east-west traffic are achieved:

| No. | Scenario | enable_nat | enable_igw | nfw.enabled | gwlb.enabled |
| --- | --- | --- | --- | --- | --- |
| 1 | Centralized Internet egress | true | true | false | false |
| 2 | NFW east-west inspection | false | false | true | false |
| 3 | NFW south-north inspection | true | true | true | false |
| 4 | GWLB east-west inspection | false | false | false | true |
| 5 | GWLB south-north inspection | true | true | false | true |

Scenario 6 covers separated east-west and south-north traffic inspection. Here, in addition to the hub VPC that inspects south-north traffic, a spoke VPC with either inspection appliance enabled must be added to inspect east-west traffic. The two inspection appliances can be the same or different. Below, we use several common network structure examples to explain the usage of, and precautions for, the optimized network definitions and new features. By default, Cloud Foundations enables appliance mode on the TGW to ensure that inbound and outbound traffic is routed through symmetric paths.
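The switch combinations in the table above can be summarized as a small selector. A hypothetical sketch (the scenario function and its return strings are ours, not part of the product):

```python
def scenario(enable_nat: bool, enable_igw: bool, nfw: bool, gwlb: bool) -> str:
    """Name the inspection scenario selected by the four hub VPC switches
    (scenarios 1-5 above). Illustrative helper only."""
    assert not (nfw and gwlb), "nfw.enabled and gwlb.enabled cannot both be true"
    egress = enable_nat and enable_igw
    if not (nfw or gwlb):
        return "centralized Internet egress" if egress else "no egress, no inspection"
    appliance = "NFW" if nfw else "GWLB"
    direction = "south-north" if egress else "east-west"
    return f"{appliance} {direction} inspection"

print(scenario(True, True, True, False))   # NFW south-north inspection
```

Note that egress (NAT plus IGW) is what distinguishes south-north from east-west inspection for a given appliance.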

VPC peer connection

There are two forms of interconnection between VPCs: peering connections and connection via a TGW. A major advantage of peering is that it is free, so it is recommended when the network structure is simple and the number of connections is limited, especially when cost is a priority. The following figure shows four VPCs interconnected as a full mesh.

An example network definition is shown below. Either side can initiate a peering connection. Configure public and private subnets as needed; intra subnets are not required.

{
  "vpcs": {
    "vpc-1": {
      "cidr": "10.1.0.0/16",
      "subnets": [[], [], [[4, 1], [4, 2]]]
    },
    "vpc-2": {
      "cidr": "10.2.0.0/16",
      "peers": ["vpc-1"],
      "subnets": [[], [], [[4, 1], [4, 2]]]
    },
    "vpc-3": {
      "cidr": "10.3.0.0/16",
      "peers": ["vpc-1", "vpc-2"],
      "subnets": [[], [], [[4, 1], [4, 2]]]
    },
    "vpc-4": {
      "cidr": "10.4.0.0/16",
      "peers": ["vpc-1", "vpc-2", "vpc-3"],
      "subnets": [[], [], [[4, 1], [4, 2]]]
    }
  }
}

The definition contains 4 VPCs: vpc-1, vpc-2, vpc-3, and vpc-4.
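Note that each peering connection appears exactly once in the definition: only the later VPC lists the earlier one in its peers array. A quick check that this yields a full mesh of four VPCs:

```python
# The peers lists from the definition above (other properties omitted)
vpcs = {
    "vpc-1": {"peers": []},
    "vpc-2": {"peers": ["vpc-1"]},
    "vpc-3": {"peers": ["vpc-1", "vpc-2"]},
    "vpc-4": {"peers": ["vpc-1", "vpc-2", "vpc-3"]},
}
pairs = [(peer, name) for name, cfg in vpcs.items() for peer in cfg["peers"]]
n = len(vpcs)
print(len(pairs), "connections; a full mesh needs", n * (n - 1) // 2)
```

For four VPCs both counts are 6, confirming that the triangular peers pattern enumerates every pair without duplicates.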

Centralized Internet egress management

Corresponds to scenario 1. Previously, Cloud Foundations' network module already supported centralized Internet egress management, but some TGW routes were hard-coded. The new routes property supports static route configuration in TGW route tables, which is more flexible and convenient. Additional information about this scenario can be found in the official whitepaper [4] and a blog post from October 2019 [5]. Below, we use "common traffic" for traffic bound for the common network segment (the shared CIDR of all spoke VPCs under a TGW) and "full traffic" for traffic bound for the open network segment (0.0.0.0/0).
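The distinction between common and full traffic is ordinary longest-prefix routing: in any route table containing both, the more specific common segment wins. A minimal sketch (pick_route is an illustrative helper, not part of the product):

```python
import ipaddress

def pick_route(dst: str, routes: dict) -> str:
    """Return the target of the most specific route matching dst (longest
    prefix wins), as a subnet route table would."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(cidr) for cidr in routes
         if addr in ipaddress.ip_network(cidr)),
        key=lambda net: net.prefixlen,
    )
    return routes[str(best)]

# A hub VPC public subnet: common traffic to the TGW, full traffic to the IGW
routes = {"10.0.0.0/8": "tgw", "0.0.0.0/0": "igw"}
print(pick_route("10.1.2.3", routes))  # tgw -- common traffic
print(pick_route("8.8.8.8", routes))   # igw -- full traffic
```

Because common ⊂ open, a destination inside the common segment always takes the TGW route, and everything else falls through to the default route.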

Inbound and outbound traffic routes:

  1. Outbound full traffic from a spoke VPC private subnet is routed to the TGW by the subnet's route table;
  2. The traffic enters the pre-inspection TGW route table, which the spoke VPC attachment is associated with, and is routed to the hub VPC attachment;
  3. It is routed to the NAT gateway in the same AZ by the hub VPC's intra subnet route tables, and leaves the network via the Internet gateway (IGW) in the public subnet;
  4. Inbound traffic is routed by the IGW to the NAT gateway in the same AZ;
  5. The hub VPC's public subnet route tables route common traffic to the TGW;
  6. The traffic enters the post-inspection TGW route table, which the hub VPC attachment is associated with, and is routed to the destination spoke VPC attachment;
  7. Finally, it is routed to its destination by the local routes in the spoke VPC's subnet route tables.

The main configurations are:

  1. The hub VPC has intra and public subnets, and private subnets are optional;
  2. The hub VPC’s public subnets place NAT gateways by AZ;
  3. The hub VPC’s public subnet route tables route common traffic to the TGW by AZ, and route full traffic to the IGW;
  4. The hub VPC’s intra and private subnet route tables route full traffic to NAT gateways by AZ;
  5. Spoke VPCs do not have public subnets, and intra and private subnet route tables route full traffic to the TGW;
  6. TGW pre-inspection route table pre:
    • a) All spoke VPC attachments are associated with this table;
    • b) The hub VPC attachment is not propagated to this table;
    • c) Add a route, routing full traffic to the hub VPC attachment;
    • d) Add a blackhole route for common traffic to block direct communication between the spoke VPCs;
  7. TGW post-inspection route table post:
    • a) The hub VPC attachment is associated with this table;
    • b) All spoke VPC attachments are propagated to this table;
    • c) Manually configure and route other traffic to peering connections, AWS Direct Connect (DX) connections, or virtual private network attachments;
{
  "vpcs": {
    "hub": {
      "is_hub": true,
      "cidr": "192.168.0.0/16",
      "enable_igw": true, "enable_nat": true,
      "subnets": [[[12, 0], [12, 1]], [[8, 1], [8, 2]]]
    },
    "dev": {
      "cidr": "10.0.0.0/16",
      "subnets": [[[12, 0], [12, 1]], [], [[8, 3], [8, 4]]]
    },
    "prod": {
      "cidr": "10.1.0.0/16",
      "subnets": [[[12, 0], [12, 1]], [], [[8, 3], [8, 4]]]
    }
  },
  "tgw": {
    "enabled": true,
    "cidr": "10.0.0.0/8",
    "tables": {
      "pre": {
        "associations": ["dev", "prod"],
        "routes": { "*": "hub", "tgw": "blackhole" }
      },
      "post": { "associations": ["hub"], "propagations": ["dev", "prod"] }
    }
  }
}

The definition contains 3 VPCs (hub, dev, and prod) and 2 TGW route tables (pre and post).

South-north traffic inspection

Corresponds to scenarios 3 and 5. Building on scenario 1, "Centralized Internet egress management", you can configure private subnets in the hub VPC and enable either inspection appliance to inspect the south-north traffic inbound from and outbound to the Internet.

  • nfw.enabled: configures AWS Network Firewall, encrypted with the network CMK in the Security Account; logs are stored in the network bucket in the Log Account and in a log group in the Network Account. Route tables reference the NFW endpoints by AZ.
  • gwlb.enabled: configures a Gateway Load Balancer for third-party firewall appliances. We create the VPC endpoint service and the per-AZ VPC endpoints referenced in route tables; you add listeners yourself.

These endpoints are referred to below as "appliance endpoints". The two switches cannot be turned on at the same time. The official whitepaper [4] describes configuring NFW to inspect outbound and inbound traffic, and configuring GWLB to inspect outbound and inbound traffic as well; a blog post from December 2020 [6] also analyzes these cases thoroughly. In fact, in addition to south-north traffic, the following configuration also inspects east-west traffic at the hub VPC. A later section discusses scenario 6, which separates east-west and south-north inspection.

South-north traffic direction:

  1. Outbound full traffic from a spoke VPC private subnet is routed to the TGW by the subnet's route table;
  2. The traffic enters the pre-inspection TGW route table, which the spoke VPC attachment is associated with, and is routed to the hub VPC attachment;
  3. It is routed to the appliance endpoint in the same AZ by the hub VPC's intra subnet route tables;
  4. After inspection, it is routed to the NAT gateway in the same AZ by the private subnet route tables, and leaves the network via the IGW in the public subnet;
  5. Inbound traffic is routed by the IGW to the NAT gateway in the same AZ;
  6. The hub VPC's public subnet route tables route common traffic to the appliance endpoint in the same AZ;
  7. After inspection, the private subnet route tables route common traffic to the TGW;
  8. The traffic enters the post-inspection TGW route table, which the hub VPC attachment is associated with, and is routed to the destination spoke VPC attachment;
  9. Finally, it is routed to its destination by the local routes in the spoke VPC's subnet route tables.

The main configurations are:

  1. The hub VPC has intra, public and private subnets;
  2. The hub VPC’s first private subnets place appliance endpoints by AZ;
  3. The hub VPC’s intra subnet route tables route full traffic to appliance endpoints by AZ;
  4. The hub VPC’s private subnet route tables route full traffic to NAT gateways by AZ, and route common traffic to the TGW;
  5. The hub VPC’s public subnet route tables route common traffic to appliance endpoints by AZ, and route full traffic to the IGW;
  6. The hub VPC’s IGW route table routes traffic towards the intra subnet to appliance endpoints by AZ;
  7. Spoke VPCs do not have public subnets, and their intra and private subnet route tables route full traffic to the TGW;
  8. TGW pre-inspection route table pre:
    • a) All spoke VPC attachments are associated with this table;
    • b) The hub VPC attachment is not propagated to this table;
    • c) Add a route, routing full traffic to the hub VPC attachment;
  9. TGW post-inspection route table post:
    • a) The hub VPC attachment is associated with this table;
    • b) All spoke VPC attachments are propagated to this table;
    • c) Manually configure and route other traffic to peer connections, DX connections, or virtual private network attachments;
{
  "vpcs": {
    "hub": {
      "is_hub": true,
      "cidr": "192.168.0.0/16",
      "gwlb": { "enabled": true },
      "enable_igw": true, "enable_nat": true,
      "subnets": [[[12, 0], [12, 1]], [[8, 1], [8, 2]], [[8, 3], [8, 4]]]
    },
    "dev": {
      "cidr": "10.0.0.0/16",
      "subnets": [[[12, 0], [12, 1]], [], [[8, 3], [8, 4]]]
    },
    "prod": {
      "cidr": "10.1.0.0/16",
      "subnets": [[[12, 0], [12, 1]], [], [[8, 3], [8, 4]]]
    }
  },
  "tgw": {
    "enabled": true,
    "cidr": "10.0.0.0/8",
    "tables": {
      "pre": { "associations": ["dev", "prod"], "routes": { "*": "hub" } },
      "post": { "associations": ["hub"], "propagations": ["dev", "prod"] }
    }
  }
}

The definition contains 3 VPCs (hub, dev, and prod) and 2 TGW route tables (pre and post).

East-west traffic inspection

Corresponds to scenarios 2 and 4. Building on "South-north traffic inspection", you can switch to inspecting east-west traffic within the cloud by turning off the NAT gateways and IGW in the hub VPC. A blog post from November 2020 [7] discusses east-west traffic inspection in depth.

East-west traffic direction:

  1. Outbound full traffic from a spoke VPC private subnet is routed to the TGW by the subnet's route table;
  2. The traffic enters the pre-inspection TGW route table, which the spoke VPC attachment is associated with, and is routed to the hub VPC attachment;
  3. It is routed to the appliance endpoint in the same AZ by the hub VPC's intra subnet route tables;
  4. After inspection, the private subnet route tables route full traffic to the TGW;
  5. The traffic enters the post-inspection TGW route table, which the hub VPC attachment is associated with, and is routed to the destination spoke VPC attachment;
  6. Finally, it is routed to its destination by the local routes in the spoke VPC's subnet route tables.

The main configurations are:

  1. The hub VPC has intra and private subnets;
  2. The hub VPC’s first private subnets place appliance endpoints by AZ;
  3. The hub VPC’s intra subnet route tables route full traffic to appliance endpoints by AZ;
  4. The hub VPC’s private subnet route tables route common traffic to the TGW;
  5. Spoke VPCs do not have public subnets, and their intra and private subnet route tables route full traffic to the TGW;
  6. TGW pre-inspection route table pre:
    • a) All spoke VPC attachments are associated with this table;
    • b) The hub VPC attachment is not propagated to this table;
    • c) Add a route, routing full traffic to the hub VPC attachment;
  7. TGW post-inspection route table post:
    • a) The hub VPC attachment is associated with this table;
    • b) All spoke VPC attachments are propagated to this table;
    • c) Manually configure and route other traffic to peer connections, DX connections, or virtual private network attachments;
{
  "vpcs": {
    "hub": {
      "is_hub": true,
      "cidr": "192.168.0.0/16",
      "gwlb": { "enabled": true },
      "subnets": [[[12, 0], [12, 1]], [], [[8, 3], [8, 4]]]
    },
    "dev": {
      "cidr": "10.0.0.0/16",
      "subnets": [[[12, 0], [12, 1]], [], [[8, 3], [8, 4]]]
    },
    "prod": {
      "cidr": "10.1.0.0/16",
      "subnets": [[[12, 0], [12, 1]], [], [[8, 3], [8, 4]]]
    }
  },
  "tgw": {
    "enabled": true,
    "cidr": "10.0.0.0/8",
    "tables": {
      "pre": { "associations": ["dev", "prod"], "routes": { "*": "hub" } },
      "post": { "associations": ["hub"], "propagations": ["dev", "prod"] }
    }
  }
}

The definition contains 3 VPCs (hub, dev, and prod) and 2 TGW route tables (pre and post).

Separate east-west and south-north traffic inspection

Corresponds to scenario 6. Building on "South-north traffic inspection", a separate inspection VPC is set up specifically for east-west traffic, while the hub VPC is dedicated to south-north traffic and Internet egress; an additional ingress spoke VPC accepts inbound traffic. Two inspection appliances can then be configured, producing four combinations:

| No. | East-west inspection appliance | South-north inspection appliance |
| --- | --- | --- |
| 1 | NFW | NFW |
| 2 | NFW | GWLB |
| 3 | GWLB | NFW |
| 4 | GWLB | GWLB |

Another blog post in November 2020 [8] discussed a similar dual-network appliance architecture at the end of the article.

In this scenario, east-west traffic is inspected by appliances in the inspection spoke VPC, while south-north traffic is inspected by appliances in the egress hub VPC. The traffic direction is similar to the “south-north traffic direction” and “east-west traffic direction” described above, so we won’t go into detail here.

The egress hub VPC’s main configurations are:

  1. The hub VPC has intra, public and private subnets;
  2. The hub VPC’s first private subnets place appliance endpoints by AZ;
  3. The hub VPC’s intra subnet route tables route full traffic to appliance endpoints by AZ;
  4. The hub VPC’s private subnet route tables route full traffic to NAT gateways by AZ, and route common traffic to the TGW;
  5. The hub VPC’s public subnet route tables route common traffic to appliance endpoints by AZ, and route full traffic to the IGW;
  6. The hub VPC’s IGW route table routes traffic towards the intra subnet to appliance endpoints by AZ;
  7. Spoke VPCs other than the ingress one do not have public subnets, and their intra and private subnet route tables route full traffic to the TGW;
  8. TGW pre-inspection route table pre:
    • a) All spoke VPC attachments except for the inspection spoke VPC are associated with this table;
    • b) The egress hub VPC attachment is not propagated to this table;
    • c) The inspection spoke VPC attachment is not propagated to this table;
    • d) Add a route, routing full traffic to the egress hub VPC attachment;
    • e) Add a route, routing common traffic to the inspection spoke VPC attachment;
  9. TGW post-inspection route table post:
    • a) The egress hub VPC attachment is associated with this table;
    • b) The inspection spoke VPC attachment is associated with this table;
    • c) Other spoke VPC attachments are propagated to this table;
    • d) Manually configure and route other traffic to peer connections, DX connections, or virtual private network attachments;

The inspection spoke VPC’s main configurations are:

  1. The inspection spoke VPC has intra and private subnets;
  2. The inspection spoke VPC’s first private subnets place appliance endpoints by AZ;
  3. The inspection spoke VPC’s intra subnet route tables route full traffic to appliance endpoints by AZ;
  4. The inspection spoke VPC’s private subnet route tables route full traffic to the TGW;

The ingress spoke VPC’s main configurations are:

  1. The ingress spoke VPC has intra and public subnets;
  2. The ingress spoke VPC’s intra subnet route tables route full traffic to the TGW;
  3. The ingress spoke VPC’s public subnet route tables route common traffic to the TGW, and route full traffic to the IGW;
{
  "vpcs": {
    "inspect": {
      "cidr": "192.168.0.0/16",
      "nfw": { "enabled": true },
      "subnets": [[[12, 0], [12, 1]], [], [[8, 3], [8, 4]]]
    },
    "hub": {
      "is_hub": true,
      "cidr": "10.0.0.0/16",
      "nfw": { "enabled": true },
      "enable_igw": true, "enable_nat": true,
      "subnets": [[[12, 0], [12, 1]], [[8, 1], [8, 2]], [[8, 3], [8, 4]]]
    },
    "ingress": {
      "cidr": "10.1.0.0/16",
      "enable_igw": true,
      "subnets": [[[12, 0], [12, 1]], [[8, 1], [8, 2]]]
    },
    "dev": {
      "cidr": "10.2.0.0/16",
      "subnets": [[[12, 0], [12, 1]], [], [[8, 3], [8, 4]]]
    },
    "prod": {
      "cidr": "10.3.0.0/16",
      "subnets": [[[12, 0], [12, 1]], [], [[8, 3], [8, 4]]]
    }
  },
  "tgw": {
    "enabled": true,
    "cidr": "10.0.0.0/8",
    "tables": {
      "pre": {
        "associations": ["ingress", "dev", "prod"],
        "routes": { "*": "hub", "tgw": "inspect" }
      },
      "post": {
        "associations": ["inspect", "hub"],
        "propagations": ["ingress", "dev", "prod"]
      }
    }
  }
}

The definition contains 5 VPCs (inspect, hub, ingress, dev, and prod) and 2 TGW route tables (pre and post).

Multi-regional TGW peer connections

We have enhanced Cloud Foundations' network components to support multi-regional deployment and to establish cross-regional connectivity through TGW peering connections. Using the new routes property of TGW route tables described above, you can establish a cross-regional hub-spoke topology with the main region's TGW as the hub and the governed regions' TGWs as the spokes. When designing a multi-regional network structure, plan the CIDRs of regions, VPCs, and subnets globally, reserving capacity according to your business and workloads' demand for IP addresses. Design the interconnection relationships at two levels: among VPCs within a region, and among TGWs across AWS regions globally.
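As an illustration of such global planning, the multi-regional example below gives each region a /16 out of 10.0.0.0/8 and each VPC a /20 within it (the inspection VPCs deliberately sit outside the common segment and are planned separately). A simplified planner sketch (the plan helper and its layout are ours):

```python
import ipaddress

def plan(regions, vpc_names):
    """Assign each region the i-th /16 of 10.0.0.0/8 and each VPC the j-th /20
    within that /16. Simplified illustrative planner."""
    supernet = ipaddress.ip_network("10.0.0.0/8")
    region_nets = supernet.subnets(new_prefix=16)
    out = {}
    for region, region_net in zip(regions, region_nets):
        vpc_nets = region_net.subnets(new_prefix=20)
        out[region] = {
            "tgw_cidr": str(region_net),
            "vpcs": {name: str(net) for name, net in zip(vpc_names, vpc_nets)},
        }
    return out

p = plan(["main", "us-east-1", "us-west-1"], ["hub", "ingress", "dev", "prod"])
print(p["us-east-1"]["vpcs"]["dev"])  # 10.1.32.0/20
```

This reproduces the CIDRs in the definition below (for example, dev in us-east-1 is 10.1.32.0/20), while leaving 12 spare /20s per region for growth.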

The main configurations are:

  1. Each governed region establishes a peering connection with the main region; there are no such connections among the governed regions;
  2. The network structure within each region here matches "Separate east-west and south-north traffic inspection" in the previous section, but it may also differ per region;
  3. Main regional TGW pre-inspection route table pre:
    • a) All spoke VPC attachments except for the inspection spoke VPC are associated with this table;
    • b) The egress hub VPC attachment is not propagated to this table;
    • c) The inspection spoke VPC attachment is not propagated to this table;
    • d) All TGW peer connection attachments are associated with this table;
    • e) Add a route, routing full traffic to the egress hub VPC attachment;
    • f) Add a route, routing common traffic to the inspection spoke VPC attachment;
  4. Main regional TGW post-inspection route table post:
    • a) The egress hub VPC attachment is associated with this table;
    • b) The inspection spoke VPC attachment is associated with this table;
    • c) Other spoke VPC attachments are propagated to this table;
    • d) Add routes, routing traffic towards each governed region to the corresponding peering connection attachment, referring to that region's common network segment via the region name;
    • e) Manually configure and route other traffic to DX connections, or virtual private network attachments;
  5. Governed regional TGW post-inspection route table post:
    • a) Add a route, routing traffic towards the main region to the peering connection attachment, referring to the main region's common network segment via main;
{
  "vpcs": {
    "inspect": {
      "cidr": "100.0.0.0/16",
      "nfw": { "enabled": true },
      "subnets": [[[12, 0], [12, 1]], [], [[12, 2], [12, 3]]]
    },
    "hub": {
      "is_hub": true,
      "cidr": "10.0.0.0/20",
      "nfw": { "enabled": true },
      "enable_igw": true, "enable_nat": true,
      "subnets": [[[8, 0], [8, 1]], [[4, 1], [4, 2]], [[4, 3], [4, 4]]]
    },
    "ingress": {
      "cidr": "10.0.16.0/20",
      "enable_igw": true,
      "subnets": [[[8, 0], [8, 1]], [[4, 1], [4, 2]]]
    },
    "dev": {
      "cidr": "10.0.32.0/20",
      "subnets": [[[8, 0], [8, 1]], [], [[4, 3], [4, 4]]]
    },
    "prod": {
      "cidr": "10.0.48.0/20",
      "subnets": [[[8, 0], [8, 1]], [], [[4, 3], [4, 4]]]
    }
  },
  "tgw": {
    "enabled": true,
    "cidr": "10.0.0.0/16",
    "tables": {
      "pre": {
        "associations": ["peer", "ingress", "dev", "prod"],
        "routes": { "*": "hub", "tgw": "inspect" }
      },
      "post": {
        "associations": ["inspect", "hub"],
        "propagations": ["ingress", "dev", "prod"],
        "routes": { "us-east-1": "peer", "us-west-1": "peer" }
      }
    }
  },
  "us-east-1": {
    "vpcs": {
      "inspect": {
        "cidr": "100.1.0.0/16",
        "nfw": { "enabled": true },
        "subnets": [[[12, 0], [12, 1]], [], [[12, 2], [12, 3]]]
      },
      "hub": {
        "is_hub": true,
        "cidr": "10.1.0.0/20",
        "nfw": { "enabled": true },
        "enable_igw": true, "enable_nat": true,
        "subnets": [[[8, 0], [8, 1]], [[4, 1], [4, 2]], [[4, 3], [4, 4]]]
      },
      "ingress": {
        "cidr": "10.1.16.0/20",
        "enable_igw": true,
        "subnets": [[[8, 0], [8, 1]], [[4, 1], [4, 2]]]
      },
      "dev": {
        "cidr": "10.1.32.0/20",
        "subnets": [[[8, 0], [8, 1]], [], [[4, 3], [4, 4]]]
      },
      "prod": {
        "cidr": "10.1.48.0/20",
        "subnets": [[[8, 0], [8, 1]], [], [[4, 3], [4, 4]]]
      }
    },
    "tgw": {
      "enabled": true,
      "peer": true,
      "asn": 64513,
      "cidr": "10.1.0.0/16",
      "tables": {
        "pre": {
          "associations": ["peer", "ingress", "dev", "prod"],
          "routes": { "*": "hub", "tgw": "inspect" }
        },
        "post": {
          "associations": ["inspect", "hub"],
          "propagations": ["ingress", "dev", "prod"],
          "routes": { "main": "peer" }
        }
      }
    }
  },
  "us-west-1": {
    "vpcs": {
      "inspect": {
        "nfw": { "enabled": true },
        "cidr": "100.2.0.0/16",
        "subnets": [[[12, 0], [12, 1]], [], [[12, 2], [12, 3]]]
      },
      "hub": {
        "is_hub": true,
        "cidr": "10.2.0.0/20",
        "nfw": { "enabled": true },
        "enable_igw": true, "enable_nat": true,
        "subnets": [[[8, 0], [8, 1]], [[4, 1], [4, 2]], [[4, 3], [4, 4]]]
      },
      "ingress": {
        "cidr": "10.2.16.0/20",
        "enable_igw": true,
        "subnets": [[[8, 0], [8, 1]], [[4, 1], [4, 2]]]
      },
      "dev": {
        "cidr": "10.2.32.0/20",
        "subnets": [[[8, 0], [8, 1]], [], [[4, 3], [4, 4]]]
      },
      "prod": {
        "cidr": "10.2.48.0/20",
        "subnets": [[[8, 0], [8, 1]], [], [[4, 3], [4, 4]]]
      }
    },
    "tgw": {
      "enabled": true,
      "peer": true,
      "asn": 64514,
      "cidr": "10.2.0.0/16",
      "tables": {
        "pre": {
          "associations": ["peer", "ingress", "dev", "prod"],
          "routes": { "*": "hub", "tgw": "inspect" }
        },
        "post": {
          "associations": ["inspect", "hub"],
          "propagations": ["ingress", "dev", "prod"],
          "routes": { "main": "peer" }
        }
      }
    }
  }
}

The definition contains 3 regions: the main region, the US east region (us-east-1), and the US west region (us-west-1).

Conclusion

This article introduced Cloud Foundations' recent improvements and enhancements to the network module, including unlimited private subnets, VPC peering connections, multi-regional deployment and connectivity, and support for network protection appliances including NFW and GWLB. It also walked through six interconnection and traffic-inspection scenarios covering the configuration and architecture of east-west and south-north inspection. We believe the network structures and definition examples above can meet most network construction requirements, and you can combine this article with the articles it cites to build a secure, stable network structure closer to your actual business needs. Of course, network construction requirements and structures are ever-changing, and although the Cloud Foundations network module provides many properties and functions, it may still not meet some of your requirements. We welcome and look forward to your feedback, and will continue to optimize and enhance the network module to support the construction of broader, deeper, more flexible, and more complex cloud network architectures.

References

  1. Blog post: Use Cloud Foundations to holistically plan and one-click deploy two network sharing models in multi-account organizations on the cloud, 2023-02
  2. Blog post: Deploy elastic bastion hosts in one-click for secure session management and port forwarding with Cloud Foundations, 2023-09
  3. AWS Well-Architected Framework: REL02-BP04 Prefer hub-and-spoke topologies over many-to-many mesh
  4. AWS Whitepapers: Building a Scalable and Secure Multi-VPC AWS Network Infrastructure, 2023-07
  5. Blog post: Creating a single internet exit point from multiple VPCs Using AWS Transit Gateway, 2019-10
  6. Blog post: Centralized inspection architecture with AWS Gateway Load Balancer and AWS Transit Gateway, 2020-12
  7. Blog post: Introducing AWS Gateway Load Balancer: Supported architecture patterns, 2020-11
  8. Blog post: Deployment models for AWS Network Firewall, 2020-11
  9. Blog post: 借助 Cloud Foundations 规划设计云上多区域网络轴辐拓扑结构一键部署东西南北流量分别或合并检查, 2023-11

About the translator

Clement Yuan

Clement Yuan is a Cloud Infra Architect in AWS Professional Services based in Chengdu, China. He works with various customers, from startups to international enterprises, helping them build and implement solutions with state-of-the-art cloud technologies and achieve more in their cloud explorations. He enjoys reading poetry and traveling around the world in his spare time.