Networking & Content Delivery
Building Resilient IPv6 Networks with SD-WAN and AWS Cloud WAN Connect with GRE
In this post, we explore how you can use AWS Cloud WAN Connect with Generic Routing Encapsulation (GRE) tunnels and Multiprotocol BGP (MP-BGP) for Equal Cost Multi-Path (ECMP) routing of IPv6 traffic. We also cover best practices for route verification and failover testing.
Many Amazon Web Services (AWS) users are adopting IPv6 and Software-Defined Wide Area Networking (SD-WAN) to connect their on-premises networks to AWS using AWS Cloud WAN. IPv6 has become essential in modern networking due to the scarcity of public IPv4 addresses and the exhaustion of private IPv4 address space. To maximize resilience, highly available SD-WAN designs should distribute load evenly so that the failure of any individual node has minimal impact. You can configure ECMP routing across AWS Cloud WAN Connect peers to eliminate single points of failure and to provide room to scale bandwidth across the WAN.
Prerequisites
We assume that you are familiar with Amazon Virtual Private Cloud (VPC) constructs, IPv4 and IPv6 functionality, and configuration options for common VPC services, as well as AWS Cloud WAN and its Connect and Tunnel-less Connect features. You should also be familiar with the IPv6 protocol definition, address types, and configuration mechanisms.
The following prerequisites are necessary to implement this solution:
- A dual-stack Amazon Virtual Private Cloud (Amazon VPC) with IPv6 and IPv4 CIDR blocks for use as a Transport VPC.
- Two private subnets in separate Availability Zones (AZs) within the VPC, each with an IPv6 CIDR block assigned.
- BGP ASNs to be assigned to the SD-WAN appliances.
- AWS Identity and Access Management (IAM) permissions to use AWS Network Manager and Amazon Elastic Compute Cloud (Amazon EC2).
- AWS Cloud WAN global network and core network.
- Two SD-WAN appliances running on EC2 instances. Use the Infrastructure Software category of AWS Marketplace and choose the SD-WAN vendor of your choice from the available listings. A CLI sketch for scripting the other prerequisites follows this list.
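If you prefer to script the prerequisites, the following AWS CLI sketch shows one way to create the dual-stack Transport VPC, its two subnets, and the Cloud WAN global and core networks. All IDs, CIDR blocks, and the policy file name are placeholders; substitute your own values.

# Create a dual-stack VPC with an Amazon-provided IPv6 CIDR block
aws ec2 create-vpc \
    --cidr-block 10.0.0.0/16 \
    --amazon-provided-ipv6-cidr-block

# Create two private subnets in separate AZs, each with an IPv6 CIDR
# (the IPv6 blocks must come from the /56 assigned to the VPC)
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
    --availability-zone us-east-2a \
    --cidr-block 10.0.0.0/24 --ipv6-cidr-block 2001:db8:0:100::/64
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
    --availability-zone us-east-2b \
    --cidr-block 10.0.1.0/24 --ipv6-cidr-block 2001:db8:0:101::/64

# Create the Cloud WAN global network and core network
aws networkmanager create-global-network --description "IPv6 SD-WAN demo"
aws networkmanager create-core-network \
    --global-network-id global-network-0123456789abcdef0 \
    --policy-document file://core-network-policy.json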
Overview of the GRE-based AWS Cloud WAN Connect setup
Figure 1 shows an example SD-WAN and AWS Cloud WAN Connect deployment pattern that segments a multi-environment network.
Two SD-WAN appliances run on EC2 instances in the SD-WAN VPC. We connected the appliances to the AWS Cloud WAN network using an AWS Cloud WAN Connect attachment; they are also connected to the on-premises environment. The Connect attachment extends the on-premises network's reach into AWS.
The Cloud WAN core network uses three segments to isolate traffic. The SD-WAN segment and Prod segment share routes with each other. Although the basic example we describe in this post spans only a single AWS Region, AWS Cloud WAN can manage much larger multi-Region networks.
This walkthrough includes the following steps:
- Create a core network attachment.
- Create a Connect attachment.
- Create a Connect peer for each appliance.
- Configure the SD-WAN appliances to use GRE and BGP.
- Check the status of the Connect peers’ associations.
- Search and display the core network routes.
- Verify failover capability.
- Clean up the resources.
Step 1: Create a core network attachment
- Create an attachment named “Transport VPC” using either the Console or AWS CLI.
- When creating your attachment, choose two subnets, one in each of your AZs.
- When creating your attachment, enable IPv6 support (Figure 2).
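If you prefer the AWS CLI, the following sketch creates the same VPC attachment with IPv6 support enabled. The core network ID, account ID, VPC ARN, and subnet ARNs are placeholders for your own values.

aws networkmanager create-vpc-attachment \
    --core-network-id core-network-0123456789abcdef0 \
    --vpc-arn arn:aws:ec2:us-east-2:111122223333:vpc/vpc-0123456789abcdef0 \
    --subnet-arns \
        arn:aws:ec2:us-east-2:111122223333:subnet/subnet-0aaaaaaaaaaaaaaaa \
        arn:aws:ec2:us-east-2:111122223333:subnet/subnet-0bbbbbbbbbbbbbbbb \
    --options Ipv6Support=true \
    --tags Key=Name,Value="Transport VPC"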
Step 2: Create a Connect attachment
- Create a GRE-based core network Connect attachment (Figure 3) using either the Console or AWS CLI. Choose the "Transport VPC" attachment created in the previous step as the Transport Attachment ID and US East (Ohio) as the Region.
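A hedged CLI equivalent follows; the core network ID and transport attachment ID are placeholders, and us-east-2 corresponds to US East (Ohio).

aws networkmanager create-connect-attachment \
    --core-network-id core-network-0123456789abcdef0 \
    --edge-location us-east-2 \
    --transport-attachment-id attachment-0123456789abcdef0 \
    --options Protocol=GRE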
Step 3: Create a Connect peer (GRE Tunnel) for each SD-WAN appliance
- Create a Connect peer (GRE Tunnel) for the first SD-WAN appliance (Figure 4).
- For BGP Inside CIDR blocks IPv4, enter a /29 CIDR block from the 169.254.0.0/16 range. Cloud WAN assigns IPv4 addresses from this block to the BGP peers on both the Connect attachment and the SD-WAN appliance.
- For BGP Inside CIDR blocks IPv6, enter a /125 CIDR block from the fd00::/8 range. IPv6 addresses from this CIDR are used to configure next-hop routing on the SD-WAN appliances. Note that the BGP peering itself is established over IPv4, but IPv6 prefixes are exchanged over that session through MP-BGP.
- Create a Connect peer for the second SD-WAN appliance (Figure 5). A CLI sketch for creating the peers follows this list.
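Alternatively, the following AWS CLI sketch creates a Connect peer for one appliance; run it once per appliance with that appliance's private IP address and inside CIDR blocks. The attachment ID, peer address, ASN, and CIDR blocks shown are placeholders consistent with the example configuration later in this post.

aws networkmanager create-connect-peer \
    --connect-attachment-id attachment-0123456789abcdef0 \
    --peer-address 172.31.0.178 \
    --bgp-options PeerAsn=64525 \
    --inside-cidr-blocks 169.254.8.0/29 fde2:aa1a:d66:443b:d9d8:30c9:9674:6378/125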
Step 4: Configure the SD-WAN appliances to use GRE and BGP
- In this section, we use Cisco configuration statements as examples. Consult your vendor's documentation for the equivalent configuration on your specific appliances.
- Configure GRE as shown in the following. An AWS Cloud WAN core network supports an 8500-byte MTU between VPCs, transit gateway peerings, and Tunnel-less Connect attachments; the 24-byte GRE header reduces the usable tunnel MTU to 8476 in this example. Adjust the tunnel MTU as necessary if you are not using jumbo frames.
!
ipv6 unicast-routing
!
interface Tunnel1
 ip address 169.254.8.1 255.255.255.248
 ip mtu 8476
 ipv6 enable
 ipv6 address fde2:aa1a:d66:443b:d9d8:30c9:9674:6379/125
 tunnel source 172.31.0.178
 tunnel destination 10.0.0.8
end
!
- AWS Cloud WAN supports IPv6 route exchange only through MP-BGP between IPv4 BGP peers. To carry IPv6 unicast routes, you must therefore establish the BGP session over IPv4 unicast addressing. In the example configuration, we first define a route-map that sets the IPv6 next hop to the appliance's GRE tunnel address.
!
route-map IPV6_NEXT_HOP permit 10
 set ipv6 next-hop fde2:aa1a:d66:443b:d9d8:30c9:9674:6379
end
!
- Continue by applying this route-map in your BGP configuration. For MP-BGP with an IPv4 adjacency, make sure to activate each neighbor under the corresponding address family. In this example, ASN 64515 belongs to the AWS Cloud WAN core network and ASN 64525 to the SD-WAN peer.
!
router bgp 64525
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 neighbor 169.254.8.2 remote-as 64515
 neighbor 169.254.8.2 ebgp-multihop 2
 neighbor 169.254.8.3 remote-as 64515
 neighbor 169.254.8.3 ebgp-multihop 2
 !
 address-family ipv4
  neighbor 169.254.8.2 activate
  neighbor 169.254.8.3 activate
 exit-address-family
 !
 address-family ipv6
  neighbor 169.254.8.2 activate
  neighbor 169.254.8.2 route-map IPV6_NEXT_HOP out
  neighbor 169.254.8.3 activate
  neighbor 169.254.8.3 route-map IPV6_NEXT_HOP out
 exit-address-family
!
Step 5: Check the status of the Connect peers’ associations
- Check the status of the Connect peers using either the Console (Figure 6) or AWS CLI.
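With the AWS CLI, you can list the peers on the Connect attachment and inspect an individual peer's state; the IDs below are placeholders, and an established peer typically reports the AVAILABLE state.

# List all Connect peers on the attachment
aws networkmanager list-connect-peers \
    --connect-attachment-id attachment-0123456789abcdef0

# Inspect one peer's state
aws networkmanager get-connect-peer \
    --connect-peer-id connect-peer-0123456789abcdef0 \
    --query 'ConnectPeer.State'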
Step 6: Search and display the core network routes
- In the Console (Figure 7), on the core network's Routes tab, choose the Segment and Edge location, and then choose Search routes.
- The results show the routes advertised by the SD-WAN appliances. You can also use the AWS CLI to display this information.
- Furthermore, because both SD-WAN appliances advertise the default IPv6 route with equal BGP attributes (the same ASN and AS path), the AWS Cloud WAN core network performs ECMP routing across them.
- The Destinations details (Figure 8) indicate the presence of two properly configured Connect peers.
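A hedged AWS CLI equivalent of the Console route search follows; the global network ID, core network ID, segment name, and edge location are placeholders for your own values.

aws networkmanager get-network-routes \
    --global-network-id global-network-0123456789abcdef0 \
    --route-table-identifier 'CoreNetworkSegmentEdge={CoreNetworkId=core-network-0123456789abcdef0,SegmentName=sdwan,EdgeLocation=us-east-2}'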
Step 7: Verify failover capability
- In this failover test, we initiate ping traffic from a source Amazon EC2 instance in the Non-Prod VPC to a destination located in the on-premises environment.
- To check the network hops, run traceroute from source to destination. Observe that the inside IPv6 addresses of both SD-WAN appliances show up in the result: traffic is routed through both SD-WAN-1 and SD-WAN-2 because of ECMP.
sh-5.2$ traceroute6 2001:db8:0:400::2
traceroute to 2001:db8:0:400::2, 30 hops max, 80 byte packets
 1  * * *
 2  fde2:aa1a:d66:443b:d9d8:30c9:9674:6379  0.950 ms  0.933 ms
    fde2:aa1a:d66:443b:d9d8:30c9:9674:5379  1.520 ms
 3  * * *
 4  2001:db8:0:400::2  2.647 ms  2.621 ms  2.769 ms
- Initiate continuous ping from source to destination.
sh-5.2$ ping6 2001:db8:0:400::2
- To simulate the loss of AZ-A, stop the SD-WAN-1 appliance using either the Console or AWS CLI (a CLI sketch follows this list).
- Observe that the ping traffic immediately fails over and is routed through SD-WAN-2.
64 bytes from 2001:db8:0:400::2 icmp_seq=1 ttl=61 time=1.74 ms
64 bytes from 2001:db8:0:400::2 icmp_seq=2 ttl=61 time=1.64 ms
64 bytes from 2001:db8:0:400::2 icmp_seq=3 ttl=61 time=1.61 ms
64 bytes from 2001:db8:0:400::2 icmp_seq=4 ttl=61 time=1.62 ms
<-- SD-WAN-1 BGP down; traffic rerouted via SD-WAN-2 -->
64 bytes from 2001:db8:0:400::2 icmp_seq=16 ttl=61 time=2.55 ms
64 bytes from 2001:db8:0:400::2 icmp_seq=17 ttl=61 time=2.14 ms
64 bytes from 2001:db8:0:400::2 icmp_seq=18 ttl=61 time=2.27 ms
64 bytes from 2001:db8:0:400::2 icmp_seq=19 ttl=61 time=2.12 ms
- To confirm, run traceroute once again. Only SD-WAN-2's inside IPv6 address appears, which means traffic is now routed only through SD-WAN-2.
sh-5.2$ traceroute6 2001:db8:0:400::2
traceroute to 2001:db8:0:400::2, 30 hops max, 80 byte packets
 1  * * *
 2  fde2:aa1a:d66:443b:d9d8:30c9:9674:5379  1.767 ms  1.750 ms  1.753 ms
 3  * * *
 4  2001:db8:0:400::2  2.747 ms  2.721 ms  2.669 ms
- In the AWS Cloud WAN console, the Routes information for the segment lists only one Destination, the Connect peer for the surviving SD-WAN-2 appliance (Figure 9).
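The following AWS CLI sketch stops the SD-WAN-1 instance and then re-runs the route search to confirm that only one Connect peer destination remains; the instance ID and network IDs are placeholders.

# Simulate an AZ-A failure by stopping the SD-WAN-1 appliance
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# After BGP converges, the segment routes should list a single destination
aws networkmanager get-network-routes \
    --global-network-id global-network-0123456789abcdef0 \
    --route-table-identifier 'CoreNetworkSegmentEdge={CoreNetworkId=core-network-0123456789abcdef0,SegmentName=sdwan,EdgeLocation=us-east-2}'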
Step 8: Cleaning up
To avoid unnecessary costs, make sure to delete the solution's resources when you are finished (a CLI sketch follows this list).
- Terminate the SD-WAN EC2 instances using either the Console or AWS CLI.
- Delete the Connect peers corresponding to each SD-WAN appliance using either the Console or AWS CLI.
- Delete the Connect attachment using either the Console or AWS CLI.
- Delete the Transport VPC attachment using either the Console or AWS CLI.
- Delete the core network using either the Console or AWS CLI.
- Delete the global network using either the Console or AWS CLI.
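As a hedged sketch, the following AWS CLI commands perform the cleanup in order; all IDs are placeholders, and each deletion must complete before the next dependent resource can be removed.

# Terminate the SD-WAN appliances
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0 i-0fedcba9876543210

# Delete both Connect peers, then the attachments
aws networkmanager delete-connect-peer --connect-peer-id connect-peer-0123456789abcdef0
aws networkmanager delete-connect-peer --connect-peer-id connect-peer-0fedcba9876543210
aws networkmanager delete-attachment --attachment-id attachment-0aaaaaaaaaaaaaaaa   # Connect attachment
aws networkmanager delete-attachment --attachment-id attachment-0bbbbbbbbbbbbbbbb   # Transport VPC attachment

# Finally, delete the core network and the global network
aws networkmanager delete-core-network --core-network-id core-network-0123456789abcdef0
aws networkmanager delete-global-network --global-network-id global-network-0123456789abcdef0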
Conclusion
In this post, we explored how you can build resilient and high-performance IPv6 networks using AWS Cloud WAN and SD-WAN. By leveraging AWS Cloud WAN Connect with GRE tunnels, we demonstrated how to create a robust hybrid network architecture that addresses the growing demand for IPv6 connectivity.
This step-by-step guide offers network engineers and cloud architects a clear path to implementing ECMP routing for IPv6 traffic, effectively balancing loads and eliminating single points of failure. We’ve covered key aspects of the setup, including Multi-protocol BGP configuration, route verification, and failover testing, giving you a comprehensive toolkit for deploying this solution in your own environment.
If you have questions about this post, start a new thread on AWS re:Post or contact AWS Support.
About the authors
Sankalp is a Technical Account Manager for start-up customers and a Transit Gateway SME. He provides architectural guidance to enterprise AWS customers around the globe. Originally from Mumbai, he earned a Master’s Degree in Telecommunication Engineering from UT Dallas. Outside of work, Sankalp is a history buff who also enjoys traveling and listening to music from different languages.
Nick is a Solutions Architect for public sector customers based in the Washington DC region. He has over 20 years of experience in infrastructure engineering, and has been helping AWS customers migrate workloads and build applications for the past five years. Outside work, Nick enjoys spending time with his family, classic video games, and live music.