This Guidance shows how to assess ad opportunities at scale using real-time bidding (RTB) technology and NoSQL database tables. With a microservices approach, it enables rapid scaling, implements robust security mechanisms to protect data, and uses a deployment pipeline that allows for quick modifications. By capturing and analyzing bidding information in real time, it gives advertisers a powerful tool to optimize their advertising strategies and react quickly to market dynamics.
Architecture Diagram
Step 1
The supply-side platform (SSP) receives an ad request from a publisher, creates a real-time auction, and sends a bid request to a demand-side public endpoint that is configured on Elastic Load Balancing.
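To make the request path concrete, the sketch below shows a minimal bidder endpoint that accepts an OpenRTB-style bid request and returns a single bid. The field names follow OpenRTB 2.x conventions, but the handler, port, and pricing logic are illustrative assumptions, not part of this Guidance's sample code.

```python
# Minimal sketch of a demand-side bid endpoint behind the load balancer.
# Hypothetical: a production bidder adds timeouts, validation, and no-bid paths.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class BidHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        bid_request = json.loads(self.rfile.read(length))

        # Respond to the first impression with a placeholder price.
        imp = bid_request["imp"][0]
        bid_response = {
            "id": bid_request["id"],
            "seatbid": [{
                "bid": [{
                    "id": "1",
                    "impid": imp["id"],
                    "price": 1.25,          # CPM in the auction currency
                    "adm": "<ad markup/>",  # creative markup
                }]
            }],
        }
        body = json.dumps(bid_response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), BidHandler).serve_forever()
```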
Step 2
The requests are routed to “bidder” pods hosted on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The bidder uses Amazon Virtual Private Cloud (Amazon VPC) endpoints to access NoSQL database tables that hold information about audience segments, campaigns, and budgets. The bidder uses this information to process the bids.
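A minimal sketch of those lookups using DynamoDB and boto3 follows; the table names and key schema (`campaigns`, `budgets`, `campaign_id`) are hypothetical. Inside the VPC, the AWS SDK reaches DynamoDB through the VPC endpoint without any code changes.

```python
# Sketch of the bidder's reads against DynamoDB (names are placeholders).
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
campaigns = dynamodb.Table("campaigns")
budgets = dynamodb.Table("budgets")

def load_bid_context(campaign_id: str) -> dict:
    """Fetch the campaign and its remaining budget for bid evaluation."""
    campaign = campaigns.get_item(Key={"campaign_id": campaign_id}).get("Item")
    budget = budgets.get_item(Key={"campaign_id": campaign_id}).get("Item")
    return {"campaign": campaign, "budget": budget}
```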
Step 3
Based on the data from the NoSQL database (such as Aerospike or Amazon DynamoDB), the bidder decides whether or not to bid. If a submitted bid wins, the bidder updates the budget and campaign database tables. Note: Aerospike runs within the Amazon VPC and does not require an Amazon VPC endpoint. Configure the rack-aware feature on Aerospike for better performance.
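The budget update on a win can be made safe under concurrency with a DynamoDB conditional write. The sketch below assumes the hypothetical `budgets` table from the previous step and stores budgets in micro-currency units; the schema is illustrative.

```python
# Sketch of a budget update after a win notification (hypothetical schema).
# The condition expression prevents the budget from going negative when many
# bidder pods charge the same campaign concurrently.
import boto3
from botocore.exceptions import ClientError

budgets = boto3.resource("dynamodb").Table("budgets")

def charge_win(campaign_id: str, clearing_price_micros: int) -> bool:
    try:
        budgets.update_item(
            Key={"campaign_id": campaign_id},
            UpdateExpression="SET remaining_micros = remaining_micros - :p",
            ConditionExpression="remaining_micros >= :p",
            ExpressionAttributeValues={":p": clearing_price_micros},
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # budget exhausted; stop bidding for this campaign
        raise
```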
Step 4
The bidder transactions are sent to Amazon Kinesis Data Streams through Kinesis VPC endpoints in compressed micro-batches sized to the 25 KB PUT payload unit. Amazon Kinesis Data Firehose then delivers this data to Amazon Simple Storage Service (Amazon S3) for downstream analytics and reporting. A data stream enables the bidder to respond faster and helps each component scale independently.
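The sketch below illustrates one way to batch and compress transactions before each PUT; the stream name, compression-ratio assumption, and flush threshold are all illustrative.

```python
# Sketch of shipping bid transactions to Kinesis Data Streams in compressed
# micro-batches (stream name is a placeholder).
import gzip
import json
import uuid
import boto3

kinesis = boto3.client("kinesis")

# Flush when the uncompressed buffer nears ~100 KB; assuming roughly 4:1 gzip
# compression, each PUT then lands near the 25 KB payload unit (tune for your data).
UNCOMPRESSED_TARGET = 100 * 1024
_buffer: list[str] = []
_buffered_bytes = 0

def log_transaction(txn: dict) -> None:
    """Queue one bid transaction; flush when the micro-batch is full."""
    global _buffered_bytes
    line = json.dumps(txn)
    _buffer.append(line)
    _buffered_bytes += len(line) + 1
    if _buffered_bytes >= UNCOMPRESSED_TARGET:
        flush()

def flush() -> None:
    """Send the buffered transactions as one compressed Kinesis record."""
    global _buffered_bytes
    if not _buffer:
        return
    kinesis.put_record(
        StreamName="bid-transactions",       # placeholder stream name
        Data=gzip.compress("\n".join(_buffer).encode()),
        PartitionKey=uuid.uuid4().hex,       # spread records across shards
    )
    _buffer.clear()
    _buffered_bytes = 0
```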
Consideration A
Use AWS Graviton instances for bidder nodes. For additional cost optimization, implement auto-scaling and Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances.
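As a sketch of this consideration, the boto3 call below provisions a Graviton-based Spot node group for the bidder pods; the cluster name, IAM role ARN, subnets, instance types, and scaling bounds are placeholders.

```python
# Sketch of a Graviton + Spot managed node group for the bidder workload.
import boto3

eks = boto3.client("eks")
eks.create_nodegroup(
    clusterName="rtb-cluster",                     # placeholder cluster
    nodegroupName="bidder-graviton-spot",
    capacityType="SPOT",                           # Spot for cost savings
    instanceTypes=["c7g.2xlarge", "c6g.2xlarge"],  # Graviton instance types
    amiType="AL2_ARM_64",                          # ARM64 AMI for Graviton
    scalingConfig={"minSize": 2, "maxSize": 50, "desiredSize": 2},
    nodeRole="arn:aws:iam::111122223333:role/rtb-node-role",
    subnets=["subnet-aaaa", "subnet-bbbb"],
)
```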
Consideration B
Build bidder container images with dependent libraries and binaries pre-installed to minimize boot time. Upload the images to a container registry such as Amazon Elastic Container Registry (Amazon ECR).
Consideration C
Encrypt and decrypt data at rest and in transit across DynamoDB, Kinesis, EKS, and S3 using AWS Key Management Service (AWS KMS). Grant least privilege access using AWS Identity and Access Management (IAM) to provide permissions for users, roles, and services.
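The sketch below shows two representative controls: enabling KMS server-side encryption on the transaction stream and creating a least-privilege IAM policy scoped to the bidder's tables. All names and ARNs are placeholders.

```python
# Sketch of security controls: KMS encryption on the stream plus a
# least-privilege read policy for the bidder role (all identifiers are placeholders).
import json
import boto3

boto3.client("kinesis").start_stream_encryption(
    StreamName="bid-transactions",
    EncryptionType="KMS",
    KeyId="alias/rtb-data-key",
)

boto3.client("iam").create_policy(
    PolicyName="bidder-dynamodb-read",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],  # reads only
            "Resource": [
                "arn:aws:dynamodb:us-east-1:111122223333:table/campaigns",
                "arn:aws:dynamodb:us-east-1:111122223333:table/budgets",
            ],
        }],
    }),
)
```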
Consideration D
Automate the deployment of the RTB platform using a supported Git repository, AWS CodeBuild, AWS CodePipeline, and AWS CloudFormation to reduce time-consuming, manual processes.
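For illustration, a pipeline step could deploy the platform's CloudFormation template with a call like the one below; in practice CodePipeline's built-in CloudFormation action handles this, and the stack name, template URL, and parameters here are placeholders.

```python
# Sketch of deploying the platform's CloudFormation template programmatically.
import boto3

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="rtb-platform",
    TemplateURL="https://s3.amazonaws.com/EXAMPLE-BUCKET/rtb-template.yaml",
    Capabilities=["CAPABILITY_NAMED_IAM"],  # the template creates IAM roles
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "prod"}],
)
cfn.get_waiter("stack_create_complete").wait(StackName="rtb-platform")
```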
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
This Guidance is designed using a stateless, microservices-based architecture where changes can be made independently on each component using a deployment pipeline. It can be used to optimize the cost of high-throughput, low-latency workloads, allowing you to process a greater number of bid requests at a reduced cost.
Security
IAM policies grant least-privilege access to data so that users and services have only the minimum permissions required for their specific tasks. Additionally, AWS KMS is used to encrypt data at rest and in transit, providing an additional layer of protection against unauthorized access. Lastly, access to the Amazon S3 bucket is secured through bucket policies and by blocking public access, and data is routed between services through Amazon VPC endpoints.
Reliability
By enabling autoscaling for the Amazon EKS cluster, as well as provisioned throughput for DynamoDB and Amazon Kinesis Data Streams, the services configured in this Guidance will scale to meet your demand. Consider exploring Kinesis auto scaling to adjust the number of shards, the base throughput units of a data stream, based on demand.
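For example, shard count can be adjusted programmatically as demand grows; the stream name and target count below are placeholders, and on-demand capacity mode is an alternative that removes shard management entirely.

```python
# Sketch of resharding a stream ahead of peak traffic (placeholder values).
import boto3

kinesis = boto3.client("kinesis")
kinesis.update_shard_count(
    StreamName="bid-transactions",
    TargetShardCount=8,               # e.g., double capacity before peak hours
    ScalingType="UNIFORM_SCALING",
)
```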
Performance Efficiency
The services selected for this architecture, such as AWS Graviton processors, Amazon EKS, and DynamoDB, are purpose-built for high-throughput, low-latency applications such as RTB, and can scale to process millions of transactions per second. AWS Graviton processors provide up to 40 percent better price performance compared to equivalent x86-based instances, processing more bids or transactions per second.
Cost Optimization
This Guidance uses AWS Graviton processors, Amazon EC2 Spot Instances, and managed services to optimize costs. Specifically, Amazon EC2 Spot Instances achieve scale with cost savings of up to 90 percent compared to On-Demand Instances. Moreover, Amazon EKS and DynamoDB are designed to scale based on demand, so you only pay for the resources you use. The Amazon EKS cluster is configured with an autoscaler to scale bidder pods and nodes, as sketched below. This Guidance also employs compression to shrink payloads and Amazon VPC endpoints to route traffic over the AWS backbone network, avoiding data transfer out costs.
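As a sketch of the pod-level scaling, the snippet below creates a Horizontal Pod Autoscaler for a hypothetical `bidder` Deployment using the official Kubernetes Python client; the namespace, replica bounds, and CPU target are illustrative.

```python
# Sketch of an HPA for the bidder Deployment (placeholder names and targets).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside EKS

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="bidder"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="bidder"
        ),
        min_replicas=2,
        max_replicas=100,
        target_cpu_utilization_percentage=60,  # scale out above 60% CPU
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```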
Sustainability
Amazon S3 lifecycle policies provide effective storage management by defining appropriate data archival or expiration timelines. Also, by using AWS Graviton-based Amazon EC2 instances, you can maintain performance with fewer or smaller instances hosting the bidder and Aerospike clusters. These Graviton-based instances demonstrate up to a 60 percent reduction in power consumption compared to similar-sized x86-based instances for the same workload.
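A lifecycle configuration along these lines could be applied with boto3 as sketched below; the bucket name, prefix, and timelines are placeholders to be tuned to your retention requirements.

```python
# Sketch of an S3 lifecycle policy for the bid transaction bucket: transition
# aging data to Glacier, then expire it (all values are placeholders).
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="rtb-bid-logs-EXAMPLE",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "transactions/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]
    },
)
```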
Implementation Resources
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.