This Guidance helps you build a data lake and a data analytics platform to address many of the issues that complicate regulatory reporting, such as data being in disconnected silos and distributed extract, transform, load (ETL) processes. Using a data lake, financial institutions will have a single source of data to help them meet regulatory requirements for a large volume of information. With this Guidance, financial institutions can gain insights through advanced analytics and machine learning—faster and at a lower cost.
Architecture Diagram
[Architecture diagram description]
Step 1
Ingest on-premises data using services such as AWS Database Migration Service (AWS DMS) for databases, AWS Glue for batch data, or Amazon Managed Streaming for Apache Kafka (Amazon MSK) for streaming data.
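For illustration, a minimal boto3 sketch (not part of the Guidance; the task ARN and job name are hypothetical placeholders) that starts an existing AWS DMS replication task and an AWS Glue batch job to land on-premises data in the raw layer:

import boto3

dms = boto3.client("dms")
glue = boto3.client("glue")

# Start a pre-created DMS task that replicates an on-premises database into the data lake.
dms.start_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:111122223333:task:example-onprem-task",
    StartReplicationTaskType="start-replication",
)

# Trigger an AWS Glue job that loads a batch extract into the raw S3 layer.
glue.start_job_run(JobName="ingest-onprem-batch-to-raw")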
Step 2
Access vendor data by connecting to the vendor account directly using AWS PrivateLink or by using services such as AWS Data Exchange and AWS DataSync.
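As a small, hedged example (it assumes the account already holds AWS Data Exchange entitlements), you can list the vendor data sets available to the data lake with boto3:

import boto3

dx = boto3.client("dataexchange")

# List the data sets this account is entitled to through its AWS Data Exchange subscriptions.
for data_set in dx.list_data_sets(Origin="ENTITLED")["DataSets"]:
    print(data_set["Id"], data_set["Name"])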
Step 3
Replicate transaction data from data sources on AWS (such as Amazon Aurora, Amazon Relational Database Service [Amazon RDS], or Amazon DynamoDB) to an Amazon Simple Storage Service (Amazon S3) bucket using a service such as AWS DMS.
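For the DynamoDB case, a minimal sketch (the table ARN and bucket name are hypothetical, and point-in-time recovery must already be enabled on the table) that exports a table directly into the raw layer:

import boto3

dynamodb = boto3.client("dynamodb")

# Export a point-in-time snapshot of a transactions table into the raw S3 layer.
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/transactions",
    S3Bucket="example-data-lake-raw",
    S3Prefix="dynamodb/transactions/",
    ExportFormat="DYNAMODB_JSON",
)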
Step 4
All the data is stored as-is in the raw layer without undergoing any changes.
Step 5
Data undergoes basic transformations in the processed layer, such as normalizing the date to a certain format or cleaning up empty rows.
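A minimal PySpark sketch of such a transformation, as it might run in an AWS Glue job (the bucket names, prefixes, and source date format are assumptions):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-to-processed").getOrCreate()

# Read trade records from the raw layer exactly as they were ingested.
raw = spark.read.json("s3://example-data-lake-raw/trades/")

processed = (
    raw.dropna(how="all")                                  # remove completely empty rows
       .withColumn("trade_date",
                   F.to_date("trade_date", "MM/dd/yyyy"))  # normalize dates to yyyy-MM-dd
)

# Persist the result to the processed layer.
processed.write.mode("overwrite").parquet("s3://example-data-lake-processed/trades/")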
Step 6
The consumption layer contains the final “cleansed” copy of the data to be used across a number of different use cases, including regulatory reporting.
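Continuing the PySpark sketch above (still using hypothetical names), the cleansed copy can be published to the consumption layer partitioned by reporting date, so report queries scan only the partitions they need:

# Publish the cleansed data set to the consumption layer, partitioned for reporting queries.
(
    processed.write.mode("overwrite")
    .partitionBy("trade_date")
    .parquet("s3://example-data-lake-consumption/trades/")
)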
Step 7
AWS Glue Data Catalog provides a view of the metadata of all the data across the different S3 buckets.
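One way to populate and inspect the catalog is with a Glue crawler, sketched below with hypothetical names and an assumed IAM role:

import boto3

glue = boto3.client("glue")

# Create and run a crawler that catalogs the consumption layer.
glue.create_crawler(
    Name="consumption-layer-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",
    DatabaseName="consumption",
    Targets={"S3Targets": [{"Path": "s3://example-data-lake-consumption/"}]},
)
glue.start_crawler(Name="consumption-layer-crawler")

# Once the crawler finishes, the table metadata is visible to Athena, Lake Formation, and QuickSight.
for table in glue.get_tables(DatabaseName="consumption")["TableList"]:
    print(table["Name"])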
Step 8
AWS Lake Formation centrally manages access to the available datasets and applies fine-grained permissions for all users accessing the data.
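A minimal sketch of a column-level grant (the principal, database, table, and column names are all hypothetical):

import boto3

lakeformation = boto3.client("lakeformation")

# Allow a reporting role to SELECT only the columns it needs from the trades table.
lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/RegulatoryReportingRole"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "consumption",
            "Name": "trades",
            "ColumnNames": ["trade_date", "counterparty_id", "notional"],
        }
    },
    Permissions=["SELECT"],
)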
Step 9
Use Amazon QuickSight, a data visualization and business intelligence service, for reporting. Use Amazon Athena for interactive analytics.
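An interactive Athena query against the consumption layer might look like the following sketch (the database, table, and result-bucket names are assumptions):

import boto3

athena = boto3.client("athena")

# Run an ad-hoc aggregation over the consumption layer; results land in a dedicated S3 location.
response = athena.start_query_execution(
    QueryString=(
        "SELECT trade_date, SUM(notional) AS total_notional "
        "FROM consumption.trades GROUP BY trade_date"
    ),
    ResultConfiguration={"OutputLocation": "s3://example-athena-query-results/"},
)
print(response["QueryExecutionId"])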
Step 10
Use services such as Amazon EMR and Amazon SageMaker for credit risk calculations and forecasting.
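As one possible (not prescriptive) sketch, a credit-risk classifier could be trained with the SageMaker Python SDK and the built-in XGBoost container; the role, bucket paths, and hyperparameters below are placeholders:

import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

# Train a credit-risk model on features exported from the consumption layer.
estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1"),
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-data-lake-models/credit-risk/",
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=200)
estimator.fit({"train": "s3://example-data-lake-consumption/credit-risk/train/"})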
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Data is persisted across three different layers. Data in the raw layer is untouched, giving you a baseline “input” dataset that does not change, regardless of what happens to the data in the subsequent processed and consumption layers.
Security
Lake Formation provides fine-grained access control for the S3 buckets in the data lake, and this data is encrypted at rest. To further secure your data, use only the consumption layer for reporting purposes.
Reliability
This architecture uses Amazon S3, which can replicate data across AWS Regions or Availability Zones to help back up and restore critical data.
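Cross-Region replication can be configured on a layer's bucket; the sketch below assumes versioning is already enabled on both buckets and uses hypothetical bucket names and a hypothetical replication role:

import boto3

s3 = boto3.client("s3")

# Replicate consumption-layer objects to a bucket in a second Region for disaster recovery.
s3.put_bucket_replication(
    Bucket="example-data-lake-consumption",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/S3ReplicationRole",
        "Rules": [
            {
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::example-data-lake-consumption-replica"},
            }
        ],
    },
)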
Performance Efficiency
To optimize this architecture, you can convert the data stored in the consumption layer into a data format that provides the best performance for your needs (for example, a columnar format such as Apache Parquet).
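One option for such a conversion (an assumption, not a requirement of the Guidance) is an Athena CTAS statement that rewrites a table as partitioned Parquet; the names and paths below are hypothetical:

import boto3

athena = boto3.client("athena")

# Rewrite the trades table as Parquet, partitioned by trade date, for faster and cheaper scans.
athena.start_query_execution(
    QueryString=(
        "CREATE TABLE consumption.trades_parquet "
        "WITH (format = 'PARQUET', "
        "      partitioned_by = ARRAY['trade_date'], "
        "      external_location = 's3://example-data-lake-consumption/trades_parquet/') AS "
        "SELECT counterparty_id, notional, trade_date FROM consumption.trades"
    ),
    ResultConfiguration={"OutputLocation": "s3://example-athena-query-results/"},
)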
Cost Optimization
This architecture uses Athena with Amazon S3 so you can run ad-hoc queries rather than having to keep an Amazon Redshift cluster up and running, even when querying is not needed. You can save on costs by paying only for the queries you run rather than idle infrastructure.
Sustainability
This architecture uses scalable services where possible so that resources are scaled up only according to business need.
Implementation Resources
A detailed guide is provided to experiment with and use within your AWS account. Each stage of building the Guidance, including deployment, usage, and cleanup, is examined to prepare it for deployment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.