This Guidance helps customers scale product carbon footprint (PCF) tracking, reduce the manual effort involved with data collection and calculation, and provide transparent and auditable PCFs for reporting. The architecture pairs Internet of Things (IoT) sensor data from a manufacturing facility with product information and emission factors. An interactive dashboard uses this data to track product-level energy and carbon footprint in addition to benchmarking environmental performance across equipment and sites. With this Guidance, customers can identify hotspots and best practices to lower their PCF and manufacturing costs.
Please note: This solution by itself will not make a customer compliant with any product carbon footprint frameworks, standards, or regulations. It provides the foundational infrastructure from which additional complementary solutions can be integrated.
Architecture Diagram
Overview
Please note: This is an overview architecture. For diagrams highlighting different aspects of this architecture, see the sections that follow.
Step 1
Telemetry data, such as utility consumption and production metrics, is collected from sensors deployed on the industrial equipment.
Step 2
Telemetry data is ingested to the cloud and processed.
Step 3
Sustainability subject matter experts (SMEs) generate static files for bill of materials, reference data, emission factors, and supplier information.
Step 4
The files are ingested and processed into mapped emission factors and combined with telemetry data for PCF calculations. An audit trail of the PCF calculation is stored in an audit log.
Step 5
An interactive dashboard or a web application can combine and visualize the processed data to provide stakeholders with valuable insights.
Data Sources and Ingestion
Step 1
Sensors collect measurements of electricity and natural gas usage for the PCF analysis.
Step 2
AWS IoT Greengrass collects, aggregates, and filters sensor readings. AWS IoT Greengrass Stream Manager exports the telemetry data to Amazon Kinesis Data Streams.
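As a sketch of how this export might look, the following hypothetical Greengrass component uses the Stream Manager SDK to create a locally buffered stream that exports to Kinesis Data Streams. The stream names and payload shape are illustrative assumptions, not part of this Guidance.

```python
# Hypothetical Greengrass component: buffer readings locally and let
# Stream Manager export them to a Kinesis data stream.
import json
import time

from stream_manager import (
    ExportDefinition,
    KinesisConfig,
    MessageStreamDefinition,
    StrategyOnFull,
    StreamManagerClient,
)

client = StreamManagerClient()

# Local stream with a Kinesis export definition; all names are assumed.
client.create_message_stream(
    MessageStreamDefinition(
        name="FactoryTelemetry",
        strategy_on_full=StrategyOnFull.OverwriteOldestData,
        export_definition=ExportDefinition(
            kinesis=[
                KinesisConfig(
                    identifier="TelemetryExport",
                    kinesis_stream_name="factory-telemetry",
                )
            ]
        ),
    )
)

# Append one reading; Stream Manager handles batching, retries, and export.
reading = {
    "equipment_id": "press-01",
    "measure": "electricity_kwh",
    "value": 3.2,
    "timestamp_ms": int(time.time() * 1000),
}
client.append_message("FactoryTelemetry", json.dumps(reading).encode("utf-8"))
```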
Step 3
Kinesis Data Streams allows for high-throughput ingestion of telemetry data. An AWS Lambda function consumes the stream and loads telemetry data into Amazon Timestream. Amazon Kinesis Data Firehose loads the telemetry data into Amazon Simple Storage Service (Amazon S3).
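A minimal sketch of that Lambda consumer, assuming the payload shape above and a Timestream database and table named purely for illustration:

```python
# Hypothetical Lambda consumer: decode Kinesis records and write them to
# Timestream as measures. Database, table, and field names are assumed.
import base64
import json

import boto3

timestream = boto3.client("timestream-write")

def handler(event, context):
    records = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        records.append({
            "Dimensions": [
                {"Name": "equipment_id", "Value": payload["equipment_id"]},
            ],
            "MeasureName": payload["measure"],       # e.g. electricity_kwh
            "MeasureValue": str(payload["value"]),
            "MeasureValueType": "DOUBLE",
            "Time": str(payload["timestamp_ms"]),    # epoch milliseconds
        })
    if records:
        timestream.write_records(
            DatabaseName="pcf_telemetry",            # assumed database name
            TableName="raw_telemetry",               # assumed table name
            Records=records,
        )
```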
Step 4
Sustainability SMEs collect static files for bill of materials, reference data, emission factors, and supplier information.
Step 5
These static files are ingested to Amazon S3 through a REST API endpoint exposed by Amazon API Gateway and backed by Lambda.
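One way the backing Lambda function might look, assuming an API Gateway proxy integration with a filename path parameter and an illustrative bucket name:

```python
# Hypothetical Lambda behind the API Gateway endpoint: accept a file upload
# in the request body and write it to the raw S3 bucket.
import base64
import json

import boto3

s3 = boto3.client("s3")
RAW_BUCKET = "pcf-raw-static-data"  # assumed bucket name

def handler(event, context):
    # With a proxy integration, binary payloads arrive base64 encoded.
    body = event["body"]
    if event.get("isBase64Encoded"):
        body = base64.b64decode(body)
    else:
        body = body.encode("utf-8")

    # e.g. POST /files/emission-factors.csv -> key "emission-factors.csv"
    key = event["pathParameters"]["filename"]
    s3.put_object(Bucket=RAW_BUCKET, Key=key, Body=body)
    return {"statusCode": 201, "body": json.dumps({"stored": key})}
```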
Storage and Processing
Step 1
Timestream stores raw telemetry data as hot storage.
Step 2
A Timestream scheduled query aggregates the telemetry data to the maximum granularity required by downstream consumers and stores it in a new table.
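The aggregation itself is plain Timestream SQL. The query below is illustrative only, assuming the telemetry table from the earlier sketches and a 15-minute grain; the Guidance only specifies aggregating to the granularity downstream consumers require.

```python
# Illustrative aggregation SQL that a Timestream scheduled query could run
# on each invocation; @scheduled_runtime is supplied by the scheduler.
AGGREGATION_QUERY = """
SELECT
    equipment_id,
    bin(time, 15m) AS binned_time,
    SUM(measure_value::double) AS electricity_kwh
FROM "pcf_telemetry"."raw_telemetry"
WHERE measure_name = 'electricity_kwh'
  AND time BETWEEN @scheduled_runtime - 15m AND @scheduled_runtime
GROUP BY equipment_id, bin(time, 15m)
"""
```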
Step 3
Raw static files are stored in an Amazon S3 bucket.
Step 4
Lambda functions and AWS Glue read the static files from the raw S3 bucket and transform them into structured data within the processed storage tier. This processing step includes a set of precomputations, such as mapping emission factors to emission sources and raw materials, and deriving the electricity emission factor for a specific factory based on its grid mix.
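As a sketch of the grid-mix precomputation, the following derives a site-specific electricity emission factor as a weighted average. The fuel shares and factors are placeholders, not real reference data.

```python
# Placeholder grid mix for one factory's region (shares sum to 1.0).
GRID_MIX = {"coal": 0.30, "natural_gas": 0.40, "wind": 0.20, "solar": 0.10}

# Illustrative emission factors per fuel, in kg CO2e per kWh generated.
FUEL_EMISSION_FACTORS = {"coal": 0.99, "natural_gas": 0.43,
                         "wind": 0.01, "solar": 0.04}

def grid_electricity_factor(grid_mix: dict, fuel_factors: dict) -> float:
    """Weighted-average emission factor (kg CO2e/kWh) for a factory's grid."""
    return sum(share * fuel_factors[fuel] for fuel, share in grid_mix.items())

# 0.30*0.99 + 0.40*0.43 + 0.20*0.01 + 0.10*0.04 = 0.475 kg CO2e/kWh
factor = grid_electricity_factor(GRID_MIX, FUEL_EMISSION_FACTORS)
```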
Step 5
The processed static data is loaded into Amazon Relational Database Service (Amazon RDS) and the data lake powered by Amazon S3. This provides long-term storage and fast query access for downstream calculations.
Step 6
A Lambda function reads emission factors and bill of materials data from Amazon RDS and electricity consumption data from Timestream. It performs carbon footprint calculations and stores the results and audit logs in the curated data tier.
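A minimal sketch of such a calculation, assuming batch-level metered electricity, a per-unit bill of materials, and the illustrative data shapes used above. The audit dictionary mirrors the audit trail the Guidance stores alongside each result.

```python
from datetime import datetime, timezone

def calculate_pcf(electricity_kwh, grid_factor_kg_per_kwh,
                  bom_per_unit, units_produced):
    # Electricity is metered per production batch; allocate it evenly per unit.
    energy_per_unit = (electricity_kwh * grid_factor_kg_per_kwh) / units_produced
    # Material emissions come straight from the per-unit bill of materials.
    material_per_unit = sum(
        item["quantity_kg"] * item["factor_kg_co2e_per_kg"]
        for item in bom_per_unit
    )
    return {
        "pcf_kg_co2e_per_unit": energy_per_unit + material_per_unit,
        # Audit trail: keep the inputs with the result so every PCF value
        # can be traced back to its source data.
        "audit": {
            "calculated_at": datetime.now(timezone.utc).isoformat(),
            "electricity_kwh": electricity_kwh,
            "grid_factor_kg_per_kwh": grid_factor_kg_per_kwh,
            "bom_per_unit": bom_per_unit,
            "units_produced": units_produced,
        },
    }

# 1200 kWh * 0.475 kg/kWh / 500 units = 1.14 kg/unit from energy, plus
# 2.0 kg steel * 1.85 kg CO2e/kg = 3.7 kg/unit, giving 4.84 kg CO2e per unit.
result = calculate_pcf(
    electricity_kwh=1200.0,
    grid_factor_kg_per_kwh=0.475,
    bom_per_unit=[{"material": "steel", "quantity_kg": 2.0,
                   "factor_kg_co2e_per_kg": 1.85}],
    units_produced=500,
)
```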
Step 7
The processed and curated storage tiers store the PCFs in Amazon RDS and Amazon S3 for flexible access and long-term storage. Amazon CloudWatch stores audit logs.
Consumption and Dashboard
Step 1
Timestream, Amazon RDS, and Amazon S3 store data available for consumption.
Step 2
Amazon QuickSight builds interactive dashboards to help executives, sustainability SMEs, and operations personnel analyze the PCF data, including on-the-fly PCF calculations. It can also use PCF values precomputed ahead of time for known queries.
Step 3
Based on the consumption patterns and type of business insights, executives, sustainability SMEs, and operations personnel may need a custom web application. This web application can pull precomputed PCF values from Amazon RDS or perform ad-hoc calculations as the user requests them.
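A sketch of that read path, assuming a PostgreSQL engine for Amazon RDS (the Guidance does not mandate one) and illustrative table and column names:

```python
import os

import psycopg2  # assumes a PostgreSQL-flavored RDS instance

def get_latest_pcf(product_id: str):
    """Return the most recent precomputed PCF for a product, if any."""
    conn = psycopg2.connect(
        host=os.environ["PCF_DB_HOST"],          # assumed RDS endpoint
        dbname="pcf",
        user="app",
        password=os.environ["PCF_DB_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT pcf_kg_co2e_per_unit FROM product_pcf "
                "WHERE product_id = %s ORDER BY calculated_at DESC LIMIT 1",
                (product_id,),
            )
            row = cur.fetchone()
            return row[0] if row else None
    finally:
        conn.close()
```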
Step 4
Amazon Route 53, a Domain Name System (DNS) web service, enables front-end clients to resolve the website hostname to the AWS content delivery network, Amazon CloudFront.
Step 5
CloudFront routes the web requests to origin servers and caches the static content and assets served from Amazon S3 and origin servers. It secures the application traffic using AWS WAF, a web application firewall that helps to protect the application against common exploits and bots.
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
CloudWatch provides centralized logging with metrics and alarms across all deployed services. These metrics and alarms can raise alerts for operational anomalies.
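For example, a hypothetical alarm on errors from the PCF calculation function might look like the following; the function name and SNS topic are assumptions:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alert operators when the (assumed) PCF calculation Lambda reports errors.
cloudwatch.put_metric_alarm(
    AlarmName="pcf-calculation-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "pcf-calculator"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # assumed
)
```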
Security
Resources are protected using AWS Identity and Access Management (IAM) policies and principles. Use least privilege access and role-based access to grant permissions to operators. AWS Key Management Service (AWS KMS) encrypts data at rest. HTTPS endpoints with Transport Layer Security (TLS) provide encryption for in-transit data, including service endpoints and API Gateway endpoints.
Reliability
This Guidance uses serverless services whenever possible, such as API Gateway, Lambda, and Timestream, enabling auto-scaling to respond to fluctuating demands. This Guidance also uses AWS services such as Amazon S3, Amazon RDS, and Timestream to provide built-in functionality for data backup and recovery.
Performance Efficiency
This Guidance uses serverless managed services, such as Lambda, that automatically scale in response to changing demand, reducing resource overhead. Additionally, customers can apply different analytics tools to their data stored in Amazon S3, depending on their needs.
Cost Optimization
This Guidance relies on serverless and fully managed services, such as Lambda, Amazon S3, and Timestream, which automatically scale according to workload demand. As a result, you only pay for the resources you use.
Sustainability
Amazon S3 lifecycle policies can automatically move data to more energy-efficient storage classes, enforce deletion timelines, and minimize overall storage requirements. Timestream allows for data to automatically be moved from the memory tier to the magnetic tier to minimize cost. This Guidance also uses managed, serverless technologies such as AWS Glue, Lambda, and Timestream to help ensure hardware is minimally provisioned to meet demand.
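As an illustration of both practices, the sketch below sets an S3 lifecycle rule and Timestream retention periods; the bucket, table, and retention values are assumptions, not recommendations.

```python
import boto3

# Lifecycle rule: tier raw files to a colder storage class, then expire them.
s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="pcf-raw-static-data",  # assumed bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-then-expire-raw-files",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]
    },
)

# Timestream retention: move data from the memory tier to the magnetic tier.
timestream = boto3.client("timestream-write")
timestream.update_table(
    DatabaseName="pcf_telemetry",   # assumed names from earlier sketches
    TableName="raw_telemetry",
    RetentionProperties={
        "MemoryStoreRetentionPeriodInHours": 24,    # hot tier for recent queries
        "MagneticStoreRetentionPeriodInDays": 365,  # cheaper long-term tier
    },
)
```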
Implementation Resources
A detailed guide is provided for you to experiment with and use within your AWS account. It walks through each stage of the Guidance, including deployment, usage, and cleanup.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.