AWS for SAP

Optimization Techniques for Running SAP workloads on AWS: A Cost-Savings Guide

Introduction

In the current economic climate, cost optimization is a key priority for many businesses. As companies modernize their SAP implementations, optimizing the cost of running SAP workloads can free up resources that can instead be invested in innovation. This blog shares insights and strategies for optimizing SAP workload costs and operations, leveraging best practices from the SAP Lens for the AWS Well-Architected Framework.

Before you start

Before starting, set clear goals for your desired outcomes, identify your overall costs, and review the SAP on AWS cost estimation guide.

Run AWS Cost Explorer in all accounts where SAP workloads are running. Identify and note the top-consumed services, which for SAP are typically Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), Amazon EBS snapshots, and Amazon Elastic File System (Amazon EFS). This blog focuses on these services and provides best practices and optimization guidance for reducing costs and running SAP workloads optimally.
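
If you prefer to script this analysis, the Cost Explorer API exposes the same data. Below is a minimal sketch using the AWS SDK for Python (boto3); the date range is an illustrative assumption and should be adjusted to your billing period.

```python
import boto3

# Group one month's unblended cost by service and print the top consumers.
ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer is served from us-east-1

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-05-01", "End": "2023-06-01"},  # adjust to your period
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

groups = response["ResultsByTime"][0]["Groups"]
for group in sorted(
    groups,
    key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]),
    reverse=True,
)[:10]:
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f'{group["Keys"][0]}: ${amount:,.2f}')
```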

Amazon EC2

Let’s start with Amazon EC2. When selecting Amazon EC2 instances for your SAP workloads, always run SAP NetWeaver and SAP HANA instances on certified Amazon EC2 instance types.

#1 Right Sizing
Right sizing is the process of matching instance types and sizes to your workload performance and capacity requirements at the lowest possible cost.

Amazon EC2 instance right-sizing should be based on actual usage for the past three months, including business peaks and the quarter-close period. To do this, leverage Amazon CloudWatch and SAP EarlyWatch Alert reports for usage analysis.

Activate the enhanced infrastructure metrics feature of AWS Compute Optimizer to use 3 months of Amazon CloudWatch data for generating recommendations. (By default, AWS Compute Optimizer stores and uses up to 14 days of your Amazon CloudWatch metrics history to generate your recommendations.) Compare the recommendations with SAP EarlyWatch Alert reports to increase accuracy. Based on the analysis, replace the running instance with the right SAP-certified NetWeaver or HANA instance.

Note: While Compute Optimizer is generally reliable, it may occasionally suggest suboptimal recommendations for SAP workloads, especially for idle SAP application servers. This can occur because it doesn’t account for reserved memory used by work processes. Therefore, it’s advisable to verify Compute Optimizer’s recommendations against SAP-certified instances for optimal performance.
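
Compute Optimizer recommendations can also be pulled programmatically for side-by-side review with EarlyWatch Alert data. The sketch below uses boto3 with a placeholder instance ARN; cross-check every suggested type against the SAP-certified instance list before acting on it.

```python
import boto3

# Fetch right-sizing recommendations for a specific EC2 instance.
co = boto3.client("compute-optimizer", region_name="us-east-1")

response = co.get_ec2_instance_recommendations(
    instanceArns=["arn:aws:ec2:us-east-1:111122223333:instance/i-0abcd1234example"]
)

for rec in response["instanceRecommendations"]:
    print(rec["currentInstanceType"], "-", rec["finding"])
    for option in rec["recommendationOptions"]:
        # Only adopt candidates that are SAP-certified for NetWeaver/HANA.
        print("  candidate:", option["instanceType"])
```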

#2 Leverage Savings Plans
Savings Plans offer a flexible pricing model that can help you reduce your bill by up to 72% compared to On-Demand prices, in exchange for a one- or three-year hourly spend commitment.

Check if any Amazon EC2 instances are running On-Demand and add them to Savings Plans if the workload and instance usage are stable. (For example, a u-3tb1 Linux instance is more than 60% cheaper with a three-year upfront Savings Plan compared to On-Demand in N. Virginia.) AWS Trusted Advisor and AWS Cost Explorer can provide your current Savings Plans usage and recommendations.

Review this on a regular basis (monthly or so) to avoid On-Demand costs for predictable workloads.

Savings Plans comparison to On-Demand
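
Savings Plans purchase recommendations are also available through the Cost Explorer API. In this boto3 sketch, the plan type, term, payment option, and lookback window are illustrative choices, not prescriptions.

```python
import boto3

# Ask Cost Explorer for a Compute Savings Plans recommendation based on
# the last 60 days of On-Demand usage.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="THREE_YEARS",
    PaymentOption="ALL_UPFRONT",
    LookbackPeriodInDays="SIXTY_DAYS",
)

summary = response["SavingsPlansPurchaseRecommendation"].get(
    "SavingsPlansPurchaseRecommendationSummary", {}
)
print("Estimated monthly savings:", summary.get("EstimatedMonthlySavingsAmount"))
print("Estimated savings percentage:", summary.get("EstimatedSavingsPercentage"))
```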

#3 Amazon EC2 instance family standardization to maximize discounts
Instance family standardization maximizes utilization and minimizes the level of effort associated with management of reservations.

There are many Amazon EC2 instance families and types certified for SAP. To maximize the usage of Savings Plans and AWS Marketplace subscriptions, standardize on instance families and instance types. For example, use Amazon EC2 C6i or the latest generation for small instances (ASCS, SCS, or SAP Web Dispatcher), Amazon EC2 M6i or the latest generation for SAP application servers, and Amazon EC2 R6i or the latest generation for HANA DB instances under 1 TB.

#4 Automate Start & Stop of the Non-Production Amazon EC2 Instances
Development, training, sandbox and project-related instances could have low uptime requirements (a few hours a day, or only during certain days) or a short-lived role in the project cycle, and buying a Savings Plan might not be cost-effective for these instances. In this case, you can leverage AWS Systems Manager or SAP Landscape Management to schedule stopping and starting of the instances based on the uptime requirements for cost savings.

By automating the start and stop of non-critical SAP instances, you only run them when you need them and avoid paying for idle capacity.
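
If you are not using SAP Landscape Management or AWS Systems Manager for scheduling, a minimal scheduler can be sketched with boto3 as below. The tag key sap-uptime-schedule and the EventBridge-driven action field are assumptions for illustration; in practice, stop the SAP application and database gracefully (for example, via SSM Run Command) before stopping the instance.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def handler(event, context):
    """Lambda handler: start or stop tagged non-production SAP instances.

    Trigger with two EventBridge schedules, one sending {"action": "start"}
    in the morning and one sending {"action": "stop"} in the evening.
    """
    action = event.get("action", "stop")
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:sap-uptime-schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name",
             "Values": ["running"] if action == "stop" else ["stopped"]},
        ]
    )
    instance_ids = [
        i["InstanceId"]
        for r in response["Reservations"]
        for i in r["Instances"]
    ]
    if instance_ids:
        if action == "stop":
            ec2.stop_instances(InstanceIds=instance_ids)
        else:
            ec2.start_instances(InstanceIds=instance_ids)
    return {"action": action, "instances": instance_ids}
```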

#5 Migration to latest generation of Amazon EC2 Instances
By migrating to newer generation instance types, you can improve the price-performance of your SAP workloads, because newer generation instances deliver higher SAPS at similar or lower prices. As a result, you may be able to use fewer or smaller instances to achieve the same level of performance, or handle workload growth by changing to the newest generation instance type of the same size. For example, migrating from the M5 to the M6i family provides up to 15% higher compute performance, up to 20% higher memory bandwidth, and 25-100% higher networking bandwidth, depending on the instance type, OS, and Region.
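
The resize itself is scriptable; this sketch uses a placeholder instance ID and target type, and assumes the SAP system has already been stopped cleanly.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0abcd1234example"  # placeholder

# The instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m6i.2xlarge"},  # target newer-generation type
)
ec2.start_instances(InstanceIds=[instance_id])
```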

#6 Cancel unused On-Demand Capacity Reservations (ODCR)
On-Demand Capacity Reservations enable you to reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration.

You may have created On-Demand Capacity Reservations (ODCRs) for specific instance types that are covered by a Savings Plan, to ensure that Amazon EC2 capacity is available for you in case of an instance restart. After right-sizing, standardizing on Amazon EC2 instance families, or migrating to different instance types, the existing ODCRs need to be cancelled and new ones created for the right-sized instances. For example, when changing an instance from u-3tb1.56xlarge to r6i.32xlarge, you need to cancel the previous ODCR and create a new one for r6i.32xlarge. Implement a process to check for unused ODCRs periodically.
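
That periodic check could be as simple as the following boto3 sketch; the cancellation call is commented out so each reservation is reviewed before removal.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An ODCR whose available count equals its total count has no instances
# running in it and is pure cost.
reservations = ec2.describe_capacity_reservations(
    Filters=[{"Name": "state", "Values": ["active"]}]
)["CapacityReservations"]

for cr in reservations:
    if cr["AvailableInstanceCount"] == cr["TotalInstanceCount"]:
        print("Unused ODCR:", cr["CapacityReservationId"], cr["InstanceType"])
        # After review:
        # ec2.cancel_capacity_reservation(
        #     CapacityReservationId=cr["CapacityReservationId"]
        # )
```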

#7 Optimize DR architecture with AWS Elastic Disaster Recovery based on Recovery Time Objective (RTO) and Recovery Point Objective (RPO)
AWS Elastic Disaster Recovery (AWS DRS) enables organizations to quickly and easily implement a new disaster recovery plan on AWS, or migrate an existing one. By replicating the source systems to replication servers in a staging area, the cost of disaster recovery is optimized through affordable storage, shared servers, and minimal compute resources to maintain ongoing replication.

Leverage AWS DRS (as applicable for SAP application servers and databases) rather than an active-active setup for disaster recovery (DR) if the required RTO and RPO can be achieved. If your RTO and RPO requirements differ, consider evaluating other design patterns for managing your SAP on AWS system resiliency.

#8 Leverage cloud flexibility to build temporary systems as needed
AWS Launch Wizard offers a guided way of sizing, configuring, and deploying AWS resources for third-party applications, such as HANA-based SAP systems.

Build temporary systems (based on SAP HANA) as needed using AWS Launch Wizard from an Amazon Machine Image (AMI) built with your hardening process for supported OS versions, rather than leaving them running or merely shutting them down, because storage costs are incurred even when instances are shut down.

AWS Service Catalog products can also be created with AWS Launch Wizard, which saves time by allowing teams to directly provision pre-defined SAP architectures built in AWS Launch Wizard.

Amazon EBS

To meet the SAP HANA Tailored Data Center Integration (TDI) requirements, always use the SAP HANA on AWS certified storage configuration as the minimum configuration for your EC2 instances running SAP HANA. This achieves optimal performance and meets SAP's storage KPIs.

EBS volume types for SAP

#1 Use latest Amazon EBS volume generation type
Migrate Amazon EBS volumes from GP2 to GP3 volume type for better price-performance.
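
As a sketch, the migration can be done in place with boto3. modify_volume is an online operation (no detach or downtime), but confirm the IOPS and throughput requirements of database volumes first, since gp3 provisions these independently of volume size.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Convert every gp2 volume in the region to gp3 in place.
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(
    Filters=[{"Name": "volume-type", "Values": ["gp2"]}]
):
    for volume in page["Volumes"]:
        print("Converting", volume["VolumeId"], "to gp3")
        ec2.modify_volume(VolumeId=volume["VolumeId"], VolumeType="gp3")
```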

#2 Use Amazon EBS GP3 volume type over IO2
Check the IOPS usage of Amazon EBS IO2 volumes from Amazon CloudWatch. If the IOPS utilization is under 16,000, consider migrating the volumes from IO2 to GP3. You can also consider striping GP3 volumes to bypass the 16,000 IOPS limitation and achieve higher IOPS and throughput.

Note: IO2 should be used for mission-critical workloads that require sub-millisecond latency with higher IOPS, throughput, and durability (99.999%).
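
To gauge whether an IO2 volume actually exceeds 16,000 IOPS, sample its CloudWatch metrics. This sketch reads VolumeReadOps only (add VolumeWriteOps the same way and sum them) over an illustrative 14-day window with a placeholder volume ID.

```python
from datetime import datetime, timedelta

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
volume_id = "vol-0abcd1234example"  # placeholder

stats = cw.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeReadOps",
    Dimensions=[{"Name": "VolumeId", "Value": volume_id}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=300,  # 5-minute windows
    Statistics=["Sum"],
)

# VolumeReadOps is a count per period; divide by the period length for IOPS.
peak_read_iops = max((p["Sum"] / 300 for p in stats["Datapoints"]), default=0)
print(f"Peak read IOPS (5-minute average): {peak_read_iops:,.0f}")
```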

#3 Delete unattached Amazon EBS volumes from terminated Amazon EC2 instances
Check for unattached Amazon EBS volumes in the AWS console and delete them if they are not needed for future use. Unattached volumes can be left behind by terminated Amazon EC2 instances that were launched without the "Delete on Termination" flag.
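
A quick audit can be scripted as below; the delete call is commented out so each volume is verified (or snapshotted) before removal.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "available" status means the volume is not attached to any instance.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]

for volume in volumes:
    print(volume["VolumeId"], volume["Size"], "GiB", volume.get("Tags", []))
    # After verifying the contents are not needed:
    # ec2.delete_volume(VolumeId=volume["VolumeId"])
```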

#4 Back up databases using AWS Backup or directly to Amazon S3 via the AWS Backint agent
Configure database backups and restores directly to/from Amazon S3 (for anyDB and SAP HANA with HA):

  • SAP HANA on EC2: Use AWS Backup and the AWS Backint Agent for SAP HANA
  • Oracle on EC2: Use the Oracle Secure Backup (OSB) module
  • SAP ASE Database: Use AWS File Gateway to asynchronously transfer data

This avoids interim storage costs for holding the backups (either on Amazon EFS or Amazon EBS).

#5 Optimize EBS Snapshots for Non-Database volumes
Use AWS Backup, a centralized, managed service for backup, restore, and policy-based retention of different AWS resources, including Amazon EBS volumes and Amazon EC2 instances.

Database Servers:

  • Enable snapshots for root and binary volumes
  • Disable snapshots for data and log volumes provided the database backups (using database tools) are in place
  • Ad-hoc snapshots of data and log volumes can be used with AWS Backup to expedite system refreshes of large databases

SAP Application Servers:

  • Enable snapshots for root and binary volumes (/usr/sap)
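
A tag-based AWS Backup plan along these lines can implement the split above; the plan name, schedule, retention, tag key, and IAM role are illustrative assumptions, not fixed values.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Daily snapshots with 35-day retention for volumes opted in via a tag.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "sap-root-and-binary-volumes",
        "Rules": [
            {
                "RuleName": "daily-35-day-retention",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 2 * * ? *)",  # 02:00 UTC daily
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)

# Only volumes tagged for snapshots are included, so data and log volumes
# covered by database-native backups stay excluded.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "sap-os-and-binary-volumes",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "sap-snapshot",
                "ConditionValue": "enabled",
            }
        ],
    },
)
```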

Amazon S3

#1 Refine database backup schedules and retention periods (if backups are taken to Amazon S3 via the AWS Backint agent)
Establish a backup schedule, combining both full and incremental backup types, for each environment. This should include production, staging, pre-production, quality, development, sandbox, and project landscapes, depending on what you have deployed. The backup schedule should be based on a retention period that aligns with the recovery needs of each environment, considering the timeliness and recency of the data required for recovery.

For example, a monthly full backup and weekly incremental backups with 30-day retention should be sufficient for a sandbox environment, and these systems can be rebuilt from production/quality system backups if needed.

#2 Enable lifecycle policies in Amazon S3
Leverage Amazon S3 analytics Storage Class Analysis to analyze storage access patterns and help you decide when to transition the right data to the right storage class with a retention period (identified above).

Based on the recovery needs and retention period for each environment, establish lifecycle policies. For example, production backups could be stored in Amazon S3 Standard for 5-7 days and then transitioned to a lower tier, such as Amazon S3 Glacier or Amazon S3 Glacier Deep Archive with 6 months retention, if the likelihood of restoring a backup older than 5 days is very low.
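
Such a policy can be applied with boto3; the bucket name, prefix, and day counts below are illustrative and should follow your own retention analysis.

```python
import boto3

s3 = boto3.client("s3")

# Keep production backups in S3 Standard for 7 days, then move them to
# Glacier Deep Archive and expire them after roughly 6 months.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-sap-prod-backups",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "prod-backup-tiering",
                "Filter": {"Prefix": "hana/prod/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
                "Expiration": {"Days": 180},
            }
        ]
    },
)
```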

Amazon S3 storage types

#3 Disable versioning of backups
Disable versioning for backups, and enable Multi-Factor Authentication (MFA) delete and/or vault lock for security.

#4 Lower-tier storage in the DR Region
If Amazon S3 Cross-Region Replication (CRR) is enabled for business continuity and disaster recovery (BC/DR), use lower-tier storage in the disaster recovery (DR) region, such as Amazon S3 Glacier or Amazon S3 Glacier Deep Archive, depending on the recovery time objective (RTO) and recovery point objective (RPO).
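
A replication rule can land replicas directly in a lower-cost class in the DR Region, as in this sketch. The bucket names and IAM role are placeholders; note that CRR itself requires versioning to be enabled on both buckets.

```python
import boto3

s3 = boto3.client("s3")

# Replicate backups cross-Region, storing replicas in Glacier Deep Archive.
s3.put_bucket_replication(
    Bucket="example-sap-prod-backups",  # source bucket (placeholder)
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [
            {
                "ID": "backups-to-dr-region",
                "Priority": 1,
                "Filter": {"Prefix": "hana/prod/"},
                "Status": "Enabled",
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::example-sap-dr-backups",
                    "StorageClass": "DEEP_ARCHIVE",
                },
            }
        ],
    },
)
```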

#5 Leverage Amazon S3 instead of Amazon EBS or Amazon EFS for storing interface or archive files
Leverage Amazon S3 for interface or archive files to reduce costs by implementing lifecycle policies and versioning. The AWS SDK for SAP ABAP can be used to access Amazon S3 files from an SAP ABAP program.

#6 AWS PrivateLink for Amazon S3
AWS PrivateLink provides private connectivity between virtual private clouds (VPCs), supported AWS services, and your on-premises networks without exposing your traffic to the public internet. This is achieved by leveraging Amazon VPC endpoints.

Provision an interface endpoint to establish connectivity between Amazon EC2 and Amazon S3. This uses the AWS backbone network for data transfer and avoids internet egress charges. Interface endpoints can be secured using resource policies on the endpoint itself and on the resource that the endpoint provides access to. With a gateway endpoint, you can access Amazon S3 from your VPC without requiring an internet gateway or NAT device for your VPC, at no additional cost.
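
Creating the gateway endpoint is a one-time call per VPC; the VPC and route table IDs below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: keeps EC2-to-S3 traffic (e.g., Backint backups)
# on the AWS network at no additional cost.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abcd1234example",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0abcd1234example"],
)
```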

#7 Clean up Amazon S3 regularly and after major projects
SAP installations and upgrades require a large amount of Amazon S3 storage to store ad-hoc backups of the database, kernel, profiles, SUM directories, and SAP software files. Enable a lifecycle policy with a retention period, or include a task in the project plan to clean up the Amazon S3 buckets after the project is completed.

Amazon EFS

Amazon EFS can be used for SAP "/usr/sap/trans", "/sapmnt", or any other file systems that need to be shared between servers.

Amazon EFS storage types

*https://docs.aws.amazon.com/efs/latest/ug/lifecycle-management-efs.html

#1 Check for unattached Amazon EFS
Check if there are any unattached Amazon EFS file systems and delete them after verifying their contents.
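
One way to find candidates is to look for file systems with zero mount targets, as in the sketch below; the delete call is commented out pending verification of contents.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# A file system with no mount targets cannot currently be accessed by
# any instance and is a candidate for cleanup.
for fs in efs.describe_file_systems()["FileSystems"]:
    if fs["NumberOfMountTargets"] == 0:
        print("Unattached:", fs["FileSystemId"], fs.get("Name", ""),
              fs["SizeInBytes"]["Value"], "bytes")
        # After verifying the contents are not needed:
        # efs.delete_file_system(FileSystemId=fs["FileSystemId"])
```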

#2 Enable lifecycle policy
Use Amazon EFS lifecycle management to automatically manage cost-effective file storage for SAP file systems, such as /usr/sap/trans, archives accessed by SARA, SOX-related files, SAP software, SAP installation and upgrade tools, and backups.

Amazon EFS Intelligent Tiering uses lifecycle management to monitor the access patterns of your workload and is designed to automatically transition files to and from the file system’s Infrequent Access (IA) storage class.
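
A lifecycle policy along these lines enables those transitions; the file system ID and the 30-day threshold are illustrative.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Move files untouched for 30 days to Infrequent Access, and bring them
# back to Standard automatically on their first access.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0abcd1234example",  # placeholder
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_30_DAYS"},
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```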

#3 Clean up Amazon EFS regularly and after major projects
SAP installations and upgrades require a large amount of Amazon EFS storage to store SAP software files and SAP tools, such as Software Provisioning Manager and Software Update Manager. Clean up these file systems once projects are completed.

Conclusion

Optimizing the cost of SAP workloads by following SAP and AWS best practices is essential, and it can be challenging for customers.

Getting familiar with AWS services like AWS Trusted Advisor and AWS Cost Explorer to analyze and monitor SAP workload costs, the lifecycle policies available in Amazon S3 and Amazon EFS, and the certified instance and storage types for SAP are key areas to consider in your cost optimization efforts. Utilize AWS cost allocation tags to track AWS costs at a granular level and pinpoint those contributing to higher expenses.

Establish processes and governance to avoid exceeding budgets.

What’s Next

Run a self-service SAP Lens review in the AWS Well-Architected Tool (AWS WA Tool) every quarter (or as needed) for the cost optimization pillar. Engage your AWS Technical Account Manager (TAM) or account team for help with running the AWS WA Tool.

Implement cloud financial management practices to continuously monitor and improve cost optimization.

Contact AWS Support for any questions or clarifications, and engage your AWS Technical Account Manager or an AWS Solutions Architect during the analysis.