AWS Database Blog

Implementing a fall forward strategy from Amazon RDS for SQL Server Transparent Data Encryption (TDE)- and non-TDE-enabled databases to self-managed SQL Server

Customers running large, mission-critical SQL Server databases are seeking ways to migrate to Amazon Relational Database Service (Amazon RDS) for SQL Server while retaining the same database engine (a homogeneous migration) with minimal downtime. Several methods exist to migrate self-managed SQL Server to Amazon RDS for SQL Server, including native backup and restore, as well as AWS Database Migration Service (AWS DMS). However, a crucial aspect of any mission-critical database migration is implementing a rollback strategy.

Customers have asked for a solution that implements a rollback strategy for SQL Server database migrations from self-managed environments to Amazon RDS for SQL Server while meeting their recovery point objective (RPO) and recovery time objective (RTO) requirements. In this post, we discuss how to set up a rollback strategy using a fall forward approach from Amazon RDS for SQL Server transparent data encryption (TDE)- and non-TDE-enabled databases to self-managed SQL Server, using SQL Server's native backup and restore option.

Rollback strategy with a fall forward approach

A comprehensive rollback strategy is crucial for data migration, necessitating meticulous planning and contingency measures to mitigate risks and ensure a smooth transition. Despite careful planning, migrations can encounter unexpected issues, such as database performance degradation or application failure. A rollback strategy that allows you to swiftly revert to the previous state of your database is essential. The fall forward approach is a rollback strategy that replicates data from the migrated database to a third database environment without impacting the original source environment. By incorporating database rollback into your migration strategy, you ensure business continuity, mitigate risks, and navigate unexpected challenges with greater confidence. It serves as a safety net that empowers successful migrations and safeguards your valuable data.


Solution overview

The following diagram illustrates the architecture of a fall forward approach for RDS for SQL Server using native backup and restore.

Fall forward arch diagram

A – Self-managed SQL Server, which is the source of the migration. It could be running on an on-premises server or an Amazon Elastic Compute Cloud (Amazon EC2) instance.

B – Amazon RDS for SQL Server, which is the target of the migration.

A’ – Self-managed SQL Server, which is the fall forward target in case of migration rollback.

Migration from self-managed SQL Server to Amazon RDS for SQL Server

You can migrate a self-managed SQL Server environment to Amazon RDS for SQL Server using different methods, depending on the application's RTO and RPO requirements. You can use either AWS DMS or SQL Server's native backup and restore to migrate both TDE- and non-TDE-enabled databases to Amazon RDS for SQL Server. The solution presented in this post assumes that you have already migrated your TDE- and non-TDE-enabled databases using one of these approaches.

Fall forward from Amazon RDS for SQL Server to self-managed SQL Server

You can implement a fall forward approach for both TDE- and non-TDE-enabled databases with the following high-level steps:

  1. Create Amazon Simple Storage Service (Amazon S3) buckets.
  2. Create an AWS Identity and Access Management (IAM) role to access the S3 buckets and change the bucket policy to allow taking full and transaction log backups.
  3. Create a symmetric AWS Key Management Service (AWS KMS) key.
  4. Back up the TDE certificate in Amazon RDS for SQL Server and restore it on the self-managed SQL Server for a TDE-enabled database.
  5. Take a full backup of the database in Amazon RDS for SQL Server and restore it on the self-managed SQL Server for both TDE- and non-TDE-enabled databases.
  6. Copy the transaction logs from Amazon RDS for SQL Server, decrypt them using the provided Python script, and apply them to the self-managed SQL Server for both TDE- and non-TDE-enabled databases to keep it in sync.

Steps for setting up the fall forward strategy for Amazon RDS for SQL Server

The solution uses an Amazon EC2 instance with the SQL Server database engine installed to emulate the self-managed environment as the fall forward target, with Amazon RDS for SQL Server as the source.

Prerequisites

The following prerequisites are needed before you begin:

  • An existing Amazon RDS for SQL Server instance (source) with TDE enabled and the native backup and restore option group configured. Refer to Creating an Amazon RDS DB instance for how to provision an RDS for SQL Server instance.
  • An existing Amazon EC2 instance with SQL Server installed (target) with the same version and edition as that of Amazon RDS for SQL Server that is used as a fall forward server.
  • Both a TDE-encrypted database (tde-demo) and a non-encrypted database (no-tde-demo) that have already been migrated from the Amazon EC2 SQL Server instance to Amazon RDS for SQL Server by following the instructions in Migrate TDE-enabled SQL Server databases to Amazon RDS for SQL Server. You will be setting up the fall forward strategy for both databases from Amazon RDS for SQL Server to the EC2 SQL Server instance.
  • The AWS Command Line Interface (AWS CLI) installed and configured in the EC2 instance.
  • Python 3.12 installed in the EC2 instance to decrypt the transaction logs. Install the modules requests, boto3, and pycryptodomex after installing Python:
    • pip install requests boto3 pycryptodomex
  • Install SQL Server Management Studio (SSMS) in the EC2 instance and set up access to the Amazon RDS for SQL Server instance.
  • Copy the following Python script into a file, name it decrypt_file.py, and save it to any directory in the EC2 instance. For this example, copy it into c:\temp.
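# decrypt_file.py: downloads the RDS-generated TDE certificate password or the
# copied transaction log files from Amazon S3 and decrypts them with the data
# key stored in each object's S3 metadata.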
import base64
from Cryptodome.Cipher import AES
import sys
import boto3
import json
import os 
import requests

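# Strip the block cipher padding from the final decrypted chunk.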
def unpad(data):
    pad_size = ord(data[-1:])
    return data[:-pad_size]

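# Decrypt an AES-CBC encrypted file in chunks and write the result to <input>.out.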
def decrypt(key, iv_p, input_filename):
    block_size = AES.block_size
    chunk_size = block_size * 1024
    iv = base64.b64decode(iv_p)

    output_filename = f"{input_filename}.out"

    cipher = AES.new(key, AES.MODE_CBC, iv)

    with open(input_filename, 'rb') as input_file, open(output_filename, 'wb') as output_file:
        while True:
            encrypted_chunk = input_file.read(chunk_size)
            if len(encrypted_chunk) == 0:
                break
            decrypted_chunk = cipher.decrypt(encrypted_chunk)
            output_file.write(decrypted_chunk if len(input_file.peek(chunk_size)) else unpad(decrypted_chunk))

    print(f"Finished decrypting {output_filename}")

def split_s3_path(s3_path):
    path_parts=s3_path.replace("s3://","").split("/")
    bucket=path_parts.pop(0)
    key="/".join(path_parts)
    return bucket, key

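# Read the object's S3 encryption metadata and call KMS to recover the plaintext data key and IV.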
def get_info(bucket_name, key):

    file_name = key.split("/")[-1]
    s3 = boto3.client('s3')
    s3response = s3.head_object(Bucket=bucket_name, Key=key)
    s3metadata = s3response['Metadata']
    x_amz_key = base64.b64decode(s3metadata['x-amz-key'])
    db_resource_id  = s3metadata['dbresourceid']
    kms_key = json.loads(s3metadata['x-amz-matdesc'])['kms_cmk_id']
    x_amz_iv = s3metadata['x-amz-iv']
    kms = boto3.client('kms', region_name)
    kmsoutput = kms.decrypt( 
                            CiphertextBlob= x_amz_key,
                            KeyId=kms_key, 
                            EncryptionContext={"aws:rds:db-id": db_resource_id}
                            )
    plaintext = kmsoutput['Plaintext']
    return plaintext, x_amz_iv, file_name

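# Download every transaction log file at or beyond start_seq_id and decrypt each one.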
def download_tlog(object_dir, start_seq_id):
    bucket, key = split_s3_path(object_dir)
    # Listing all the files in the S3 directory
    s3_client = boto3.client('s3')
    objects = s3_client.list_objects_v2(Bucket=bucket, Prefix=key)

    for obj in objects['Contents']:
         fname = os.path.basename(obj['Key'])
         if int(os.path.basename(obj['Key']).split(".")[-2]) >= start_seq_id :
             s3_client.download_file(bucket,obj['Key'], fname)
             plaintext, iv, input_filename = get_info(bucket, obj['Key'])
             decrypt(plaintext, iv, input_filename)
             
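# Look up the current Region from the EC2 instance metadata service.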
def get_region():
    r = requests.get("http://169.254.169.254/latest/dynamic/instance-identity/document",timeout=10)
    response_json = r.json()
    return response_json.get('region')

def usage():
    print("Script to download and decrypt the transaction log files from S3 bucket")
    print("Script to decrypt password for certificate")
    print("For downloading the tlogs, use the following command options")
    print("Usage: python decrypt_file.py tlog <S3 URI of tlog directory> <start-seq-id>")
    print("For decrypting the password for certificate, use the following command options")
    print("Usage: python decrypt_file.py certificate <certificate_s3uri> <customer_managed_kms_key_arn>")
    sys.exit(1)


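# Recover the TDE private key password stored in the .pvk object's S3 metadata.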
def get_password(certificate_uri, kms_key):

    s3 = boto3.client('s3')
    bucket_name, key = split_s3_path(certificate_uri) 
    s3response = s3.head_object(Bucket=bucket_name, Key=key)
    s3metadata = s3response['Metadata']
    rds_tde_pwd = base64.b64decode(s3metadata['rds-tde-pwd'])
    kms = boto3.client('kms', region_name)
    kmsoutput = kms.decrypt( 
                            CiphertextBlob= rds_tde_pwd,
                            KeyId=kms_key
                            )
    plaintext = base64.b64encode(kmsoutput['Plaintext'])
    print("Decryption password : {}".format(plaintext.decode("utf-8")))


if __name__ == "__main__":

    if len(sys.argv) != 4:
        usage()
    action = sys.argv[1]
    print(action) 
    if action != "certificate" and action != "tlog" :
        print("Unknown action")
        usage()

    region_name = get_region()

    if action == "tlog":
        object_dir = sys.argv[2]
        start_seq_id = int(sys.argv[3])
        download_tlog(object_dir,start_seq_id)

    if action == "certificate":
        certificate_uri = sys.argv[2]
        kms_arn = sys.argv[3]
        get_password(certificate_uri, kms_arn)

For this post, we deploy all the AWS resources in the US East (N. Virginia) Region. Because this solution involves AWS resource setup and utilization, it will incur costs on your account. Refer to AWS Pricing for more information. We strongly recommend that you set this up in a non-production instance and run end-to-end validations before you implement this solution in a production environment.

Create S3 buckets

As a security best practice, we suggest creating two S3 buckets: one for your database backups and transaction logs and another bucket for storing the TDE certificate and private key files. For this post, we create the buckets <certificate-bucket-name> and <db-backup-logs-bucket-name>. You must create these buckets in the same Region as your Amazon RDS DB instance. For instructions, refer to Creating a bucket.

The <db-backup-logs-bucket-name> bucket is used for copying the backups and transaction logs. For transaction logs, additional bucket configuration is needed. Refer to Access to transaction log backups with RDS for SQL Server and configure the following (a scripted sketch follows this list):

  1. Change the object ownership setting to Bucket owner preferred.
  2. Add a bucket policy.
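If you prefer to script this setup, the following is a minimal boto3 sketch. The bucket names are the placeholders used in this post, and the bucket policy statement mirrors the example in Access to transaction log backups with RDS for SQL Server; verify the service principal and the key prefix against the current documentation before relying on it.

import json
import boto3

s3 = boto3.client("s3")  # this post uses us-east-1, so no LocationConstraint is needed

# Create the two buckets used in this post
for bucket in ["<certificate-bucket-name>", "<db-backup-logs-bucket-name>"]:
    s3.create_bucket(Bucket=bucket)

# Transaction log copy requires the backup bucket to use Bucket owner preferred ownership
s3.put_bucket_ownership_controls(
    Bucket="<db-backup-logs-bucket-name>",
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerPreferred"}]},
)

# Bucket policy that lets the RDS backups service write transaction logs with
# bucket-owner-full-control, scoped to the tlog prefix used later in this post
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowRdsTlogWrites",
            "Effect": "Allow",
            "Principal": {"Service": "backups.rds.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<db-backup-logs-bucket-name>/rds-to-ec2-tde/tlog/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        }
    ],
}
s3.put_bucket_policy(Bucket="<db-backup-logs-bucket-name>", Policy=json.dumps(policy))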

Create an IAM role and policy to access the S3 buckets

If you already have an existing IAM role, you can use that, but make sure you have the following trust relationship and permissions policy attached to it. If you want to create a new IAM role manually, refer to Creating a role to delegate permissions to an AWS service.

For this post, we create a role called rds-sqlserver-fall-forward-role and add the following trusted entity in the custom trust policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "rds.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

Next, we create a customer managed policy using the following sample Amazon S3 permissions policy and attach it to the IAM role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:GetBucketACL"
            ],
            "Resource": [
                "arn:aws:s3:::<certificate-bucket-name>",
                "arn:aws:s3:::<db-backup-logs-bucket-name>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": [
                "arn:aws:s3:::<certificate-bucket-name>/*",
                "arn:aws:s3:::<db-backup-logs-bucket-name>/*"
            ]
        }
    ]
}
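If you want to create the role and attach the policy programmatically instead of through the console, the following is a minimal boto3 sketch. It assumes the permissions document above is saved as s3-access.json, and the inline policy name rds-fall-forward-s3-access is a hypothetical choice for illustration.

import json
import boto3

iam = boto3.client("iam")

# The custom trust policy shown above, allowing RDS to assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "rds.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="rds-sqlserver-fall-forward-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the S3 permissions policy shown above as an inline policy
with open("s3-access.json") as f:
    iam.put_role_policy(
        RoleName="rds-sqlserver-fall-forward-role",
        PolicyName="rds-fall-forward-s3-access",  # hypothetical name
        PolicyDocument=f.read(),
    )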

Create a symmetric KMS key

Create a symmetric key in the same Region as your RDS DB instance. For instructions, refer to Creating symmetric encryption KMS keys.

Choose the following options when creating the key:

  • Key type – Symmetric
  • Key usage – Encrypt and decrypt
  • Alias – rds-fall-forward-key
  • Key administrators – Add the IAM role you created
  • Key usage permissions – Add the IAM role you created
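If you'd rather script the key creation, the following boto3 sketch creates the symmetric key and the alias used in this post. It doesn't set the key policy, so add the key administrators and key usage permissions for the IAM role separately, as described in the preceding list.

import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a symmetric encryption key in the same Region as the RDS DB instance
key = kms.create_key(
    Description="Key for RDS for SQL Server fall forward",
    KeySpec="SYMMETRIC_DEFAULT",
    KeyUsage="ENCRYPT_DECRYPT",
)
key_id = key["KeyMetadata"]["KeyId"]

# Point the alias used in this post at the new key
kms.create_alias(AliasName="alias/rds-fall-forward-key", TargetKeyId=key_id)

# Use this ARN wherever <customer-managed-kms-key-arn> appears in this post
print(key["KeyMetadata"]["Arn"])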

Set up fall forward from Amazon RDS for SQL Server to EC2 SQL Server instance for TDE- and non-TDE-enabled databases

In this step, we set up the fall forward strategy for the TDE-enabled database (tde-demo) and the non-TDE-enabled database (no-tde-demo) from Amazon RDS for SQL Server to the EC2 SQL Server instance.

A TDE-enabled database requires the certificate to be backed up and restored to an EC2 SQL Server instance. Backup and restore of the TDE certificate isn’t required for a non-TDE-enabled database.

Back up and restore the certificate for a TDE-enabled database

In this step, we take a backup of the TDE certificate in Amazon RDS for SQL Server and restore it in the EC2 SQL Server instance. This step is only applicable to the TDE-enabled database.

  1. Get the name of the TDE certificate for the tde-demo database in Amazon RDS for SQL Server. Use SSMS to connect to the Amazon RDS for SQL Server instance and run the following Transact-SQL (T-SQL) command.
USE master
GO
select
    database_name = d.name,
    cert_name = c.name
    from sys.dm_database_encryption_keys dek
    left join sys.certificates c on dek.encryptor_thumbprint = c.thumbprint
    inner join sys.databases d on dek.database_id = d.database_id
    where d.name='tde-demo'
  2. Back up the Amazon RDS for SQL Server database certificate using the following T-SQL command.
EXECUTE msdb.dbo.rds_backup_tde_certificate
    @certificate_name='<RDS_Certificate_Name>',
    @certificate_file_s3_arn='arn:aws:s3:::<certificate-bucket-name>/rds-to-ec2-tde/certificate/certificatename.cer',
    @private_key_file_s3_arn='arn:aws:s3:::<certificate-bucket-name>/rds-to-ec2-tde/certificate/privatekey.pvk',
    @kms_password_key_arn='<customer-managed-kms-key-arn>';

The <RDS_Certificate_Name> is the certificate name from the preceding output. The <customer-managed-kms-key-arn> is the Amazon Resource Name (ARN) of the KMS key created previously (rds-fall-forward-key). Refer to Backing up a TDE certificate for additional information.

Currently, taking certificate backups is supported only on Single-AZ Amazon RDS for SQL Server DB instances. If you're running a Multi-AZ DB instance, you might have to convert it to a Single-AZ deployment before backing up the certificate.

  3. Copy the certificate files from the S3 bucket to the EC2 instance. Run the following commands in the command prompt of the EC2 instance.
aws s3 cp s3://<certificate-bucket-name>/rds-to-ec2-tde/certificate/certificatename.cer c:\rds-to-ec2-tde\certificate\
aws s3 cp s3://<certificate-bucket-name>/rds-to-ec2-tde/certificate/privatekey.pvk c:\rds-to-ec2-tde\certificate\
  4. Use the decrypt_file.py script to get the decryption password. The script performs the following actions:
    1. Gets the metadata of the .pvk file from the S3 bucket.
    2. Decrypts the password using the KMS key.

The S3 metadata of the privatekey.pvk file generated in the backup step and the KMS key are used to retrieve the plain text of the data key.

python c:\temp\decrypt_file.py certificate s3://<certificate-bucket-name>/rds-to-ec2-tde/certificate/privatekey.pvk <customer_managed_kms_key_arn>

The preceding command outputs the decryption password to create the TDE certificate. The sample output looks like the following:

C:\>python c:\temp\decrypt_file.py certificate s3://<certificate-bucket-name>/rds-to-ec2-tde/certificate/privatekey.pvk arn:aws:kms:us-east-2:xxxxxxxxx:key/cd53049f-1c7d-4dee-b287-3ff60708ea78
certificate
Decryption password : yn45yuEbVmgR+5nqzMiCnjVYDfZo0u0lOI1GhOGl6XU=
  5. Create the certificate in the EC2 SQL Server instance using the preceding information. Run the following T-SQL command in SSMS after connecting to the EC2 SQL Server instance.
CREATE CERTIFICATE myOnPremTDEcertificate
FROM FILE='c:\rds-to-ec2-tde\certificate\certificatename.cer'
WITH PRIVATE KEY (FILE = N'c:\rds-to-ec2-tde\certificate\privatekey.pvk',
DECRYPTION BY PASSWORD = '<password_from_above_command>');

For detailed instructions on how to back up and restore the TDE certificate from Amazon RDS for SQL Server to EC2 SQL Server instance, refer to Backing up and restoring TDE certificates on RDS for SQL Server.

Back up and restore the database and transaction logs

In this section, you complete a full database backup of the TDE- and non-TDE-enabled databases in Amazon RDS for SQL Server, along with the transaction logs, and restore and recover the database in the EC2 SQL Server instance. Because the steps are the same for both TDE- and non-TDE-enabled databases, we have included only the TDE-enabled database in this example.

  1. Back up the Amazon RDS for SQL Server database. Run the following T-SQL command by connecting to the Amazon RDS for SQL Server.
exec msdb.dbo.rds_backup_database
 @source_db_name='tde-demo',
 @s3_arn_to_backup_to='arn:aws:s3:::<db-backup-logs-bucket-name>/rds-to-ec2-tde/backup/tde-fb-full.bak',
 @type='FULL';
 
-- Monitor Task:
exec msdb.dbo.rds_task_status @task_id= <task id>;
  2. Copy the backup from the S3 bucket to the local file system in the EC2 instance. Run the following command in the command prompt of the EC2 instance.
aws s3 cp s3://<db-backup-logs-bucket-name>/rds-to-ec2-tde/backup/tde-fb-full.bak c:\rds-to-ec2-tde\backup\
  3. Restore the database into the EC2 SQL Server instance by running the following T-SQL command after connecting to the EC2 SQL Server instance.
RESTORE DATABASE [tde-demo] FROM DISK = N'c:\rds-to-ec2-tde\backup\tde-fb-full.bak' WITH FILE=1,
NORECOVERY

In this example, because you're setting up the fall forward to a new target EC2 SQL Server instance, you're using the same database name. If you're restoring to the original source server, you can rename the database while restoring it. After the full backup is restored, the database in the EC2 SQL Server instance remains in the RESTORING state.

  4. Copy the transaction logs from Amazon RDS for SQL Server and apply them to the restored database to keep it in sync. Run the following T-SQL command in the Amazon RDS for SQL Server instance to list the transaction log backups.
SELECT * from msdb.dbo.rds_fn_list_tlog_backup_metadata('tde-demo');

All the transaction log backups are encrypted, so you must decrypt them before applying them to the EC2 SQL Server database.

  5. Copy the transaction logs generated in Amazon RDS for SQL Server to an S3 bucket. Set the S3 location to which the transaction log backups are copied. Run the following T-SQL command in the Amazon RDS for SQL Server instance.
exec msdb.dbo.rds_tlog_copy_setup
@target_s3_arn='arn:aws:s3:::<db-backup-logs-bucket-name>/rds-to-ec2-tde/tlog/';
  6. Run the following T-SQL command to validate the preceding S3 setting in the Amazon RDS for SQL Server instance.
exec rdsadmin.dbo.rds_show_configuration @name='target_s3_arn_for_tlog_copy';
  7. Transaction log backups run every 5 minutes in Amazon RDS for SQL Server, and many logs are created after the full backup of the tde-demo database. You need to identify the sequence ID (rds_backup_seq_id) from which to start copying the transaction log backups. Run the following T-SQL command in the EC2 SQL Server instance.
select redo_start_lsn from sys.master_files where database_id=DB_ID('tde-demo')
  8. Make a note of the redo_start_lsn number from the preceding step, which you use to identify the logs to be copied. Run the following T-SQL command in SSMS after connecting to the Amazon RDS for SQL Server instance.
SELECT min(rds_backup_seq_id) as starting_seq_id, max(rds_backup_seq_id) as ending_seq_id
FROM msdb.dbo.rds_fn_list_tlog_backup_metadata('tde-demo')
WHERE ending_lsn > <redo_start_lsn>;
  9. Copy the transaction logs backed up by Amazon RDS for SQL Server to the S3 bucket and wait for the process to complete. Provide the starting_seq_id and ending_seq_id from the preceding output and the ARN of the customer managed KMS key (rds-fall-forward-key). Run the following T-SQL command in Amazon RDS for SQL Server.
exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='tde-demo',
@kms_key_arn='<customer_managed_kms_key_arn>',
@rds_backup_starting_seq_id= <starting_seq_id>,
@rds_backup_ending_seq_id= <ending_seq_id>;

-- Monitor Task:
exec msdb.dbo.rds_task_status @task_id= <task id>;

The task copies the individual transaction logs to the S3 bucket configured previously.

  10. Call the Python script decrypt_file.py with the S3 URI of the transaction log base directory and the starting_seq_id from which the database needs to be recovered. The transaction logs are encrypted using the customer managed key and must be decrypted. Each transaction log file has metadata properties that must be read from S3 to decrypt it. Run the following command in the command prompt of the EC2 SQL Server instance.
python c:\temp\decrypt_file.py tlog s3://<db-backup-logs-bucket-name>/rds-to-ec2-tde/tlog/<copy_dir>/ <starting_seq_id>

Here’s the sample output of the preceding command:

C:\>python c:\temp\decrypt_file.py tlog s3://<db-backup-logs-bucket-name>/rds-to-ec2-tde/tlog/7.49364690-214b-4d41-872e-16e09a23bc14/ 0
tlog
Finished decrypting 7.49364690-214b-4d41-872e-16e09a23bc14.0.1708651081.out
Finished decrypting 7.49364690-214b-4d41-872e-16e09a23bc14.1.1708651381.out
Finished decrypting 7.49364690-214b-4d41-872e-16e09a23bc14.10.1708654081.out
Finished decrypting 7.49364690-214b-4d41-872e-16e09a23bc14.11.1708654381.out

The script carries out the following steps:

  1. Downloads the transaction logs from the specified bucket to the local file system.
  2. Gets the S3 metadata information of each transaction log.
  3. Uses the metadata to get the decryption key from the customer managed KMS key.
  4. Uses the decryption key to decrypt each encrypted transaction log and creates a new file named <file_name>.out in the same directory.
  11. Apply the previously generated decrypted transaction log files one at a time to the EC2 SQL Server database tde-demo to recover it. Run the following T-SQL command in the EC2 SQL Server instance; a helper sketch that generates these statements in sequence order follows the command.
RESTORE LOG [tde-demo] FROM
DISK = N'c:\rds-to-ec2-tde\tlog\<decrypted_transaction_log>' WITH FILE = 1, NOUNLOAD, NORECOVERY
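The decrypted log files must be applied in sequence order. The following is a hypothetical helper, assuming the .out files follow the <db-resource-id>.<seq-id>.<timestamp>.out naming shown in the sample output above and that you moved them to c:\rds-to-ec2-tde\tlog; it prints the RESTORE LOG statements in the right order so you can run them in SSMS.

import glob
import os

log_dir = r"c:\rds-to-ec2-tde\tlog"

# Sort the decrypted files by the sequence id embedded in the file name
files = sorted(
    glob.glob(os.path.join(log_dir, "*.out")),
    key=lambda name: int(os.path.basename(name).split(".")[-3]),
)

for i, path in enumerate(files):
    # Keep the database in the RESTORING state until the final log; switch to
    # RECOVERY only when you are ready to fail forward (see the next step)
    option = "RECOVERY" if i == len(files) - 1 else "NORECOVERY"
    print(f"RESTORE LOG [tde-demo] FROM DISK = N'{path}' WITH FILE = 1, NOUNLOAD, {option};")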
  12. If you need to switch to the EC2 SQL Server database, copy the last transaction log and apply it with the RECOVERY option to fully recover the tde-demo database. Run the following T-SQL command in the EC2 SQL Server instance.
RESTORE LOG [tde-demo] FROM
DISK = N'c:\rds-to-ec2-tde\tlog\<decrypted_transaction_log>' WITH FILE = 1, NOUNLOAD, RECOVERY

You have successfully set up the fall forward strategy from Amazon RDS for SQL Server to the EC2 SQL Server environment.

Cleanup

To avoid future charges, remove all the components created while testing this solution by completing the following steps:

  1. Connect to the EC2 SQL Server instance through SSMS and delete the TDE- and non-TDE-enabled databases.
  2. On the IAM console, search for the rds-sqlserver-fall-forward-role role and delete it.
  3. On the AWS KMS console, select the customer managed key rds-fall-forward-key and delete it.
  4. On the Amazon S3 console, empty the <certificate-bucket-name> and <db-backup-logs-bucket-name> buckets, and then delete them.
  5. Delete the Python script and the directory where the backup and transaction logs are downloaded on the EC2 instance.
  6. Delete the EC2 instance and the RDS for SQL Server instance should you no longer need them.

Summary

In this post, you learned how to set up a fall forward rollback strategy from Amazon RDS for SQL Server to a self-managed SQL Server for both TDE- and non-TDE-enabled databases.

Understanding the possible rollback solutions from Amazon RDS for SQL Server is key to safeguarding your data, ensuring you meet your RTO and RPO needs, and ensuring you can recover from critical events.

Try out this solution on your RDS for SQL Server instance, and if you have any comments or questions, leave them in the comments section. For more information about native backup and restore, refer to Microsoft SQL Server Native Backup and Restore Support in the Amazon RDS User Guide.

About the Authors

Raj Jayakrishnan is a Senior Database Specialist Solutions Architect with Amazon Web Services, helping customers reinvent their business through the use of purpose-built database cloud solutions. He has over 20 years of experience architecting commercial and open-source database solutions in the financial and logistics industries.

Vijayakumar Kesavan is a Senior Database Specialist Solutions Architect at Amazon Web Services. He works with our customers to provide database architectural guidance, best practices, and technical assistance for database projects on AWS. With expertise in multiple database technologies, he also supports customers with database migrations and cost optimization in the AWS Cloud.

Alvaro Costa-Neto is a Senior Database Specialist Solutions Architect for AWS, where he helps customers design and implement database solutions on the cloud. He has a passion for database technologies and has been working with them for more than 19 years, mostly with Microsoft SQL Server. He resides in Clermont, FL with his wife and two children, who share his love for aviation and traveling. When he is not working, he likes to host cookouts with his family and friends and explore new places.

Nirmal John is a Database Specialist Solutions Architect with Amazon Web Services. He pursues building customer relationships that outlast all of us.