AWS Database Blog
Migrate an Oracle database to Amazon RDS for Oracle using Oracle GoldenGate Microservices Architecture
Many AWS customers use AWS managed services to handle the heavy lifting in their daily operations and modernize their applications. Among these services, Amazon Relational Database Service (Amazon RDS) is ideal for relational database deployments. Amazon RDS can greatly reduce the administrative burden of running and maintaining a relational database.
In this post, we show you how to migrate an Oracle database to an Amazon RDS for Oracle DB instance. The solution uses Oracle Data Pump for initial data load and Oracle GoldenGate Microservices Architecture installed on an Amazon Elastic Compute Cloud (Amazon EC2) instance as the Oracle GoldenGate hub for change data capture.
Solution overview
This post uses Oracle GoldenGate Microservices Architecture installed on an EC2 instance as the Oracle GoldenGate hub. Oracle recommends using a file-based initial load process for data replication in Oracle GoldenGate Microservices Architecture. For more information, see About Instantiating with Initial Load Extract. A variety of initial load techniques are available, such as Oracle Data Pump for high-speed movement of data and metadata, AWS Database Migration Service (AWS DMS) for a managed experience, and Oracle transportable tablespaces. Transportable tablespaces require an outage on the source for the final incremental backup and metadata export, whereas the other mechanisms don't require a source database outage for the initial data load. In this post, you use Oracle Data Pump to load the initial data into the target RDS for Oracle DB instance and Oracle GoldenGate for change data capture (CDC).
The following diagram illustrates this architecture.
To implement this solution, you need to complete the following steps:
- Install Oracle GoldenGate Microservices Architecture on Amazon EC2.
- Prepare the target RDS for Oracle database instance.
- Prepare the source Oracle database.
- Set up the Oracle GoldenGate hub on Amazon EC2.
- Complete the initial load using Oracle Data Pump.
- Start change data capture.
- Perform application cutover to Amazon RDS for Oracle.
Migration rollback considerations
Before you dive into the migration steps, it's important to consider your options for rolling back the migration.
- If the target is RDS for Oracle non-CDB architecture, you can initiate reverse replication before starting the application on Amazon RDS for Oracle by implementing an Oracle GoldenGate Extract to capture changes from RDS for Oracle and applying them with Oracle GoldenGate Replicat on your on-premises database.
- If the target is RDS for Oracle 21c (single-tenant or multi-tenant), you can use Oracle GoldenGate for reverse replication using the per-PDB Extract feature.
- If the target is RDS for Oracle 19c (single-tenant or multi-tenant), you can't use Oracle GoldenGate for reverse replication, because the per-PDB Extract feature is only supported for Oracle Database 21c onward. In this case, you can use AWS Database Migration Service (AWS DMS) to set up reverse replication, or use other methods such as Oracle Data Pump to roll back the migration.
The following table summarizes these options:
| Source | Target | Rollback plan |
| --- | --- | --- |
| On-premises non-CDB Oracle database | RDS for Oracle non-CDB architecture | Set up replication from RDS for Oracle to the on-premises Oracle database using Oracle GoldenGate |
| On-premises multitenant Oracle database | RDS for Oracle CDB architecture (single-tenant or multi-tenant) | RDS for Oracle 19c: set up replication from RDS for Oracle to the on-premises Oracle database using AWS DMS. RDS for Oracle 21c: set up replication from RDS for Oracle to the on-premises Oracle database using Oracle GoldenGate per-PDB Extract. |
In this post, the source on-premises Oracle database uses the Oracle multitenant architecture, and the target RDS for Oracle DB instance uses the multi-tenant configuration of the CDB architecture. Both databases run the Oracle Database 19c Enterprise Edition 19.4.0.0.0 engine version. For ongoing replication, you use Oracle GoldenGate 21c Microservices Architecture. The source Oracle database has the container database (CDB) ORCL and pluggable database (PDB) PDB1. The target RDS for Oracle DB instance has PDB1 as its pluggable database. We use dms_sample as a sample schema to migrate from the on-premises pluggable database PDB1 to the RDS for Oracle pluggable database PDB1.
Additionally, if the source database is not running in the same AWS Region as your target RDS for Oracle instance, you can place another Oracle GoldenGate instance near the source and create a distribution path from it to the Oracle GoldenGate hub instance on Amazon EC2. This helps mitigate Extract performance issues caused by network latency.
Prerequisites
Make sure you have the following prerequisites:
- An active AWS account.
- Connectivity between your corporate data center and your AWS account using either AWS Direct Connect or AWS Site-to-Site VPN. For more information, see Network-to-Amazon VPC connectivity options.
- An EC2 Linux instance in the target AWS account with sufficient storage for the Oracle GoldenGate hub. For more information, see Disk Requirements.
- An Amazon Virtual Private Cloud (Amazon VPC) security group associated with the target RDS for Oracle DB instance that allows inbound connections from the EC2 instance used as the Oracle GoldenGate Microservices Architecture hub.
We recommend reviewing the licensing requirements for Oracle GoldenGate. For more information, refer to Oracle GoldenGate Licensing Information. For more on RDS for Oracle licensing options, refer to RDS for Oracle licensing options.
Limitations
This solution has the following limitations:
- Because Amazon RDS is a managed service, you can't use RMAN to restore a database backup to an RDS for Oracle DB instance.
- Oracle GoldenGate does not support Oracle sequences by default. You must install database procedures to support Oracle sequences. For more information, refer to Support for Oracle Sequences.
- Oracle GoldenGate does not enable sequence replication for schemas that contain system-generated sequences, such as one created for an identity column.
- Refer to Understanding What’s Supported to review the source database and get a list of unsupported data types and objects.
Setting up Oracle GoldenGate Microservices Architecture on Amazon EC2
Oracle GoldenGate 21c (and higher releases) introduces unified build support, so a single software installation supports capturing and applying replicated data between different major Oracle Database versions (like 11g Release 2 to 21c). If your source and target database software versions differ, Oracle GoldenGate 21c doesn’t need to be installed with different Oracle binaries. This is possible because an Oracle GoldenGate installation includes the required Oracle Database client libraries without requiring a separate database ORACLE_HOME installation. Refer to Installing Oracle GoldenGate Microservices Architecture to install Oracle GoldenGate Microservices Architecture on the EC2 instance.
In this post, you use the following configurations to install Oracle GoldenGate Microservices Architecture on your EC2 instance. For more details about Oracle GoldenGate Microservices Architecture and its components, see Oracle GoldenGate Microservices Architecture.
| Parameter | Configuration |
| --- | --- |
| OGG_SOFTWARE_HOME | /data/app/oracle/ogg_home |
| DEPLOYMENT_NAME | ogg21c |
| SERVICEMANAGER_DEPLOYMENT_HOME | /data/app/oracle/deployments/ServiceManager |
| OGG_DEPLOYMENT_HOME | /data/app/oracle/deployments/ogg21c |
| ADMINISTRATOR_USER | oggadmin |
| ENV_TNS_ADMIN | /data/app/oracle/ogg_home |
| SECURITY_ENABLED | TRUE |
| SERVER_WALLET | /data/wallet_ogg/$(hostname) |
| CLIENT_WALLET | /data/wallet_ogg/dist_client |
| REGISTER_SERVICEMANAGER_AS_A_SERVICE | TRUE |
| PORT_SERVICEMANAGER | 7820 |
| PORT_ADMINSRVR | 7821 |
| PORT_DISTSRVR | 7822 |
| PORT_RCVRSRVR | 7823 |
| PORT_PMSRVR | 7824 |
| UDP_PORT_PMSRVR | 7825 |
| OGG_SCHEMA | ggadmin |
| METRICS_SERVER_ENABLED | TRUE |
| PMSRVR_DATASTORE_TYPE | BDB |
| PMSRVR_DATASTORE_HOME | /data/app/oracle/deployments/ogg21c/pmsrvr |
We also recommend configuring an NGINX reverse proxy to provide access to the microservices and enable enhanced security and manageability. The Oracle GoldenGate reverse proxy feature provides a single point of contact for the GoldenGate microservices associated with a GoldenGate deployment. Without a reverse proxy, the microservices are accessed using a URL consisting of a host name or IP address and a separate port number for each service. With a reverse proxy, port numbers are not required to connect to the microservices, because they are replaced with the deployment name and the virtual IP (VIP) hostname. For the sake of simplicity, we have not configured an NGINX proxy in this post.
Prepare the target RDS for Oracle DB instance
In this step, you set up the target RDS for Oracle DB instance for migration.
Create and modify the RDS for Oracle DB instance for migration
Complete the following steps to create and configure your RDS for Oracle DB instance:
- Create the RDS for Oracle DB instance and pluggable database PDB1. We recommend using a custom DB parameter group while creating the DB instance with engine version 19c. For more information, see Creating and connecting to an Oracle DB instance.
- We recommend creating the target database instance with a larger instance class to provide additional compute capacity and speed up the initial data load.
- Keep Multi-AZ disabled for the target database instance during the initial data load.
- Allocate enough initial storage to the target RDS for Oracle DB instance for backup dump files and database tablespaces. You can also enable storage auto scaling for the DB instance. For more information, refer to Working with storage for Amazon RDS DB instances.
- Create the required tablespaces in the target DB instance.
- Modify the size of the database tablespaces and the temporary tablespace according to the size of the source database:
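As a sketch of what this resizing might look like, assuming bigfile tablespaces (the RDS for Oracle default); the tablespace names and sizes are illustrative:

```sql
-- Resize a bigfile tablespace (name and size are illustrative)
ALTER TABLESPACE users RESIZE 200G;

-- Resize the temporary tablespace with the RDS-provided procedure
EXEC rdsadmin.rdsadmin_util.resize_temp_tablespace('TEMP', '50G');
```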
- Create database roles in the target DB instance. Use the following script in the source database to generate scripts. Review those generated SQLs before running them in the target database.
- Create database profiles in the target DB instance. Use the following script in the source database to generate scripts. Review those generated SQLs before running them in the target database.
- Create public synonyms in the target database. Use the following script in the source database to generate scripts. Review those generated SQLs before running them in the target database.
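A hedged sketch of the generation queries for roles, profiles, and public synonyms using DBMS_METADATA; the DMS_SAMPLE filter matches this post's sample schema, and you should always review the generated DDL before running it on the target:

```sql
SET LONG 100000 PAGESIZE 0

-- Roles, excluding Oracle-maintained ones
SELECT dbms_metadata.get_ddl('ROLE', role)
FROM dba_roles
WHERE oracle_maintained = 'N';

-- Profiles
SELECT dbms_metadata.get_ddl('PROFILE', profile)
FROM (SELECT DISTINCT profile FROM dba_profiles);

-- Public synonyms that point at the schema being migrated
SELECT dbms_metadata.get_ddl('SYNONYM', synonym_name, 'PUBLIC')
FROM dba_synonyms
WHERE owner = 'PUBLIC' AND table_owner = 'DMS_SAMPLE';
```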
- Make sure to have the same size for redo logs that you have in the source database. For more information, see Resizing online redo logs.
- Verify that the source and target DB instance have the same configuration for parameters. Review the RDS DB parameter groups and check the database properties using the following SQL:
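One way to compare the two sides is to run a query such as the following on both databases and diff the output (a sketch, not an exhaustive check):

```sql
SELECT property_name, property_value
FROM database_properties
ORDER BY property_name;
```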
- We recommend temporarily disabling automated backups of the target RDS for Oracle DB instance to speed up the Oracle Data Pump import. There will be a brief outage during this activity. Set the backup retention period to 0 to disable automated backups. For more information, see Disabling automated backups.
- Modify
job_queue_processes
to 0 in the DB parameter group of the target database instance to stop it from running database jobs during the migration process. For more information, see Modifying parameters in a DB parameter group.
Set the ENABLE_GOLDENGATE_REPLICATION parameter
The ENABLE_GOLDENGATE_REPLICATION parameter must be set to true in order to support logical replication. Modify ENABLE_GOLDENGATE_REPLICATION
to true
in the DB parameter group of the target RDS for Oracle DB instance using the following code. For more information, see Modifying parameters in a DB parameter group.
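A sketch of setting the parameter with the AWS CLI; the parameter group name target-ora19-pg is hypothetical:

```shell
# ENABLE_GOLDENGATE_REPLICATION is a dynamic parameter, so it can be applied immediately
aws rds modify-db-parameter-group \
    --db-parameter-group-name target-ora19-pg \
    --parameters "ParameterName=ENABLE_GOLDENGATE_REPLICATION,ParameterValue=TRUE,ApplyMethod=immediate"
```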
Create an Oracle GoldenGate user account in RDS for Oracle
Oracle GoldenGate runs as a database user and requires the appropriate database privileges. If the GoldenGate administrator user has the database administrator (DBA) role, additional object privileges are not needed. However, security constraints might prevent granting the DBA role to the GoldenGate administrator user; the DBA role is not strictly required for Oracle GoldenGate. For more information on user privileges, refer to Grant User Privileges for Oracle Database 21c and Lower and Grant account privileges on the target database. For simplicity, we have granted DBA privileges to the Oracle GoldenGate administrator user in this post.
Connect to the respective PDB and use the following statements to create an Oracle GoldenGate user account named ggadmin
on the target DB instance:
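A minimal sketch, assuming the DBA-role approach this post takes; connect to PDB1 as the RDS master user and replace the placeholder password:

```sql
CREATE USER ggadmin IDENTIFIED BY "your_password_here"
  DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
GRANT DBA TO ggadmin;
```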
Configure the streams pool
For Integrated Replicat, Oracle recommends 1 GB of STREAMS_POOL_SIZE per Replicat process, plus an additional 25%. Modify the STREAMS_POOL_SIZE parameter in the DB parameter group of the target RDS for Oracle DB instance using the following code. For more information, see Modifying parameters in a DB parameter group.
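A sketch with the AWS CLI, sized for one Replicat (1 GB plus 25% headroom); the parameter group name is hypothetical, and the value is in bytes:

```shell
# streams_pool_size is a static parameter on RDS for Oracle, so it applies at the next reboot
aws rds modify-db-parameter-group \
    --db-parameter-group-name target-ora19-pg \
    --parameters "ParameterName=streams_pool_size,ParameterValue=1342177280,ApplyMethod=pending-reboot"
```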
Prepare the source Oracle database
Oracle GoldenGate requires additional configuration on the Oracle database before you can use it as a source for ongoing replication.
Unsupported objects
Use the dictionary view DBA_GOLDENGATE_SUPPORT_MODE to determine if there are objects that are not fully supported for extraction by Oracle GoldenGate due to data type replication limitations. Tables that are listed in the following query must be excluded from capture using the GoldenGate Extract parameter TABLEEXCLUDE
owner.object_name. These objects must be manually copied to the target database during the cutover phase. Refer to Understanding What’s Supported to review the source database and get a list of unsupported data types and objects.
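A sketch of the query, filtered to this post's sample schema:

```sql
-- Objects not fully supported for capture; exclude these from the Extract
-- and copy them manually during cutover
SELECT owner, object_name, support_mode
FROM dba_goldengate_support_mode
WHERE support_mode <> 'FULL'
  AND owner = 'DMS_SAMPLE';
```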
Row uniqueness
Oracle GoldenGate requires a unique row identifier on the source and target tables to locate the correct target rows for the replication of update and delete statements. This is normally taken care of with primary key or unique key indexes. Use the data dictionary view DBA_GOLDENGATE_NOT_UNIQUE
to find the tables without primary keys and tables with unique indexes that have NULL
values if the underlying database is 12c release 2 or later.
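A sketch of the query against this view, filtered to the sample schema (column names can vary slightly between releases):

```sql
-- Tables without a usable unique row identifier (Oracle 12.2 and later)
SELECT *
FROM dba_goldengate_not_unique
WHERE owner = 'DMS_SAMPLE';
```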
When you enable supplemental logging, Oracle GoldenGate enables TRANDATA on all columns of such tables, which results in additional information being recorded in the redo log of the source database. We recommend assessing the impact of enabling supplemental logging for such tables in a non-production environment.
As an alternative, you can create a substitute key using columns that are guaranteed to have unique values. This replacement key can be specified by adding a KEYCOLS
clause to the EXTRACT TABLE
and REPLICAT MAP
parameters. Existing primary or unique keys that Oracle GoldenGate discovers are replaced with the provided key.
Enable ARCHIVELOG mode for database
The source database must be set to ARCHIVELOG
mode. The Oracle GoldenGate Extract process mines Oracle redo for data replication. The LogMiner server mines redo from the redo log buffer, online redo logs, and archive log files when using Extract in integrated capture mode.
It’s crucial to have an archive log retention policy in place that retains archive logs for a sufficient duration, allowing GoldenGate to read and process them effectively. Additionally, adequate storage space should be maintained on the source database host for these archive logs. This becomes particularly important when the Extract process needs to perform a recovery following events like network outages or maintenance on the Oracle GoldenGate Hub.
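As a sketch, you can check the current mode and, on a self-managed source, enable ARCHIVELOG mode during a brief outage:

```sql
-- Check the current mode
SELECT log_mode FROM v$database;

-- Enable ARCHIVELOG mode (requires a restart of the source database)
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```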
Enable database force logging mode and minimal database-level supplemental logging
To make sure that the required redo information is contained in the Oracle redo logs for data replication, NOLOGGING
activities that might prevent the required redo information from being generated must be overridden. Enable database force logging mode if you are replicating the full database, or you can consider enabling it on specific tablespaces. Before you configure Oracle GoldenGate, it’s critical to examine the implications of force logging mode on database performance.
Oracle recommends putting the source database into force logging mode and enabling minimal supplemental logging at the database level when using Oracle GoldenGate. This adds row chaining information, if it exists, to the redo log for update operations. For more information, see Configuring Logging Properties. The redo volume increases as a result of force logging. We recommend assessing the impact of these changes in a non-production environment.
Enable Force logging on the source database:
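A sketch of the statements, run as a privileged user on the source:

```sql
ALTER DATABASE FORCE LOGGING;

-- Minimal database-level supplemental logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Switch the log file so the change is reflected in the redo stream
ALTER SYSTEM SWITCH LOGFILE;
```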
Configure the streams pool
For the Oracle GoldenGate Integrated Extract process, an area of Oracle memory called the streams pool (STREAMS_POOL_SIZE
) must be configured in the System Global Area (SGA) of the database. The size requirement of the streams pool is based on the number of Extract processes. MAX_SGA_SIZE
(which is not the same as the database initialization parameter, SGA_MAX_SIZE
) controls the amount of shared memory used by the LogMiner server. MAX_SGA_SIZE
takes memory from the memory allocated to STREAMS_POOL_SIZE
. By default, one integrated Extract requests the LogMiner server to run with MAX_SGA_SIZE
of 1 GB and a PARALLELISM
of 2. In this post, you create one Integrated Extract and allocate 1 GB. You also add 25% additional memory for other processes.
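A sketch of sizing the streams pool on the source for one Integrated Extract (1 GB plus 25% headroom):

```sql
ALTER SYSTEM SET streams_pool_size = 1280M SCOPE = BOTH;
```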
Set the ENABLE_GOLDENGATE_REPLICATION parameter
Configure the ENABLE_GOLDENGATE_REPLICATION
parameter to true in the source database:
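On a self-managed source, this is a single ALTER SYSTEM statement:

```sql
ALTER SYSTEM SET enable_goldengate_replication = TRUE SCOPE = BOTH;
```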
Create user accounts for Oracle GoldenGate
The user privileges required for connecting to an Oracle database from Oracle GoldenGate depend on the type of user. Grant privileges based on the actions the GoldenGate user needs to perform on the source and target databases. If the GoldenGate administrator user has the DBA role, additional object privileges are not needed. However, security constraints might prevent granting the DBA role to the GoldenGate administrator user; the DBA role is not strictly required for Oracle GoldenGate. For more information on user privileges, refer to Grant User Privileges for Oracle Database 21c and Lower. For simplicity, we grant DBA privileges to the database user in this post.
Create the following ggadmin
database user account in each pluggable database you plan to replicate from:
Create the c##ggadmin
common database user account in the source container database:
Verify the database user account:
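A hedged sketch of all three steps, following the DBA-role approach used in this post (replace the placeholder passwords):

```sql
-- Local GoldenGate user in each PDB being migrated
ALTER SESSION SET CONTAINER = PDB1;
CREATE USER ggadmin IDENTIFIED BY "your_password_here";
GRANT DBA TO ggadmin;

-- Common user in the CDB root, used later to register the Extract
ALTER SESSION SET CONTAINER = CDB$ROOT;
CREATE USER c##ggadmin IDENTIFIED BY "your_password_here" CONTAINER = ALL;
GRANT DBA TO c##ggadmin CONTAINER = ALL;

-- Verify both accounts
SELECT username, common, con_id
FROM cdb_users
WHERE username IN ('GGADMIN', 'C##GGADMIN');
```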
If you have Oracle Database 21c as the source database and you plan to use the per-PDB-Extract feature, you don’t need to create the common user c##ggadmin
in the source container database. For more information, see Requirements for Configuring Container Databases for Oracle GoldenGate.
Set up the Oracle GoldenGate hub on Amazon EC2
You can now configure the Oracle GoldenGate hub on Amazon EC2, which already has Oracle GoldenGate Microservices installed. In this section, you create the tnsnames.ora
file, database credentials, Extract process, heartbeat table, and checkpoint table. Because you’re using Oracle GoldenGate Microservices Architecture as the Oracle GoldenGate hub on Amazon EC2, you don’t need to create a distribution path to transfer the trail files from an Extract (source system) to a Replicat (target system). For more information on the Oracle GoldenGate distribution service, see Distribute.
Create tnsnames.ora
Create a tnsnames.ora
file at the location defined for the ENV_TNS_ADMIN
parameter during Oracle GoldenGate Microservices Architecture installation. Add the following entry names to tnsnames.ora
for the source and target databases:
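A sketch of the entries; the aliases source_cdb, source_pdb1, and target_pdb1 and the host names are illustrative:

```
source_cdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = onprem-db.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORCL)))

source_pdb1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = onprem-db.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PDB1)))

target_pdb1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PDB1)))
```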
Create Oracle GoldenGate database credentials
Connect to the administration server of your Oracle GoldenGate deployment using Oracle GoldenGate admin client. You use the public IPv4 DNS of your EC2 instance.
Create a credential store for the source container database and the pluggable databases being migrated. If you’re migrating multiple pluggable databases, add credentials for all the respective pluggable databases. If you have Oracle Database 21c as your source database and you plan to use the per-PDB-Extract feature, you don’t need to add the common user c##ggadmin
in the Oracle GoldenGate credential store.
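A sketch of the admin client session; the deployment name, port, and TNS aliases are the illustrative values used in this post:

```
CONNECT https://<ec2-public-dns>:7821 DEPLOYMENT ogg21c AS oggadmin

ALTER CREDENTIALSTORE ADD USER c##ggadmin@source_cdb ALIAS source_cdb DOMAIN OracleGoldenGate
ALTER CREDENTIALSTORE ADD USER ggadmin@source_pdb1 ALIAS source_pdb1 DOMAIN OracleGoldenGate
ALTER CREDENTIALSTORE ADD USER ggadmin@target_pdb1 ALIAS target_pdb1 DOMAIN OracleGoldenGate
```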
Create a credential store for the target pluggable databases to which you are migrating. For a multitenant database, separate credentials must be created for the PDBs being migrated to and not the CDB.
Create a heartbeat table for the source database
The GoldenGate heartbeat table is useful to monitor the replication latency between the source and target databases. For a multitenant database, the heartbeat table must be located in the PDB that is being migrated. Refer to ADD HEARTBEATTABLE for more information. For Oracle multitenant databases, the heartbeat objects and jobs are created in the user’s schema that is connected to the database using the DBLOGIN command. Use Oracle GoldenGate adminclient to add a heartbeat table in the source pluggable database.
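A sketch in the admin client, assuming a credential alias named source_pdb1 for the source PDB:

```
DBLOGIN USERIDALIAS source_pdb1 DOMAIN OracleGoldenGate
ADD HEARTBEATTABLE
```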
If you’re migrating multiple pluggable databases, log in to the respective pluggable databases to create a heartbeat table.
Enable supplemental logging for source tables and schemas
Enable supplemental logging on the table level to create the primary Extract. You can use either ADD TRANDATA
or ADD SCHEMATRANDATA
to enable supplemental logging at the table level. It’s recommended to use ADD SCHEMATRANDATA to enable supplemental logging for current and future tables in a given schema.
Since GoldenGate version 12.2, there is tighter integration with Oracle Data Pump export and import. If you use Oracle Data Pump for initial data loading, use the PREPARECSN parameter while enabling supplemental logging to make sure that the Data Pump export captures the instantiation CSNs for each table included in the export. On import, these instantiation CSNs populate the target system tables and views.
In this post, you use ADD SCHEMATRANDATA
with PREPARECSN WAIT
to enable the supplemental logging. The WAIT option specifically instructs the system to wait for in-flight transactions and prepare table instantiation.
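A sketch in the admin client, using the sample schema and a source PDB credential alias:

```
DBLOGIN USERIDALIAS source_pdb1 DOMAIN OracleGoldenGate
ADD SCHEMATRANDATA dms_sample PREPARECSN WAIT
INFO SCHEMATRANDATA dms_sample
```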
If you’re migrating multiple pluggable databases, log in to the respective pluggable databases to enable supplemental logging.
Configure and start Extract for the source
Connect to the pluggable database using the DBLOGIN
command from Oracle GoldenGate adminclient to add the extract. You also connect to the CDB using the alias source_cdb
to register the Extract.
Add the Extract:
Edit the Extract parameters:
Add the Extract trail:
Register and start the Extract:
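A hedged sketch of the whole sequence in the admin client; the Extract name exta, trail name ea, and credential aliases are illustrative:

```
-- Connect to the PDB and add the Extract
DBLOGIN USERIDALIAS source_pdb1 DOMAIN OracleGoldenGate
ADD EXTRACT exta, INTEGRATED TRANLOG, BEGIN NOW

-- Example parameter file contents (EDIT PARAMS exta)
EXTRACT exta
USERIDALIAS source_cdb DOMAIN OracleGoldenGate
EXTTRAIL ea
SOURCECATALOG PDB1
TABLE dms_sample.*;

-- Add the trail, then register and start the Extract from the CDB root
ADD EXTTRAIL ea, EXTRACT exta
DBLOGIN USERIDALIAS source_cdb DOMAIN OracleGoldenGate
REGISTER EXTRACT exta DATABASE CONTAINER (PDB1)
START EXTRACT exta
```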
If you want to migrate from multiple PDBs, repeat the same steps for <schema-name>@<pdb-name>
to create separate Extracts. If you have Oracle Database 21c as your source database and you plan to use the per-PDB-Extract feature, you don’t need to connect to the root container with the common user c##ggadmin
to register the extract. You can create a per-PDB-Extract connecting to the local ggadmin
user in a specific pluggable database to create and register the Extract.
We recommend freezing DDL changes in the source database during the migration. After the initial migration is complete and the source and target databases are in sync using Replicat, you can apply DDL changes to the source database and use the DDL INCLUDE MAPPED parameter in Extract to enable DDL replication until you conduct the cutover to the target database. For more information on DDL replication, refer to DDL Replication.
Create a GoldenGate heartbeat table for the target database
A GoldenGate heartbeat table is useful for monitoring the replication latency between the source and target databases. For a multitenant database, the heartbeat table must be located in the PDB being replicated. For more details on adding a heartbeat table, see ADD HEARTBEATTABLE. For Oracle multitenant databases, the heartbeat objects and jobs are created in the schema of the user connected to the database using the DBLOGIN command.
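A sketch in the admin client, assuming a credential alias named target_pdb1 for the target PDB:

```
DBLOGIN USERIDALIAS target_pdb1 DOMAIN OracleGoldenGate
ADD HEARTBEATTABLE
```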
If you’re migrating multiple PDBs, repeat the same steps to create a heartbeat table in the PDBs being migrated.
Create a GoldenGate checkpoint table in the target database
Use ADD CHECKPOINTTABLE to create a checkpoint table in the target database. We recommend using a checkpoint table for Replicat.
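A sketch in the admin client; the checkpoint table name is illustrative:

```
DBLOGIN USERIDALIAS target_pdb1 DOMAIN OracleGoldenGate
ADD CHECKPOINTTABLE ggadmin.ggcheckpoint
```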
Complete the initial load using Oracle Data Pump
Before initiating the Oracle Data Pump export for source database schemas, it is crucial to verify that active transactions initiated prior to the creation of the GoldenGate Extract have completed. This step verifies that ongoing transactions are captured by either the GoldenGate extract or the Oracle Data Pump export process.
The source for this information is the V$TRANSACTION
performance view. After starting your Extract, determine which transactions exist in this view. When a transaction completes, it no longer appears in the view. You might have to query this view multiple times. If a particular transaction runs longer than expected, investigate who is running it and what it is doing. Ultimately, you might need to terminate the session that owns the long-running transaction before beginning the instantiation of your target system.
Use the following query on the source database to determine when the transactions that started before the Extract was created have committed or rolled back, making it safe to start the backup of the source database:
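A sketch of such a query; the timestamp is a placeholder for your Extract creation time:

```sql
-- Sessions whose transactions started before the Extract was created
SELECT s.sid, s.serial#, s.username, t.start_time
FROM v$transaction t
JOIN v$session s ON t.ses_addr = s.saddr
WHERE t.start_date < TO_DATE('2024-01-15 10:00:00', 'YYYY-MM-DD HH24:MI:SS');
```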
Back up the source database
We use the Oracle Data Pump Export utility with the expdp
command to take the backup of the required schemas. You don’t need to use the Data Pump parameter FLASHBACK_SCN
because the objects being replicated by Oracle GoldenGate were already prepared (using PREPARECSN
while adding schematrandata
) in an earlier step. Therefore, after importing the objects into the target database, Replicat determines which transactions from the trail files to apply without introducing duplicate data.
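A sketch of the export command; the connect string, directory, and file names are illustrative:

```shell
expdp system@source_pdb1 \
    schemas=DMS_SAMPLE \
    directory=DATA_PUMP_DIR \
    dumpfile=dms_sample_%U.dmp \
    logfile=dms_sample_exp.log \
    parallel=4
```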
Transfer dump files to the target RDS for Oracle DB instance
Create a database link between your source DB instance and your target DB instance. Your local Oracle instance must have network connectivity to the DB instance in order to create a database link and transfer your export dump file.
Use DBMS_FILE_TRANSFER to copy the dump files from the source database instance to the target DB instance. You can also transfer dump files from the source database to the target DB instance using Amazon Elastic File System (Amazon EFS) integration with Amazon RDS for Oracle. For more information, see Transferring files between RDS for Oracle and an Amazon EFS file system.
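A sketch of the database link and file transfer; the link name, credentials, and file name are illustrative:

```sql
-- Database link from the source to the target RDS for Oracle PDB
CREATE DATABASE LINK to_rds
  CONNECT TO admin IDENTIFIED BY "your_password_here"
  USING 'target_pdb1';

-- Copy the dump file to the target's DATA_PUMP_DIR
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'dms_sample_01.dmp',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'dms_sample_01.dmp',
    destination_database         => 'to_rds');
END;
/
```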
Import the schema in the target RDS for Oracle DB instance
Use the Oracle Data Pump Import utility to import the schema in the target DB instance to perform the initial data load. For more information on using Oracle Data Pump on an RDS for Oracle DB instance, see Importing using Oracle Data Pump.
Data Pump jobs are started asynchronously. You can view the contents of the import log by using the rdsadmin.rds_file_util.read_text_file
procedure:
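For example, assuming an import log named dms_sample_imp.log:

```sql
SELECT *
FROM TABLE(rdsadmin.rds_file_util.read_text_file(
  p_directory => 'DATA_PUMP_DIR',
  p_filename  => 'dms_sample_imp.log'));
```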
After you complete the data import to the target RDS for Oracle DB instance, re-enable automated backups and archive logging on the target DB instance.
Enable automated backups of your target RDS for Oracle database instance. You enable automated backups by setting the backup retention period to a positive non-zero value. For more information, see Enabling automated backups.
Enable archive logging on the target DB instance:
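On RDS for Oracle, archive logging is enabled by setting a log retention window; the 24-hour value here is illustrative:

```sql
EXEC rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 24);
COMMIT;
```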
Start change data capture
You have already started the Extract process to capture the transactions happening on the source database, and Oracle GoldenGate accumulates them in trail files until you start the Replicat process to apply the transactions on the target database. Oracle GoldenGate supports different types of Replicat. Refer to Which Replicat to choose to understand the suitable type based on your use case. In this post, you configure Integrated Replicat for ongoing replication using Oracle GoldenGate.
Create and start Replicat
In this step, you create and start the Replicat process.
Add Replicat
Start Replicat
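A hedged sketch in the admin client; the Replicat name repa, trail name ea, and checkpoint table assume the illustrative names used in this post:

```
DBLOGIN USERIDALIAS target_pdb1 DOMAIN OracleGoldenGate
ADD REPLICAT repa, INTEGRATED, EXTTRAIL ea, CHECKPOINTTABLE ggadmin.ggcheckpoint

-- Example parameter file contents (EDIT PARAMS repa)
REPLICAT repa
USERIDALIAS target_pdb1 DOMAIN OracleGoldenGate
MAP PDB1.dms_sample.*, TARGET dms_sample.*;

START REPLICAT repa
```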
You used the PREPARECSN parameter while enabling supplemental logging on the database schema DMS_SAMPLE, and Oracle Data Pump for the initial data load, so you can start the Replicat without specifying positioning parameters (for example, AFTERCSN or ATCSN).
Monitor replication
Now you can test DDL (if enabled) and DML replication by creating sample tables and inserting dummy data. You can also count the rows in the source and target tables to confirm that the record counts match. Use the following commands to monitor the replication using the admin client:
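For example (the process names are illustrative):

```
INFO ALL
LAG EXTRACT exta
LAG REPLICAT repa
STATS REPLICAT repa, TOTAL
```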
Log in to the target pluggable database using the ggadmin
database user account and check the lag through the heartbeat table created in the target database:
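A sketch against the GG_LAG view that ADD HEARTBEATTABLE creates in the ggadmin schema (column names per that view):

```sql
SELECT remote_database, incoming_path, incoming_lag
FROM ggadmin.gg_lag;
```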
You can also use Oracle GoldenGate Performance Metrics Service for real-time monitoring of Oracle GoldenGate processes configured.
Set up automated tasks to purge trail files
To purge trail files after Oracle GoldenGate has finished processing them, set up automated purge tasks. This mitigates excessive disk space consumption on the Oracle GoldenGate hub. Follow the documentation to add a purge trails task.
Rollback Plan
Any migration plan is incomplete without a proper rollback strategy. In this post, you use the RDS for Oracle 19c CDB architecture (multi-tenant) as the target database. An RDS for Oracle 19c CDB architecture (single-tenant or multi-tenant) database does not support configuring Oracle GoldenGate Integrated Extract. One rollback strategy is to configure reverse replication from the RDS for Oracle CDB architecture (single-tenant or multi-tenant) to the source Oracle database using AWS DMS Binary Reader for change data capture (CDC). This requires setting up an AWS DMS CDC-only task to capture and replicate the changes from the time of application cutover to the source on-premises database. For more information, see Working with an AWS-managed Oracle database as a source for AWS DMS. This keeps the original database in sync with your RDS for Oracle instance so you can fail back in case of issues.
Application cutover to Amazon RDS for Oracle
Complete the following steps to cut over your application:
- Stop the application to make sure that you don’t have transactions on the source database.
- After new connections and transactions are no longer allowed on the source database, monitor the database for active transactions to complete. You will get zero rows for the following query from an idle database:
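A sketch of such a query:

```sql
-- Returns zero rows when no transactions remain open
SELECT s.sid, s.serial#, s.username, t.start_time
FROM v$transaction t
JOIN v$session s ON t.ses_addr = s.saddr;
```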
- Monitor Extract to make sure it has caught up with extracting outstanding transactions. When you see the Extract no longer moving forward, you can stop it.
- After Extract has been stopped, you must wait for Replicat to apply the outstanding trail file data. Use the
SEND REPLICAT
command to check if Replicat has applied the outstanding trail file data:
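For example (the Replicat name is illustrative):

```
SEND REPLICAT repa LOGEND
```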
When Replicat has applied all the trail file data, the value YES
is returned.
- Now that the replicated data is applied to the target database, stop the Replicat:
- Reset the sequences in the target using the source values. Because you haven't enabled Oracle GoldenGate support for Oracle sequences during CDC from the source database, generate the latest sequence values from the source and apply them on the target database to avoid sequence value inconsistencies. Run the following query in the source database to generate the DDL to reset the sequences, then run the generated DDL statements in the target database.
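A sketch of the generation query, using the Oracle 18c and later ALTER SEQUENCE ... RESTART syntax and the sample schema:

```sql
SELECT 'ALTER SEQUENCE ' || sequence_owner || '.' || sequence_name ||
       ' RESTART START WITH ' || (last_number + increment_by) || ';'
FROM dba_sequences
WHERE sequence_owner = 'DMS_SAMPLE';
```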
- Stop the source database.
- Modify the RDS for Oracle DB instance to enable Multi-AZ (if required).
- Modify job_queue_processes in the parameter group associated with the target database instance to the appropriate value as configured in the source database instance.
- Update the application configuration or DNS CNAME records to point to the target RDS for Oracle DB instance endpoint.
- Before you start the application, if you are configuring reverse replication as a fallback strategy, start the Extract on RDS for Oracle and the Replicat on the original source Oracle database.
- Start the application.
Clean up
As part of this migration, you made changes in the source database, and deployed Amazon EC2 and Oracle GoldenGate resources on Amazon EC2 and Amazon RDS for Oracle to replicate the data from the on-premises database. Be sure to remove any resources and configurations you no longer need.
Conclusion
In this post, we discussed the steps involved in migrating an on-premises multitenant Oracle database to an RDS for Oracle CDB architecture (multi-tenant) using Oracle Data Pump. We also used GoldenGate Microservices Architecture for ongoing replication to reduce downtime. With the complexity involved in database migrations, we highly recommend testing the migration steps in non-production environments prior to making changes in production.
We welcome your feedback. If you have any questions or suggestions, leave them in the comments section.
About the Authors
Alok Srivastava is a Senior Database Consultant and Data Architect at AWS, specializing in database migration and modernization programs. Leveraging his expertise in both traditional and cutting-edge technologies, he guides AWS customers and partners through their journey to the AWS Cloud. Alok’s role encompasses not only database solutions but also the integration of GenAI to enhance data-driven insights and innovation.
Vishal Patil is a Senior Database Specialist Solutions Architect with Amazon Web Services. Vishal helps customers migrate and optimize Oracle workloads in AWS.
Pramod Yadav is a Cloud Support Database Engineer at AWS. He has been with AWS for almost 3 years. With over 10 years of experience in database engineering and infra-architecture, Pramod is also a subject matter expert in AWS DMS. With a focus on providing excellent customer experiences in the AWS Cloud, he works with external customers to handle a variety of scenarios, such as troubleshooting Amazon RDS infrastructure and assisting with AWS DMS migrations.