Oracle

Option A – SFTP

Step 1: Export the database DDL (schema only, no data) using your favorite tool.
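One way to script Step 1 is with Oracle Data Pump: export metadata only, then turn the dump into a DDL script with impdp's SQLFILE option. A minimal sketch — the connection string, the `DUMP_DIR` directory object, and the file names are all placeholders:

```shell
# Export metadata only (no table data) for the whole database.
# DUMP_DIR is an Oracle directory object that must already exist.
expdp system/"$SYS_PWD"@SRCDB full=y content=metadata_only \
      directory=DUMP_DIR dumpfile=ddl_only.dmp logfile=ddl_exp.log

# Turn the metadata dump into a plain SQL script without importing anything.
impdp system/"$SYS_PWD"@SRCDB directory=DUMP_DIR dumpfile=ddl_only.dmp \
      sqlfile=schema_ddl.sql
```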

Step 2: Export the data of any table with fewer than 100 million rows to a file, as a standard export in the form of INSERT statements. I would export any table with more than 1 million rows in its own session, and all the smaller tables together in a single session.
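The per-session split above could be scripted along these lines. `export_as_inserts.sql` is a hypothetical helper that spools one table's rows as INSERT statements, and the example table lists would in practice come from `DBA_TABLES.NUM_ROWS`:

```shell
#!/bin/sh
# Tables over 1 million rows each get their own background session.
for t in ORDERS LINE_ITEMS EVENTS; do            # example large tables
  sqlplus -s "$CONN" @export_as_inserts.sql "$t" > "${t}_inserts.sql" &
done

# All smaller tables share a single session, appended to one file.
for t in COUNTRIES CURRENCIES STATUS_CODES; do   # example small tables
  sqlplus -s "$CONN" @export_as_inserts.sql "$t" >> small_tables_inserts.sql
done

wait   # block until the background exports finish
```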

Step 3: Compress, encrypt and SFTP those files to the target AWS EC2 instance. Once there, run all the inserts with referential integrity (RI) constraints disabled. If the total size is large, Tsunami UDP is recommended here instead of SFTP.
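Step 3 might look roughly like this with standard tools (the host name, key file, and file names are placeholders; the GPG passphrase would be shared out of band):

```shell
# Bundle and compress the DDL and data scripts.
tar czf oracle_export.tar.gz schema_ddl.sql *_inserts.sql

# Encrypt the archive symmetrically with AES-256.
gpg --symmetric --cipher-algo AES256 oracle_export.tar.gz

# Push the encrypted archive to the EC2 target.
sftp -i migration_key.pem ec2-user@ec2-target.example.com <<'EOF'
put oracle_export.tar.gz.gpg
EOF
```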

Option B – AWS DB Migration Service (DMS)

Instead of using tools to manually unload and then upload the data, we could simply use AWS Database Migration Service (DMS). DMS does all the heavy lifting itself and makes it very straightforward to set up replication jobs. It also monitors replication tasks for network or host failures, and automatically provisions a replacement host for failures that can't be repaired.
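Setting up such a job with the AWS CLI follows the usual DMS shape: define source/target endpoints and a replication instance, then create and start a task. A sketch assuming those ARNs and a table-mapping file already exist:

```shell
# Create a full-load migration task (the ARNs and the table-mapping
# file are placeholders for resources created beforehand).
aws dms create-replication-task \
    --replication-task-identifier oracle-to-aws-full-load \
    --source-endpoint-arn "$SOURCE_ENDPOINT_ARN" \
    --target-endpoint-arn "$TARGET_ENDPOINT_ARN" \
    --replication-instance-arn "$REPL_INSTANCE_ARN" \
    --migration-type full-load \
    --table-mappings file://table-mappings.json

# Start the task once it reports "ready".
aws dms start-replication-task \
    --replication-task-arn "$TASK_ARN" \
    --start-replication-task-type start-replication
```

For ongoing replication rather than a one-shot copy, `--migration-type full-load-and-cdc` keeps the target in sync after the initial load.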

Option C – Snowball

Option D – Oracle Data Pump & Tsunami UDP
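Option D combines a parallel Data Pump export with a UDP-based bulk transfer. A rough sketch — hosts, paths, and credentials are placeholders, and the Tsunami UDP client syntax may vary by build:

```shell
# Source: parallel full export; %U numbers the dump files.
expdp system/"$SYS_PWD"@SRCDB full=y parallel=4 \
      directory=DUMP_DIR dumpfile=exp_%U.dmp logfile=exp.log

# Source: serve the dump files over Tsunami UDP.
cd /u01/dumps && tsunamid exp_*.dmp

# EC2 target: pull every served file, then import with Data Pump.
tsunami connect source-db.example.com get '*' quit
impdp system/"$SYS_PWD"@TGTDB full=y directory=DUMP_DIR dumpfile=exp_%U.dmp
```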

SQL Server

Option A – SFTP

Option B – SQL Server Native Backup/Restore
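For an EC2-hosted SQL Server target, Option B is a plain BACKUP/RESTORE pair with the .bak file shipped through S3. All names below are placeholders; an RDS target would instead use the rdsadmin backup/restore stored procedures:

```shell
# On-prem source: take a compressed native full backup.
sqlcmd -S SRC-SQL01 -Q "BACKUP DATABASE MyDb TO DISK = N'D:\backup\MyDb.bak' WITH COMPRESSION"

# Ship the backup via S3 (SFTP or Snowball work equally well).
aws s3 cp 'D:\backup\MyDb.bak' s3://my-migration-bucket/MyDb.bak

# EC2 target: download and restore.
aws s3 cp s3://my-migration-bucket/MyDb.bak 'D:\restore\MyDb.bak'
sqlcmd -S EC2-SQL01 -Q "RESTORE DATABASE MyDb FROM DISK = N'D:\restore\MyDb.bak' WITH RECOVERY"
```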

Option C – Snowball

Anuj holds professional certifications in Google Cloud, AWS as well as certifications in Docker and App Performance Tools such as New Relic. He specializes in Cloud Security, Data Encryption and Container Technologies.
