Importing and Exporting Databases

This section describes how to export data from MySQL Servers and import it into MySQL DB Systems.

The following method is available:
  • MySQL Shell: (Recommended) MySQL Shell's dump/load utilities are purpose-built for use with MySQL Database Service. Useful for all types of exports and imports. Supports export to, and import from, Object Storage.

    The minimum supported source version of MySQL is 5.7.9.

MySQL Shell

This section describes importing and exporting data using MySQL Shell.

MySQL Shell provides the following utilities:
  • dumpInstance(): MySQL instance export utility which exports all compatible schemas to an Object Storage bucket or to local files. By default, this utility exports users, events, routines, and triggers.
  • dumpSchemas(): schema export utility which exports selected schemas to an Object Storage bucket or to local files. For more information, see MySQL Shell Instance and Schema Dump Utilities.
  • loadDump(): an import utility which imports schemas to a DB System. For more information, see MySQL Shell Dump Loading Utility.

    To import a schema to a MySQL DB System, MySQL Shell must be installed on a Compute instance configured to access the target MySQL DB System. For more information, see Connecting to a DB System.

Note

These utilities were introduced in MySQL Shell 8.0.21.

The dump files are exported as DDL files specifying the schema structure and tab-separated value (.tsv) files containing the data. The .tsv files are compressed using zstd, by default, but gzip is also available as an option. You can also choose no compression but, if you are uploading to Object Storage, it is recommended to use the default.

To further improve performance, large tables are chunked by default. The default chunk size is 32MB. Chunking can be disabled, but this is not recommended for large databases. During import, the chunks can be imported by parallel threads, which can greatly improve import performance.
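The chunking behavior described above can be tuned at dump time. A sketch in MySQL Shell's JavaScript mode, assuming a hypothetical bucket mds-bucket, namespace NamespaceID, and prefix dumpPrefix:

```javascript
// Larger chunks for very large tables; "64M" and the thread count
// are illustrative values, not recommendations for every workload.
util.dumpInstance("dumpPrefix", {
  osBucketName: "mds-bucket",     // placeholder bucket name
  osNamespace: "NamespaceID",     // placeholder namespace
  bytesPerChunk: "64M",           // override the 32MB default
  threads: 8
})

// Chunking can be disabled entirely, although this is not
// recommended for large databases:
// util.dumpInstance("dumpPrefix", {osBucketName: "mds-bucket",
//     osNamespace: "NamespaceID", chunking: false})
```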

Compatibility Between On-Premise MySQL and MySQL Database Service

MySQL Database Service has several security-related restrictions that are not present in an on-premise instance of MySQL. To make it easier to load existing databases into the Service, the dump commands in MySQL Shell can detect potential issues and, in some cases, automatically adjust your schema definition to be compliant.

The ocimds option, when set to true, performs compatibility checks on the schemas for these issues and aborts the dump if any are found. The loadDump command only allows the import of dumps created with the ocimds option enabled.

Some issues found by the ocimds option may require you to manually make changes to your database schema before it can be loaded into the MySQL Database Service. However, the compatibility option can be used to automatically modify the dumped schemas, resolving some of these compatibility issues. You can pass one or more of the following, comma-separated options to compatibility:

  • force_innodb: MySQL Database Service supports only the InnoDB storage engine. This option changes the ENGINE= clause of CREATE TABLE statements that use incompatible storage engines, replacing them with InnoDB.
  • strip_definers: strips the "DEFINER=account" clause from views, routines, events, and triggers. MySQL Database Service requires special privileges to create these objects with a definer other than the user loading the schema. With the clause stripped, these objects are created with the loading user as the definer. Views and routines also have their SQL SECURITY clause changed from DEFINER to INVOKER, ensuring that the access permissions of the account querying or calling them are applied, rather than those of the user that created them. If your database security model requires that views and routines have more privileges than their invoker, you must manually modify the schema before loading it. For more information, see DEFINER and SQL Security.
  • strip_restricted_grants: certain privileges, such as RELOAD, FILE, SUPER, BINLOG_ADMIN, and SET_USER_ID, are restricted in the MySQL Database Service, and it is not possible to create users with these privileges. This option strips them from dumped GRANT statements.
  • strip_role_admin: the ROLE_ADMIN privilege is restricted in the MySQL Database Service, and it is not possible to create users with this privilege. This option strips it from dumped GRANT statements.
  • strip_tablespaces: tablespaces have some restrictions in the MySQL Database Service. This option strips the TABLESPACE= option from CREATE TABLE statements, so that tables are created in their default tablespaces.

Additionally, DATA DIRECTORY, INDEX DIRECTORY, and ENCRYPTION options in CREATE TABLE statements are always commented out in DDL scripts if the ocimds option is enabled.
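The compatibility checks and modifications above are passed together in a single dump call. A sketch, with placeholder bucket, namespace, and prefix names:

```javascript
// With ocimds: true, the dump aborts on any issue that the listed
// compatibility modifications cannot resolve automatically.
util.dumpInstance("dumpPrefix", {
  osBucketName: "mds-bucket",
  osNamespace: "NamespaceID",
  ocimds: true,
  compatibility: ["force_innodb", "strip_definers",
                  "strip_restricted_grants", "strip_tablespaces"]
})
```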

Note

If you intend to export data from an older version of MySQL, such as 5.7.9 (the minimum supported source version), it is recommended to run the MySQL Shell Upgrade Checker Utility to generate a report of all potential issues with your migration. For more information, see MySQL Shell User Guide - Upgrade Checker Utility.
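For example, the Upgrade Checker Utility can be run from MySQL Shell against the source server before dumping; the connection details here are placeholders:

```javascript
// Checks a MySQL 5.7 source for issues that could affect
// migration to the 8.0 series.
util.checkForServerUpgrade("user@source-host:3306",
                           {targetVersion: "8.0.21", outputFormat: "TEXT"})
```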

Exporting Data to Object Storage with MySQL Shell

This task describes how to export data from a supported MySQL Server source to an Object Storage bucket using the MySQL Shell dumpInstance utility.

This task requires the following:
  • MySQL Shell 8.0.21, or higher.
  • MySQL Server 5.7.9, or higher.
  • Access to Object Storage and an existing bucket.
  • The name of your Object Storage namespace. See Understanding Object Storage Namespaces for more information.
  • A valid configuration file. See SDK and CLI Configuration File. If you have installed and configured the Command Line Interface in the default location, you have a valid configuration file. If you have not installed and configured the CLI, you must either install it, or create a configuration file manually.
  • You have run the dumpInstance command with the dryRun and ocimds parameters set to true. This performs a test run of the export, checking for compatibility issues, and listing those issues in the output. For more information on compatibility issues, and how to correct them, see MySQL Shell Instance and Schema Dump Utilities.
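The dry run mentioned above might look like the following sketch; the bucket, namespace, and prefix names are placeholders, and with dryRun set to true no data is actually exported:

```javascript
// Test run: checks for MySQL Database Service compatibility issues
// and lists them in the output without writing any dump files.
util.dumpInstance("dumpPrefix", {
  osBucketName: "mds-bucket",
  osNamespace: "NamespaceID",
  dryRun: true,
  ocimds: true
})
```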
To export a local MySQL instance to an Object Storage bucket, you can either export the entire MySQL instance, using util.dumpInstance, or specific schemas using util.dumpSchemas. The syntax for each command is:
  • util.dumpInstance(outputUrl[, options])
  • util.dumpSchemas(schemas, outputUrl[, options])

This task uses util.dumpInstance with compatibility options.

  1. The following command exports an entire instance, stripping any grants which cannot be used in a MySQL Database Service DB System:
    shell>JS> util.dumpInstance("bucketPrefix", {osBucketName: "mds-bucket", osNamespace: "NamespaceID", 
    threads: n, ocimds: true, compatibility: ["strip_restricted_grants"]})
    where:
    • util.dumpInstance: command to export all data in the MySQL instance.
    • bucketPrefix: (Optional) adds a prefix to the files uploaded to the bucket. If specified, the files are uploaded to the defined bucket with the prefix, in the format bucketPrefix/filename, similar to a file path. For example, if bucketPrefix is set to test, every file uploaded to the defined bucket, mds-bucket, is uploaded as test/filename. If you download a file, the prefix is treated as a folder in the download. For local exports, this parameter is the path to the local directory you want to export to.
      Note

      Although the contents of this parameter are optional, the quotation marks are not. Even if you do not intend to use a prefix, you must include the quotation marks in the syntax of the command. For example:
      shell>JS> util.dumpInstance("", {osBucketName: "mds-bucket", osNamespace: "NamespaceID", 
      threads: n, ocimds: true, compatibility: ["strip_restricted_grants"]})
    • osBucketName: the case-sensitive name of the Object Storage bucket to export to. MySQL Shell uses the tenancy and user information defined in the config file.
    • osNamespace: the unique identifier of the Object Storage namespace associated with your tenancy. To retrieve this using the CLI, run command oci os ns get.
    • threads: specify the number of processing threads to use for this task. Default is 4. For best performance, it is recommended to set this parameter to the number of CPU cores available on the database server.
    • ocimds: enables checks for compatibility with MySQL Database Service. If this is set to true, it is not possible to export an instance if it is incompatible with MySQL Database Service. For example, privileges such as RELOAD, FILE, SUPER, BINLOG_ADMIN, and SET_USER_ID are not available to any MySQL Database Service user and storage engines such as MyISAM cannot be used on MySQL Database Service.
    • compatibility: (Optional) list of parameters specifying which modifications are performed on the exported data. For more information, see Compatibility Between On-Premise MySQL and MySQL Database Service.
    Note

    For large datasets, it is recommended to use the bytesPerChunk parameter to define larger chunks. The default chunk size is 32MB. To increase the size of the individual chunks, add the bytesPerChunk parameter to the command. For example: bytesPerChunk: 128M specifies a chunk size of 128MB.

    For more information on the options available to the dumpInstance and dumpSchemas utilities, see Instance and Schema Dump Utilities

  2. The command generates output similar to the following:
    Checking for compatibility with MySQL Database Service 8.0.21
    NOTE: User root@localhost had restricted privileges (RELOAD, FILE, SUPER, BINLOG_ADMIN, SET_USER_ID) removed
    NOTE: Database world had unsupported ENCRYPTION option commented out
    NOTE: Database world_x had unsupported ENCRYPTION option commented out
    Compatibility issues with MySQL Database Service 8.0.21 were found and repaired. 
    Please review the changes made before loading them.
    Acquiring global read lock
    All transactions have been started
    Locking instance for backup
    Global read lock has been released
    Writing global DDL files
    Writing users DDL
    Writing DDL for schema `world`
    Writing DDL for table `world`.`city`
    Preparing data dump for table `world`.`city`
    .....
    .....
    Data dump for table `world`.`city` will be chunked using column `ID`
    Preparing data dump for table `world`.`country`
    Data dump for table `world`.`country` will be chunked using column `Code`
    Preparing data dump for table `world`.`countrylanguage`
    Data dump for table `world`.`countrylanguage` will be chunked using column `CountryCode`
    Preparing data dump for table `world_x`.`city`
    Data dump for table `world_x`.`city` will be chunked using column `ID`
    Preparing data dump for table `world_x`.`country`
    Data dump for table `world_x`.`country` will be chunked using column `Code`
    Preparing data dump for table `world_x`.`countryinfo`
    Data dump for table `world_x`.`countryinfo` will be chunked using column `_id`
    Preparing data dump for table `world_x`.`countrylanguage`
    Data dump for table `world_x`.`countrylanguage` will be chunked using column `CountryCode`
    Running data dump using 8 threads.
    NOTE: Progress information uses estimated values and may not be accurate.
    Writing DDL for table `world_x`.`countryinfo`
    Writing DDL for table `world_x`.`countrylanguage`
    Data dump for table `world_x`.`country` will be written to 1 file
    Data dump for table `world_x`.`city` will be written to 1 file
    Data dump for table `world`.`city` will be written to 1 file
    Data dump for table `world`.`countrylanguage` will be written to 1 file
    Data dump for table `world`.`country` will be written to 1 file
    Data dump for table `world_x`.`countryinfo` will be written to 1 file
    Data dump for table `world_x`.`countrylanguage` will be written to 1 file
    2 thds dumping - 100% (10.84K rows / ~10.81K rows), 1.33K rows/s, 71.70 KB/s uncompressed, 15.01 KB/s compressed
    Duration: 00:00:08s
    Schemas dumped: 3
    Tables dumped: 7
    Uncompressed data size: 514.22 KB
    Compressed data size: 106.78 KB
    Compression ratio: 4.8
    Rows written: 10843
    Bytes written: 106.78 KB
    Average uncompressed throughput: 62.96 KB/s
    Average compressed throughput: 13.07 KB/s
The data is uploaded to the specified bucket.
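To confirm the upload, you can list the objects in the bucket with the CLI; the bucket name and prefix are the placeholders used above:

```shell
# Lists the dump files written under the given prefix.
oci os object list --bucket-name mds-bucket --prefix bucketPrefix/
```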

Importing to MySQL DB System from Object Storage Using MySQL Shell

This task describes how to import data from Object Storage to a MySQL DB System.

This task requires the following:
  • A Compute instance with access to the target MySQL DB System. This example assumes you are using a Compute running Oracle Linux.
  • MySQL Shell 8.0.21, or higher, installed on the Compute instance.
  • Access to Object Storage and an existing bucket which contains the exported files.
  • The name of your Object Storage namespace. See Understanding Object Storage Namespaces for more information.
  • A valid Command Line Interface (CLI) configuration file. See SDK and CLI Configuration File. If you have installed and configured the CLI in the default location, you have a valid configuration file. If you have not installed and configured the CLI, you must either install it, or create a configuration file manually.
  • Ensure your DB System has enough storage space for the import.
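If you need to create the configuration file manually, it follows the standard OCI SDK and CLI format. All values below are placeholders:

```ini
# ~/.oci/config - placeholder values; substitute your own OCIDs,
# key fingerprint, region, and API key path.
[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-ashburn-1
key_file=~/.oci/oci_api_key.pem
```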
  1. SSH into the Compute instance and run MySQL Shell.
  2. Start a global session by connecting to the DB System's Endpoint, using the following command:
    >MySQL>JS> \c userName@DBSystemEndpointIPAddress
    where:
    • \c: Shell command to establish a new connection.
    • userName: the username for the DB System.
    • DBSystemEndpointIPAddress: the IP address of the DB System Endpoint.
    The following output is generated:
    Creating a session to 'userName@DBSystemEndpointIPAddress'
    Please provide the password for 'userName@DBSystemEndpointIPAddress': ********
    Save password for 'userName@DBSystemEndpointIPAddress'? [Y]es/[N]o/Ne[v]er (default No): y
    Fetching schema names for autocompletion... Press ^C to stop.
    Your MySQL connection id is 21 (X protocol)
    Server version: 8.0.21-cloud MySQL Enterprise - Cloud
    No default schema selected; type \use <schema> to set one.
  3. Switch to the JavaScript input type, by typing \js and pressing Enter.
  4. Run the following command to import data from an Object Storage bucket, named mds-bucket, to the MySQL Database Service DB System connected to in the previous steps:
    >MySQL>JS> util.loadDump("bucketPrefix", {osBucketName: "mds-bucket", osNamespace: "NamespaceID", 
    threads: n})
    where:
    • util.loadDump: command to import data from the specified Object Storage bucket to MySQL DB System.
    • bucketPrefix: (Optional). If the data was uploaded to Object Storage with a prefix, you must use that prefix in the import command.
    • osBucketName: the case-sensitive name of the Object Storage bucket to import from. MySQL Shell uses the tenancy and user information defined in the config file.
    • osNamespace: the unique identifier of the Object Storage namespace associated with your tenancy. To retrieve this using the CLI, run command oci os ns get.
    • threads: specify the number of processing threads to use for this task. Default is 4. For best performance, it is recommended to set this parameter to twice the number of OCPUs used by the target MySQL DB System.

    For more information on the options available to the loadDump utility, see Dump Loading Utility

    Note

    If you are importing data exported from a MySQL 5.7 server (5.7.9 or higher), you must also use the "ignoreVersion": true option. If this option is not defined and you try to import data exported from 5.7, the import fails.
  5. The command generates output similar to the following:
    Loading DDL and Data from OCI ObjectStorage bucket=Shell-Bucket, prefix='dump1' using 12 threads.
    Target is MySQL 8.0.21-cloud (MySQL Database Service). Dump was produced from MySQL 8.0.21
    Checking for pre-existing objects...
    Executing common preamble SQL
    Executing DDL script for schema `world_x`
    Executing DDL script for `world_x`.`countrylanguage`
    Executing DDL script for `world_x`.`country`
    Executing DDL script for `world_x`.`countryinfo`
    Executing DDL script for `world_x`.`city`
    Executing DDL script for schema `world`
    Executing DDL script for `world`.`countrylanguage`
    Executing DDL script for `world`.`country`
    Executing DDL script for `world`.`city`
    Executing DDL script for schema `imdb`
    [Worker006] world_x@countryinfo@@0.tsv.zst: Records: 239  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker002] world@country@@0.tsv.zst: Records: 239  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker003] world_x@country@@0.tsv.zst: Records: 239  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker005] world_x@countrylanguage@@0.tsv.zst: Records: 984  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker008] world@countrylanguage@@0.tsv.zst: Records: 984  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker001] world_x@city@@0.tsv.zst: Records: 4079  Deleted: 0  Skipped: 0  Warnings: 0
    [Worker007] world@city@@0.tsv.zst: Records: 4079  Deleted: 0  Skipped: 0  Warnings: 0
    Executing common postamble SQL                           
                                            
    7 chunks (10.84K rows, 514.22 KB) for 7 tables in 3 schemas were loaded in 5 sec (avg throughput 102.84 KB/s)
    0 warnings were reported during the load.
    Note

    If you cancel the process while it is running (by pressing Ctrl+C once), all existing threads are allowed to complete, and MySQL Shell writes a progress file to Object Storage, recording the progress of the import. This enables you to pick up where you left off if you restart the import, assuming no external changes were made to the data. If you cancel the import by pressing Ctrl+C twice, the progress file is written and the InnoDB engine rolls back the ongoing transactions, which can take some time. For more information on progress files, see MySQL Shell Progress File.
The data is imported into the DB System.
Note

It is also possible to import data from Object Storage while the export is uploading that data. For more information, see Simultaneous Import and Export with MySQL Shell.
MySQL Shell Progress File

This section describes the use of a local progress file instead of the default progress file stored in the Object Storage bucket.

If your database export is extremely large, with many thousands of chunks, it is recommended to set the progressFile: "/path/to/filename" parameter. This creates the progress file on the local file system of the Compute instance, instead of in the same Object Storage bucket as the data, and can improve the performance of the import.

The following example loads data from Object Storage and creates a progress file called progressfile.json in the opc user's home directory on the Compute instance:
util.loadDump("bucketPrefix", {osBucketName: "mds-bucket", osNamespace: "NamespaceID", 
             threads: 4, progressFile: "/home/opc/progressfile.json"})
Note

If you specify the progressFile: parameter, but leave the value blank, progressFile: "", no progress file is written, and it is not possible to resume the import.
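If you want to discard the recorded progress and reload the dump from the beginning, loadDump also accepts a resetProgress option. A sketch, with placeholder names and paths:

```javascript
// Ignores the existing progress file and restarts the load; objects
// already created by a previous attempt may need to be dropped first.
util.loadDump("bucketPrefix", {osBucketName: "mds-bucket",
              osNamespace: "NamespaceID",
              progressFile: "/home/opc/progressfile.json",
              resetProgress: true})
```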
Simultaneous Import and Export with MySQL Shell

Enabling simultaneous export and import of data.

This option enables loadDump to load a dump while it is still being created in the Object Storage bucket. When all currently uploaded chunks are processed, the command waits for more data until the dump is marked as complete or the defined timeout (in seconds) elapses.

The following example specifies a 5 minute timeout:

>MySQL>JS> util.loadDump("bucketPrefix", {osBucketName: "mds-bucket", 
           osNamespace: "NamespaceID", threads: n, waitDumpTimeout: 300})