Importing Using MySQL Shell

Use MySQL Shell in the Oracle Cloud Infrastructure compute instance to import data to a MySQL HeatWave Service DB system.

Note

For faster import to a standalone DB system, disable crash recovery or use the data import feature. See Enabling or Disabling Crash Recovery and Importing Using the Data Import Feature.
This task requires the following:
  • SSH access to a compute instance with access to the MySQL DB system, enabling you to run MySQL Shell on the compute instance. See Networking.
  • MySQL Shell 8.0.27 or higher.
  • A valid configuration file if the dump files are stored in an Object Storage bucket. If you installed and configured the CLI in the default location, you already have a valid configuration file. If you have not installed and configured the CLI, you must either install it or create a configuration file manually; a minimal sample follows this list. See SDK and CLI Configuration File.
  • Enough storage to accommodate imported data, and enough network bandwidth for the import.
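If you create the configuration file manually, it is a plain-text file, by default at ~/.oci/config. The following is a minimal sketch; every value is a placeholder to replace with your own tenancy details:

  [DEFAULT]
  user=ocid1.user.oc1..<UniqueID>
  fingerprint=<KeyFingerprint>
  key_file=<PathToPrivateKeyPEMFile>
  tenancy=ocid1.tenancy.oc1..<UniqueID>
  region=<Region>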
Do the following to import data to a MySQL DB system:
  1. Run MySQL Shell in the Oracle Cloud Infrastructure compute instance.
  2. Switch to the JavaScript input type by typing \js and pressing Enter.
  3. Run the following command to start a global session by connecting to the endpoint of the DB system:
    \c <UserName>@<DBSystemEndpointIPAddress>
    • \c: Specifies the Shell command to establish a new connection.
    • <UserName>: Specifies the username for the DB system.
    • <DBSystemEndpointIPAddress>: Specifies the IP address of the endpoint of the DB system.
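    For example, with a hypothetical username admin and endpoint IP address 10.0.1.8 (substitute your own values), MySQL Shell prompts for the password after you run:
    \c admin@10.0.1.8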
  4. If you are importing a dump that is located in the local file system of the Oracle Cloud Infrastructure compute instance, run the following command to import data to the MySQL HeatWave Service DB system:
    util.loadDump("<URL>", {waitDumpTimeout: <TimeinSeconds> , updateGtidSet: "append" })
    • util.loadDump: Specifies the command to import data to the DB system.
    • <URL>: Specifies the path to a local directory containing the dump files. For example, /home/opc/dump.
    • waitDumpTimeout: (Optional) Enables you to apply a dump that is still in the process of being created. Tables are loaded as they become available, and the utility waits for the specified number of seconds after new data stops arriving in the dump location. When the timeout elapses, the utility aborts the import.
    • updateGtidSet: "append": (Optional) Adds the transaction IDs from the gtid_executed GTID set recorded in the dump metadata to the DB system's gtid_purged GTID set. This configures the GTID set in the DB system to enable inbound replication from the MySQL instance from which the dump files were exported.
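    For example, a hypothetical import from the local directory /home/opc/dump that waits up to 600 seconds for further dump data and appends the recorded GTID set:
    util.loadDump("/home/opc/dump", {waitDumpTimeout: 600, updateGtidSet: "append"})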
  5. If you are importing a dump from an Oracle Cloud Infrastructure Object Storage bucket, run the following command to import data to the MySQL HeatWave Service DB system:
    util.loadDump("<URL>", {threads: <Number>, osBucketName: "<MDSBucket>", waitDumpTimeout: <TimeinSeconds> , updateGtidSet: "append" })
    • util.loadDump: Specifies the command to import data to the DB system.
    • <URL>: Specifies the path prefix that the dump files have in the bucket, which was assigned using the outputUrl parameter when the dump was created.
    • threads: (Optional) Specifies the number of parallel threads used to load chunks of data into the DB system. The default value is 4.
    • osBucketName: Specifies the name of the Object Storage bucket.
    • waitDumpTimeout: (Optional) Enables you to apply a dump that is still in the process of being created. Tables are loaded as they become available, and the utility waits for the specified number of seconds after new data stops arriving in the dump location. When the timeout elapses, the utility aborts the import.
    • updateGtidSet: "append": (Optional) Adds the transaction IDs from the gtid_executed GTID set recorded in the dump metadata to the DB system's gtid_purged GTID set. This configures the GTID set in the DB system to enable inbound replication from the MySQL instance from which the dump files were exported.
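    For example, assuming a hypothetical bucket named MyBucket and the path prefix mydump assigned when the dump was created:
    util.loadDump("mydump", {threads: 8, osBucketName: "MyBucket", waitDumpTimeout: 600, updateGtidSet: "append"})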
  6. (Optional) If you did not specify the updateGtidSet: "append" option in the util.loadDump command, you can update the gtid_purged GTID set of the DB system as follows:
    1. Retrieve the value of the gtidExecuted field from the @.json dump file.
    2. Connect to the DB system and add the GTID set to gtid_purged using the following command:
      CALL sys.SET_GTID_PURGED("+<gtidSet>")
      This command runs a stored procedure on the DB system to change the system variable value.
      • <gtidSet> is the value from the gtidExecuted field in the @.json dump file.
      • The + sign appends the GTID set to the DB system's gtid_purged system variable.
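      For example, if the gtidExecuted field in @.json contains the hypothetical value 3E11FA47-71CA-11E1-9E33-C80AA9429562:1-77, switch to SQL mode (\sql) and run:
      CALL sys.SET_GTID_PURGED("+3E11FA47-71CA-11E1-9E33-C80AA9429562:1-77")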
The data is imported into the DB system.
