Updating an Exadata Cloud Service Instance

This topic covers how to update the operating system and the tooling on the compute server nodes (for cloud VM clusters, these are called virtual machines) of an Exadata Cloud Service instance. Review all of the information carefully before you begin the updates.

OS Updates

You update the operating systems of Exadata compute nodes by using the patchmgr tool. This utility manages the entire update of one or more compute nodes remotely, including running pre-reboot, reboot, and post-reboot steps. You can run the utility from either an Exadata compute node or a non-Exadata server running Oracle Linux. The server on which you run the utility is known as the "driving system." You cannot use the driving system to update itself. Therefore, if the driving system is one of the Exadata compute nodes on a system you are updating, you must run a separate operation on a different driving system to update that server.

The following two scenarios describe typical ways of performing the updates:

Scenario 1: Non-Exadata Driving System

The simplest way to update the Exadata system is to use a separate Oracle Linux server to update all of the Exadata compute nodes in the system.

Scenario 2: Exadata Node Driving System

You can use one Exadata compute node to drive the updates for the rest of the compute nodes in the system, and then use one of the updated nodes to drive the update on the original Exadata driver node.

For example: You are updating a half rack Exadata system, which has four compute nodes - node1, node2, node3, and node4. First, use node1 to drive the updates of node2, node3, and node4. Then, use node2 to drive the update of node1.

The driving system requires root user SSH access to each compute node the utility will update.
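
Before you begin an update, you can confirm from the driving system that this access is in place. The following check is a minimal sketch; it assumes a target node named node2 and that the root key pair described later in this procedure has already been distributed:

    # A passwordless connection that prints the target's hostname confirms
    # root SSH equivalence from the driving system to the target node.
    ssh -o BatchMode=yes root@node2 hostname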

Preparing for the OS Updates

Caution

Do not install NetworkManager on the Exadata Cloud Service instance. Installing this package and rebooting the system results in the loss of access to the system.

  • Before you begin your updates, review Exadata Cloud Service Software Versions (Doc ID 2333222.1) to determine the latest software version and target version to use.
  • Some steps in the update process require you to specify a YUM repository. The YUM repository URL is:

    http://yum-<region_identifier>.oracle.com/repo/EngineeredSystems/exadata/dbserver/<latest_version>/base/x86_64

    Region identifiers are text strings used to identify Oracle Cloud Infrastructure regions (for example, us-phoenix-1). You can find a complete list of region identifiers in Regions.

    You can run the following curl command to determine the latest version of the YUM repository for your Exadata Cloud Service instance region:

    curl -s -X GET http://yum-<region_identifier>.oracle.com/repo/EngineeredSystems/exadata/dbserver/index.html |egrep "18.1."

    This example returns the most current version of the YUM repository for the US West (Phoenix) region:

    curl -s -X GET http://yum-us-phoenix-1.oracle.com/repo/EngineeredSystems/exadata/dbserver/index.html |egrep "18.1."
    <a href="18.1.4.0.0/">18.1.4.0.0/</a> 01-Mar-2018 03:36 -
  • To apply OS updates, the system's VCN must be configured to allow access to the YUM repository; a quick connectivity check follows this list. For more information, see Option 2: Service Gateway Access to Both Object Storage and YUM Repos.
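
You can spot-check this access from a compute node before you begin. The following is a minimal sketch; it assumes curl is available on the node and reuses the US West (Phoenix) repository URL from the earlier example:

    # An HTTP 200 response indicates that the node can reach the YUM repository.
    curl -sI http://yum-us-phoenix-1.oracle.com/repo/EngineeredSystems/exadata/dbserver/index.html | head -1
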
To update the OS on all compute nodes of an Exadata Cloud Service instance

This example procedure assumes the following:

  • The system has two compute nodes, node1 and node2.
  • The target version is 18.1.4.0.0.180125.3.
  • Each of the two nodes is used as the driving system for the update on the other one.
  1. Gather the environment details.

    1. SSH to node1 as root and run the following command to determine the version of Exadata:

      [root@node1]# imageinfo -ver
      12.2.1.1.4.171128
    2. Switch to the grid user, and identify all compute nodes in the cluster.

      [root@node1]# su - grid
      [grid@node1]$ olsnodes
      node1
      node2
  2. Configure the driving system.

    1. Switch back to the root user on node1 and check whether a root SSH key pair (id_rsa and id_rsa.pub) already exists. If it does not, generate one.

      [root@node1 .ssh]#  ls /root/.ssh/id_rsa*
      ls: cannot access /root/.ssh/id_rsa*: No such file or directory
      [root@node1 .ssh]# ssh-keygen -t rsa
      Generating public/private rsa key pair.
      Enter file in which to save the key (/root/.ssh/id_rsa):
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /root/.ssh/id_rsa.
      Your public key has been saved in /root/.ssh/id_rsa.pub.
      The key fingerprint is:
      93:47:b0:83:75:f2:3e:e6:23:b3:0a:06:ed:00:20:a5 root@node1.fraad1client.exadataclientne.oraclevcn.com
      The key's randomart image is:
      +--[ RSA 2048]----+
      |o..     + .      |
      |o.     o *       |
      |E     . o o      |
      | . .     =       |
      |  o .   S =      |
      |   +     = .     |
      |    +   o o      |
      |   . .   + .     |
      |      ...        |
      +-----------------+
    2. Distribute the public key to the target nodes, and verify that it was received. In this example, the only target node is node2.

      [root@node1 .ssh]# scp -i ~opc/.ssh/id_rsa ~root/.ssh/id_rsa.pub opc@node2:/tmp/id_rsa.node1.pub
      id_rsa.pub
      
      [root@node2 ~]# ls -al /tmp/id_rsa.node1.pub
      -rw-r--r-- 1 opc opc 442 Feb 28 03:33 /tmp/id_rsa.node1.pub
      [root@node2 ~]# date
      Wed Feb 28 03:33:45 UTC 2018
      
    3. On the target node (node2, in this example), add the root public key of node1 to the root authorized_keys file.

      [root@node2 ~]# cat /tmp/id_rsa.node1.pub >> ~root/.ssh/authorized_keys
      
    4. Download dbserver.patch.zip as p21634633_12*_Linux-x86-64.zip onto the driving system (node1, in this example), and unzip it. See dbnodeupdate.sh and dbserver.patch.zip: Updating Exadata Database Server Software using the DBNodeUpdate Utility and patchmgr (Doc ID 1553103.1) for information about the files in this .zip.

      [root@node1 ~]# mkdir /root/patch
      [root@node1 ~]# cd /root/patch
      [root@node1 patch]# unzip p21634633_181400_Linux-x86-64.zip
      Archive:  p21634633_181400_Linux-x86-64.zip
         creating: dbserver_patch_5.180228.2/
         creating: dbserver_patch_5.180228.2/ibdiagtools/
        inflating: dbserver_patch_5.180228.2/ibdiagtools/cable_check.pl
        inflating: dbserver_patch_5.180228.2/ibdiagtools/setup-ssh
        inflating: dbserver_patch_5.180228.2/ibdiagtools/VERSION_FILE
       extracting: dbserver_patch_5.180228.2/ibdiagtools/xmonib.sh
        inflating: dbserver_patch_5.180228.2/ibdiagtools/monitord
        inflating: dbserver_patch_5.180228.2/ibdiagtools/checkbadlinks.pl
         creating: dbserver_patch_5.180228.2/ibdiagtools/topologies/
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/VerifyTopologyUtility.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/verifylib.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Node.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Rack.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Group.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Switch.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/topology-zfs
        inflating: dbserver_patch_5.180228.2/ibdiagtools/dcli
         creating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteScriptGenerator.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/CommonUtils.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/SolarisAdapter.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/LinuxAdapter.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteLauncher.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteConfig.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/spawnProc.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/runDiagnostics.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/OSAdapter.pm
        inflating: dbserver_patch_5.180228.2/ibdiagtools/SampleOutputs.txt
        inflating: dbserver_patch_5.180228.2/ibdiagtools/infinicheck
        inflating: dbserver_patch_5.180228.2/ibdiagtools/ibping_test
        inflating: dbserver_patch_5.180228.2/ibdiagtools/tar_ibdiagtools
        inflating: dbserver_patch_5.180228.2/ibdiagtools/verify-topology
        inflating: dbserver_patch_5.180228.2/installfw_exadata_ssh
         creating: dbserver_patch_5.180228.2/linux.db.rpms/
        inflating: dbserver_patch_5.180228.2/md5sum_files.lst
        inflating: dbserver_patch_5.180228.2/patchmgr
        inflating: dbserver_patch_5.180228.2/xcp
        inflating: dbserver_patch_5.180228.2/ExadataSendNotification.pm
        inflating: dbserver_patch_5.180228.2/ExadataImageNotification.pl
        inflating: dbserver_patch_5.180228.2/kernelupgrade_oldbios.sh
        inflating: dbserver_patch_5.180228.2/cellboot_usb_pci_path
        inflating: dbserver_patch_5.180228.2/exadata.img.env
        inflating: dbserver_patch_5.180228.2/README.txt
        inflating: dbserver_patch_5.180228.2/exadataLogger.pm
        inflating: dbserver_patch_5.180228.2/patch_bug_26678971
        inflating: dbserver_patch_5.180228.2/dcli
        inflating: dbserver_patch_5.180228.2/patchReport.py
       extracting: dbserver_patch_5.180228.2/dbnodeupdate.zip
         creating: dbserver_patch_5.180228.2/plugins/
        inflating: dbserver_patch_5.180228.2/plugins/010-check_17854520.sh
        inflating: dbserver_patch_5.180228.2/plugins/020-check_22468216.sh
        inflating: dbserver_patch_5.180228.2/plugins/040-check_22896791.sh
        inflating: dbserver_patch_5.180228.2/plugins/000-check_dummy_bash
        inflating: dbserver_patch_5.180228.2/plugins/050-check_22651315.sh
        inflating: dbserver_patch_5.180228.2/plugins/005-check_22909764.sh
        inflating: dbserver_patch_5.180228.2/plugins/000-check_dummy_perl
        inflating: dbserver_patch_5.180228.2/plugins/030-check_24625612.sh
        inflating: dbserver_patch_5.180228.2/patchmgr_functions
        inflating: dbserver_patch_5.180228.2/exadata.img.hw
        inflating: dbserver_patch_5.180228.2/libxcp.so.1
        inflating: dbserver_patch_5.180228.2/imageLogger
        inflating: dbserver_patch_5.180228.2/ExaXMLNode.pm
        inflating: dbserver_patch_5.180228.2/fwverify
      
    5. Create the dbs_group file that contains the list of compute nodes to update. Include the nodes listed in the olsnodes output from step 1, excluding the driving system node. In this example, dbs_group should contain only node2. (A command that generates the file from the olsnodes output follows the example.)

      [root@node1 patch]# cd /root/patch/dbserver_patch_5.180228.2
      [root@node1 dbserver_patch_5.180228.2]# cat dbs_group
      node2
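
      If you prefer to generate the file instead of typing it, the following one-liner is a sketch of one approach; it assumes the driving system is node1 and that the grid user's environment provides olsnodes, as in step 1:

      # Build dbs_group from the cluster node list, excluding the driving system (node1).
      su - grid -c olsnodes | grep -v '^node1$' > dbs_group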
      
  3. Run a patching precheck operation.

    patchmgr -dbnodes dbs_group -precheck -yum_repo <yum_repository> -target_version <target_version> -nomodify_at_prereq
    Important

    You must run the precheck operation with the -nomodify_at_prereq option to prevent any changes to the system that could impact the backup you take in the next step. Otherwise, the backup might not be able to roll back the system to its original state, should that be necessary.

    The output should look like the following example:

    [root@node1 dbserver_patch_5.180228.2]# ./patchmgr -dbnodes dbs_group -precheck -yum_repo  http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3  -nomodify_at_prereq
    
    ************************************************************************************************************
    NOTE    patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
    NOTE
    WARNING Do not interrupt the patchmgr session.
    WARNING Do not resize the screen. It may disturb the screen layout.
    WARNING Do not reboot database nodes during update or rollback.
    WARNING Do not open logfiles in write mode and do not try to alter them.
    ************************************************************************************************************
    2018-02-28 21:22:45 +0000        :Working: DO: Initiate precheck on 1 node(s)
    2018-02-28 21:24:57 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:26:15 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:26:47 +0000        :Working: DO: dbnodeupdate.sh running a precheck on node(s).
    2018-02-28 21:28:23 +0000        :SUCCESS: DONE: Initiate precheck on node(s). 
  4. Back up the current system.

    patchmgr -dbnodes dbs_group -backup -yum_repo <yum_repository> -target_version <target_version>  -allow_active_network_mounts
    Important

    This is the proper stage to take the backup, before any modifications are made to the system.

    The output should look like the following example:

    [root@node1 dbserver_patch_5.180228.2]#  ./patchmgr -dbnodes dbs_group -backup  -yum_repo  http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3 -allow_active_network_mounts
    
    ************************************************************************************************************
    NOTE    patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
    NOTE
    WARNING Do not interrupt the patchmgr session.
    WARNING Do not resize the screen. It may disturb the screen layout.
    WARNING Do not reboot database nodes during update or rollback.
    WARNING Do not open logfiles in write mode and do not try to alter them.
    ************************************************************************************************************
    2018-02-28 21:29:00 +0000        :Working: DO: Initiate backup on 1 node(s).
    2018-02-28 21:29:00 +0000        :Working: DO: Initiate backup on node(s)
    2018-02-28 21:29:01 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:30:18 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:30:51 +0000        :Working: DO: dbnodeupdate.sh running a backup on node(s).
    2018-02-28 21:35:50 +0000        :SUCCESS: DONE: Initiate backup on node(s).
    2018-02-28 21:35:50 +0000        :SUCCESS: DONE: Initiate backup on 1 node(s).
    
  5. Remove all custom RPMs from the target compute nodes that will be updated. Custom RPMs are reported in the precheck results; they include RPMs that were manually installed after the system was provisioned. (An illustrative removal command follows the note below.)

    Note

    • If you are updating the system from version 12.1.2.3.4.170111, and the precheck results include krb5-workstation-1.10.3-57.el6.x86_64, remove it. (This item is considered a custom RPM for this version.)
    • Do not remove exadata-sun-vm-computenode-exact or oracle-ofed-release-guest. These two RPMs are handled automatically during the update process.
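
    As an illustration only, if the precheck report flagged a hypothetical package named example-custom-tool, you could remove it as root before starting the update:

      # Remove a custom RPM identified by the precheck (the package name here is hypothetical).
      yum remove -y example-custom-tool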
  6. Run the update using the nohup command so that the operation continues if your session is disconnected. (A tip for monitoring the background session follows the example output.)

    nohup patchmgr -dbnodes dbs_group -upgrade -nobackup -yum_repo <yum_repository> -target_version <target_version> -allow_active_network_mounts &

    The output should look like the following example:

    [root@node1 dbserver_patch_5.180228.2]# nohup ./patchmgr -dbnodes dbs_group -upgrade -nobackup  -yum_repo  http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3  -allow_active_network_mounts &
    
    ************************************************************************************************************
    NOTE    patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
    NOTE
    NOTE    Database nodes will reboot during the update process.
    NOTE
    WARNING Do not interrupt the patchmgr session.
    WARNING Do not resize the screen. It may disturb the screen layout.
    WARNING Do not reboot database nodes during update or rollback.
    WARNING Do not open logfiles in write mode and do not try to alter them.
    *********************************************************************************************************
    
    2018-02-28 21:36:26 +0000        :Working: DO: Initiate prepare steps on node(s).
    2018-02-28 21:36:26 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:37:44 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:38:43 +0000        :SUCCESS: DONE: Initiate prepare steps on node(s).
    2018-02-28 21:38:43 +0000        :Working: DO: Initiate update on 1 node(s).
    2018-02-28 21:38:43 +0000        :Working: DO: Initiate update on node(s)
    2018-02-28 21:38:49 +0000        :Working: DO: Get information about any required OS upgrades from node(s).
    2018-02-28 21:38:59 +0000        :SUCCESS: DONE: Get information about any required OS upgrades from node(s).
    2018-02-28 21:38:59 +0000        :Working: DO: dbnodeupdate.sh running an update step on all nodes.
    2018-02-28 21:48:41 +0000        :INFO   : node2 is ready to reboot.
    2018-02-28 21:48:41 +0000        :SUCCESS: DONE: dbnodeupdate.sh running an update step on all nodes.
    2018-02-28 21:48:41 +0000        :Working: DO: Initiate reboot on node(s)
    2018-02-28 21:48:57 +0000        :SUCCESS: DONE: Initiate reboot on node(s)
    2018-02-28 21:48:57 +0000        :Working: DO: Waiting to ensure node2 is down before reboot.
    2018-02-28 21:56:18 +0000        :Working: DO: Initiate prepare steps on node(s).
    2018-02-28 21:56:19 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:57:37 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:57:42 +0000        :SEEMS ALREADY UP TO DATE: node2
    2018-02-28 21:57:43 +0000        :SUCCESS: DONE: Initiate update on node(s)
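
    Because the update runs in the background, you can follow its progress in the nohup output file. This assumes you started patchmgr with the nohup command shown above and did not redirect its output, so the console messages are captured in nohup.out in the patch directory:

      # Follow the background update's console output (Ctrl+C stops tailing, not the update).
      tail -f nohup.out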
  7. After the update operation completes, verify the version of the kernel on the compute node that was updated.

    [root@node2 ~]# imageinfo -ver
    18.1.4.0.0.180125.3
    
  8. If the driving system is a compute node that needs to be updated (as in this example), repeat steps 2 through 7 of this procedure, using an updated compute node as the driving system, to update the remaining compute node. In this example, you would use node2 to update node1.
  9. On each compute node, run the uptrack-install command as root to install the available Ksplice updates.

    uptrack-install --all -y
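
    To confirm that the updates were applied, you can list the Ksplice updates currently installed on the node. This assumes the Ksplice Uptrack client is present (it provides the uptrack-install command used above):

      # List the Ksplice updates applied to the running kernel.
      uptrack-show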
    

Updating Tooling on an Exadata Cloud Service Instance

You can update the cloud-specific tooling included on an Exadata Cloud Service compute node by downloading and applying an RPM file containing the latest version of the tools.

Note

Oracle highly recommends that you maintain the same version of cloud tooling across your Exadata Cloud Service environment. Perform the following procedure on every compute node in the Exadata Cloud Service instance.

Prerequisite

The compute nodes in the Exadata Cloud Service instance must be configured to access the Oracle Cloud Infrastructure Object Storage service. For more information, see Node Access to Object Storage: Static Route.
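
One way to spot-check this access from a compute node is to request the Swift endpoint for your region over HTTPS; any HTTP response, even an authorization error, shows that the endpoint is reachable. This is a minimal sketch that assumes curl is available on the node and uses the US West (Phoenix) endpoint that also appears in the download example later in this topic:

    # curl -sI https://swiftobjectstorage.us-phoenix-1.oraclecloud.com | head -1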

Updating the Cloud Tooling on Each Compute Node Manually

The method for updating the tooling depends on the tooling release that is currently installed on the compute node.

To check the installed tooling release
  1. Connect to the compute node as the opc user.
  2. Start a root-user command shell.

    $ sudo -s
    #
  3. Use the following command to display information about the installed cloud tooling, and note the release label, which is the portion of the package name after the + sign in the example that follows.

    # rpm -qa|grep -i dbaastools_exa
    
    dbaastools_exa-1.0-1+18.1.2.1.0_180511.0801.x86_64

    In this example, the release label is 18.1.2.1.0_180511.0801.
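
    If you want to extract only the release label rather than read it out of the full package name, the following query is one way to do it. It prints the RPM release field and keeps the portion after the + sign; the package name dbaastools_exa matches the example above:

      # rpm -q --queryformat '%{RELEASE}\n' dbaastools_exa | cut -d+ -f2
      18.1.2.1.0_180511.0801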

To update the tooling if the release label is higher than 17430

You use the patch tools subcommand of the dbaascli utility to update the cloud tooling.

Important

If you are updating the tooling on an Exadata Cloud Service instance that includes a Data Guard configuration, you must perform these steps on both the primary database's system and on the standby database's system.
  1. Connect as the opc user to the compute node.

  2. Start a root-user command shell:

    $ sudo -s
    #
  3. Check whether any cloud tooling updates are available:

    # dbaascli patch tools list

    Example output:

    [root@exacs-node1 ]# dbaascli patch tools list
    DBAAS CLI version 19.4.1.0.0
    Executing command patch tools list
    Checking tools on all nodes
    Current Patchid on stb-elbdc1: 19.4.1.0.0_190822.1034
    Available Patches
    Patchid : 19.4.1.0.0_190827.1034
    Patchid : 19.4.1.0.0_190912.0440(LATEST)
    Install tools patch using
    dbaascli patch tools apply --patchid 19.4.1.0.0_190912.0440    or
    dbaascli patch tools apply --patchid LATEST
    All Nodes have the same tools version
  4. In the command response, locate the patch ID of the cloud tooling update. The patch ID is listed as the "Patchid" value. If multiple patches are listed, choose the latest one.
  5. Apply the patch containing the latest cloud tooling update by using one of the following methods:

    • Specify the patch ID of the latest patch:

      # dbaascli patch tools apply --patchid <patch_ID>
    • Specify the patch ID as LATEST:

      # dbaascli patch tools apply --patchid LATEST
    • Run the update process in the background:

      # dbaascli patch tools apply --patchid LATEST &
  6. Reset the backup configuration:

    # /var/opt/oracle/ocde/assistants/bkup/bkup
  7. Exit the root-user command shell and disconnect from the compute node:

    # exit
    $ exit
  8. If you are updating cloud tooling on a DB system hosting a Data Guard configuration, repeat the preceding steps on the compute node of the peer (primary or standby database's) Exadata Cloud Service instance.
To update the tooling if the release label is 17430 or lower
  1. Download the RPM file using the Swift object storage API endpoint URL for your region.

    wget <swift_API_endpoint>/v1/exadata/patches/dbaas_patch/shome/dbaastools_exa.rpm

    The following example downloads the RPM file from the US West (Phoenix) region.

    wget https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/exadata/patches/dbaas_patch/shome/dbaastools_exa.rpm

    See API Reference and Endpoints for the Swift API endpoint for your region.

  2. Remove the existing cloud tooling package, and then install the downloaded RPM file.

    
    # rpm -ev dbaastools_exa
    # rpm -ivh dbaastools_exa.rpm
  3. Repeat the previous steps on each compute node in the Exadata Cloud Service instance.

Configuring Automatic Cloud Tooling Updates

You can configure automatic cloud tooling updates for an Exadata Cloud Service instance. When you configure these updates, an entry is added to the /etc/crontab file to regularly check for cloud tooling updates and apply new updates to the compute node when they become available.
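
After you enable automatic updates, you can look for the entry in /etc/crontab. The exact form of the entry depends on the tooling release, so the grep pattern below is only an assumption based on the tooling name:

    # grep -i dbaas /etc/crontab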

Note

These procedures apply only if the release label is higher than 17430.
To check whether automatic cloud tooling updates are enabled for an Exadata Cloud Service instance
  1. Connect to the compute node as the opc user.
  2. Start a root-user command shell:

    $ sudo -s
    #
  3. Use the following command to check whether automatic tooling updates are enabled:

    # dbaascli patch tools auto status
    

    If the command response includes "INFO: auto rpm update is enabled", then automatic updates are enabled. If the response includes "INFO: auto rpm update is disabled", then automatic updates are disabled.

  4. Exit the root-user command shell and disconnect from the compute node:

    # exit
    $ exit
    
  5. If you are checking the status of automatic cloud tooling updates on an Exadata Cloud Service instance hosting a Data Guard configuration, repeat the preceding steps on the compute node of the peer (primary or standby database's) system.
To enable automatic cloud tooling updates for an Exadata Cloud Service instance
  1. Connect to the compute node as the opc user.
  2. Start a root-user command shell:

    $ sudo -s
    #
  3. Use the following command to enable automatic tooling updates:

    # dbaascli patch tools auto enable
    
  4. Exit the root-user command shell and disconnect from the compute node:

    # exit
    $ exit
    
  5. If you are enabling automatic cloud tooling updates on an Exadata Cloud Service instance hosting a Data Guard configuration, repeat the preceding steps on the compute node of the peer (primary or standby database's) system.
To run a tooling update on demand when automatic cloud tooling updates are enabled

You can perform an update at any time between automatic updates by running the dbaascli patch tools auto execute subcommand. This command checks whether there is a newer version of the tooling than the version on the compute node and applies the newer version if it finds one.

  1. Connect to the compute node as the opc user.
  2. Start a root-user command shell:

    $ sudo -s
    #
  3. Use the following command to check for a newer tooling version and apply it:

    # dbaascli patch tools auto execute
    
  4. Exit the root-user command shell and disconnect from the compute node:

    # exit
    $ exit
    
  5. If you are performing the on-demand cloud tooling update on an Exadata Cloud Service instance hosting a Data Guard configuration, repeat the preceding steps on the compute node of the peer (primary or standby database's) system.
To disable automatic cloud tooling updates for an Exadata Cloud Service instance
  1. Connect to the compute node as the opc user.
  2. Start a root-user command shell:

    $ sudo -s
    #
  3. Use the following command to disable automatic tooling updates:

    # dbaascli patch tools auto disable
    
  4. Exit the root-user command shell and disconnect from the compute node:

    # exit
    $ exit
    
  5. If you are disabling automatic cloud tooling updates on an Exadata Cloud Service instance hosting a Data Guard configuration, repeat the preceding steps on the compute node of the peer (primary or standby database's) system.