Patching and Updating an Exadata Cloud@Customer System Manually

This topic describes the responsibilities and procedures for patching and updating various components in Exadata Cloud@Customer.

Note

For more guidance on achieving continuous service during patching operations, see the Continuous Availability white paper.

Oracle-Managed System Component Updates

Oracle performs patches and updates to all of the Oracle-managed system components on Exadata Cloud@Customer.

Oracle patches and updates include the physical compute nodes (Dom0), network switches, power distribution units (PDUs), integrated lights-out management (ILOM) interfaces, and the Exadata Storage Servers.

In all but rare exceptional circumstances, you receive advance communication about these updates to help you plan for them. If there are corresponding recommended updates for your compute node virtual machines (VMs), then Oracle provides notification about them.

Wherever possible, scheduled updates are performed in a manner that preserves service availability throughout the update process. However, there can be some noticeable impact on performance and throughput while individual system components are unavailable during the update process.

For example, Dom0 patching typically requires a reboot. In such cases, wherever possible, the compute nodes are restarted in a rolling manner, one at a time, to ensure that the service remains available throughout the process. However, each compute node is unavailable for a short time while it restarts, and the overall service capacity diminishes accordingly. If your applications cannot tolerate the restarts, then take mitigating action as needed. For example, shut down an application while Dom0 patching occurs.

Administering Software Images

Oracle maintains a library of cloud software images and provides capabilities to view the library and download images to your Oracle Database Exadata Cloud@Customer DomU. Using these facilities, you can control the version of Oracle binaries that is used when a new Oracle Home is created.

When you create a new database deployment with a new Oracle Home, the Oracle Database binaries are sourced from a software image that is stored and set as a default in your Exadata Cloud@Customer DomU ACFS volume. Over time, the software images in your Exadata Cloud@Customer instance will become old if they are not maintained. Using an old software image means that you need to apply patches to newly installed binaries to bring them up to date, which is unnecessarily laborious and possibly prone to error.

Software image administration uses the dbaascli utility, which is part of the cloud-specific tooling included in Exadata Cloud@Customer. To use the latest enhancements, you should update to the latest version of the cloud tools. See Cloud Tooling Updates.

Note

If you create a new database deployment in an existing Oracle Home directory location, and the default software image version is higher or lower than the Oracle Home version, then a check is made to determine whether the images required to create the database exist on the DomU. If an image does not exist, the cloud automation software issues an internal command to download the required image from the Control Plane server to the DomU and uses it to create the database.

When you create a new Oracle Home, it is created with the RU that is set as the default image for that database version. To create a new Oracle Home with an RU that is higher or lower than the default image, you must first change the default image to the RU that you want, as described in Activating a Software Image.
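
For example, the following sequence, using illustrative version and bundle patch values, lists the downloaded images and then switches the default image before the new Oracle Home is created:

# dbaascli dbimage list
# dbaascli dbimage activateBP --version 19000 --bp JAN2019

Both commands are described in the procedures that follow.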

Viewing Information About Downloaded Software Images

You can view information about Oracle Database software images that are downloaded to your Exadata Cloud@Customer environment by using the dbimage list subcommand of the dbaascli utility.

  1. Connect to a compute node as the opc user.

    For detailed instructions, see Connecting to a Compute Node Through Secure Shell (SSH).

  2. Start a root-user command shell:

    $ sudo -s
    #
  3. Run the dbaascli command with the dbimage list option:

    # dbaascli dbimage list

    The command displays a list of software images that are downloaded to your Exadata Cloud@Customer environment, including version and bundle patch information.

  4. Exit the root-user command shell:

    # exit
    $

Viewing Information About Available Software Images

You can view information about Oracle Database software images that are available to download to your Exadata Cloud@Customer environment by using the cswlib list subcommand of the dbaascli utility.

  1. Connect to a compute node as the opc user.

    For detailed instructions, see Connecting to a Compute Node Through Secure Shell (SSH).

  2. Start a root-user command shell:

    $ sudo -s
    #
  3. Run the dbaascli command with the cswlib list option:

    # dbaascli cswlib list [ --oss_uri download_location ]

    The command displays a list of available software images, including version and bundle patch information that you can use to download the software image.

  4. Exit the root-user command shell:

    # exit
    $

Downloading a Software Image

You can download available software images and make them available in your Exadata Cloud@Customer environment by using the cswlib download subcommand of the dbaascli utility.

  1. Connect to a compute node as the opc user.

    For detailed instructions, see Connecting to a Compute Node Through Secure Shell (SSH).

  2. Start a root-user command shell:

    $ sudo -s
    #
  3. Run the dbaascli command with the cswlib download option (a complete example follows this procedure):

    # dbaascli cswlib download --version software_version --bp software_bp [--bp_update ( yes | no )] [--cdb ( yes | no )] [--oss_uri download_location]

    In the preceding command:

    • software_version — specifies an Oracle Database software version. For example, 11204, 12102, 12201, 18000, 19000.

    • software_bp — identifies a bundle patch release. For example, APR2018, JAN2019, OCT2019, and so on.

    • --bp_update — optionally indicates whether the downloaded software image becomes the current default software image. Default is no.

    • --cdb — optionally specifies whether the downloaded software image supports the Oracle multitenant architecture. Default is yes. If you specify --cdb no, then a separate software image is downloaded that contains binaries to support non-container databases (non-CDB).

  4. Exit the root-user command shell:

    # exit
    $
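
For reference, a complete download command based on the parameters described in step 3 might look like the following; the version and bundle patch values are illustrative:

# dbaascli cswlib download --version 19000 --bp JAN2019 --bp_update yes

In this example, --bp_update yes also makes the downloaded image the current default software image.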

Activating a Software Image

You can use the following procedure to activate a specific software image, making it the default software image for the corresponding software release in your Exadata Cloud@Customer environment.

  1. Connect to a compute node as the opc user.

    For detailed instructions, see Connecting to a Compute Node Through Secure Shell (SSH).

  2. Start a root-user command shell:

    $ sudo -s
    #
  3. Run the dbaascli command with the dbimage activateBP option:

    # dbaascli dbimage activateBP --version software_version --bp software_bp [--cdb ( yes | no )]

    In the preceding command:

    • software_version — specifies the Oracle Database software version. For example, 11204, 12102, 12201, 18000, 19000.

    • software_bp — identifies the bundle patch release. For example, APR2018, JAN2019, OCT2019, and so on.

    • --cdb — optionally specifies whether to activate a software image that supports the Oracle multitenant architecture. Default is yes. If you specify --cdb no, then the command acts on the software image that contains binaries to support non-container databases (non-CDB). Note that Gen 2 Exadata Cloud@Customer supports non-CDB only for Oracle Database 12.1 and 19c.

    The command fails and outputs an error message if the specified software image is not already downloaded to your Exadata Cloud@Customer environment.

  4. Exit the root-user command shell:

    # exit
    $

Deleting a Software Image

You can use the following procedure to delete a software image from your Exadata Cloud@Customer environment.

WARNING:

If you delete an image that is not available on the Control Plane server, you may not be able to get it back. To check whether the image that you plan to delete is available on the Control Plane server, use the dbaascli cswlib list command. If the version that you are purging is available on the Control Plane server, then you can use the dbaascli cswlib download command later to restore the deleted image, as shown in the example following the procedure below.

  1. Connect to a compute node as the opc user.

    For detailed instructions, see Connecting to a Compute Node Through Secure Shell (SSH).

  2. Start a root-user command shell:

    $ sudo -s
    #
  3. Run the dbaascli command with the dbimage purge option:

    # dbaascli dbimage purge --version software_version --bp software_bp [--cdb ( yes | no )]

    In the preceding command:

    • software_version — specifies the Oracle Database software version. For example, 11204, 12102, 12201, 18000, 19000.

    • software_bp — identifies the bundle patch release. For example, APR2018, JAN2019, OCT2019, and so on.

    • --cdb — optionally specifies whether to remove the software image that supports the Oracle multitenant architecture. Default is yes. If you specify --cdb no, then the software image that contains binaries to support non-container databases (non-CDB) is removed.

    If the command will remove a software image that is not currently available in the software image library, and therefore cannot be downloaded again, then the command pauses and prompts for confirmation.

    You cannot remove the current default software image for any software version. To remove such an image, you must first make another software image the current default.

  4. Exit the root-user command shell:

    # exit
    $
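
For example, the following sequence, using illustrative version and bundle patch values, first confirms that the image is still available in the software image library, and then removes the local copy:

# dbaascli cswlib list
# dbaascli dbimage purge --version 19000 --bp JAN2019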

Managing Oracle Database and Oracle Grid Infrastructure Patches

You are responsible for routine patching of Oracle Database and Oracle Grid Infrastructure software.

About the dbaascli Utility

On Exadata Cloud@Customer, routine patching of the Oracle Database and Oracle Grid Infrastructure software is facilitated by using the dbaascli utility.

The dbaascli utility provides a simple means for applying routine patches, which Oracle periodically loads on to the Cloud Control Plane servers.

The dbaascli utility is part of the cloud-specific tooling bundle that is included with Exadata Cloud@Customer. Therefore, before performing the following procedures, ensure that you have the latest version of the cloud-specific tooling on all of the compute nodes in the VM cluster.

List Available Patches

To produce a list of available patches for Oracle Exadata Cloud@Customer, you can use the dbaascli command.

  1. Connect to a compute node as the opc user and start a command shell as the root user.
  2. Execute the dbaascli patch db list command:

    # dbaascli patch db list --oh hostname:oracle_home

    In the preceding command, --oh specifies a compute node and Oracle home directory for which you want to list the available patches. In this context, an Oracle home directory can be an Oracle Database home directory or the Oracle Grid Infrastructure home directory.

    For example:

    # dbaascli patch db list --oh hostname1:/u02/app/oracle/product/12.1.0.2/dbhome_1
    Note

    The list of available patches is determined by interrogating the database to establish the patches that have already been applied. When a patch is applied, the corresponding database entry is made as part of the SQL patching operation, which is run at the end of the patch workflow. Therefore, the list of available patches can include partially applied patches along with patches that are currently being applied.

Check Prerequisites Before Applying a Patch

To check the prerequisites before applying a patch on Oracle Exadata Cloud@Customer, use this procedure.

You can perform the prerequisites-checking operation using the dbaascli command as follows:

  1. Connect to a compute node as the opc user and start a command shell as the root user.
  2. Execute the dbaascli patch db prereq command:
    • On a specific instance:
      # dbaascli patch db prereq --patchid patchid --instance1 hostname:oracle_home [--dbnames dbname[,dbname2 ...]]
    • By specifying only database names:
      # dbaascli patch db prereq --patchid patchid --dbnames dbname[,dbname2 ...] [-alldbs]
      In the preceding commands:
      • patchid identifies the patch to be pre-checked.
      • --instance1 specifies a compute node and Oracle Home directory that is subject to the pre-check operation. In this context, an Oracle Home directory may be an Oracle Database home directory or the Oracle Grid Infrastructure home directory.
      • --dbnames specifies the database names for the databases that are the target of the pre-check operation.
      • -alldbs specifies that you want to pre-check all of the databases that share the same Oracle Database binaries (Oracle Home) as the specified databases.

      For example:

      # dbaascli patch db prereq --patchid 23456789 --instance1 hostname1:/u02/app/oracle/product/12.1.0.2/dbhome_1

Apply a Patch

To apply patches for Oracle Exadata Cloud@Customer, use the dbaascli command.

Tip:

The default SSH configuration will close the terminal session after a few minutes of inactivity, but some of the dbaascli patching commands (in particular the patch db apply command) may take longer than the SSH timeout. As a result, the terminal session where the patch db apply command is running may be killed before the patching operation is complete on all nodes.

To avoid this situation, use a utility such as nohup or screen so that the patching operation continues even if the terminal connection is lost for any reason.

For example:
# nohup dbaascli patch db apply 23456789 --instance1 hostname1:/u02/app/oracle/product/12.1.0.2/dbhome_1 --run_datasql 1 >apply_23456789.out 2>&1 &

This example invokes the dbaascli command through the nohup wrapper in the background, and redirects standard output and standard error to the file apply_23456789.out. The command will keep running even if the terminal gets killed. In that case, reconnect to the system and continue monitoring apply_23456789.out for completion.

The dbaascli patching operation:

  • Can be used to patch some or all of your compute nodes using one command.
  • Coordinates multi-node patching in a rolling manner.
  • Can run patch-related SQL after patching all the compute nodes in the cluster.
  1. Connect to a compute node as the opc user, and start a command shell as the root user.

  2. Run the command dbaascli patch db apply:

    For example, on a specific instance, use this syntax:

    # dbaascli patch db apply --patchid patchid --instance1 hostname:oracle_home [--dbnames dbname[,dbname2 ...]] [--run_datasql 1]

    By specifying only database names:

    # dbaascli patch db apply --patchid patchid --dbnames dbname[,dbname2 ...] [--run_datasql 1] [-alldbs]

    In the preceding commands:

    • patchid identifies the patch to be applied.
    • --instance1 specifies a compute node and Oracle home directory that is subject to the patching operation. In this context, an Oracle home directory can be an Oracle Database home directory or the Oracle Grid Infrastructure home directory.

      If you use this argument to specify a shared Oracle home directory, and you do not specify the --dbnames argument, then all of the databases that share the specified Oracle home are patched. After the operation, the Oracle home directory location remains unchanged; however, the patch level information embedded in the Oracle home name is adjusted to reflect the patching operation.

    • --dbnames specifies the database names for the databases that are the target of the patching operation.

      If you use this argument to patch a database that uses a shared Oracle home, and you do not specify the -alldbs option, then a new Oracle home containing the patched Oracle Database binaries is created and the database is moved to the new Oracle home.

    • -alldbs patches all of the databases that share the same Oracle Database binaries (Oracle home) as the databases specified in the --dbnames argument.

      After the operation, the Oracle home directory location remains unchanged; however, the patch level information embedded in the Oracle home name is adjusted to reflect the patching operation.

    • --run_datasql 1 instructs the command to run patch-related SQL commands.
      • Only run patch-related SQL after all of the compute nodes are patched. Take care not to specify this argument if you are patching a node, and further nodes remain to be patched.
      • This argument can only be specified as part of a patching operation on a compute node. If you have patched all of your nodes, and you did not specify this argument, then you must manually run the SQL commands associated with the patch. Typically, running the SQL commands manually involves running the catbundle.sql script for Oracle Database 11g, or the datapatch utility for Oracle Database 12c and later releases (see the sketch following this procedure). Refer to the patch documentation for full details.

    For example:

    # dbaascli patch db apply 23456789 --instance1 hostname1:/u02/app/oracle/product/12.1.0.2/dbhome_1 --run_datasql 1
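
If you patch nodes individually and do not specify --run_datasql 1 on the final node, then you must run the patch-related SQL manually after all of the compute nodes are patched. The following is a minimal sketch for Oracle Database 12c and later, run once from one compute node with the database open; the Oracle home path matches the earlier examples, and the ORACLE_SID value is illustrative:

# su - oracle
$ export ORACLE_HOME=/u02/app/oracle/product/12.1.0.2/dbhome_1
$ export ORACLE_SID=dbname1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ $ORACLE_HOME/OPatch/datapatch -verbose

For Oracle Database 11g, run the catbundle.sql script instead, as described in the patch documentation.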

List Applied Patches

To list the applied patches for Oracle Exadata Cloud@Customer, use this procedure.

You can use the opatch utility to list the patches that have been applied to an Oracle Database or Grid Infrastructure installation.

To produce a list of applied patches for an Oracle Database installation:

  1. Connect to a compute node as the oracle user.
  2. Set the ORACLE_HOME variable to the location of the Oracle Database installation you want to examine. For example:
    $ export ORACLE_HOME=/u02/app/oracle/product/12.1.0.2/dbhome_1
  3. Execute the opatch command with the lspatches option:
    $ $ORACLE_HOME/OPatch/opatch lspatches

To produce a list of applied patches for Oracle Grid Infrastructure:

  1. Connect to a compute node as the opc user.
  2. Become the grid user:
    $ sudo -s
    # su - grid
  3. Execute the opatch command with the lspatches option:
    $ $ORACLE_HOME/OPatch/opatch lspatches

Roll Back a Patch

To roll back patches for Oracle Exadata Cloud@Customer, complete this procedure.

To roll back a patch or a failed patch attempt, use the dbaascli command.

Rollback patch operations:

  • Can be used to roll back a patch on some or all of your compute nodes using one command.
  • Coordinate multi-node operations in a rolling manner.
  • Can run rollback-related SQL after rolling back the patch on all the compute nodes in the cluster.
  1. Connect to a compute node as the opc user, and start a command shell as the root user.
  2. Run the dbaascli command with the patch db switchback option:
    • On specific instances:

      # dbaascli patch db switchback --patchid patchid --instance1 hostname:oracle_home [--dbnames dbname[,dbname2 ...]] [--run_datasql 1]
    • By specifying only database names:

      # dbaascli patch db switchback --patchid patchid --dbnames dbname[,dbname2 ...] [--run_datasql 1] [-alldbs]

    In the preceding commands:

    • --patchid identifies the patch that you want to roll back.
    • --instance1 specifies the compute node host name and Oracle home directory that is subject to the rollback operation. In this context, an Oracle home directory can be either an Oracle Database home directory (Oracle home), or the Oracle Grid Infrastructure (Grid home) directory.

      If you use this argument to specify a shared Oracle home directory, and you do not specify the --dbnames argument, then all of the databases that share the specified Oracle home are rolled back.

    • --dbnames specifies the database names for the databases that are the target of the rollback operation.
    • -alldbs specifies that you want to roll back all of the databases that share the same Oracle Database binaries (Oracle home) as the databases specified in the --dbnames argument.
    • --run_datasql 1 instructs the command to run rollback-related SQL commands.
      Note

      • Only run rollback-related SQL after all of the compute nodes are rolled back. If you are rolling back a node, and further nodes remain to be rolled back, then do not specify this argument.
      • You can only specify this argument as part of a rollback operation on a compute node. If you have rolled back all of your nodes, and you did not specify this argument, then you must run the SQL commands associated with the rollback operation manually. Refer to the patch documentation for full details.

    For example:

    # dbaascli patch db switchback 34567890 --instance1 hostname1:/u02/app/oracle/product/12.1.0.2/dbhome_1 --run_datasql 1

Manually Patching Oracle Database and Oracle Grid Infrastructure Software

For Oracle Java VM patches, Daylight Saving Time patches, and some non-routine or one-off patches, it can be necessary for you to patch software manually.

To perform routine patching of Oracle Database and Oracle Grid Infrastructure software, Oracle recommends that you use the facilities provided by Oracle Exadata Cloud@Customer. However, under some circumstances, it can be necessary for you to patch the Oracle Database or Oracle Grid Infrastructure software manually:
  • Oracle Java Virtual Machine (OJVM) Patching: Because they cannot be applied in a rolling fashion, patches for the Oracle Database OJVM component are not included in the routine patch sets for Exadata Cloud@Customer. If you need to apply patches to the OJVM component of Oracle Database, then you must do so manually. See My Oracle Support Doc ID 1929745.1.
  • Daylight Saving Time (DST) Patching: Because they cannot be applied in a rolling fashion, patches for the Oracle Database DST definitions are not included in the routine patch sets for Exadata Cloud@Customer. If you need to apply patches to the Oracle Database DST definitions, then you must do so manually. See My Oracle Support Doc ID 412160.1.
  • Non-routine or One-off Patching: If you encounter a problem that requires a patch which is not included in any routine patch set, then work with Oracle Support Services to identify and apply the appropriate patch.

For general information about patching Oracle Database, refer to information about patch set updates and requirements in Oracle Database Upgrade Guide for your release.

Updating the Compute Node Operating System

Learn about standard Exadata tools and techniques that you can use to update the operating system components on the Exadata Cloud@Customer compute nodes.

You are responsible for managing patches and updates to the operating system environment on the compute node VMs. For further information, read about updating Exadata Database Machine servers in Oracle Exadata Database Machine Maintenance Guide.

Preparing for an Operating System Update

To prepare for an operating system update for Oracle Exadata Cloud@Customer, review this checklist of tasks.

Before you update your operating system, do each of these preparation tasks:
  • Determine the latest software update. Before you begin an update, review Exadata Cloud Service Software Versions (My Oracle Support Doc ID 2333222.1) to determine the latest software to use.
  • Lodge a service request with Oracle Support. For feature release updates only, Oracle recommends that you lodge a service request with Oracle Support Services to ensure that Oracle is aware of your plans, and is prepared to assist if there are any difficulties. You are able to apply Oracle Exadata software release updates to the compute nodes at your convenience.

    Determine if you are performing a feature release update. A feature release update is an update that changes any of the first four digits in the Oracle Exadata software release identifier. For example, upgrading from Oracle Exadata software release 12.2.2.2.0 to release 12.2.2.3.0 would be a feature release update. However, upgrading from Oracle Exadata software release 12.2.2.3.0 to release 12.2.2.3.4 would not be considered a feature release update. You can determine the current Oracle Exadata software release by running the command imageinfo on any compute node.

  • Identify your YUM repository. Some steps in the update process require you to specify a YUM repository. The YUM repository URL is:
    http://yum.oracle.com/repo/EngineeredSystems/exadata/dbserver/latest-version/base/x86_64.

    In the preceding URL, latest-version is the YUM repository version that you want to specify. To determine the latest version of the YUM repository, examine the output from the following curl command (a worked example follows this list):

    curl -s -X GET http://yum.oracle.com/repo/EngineeredSystems/exadata/dbserver/index.html
  • Configure YUM repository access. To apply operating system updates, the network hosting your Oracle Exadata Cloud@Customer system must be configured to allow access to the YUM repository.
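
For example, the following sketch extracts the latest version listed on the index page; it assumes that the index page lists repository versions as dotted version numbers:

curl -s -X GET http://yum.oracle.com/repo/EngineeredSystems/exadata/dbserver/index.html | grep -oE '[0-9]+(\.[0-9]+){3,4}' | sort -uV | tail -1

If the latest version returned is 18.1.4.0.0 (the target version used in the update examples later in this topic), then the YUM repository URL becomes http://yum.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64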

Updating the Operating System on All Compute Nodes of an Oracle Exadata Cloud@Customer System

To update the operating system on the compute node virtual machines, use the patchmgr tool.

The patchmgr utility manages the entire update of one or more compute nodes remotely, including the pre-restart, restart, and post-restart steps of an Oracle Exadata Cloud@Customer system.

You can run the utility either from one of your Oracle Exadata Cloud@Customer compute nodes, or from another server running Oracle Linux. The server on which you run the utility is known as the driving system. You cannot use the driving system to update itself. Therefore, if the driving system is one of the compute nodes in a VM cluster that you are updating, then you must run the patchmgr utility more than once. The following scenarios describe typical ways of performing the updates:

  • Non-Exadata Driving System

    The simplest way to update the system is to use a separate Oracle Linux server to update all compute nodes in one operation.

  • Exadata Compute Node Driving System

    You can use one compute node to drive the updates for the rest of the compute nodes in the VM cluster. Then, you can use one of the updated nodes to drive the update on the original driving system. For example, consider updating a half rack system with four compute nodes; node1, node2, node3, and node4. You could first use node1 to drive the updates of node2, node3, and node4. Then, you could use node2 to drive the update of node1.

The driving system requires root user SSH access to each compute node being updated.

The following procedure is based on an example that assumes the following:

  • The system has two compute nodes, node1 and node2.
  • The target Exadata software version is 18.1.4.0.0.180125.3.
  • Each node is used as the driving system to update the other node.
  1. Gather the environment details.
    1. Using SSH, connect to node1 as root and run the following command to determine the current Exadata software version:
      [root@node1 ~]# imageinfo -ver
      12.2.1.1.4.171128
    2. Switch to the grid user, and identify all nodes in the cluster.
      [root@node1 ~]# su - grid 
      [grid@node1 ~]$ olsnodes
      node1
      node2
  2. Configure the driving system.
    1. Switch back to the root user on node1 and check whether an SSH key pair (id_rsa and id_rsa.pub) exists. If not, then generate it.
      [root@node1 ~]# ls /root/.ssh/id_rsa*
      ls: cannot access /root/.ssh/id_rsa*: No such file or directory
      [root@node1 ~]# ssh-keygen -t rsa
      Generating public/private rsa key pair.
      Enter file in which to save the key (/root/.ssh/id_rsa):
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /root/.ssh/id_rsa.
      Your public key has been saved in /root/.ssh/id_rsa.pub.
      The key fingerprint is:
      93:47:b0:83:75:f2:3e:e6:23:b3:0a:06:ed:00:20:a5 root@node1.example.com
      The key's randomart image is:
      +--[ RSA 2048]----+
      |o..     + .      |
      |o.     o *       |
      |E     . o o      |
      | . .     =       |
      |  o .   S =      |
      |   +     = .     |
      |    +   o o      |
      |   . .   + .     |
      |      ...        |
      +-----------------+
    2. Distribute the public key to the target nodes, and verify this step. In the example, the only target node is node2.
      [root@node1 ~]# scp ~root/.ssh/id_rsa.pub opc@node2:/tmp/id_rsa.node1.pub
      [root@node2 ~]# ls -al /tmp/id_rsa.node1.pub
      -rw-r--r-- 1 opc opc 442 Feb 28 03:33 /tmp/id_rsa.node1.pub
      [root@node2 ~]# date
      Wed Feb 28 03:33:45 UTC 2018
    3. On the target node (node2 in the example), add the root public key of node1 to the root authorized_keys file.
      [root@node2 ~]# cat /tmp/id_rsa.node1.pub >> ~root/.ssh/authorized_keys
    4. Download patchmgr into /root/patch on the driving system (node1 in this example).

      You can download the patchmgr bundle from Oracle Support by using My Oracle Support Patch ID 21634633.

      For further information, see also dbnodeupdate.sh and dbserver.patch.zip: Updating Exadata Database Server Software using the DBNodeUpdate Utility and patchmgr: My Oracle Support Doc ID 1553103.1.

    5. Unzip the patchmgr bundle.

      Depending on the version that you downloaded, the name of your ZIP file can differ.

      [root@node1 ~]# cd /root/patch
      [root@node1 patch]# unzip p21634633_181400_Linux-x86-64.zip
      Archive:  p21634633_181400_Linux-x86-64.zip
      creating: dbserver_patch_5.180228.2/
      creating: dbserver_patch_5.180228.2/ibdiagtools/
      inflating: dbserver_patch_5.180228.2/ibdiagtools/cable_check.pl
      inflating: dbserver_patch_5.180228.2/ibdiagtools/setup-ssh
      inflating: dbserver_patch_5.180228.2/ibdiagtools/VERSION_FILE
      extracting: dbserver_patch_5.180228.2/ibdiagtools/xmonib.sh
      inflating: dbserver_patch_5.180228.2/ibdiagtools/monitord
      inflating: dbserver_patch_5.180228.2/ibdiagtools/checkbadlinks.pl
      creating: dbserver_patch_5.180228.2/ibdiagtools/topologies/
      inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/VerifyTopologyUtility.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/verifylib.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Node.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Rack.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Group.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Switch.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/topology-zfs
      inflating: dbserver_patch_5.180228.2/ibdiagtools/dcli
      creating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
      inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteScriptGenerator.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/CommonUtils.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/SolarisAdapter.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/LinuxAdapter.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteLauncher.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteConfig.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/spawnProc.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/runDiagnostics.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/OSAdapter.pm
      inflating: dbserver_patch_5.180228.2/ibdiagtools/SampleOutputs.txt
      inflating: dbserver_patch_5.180228.2/ibdiagtools/infinicheck
      inflating: dbserver_patch_5.180228.2/ibdiagtools/ibping_test
      inflating: dbserver_patch_5.180228.2/ibdiagtools/tar_ibdiagtools
      inflating: dbserver_patch_5.180228.2/ibdiagtools/verify-topology
      inflating: dbserver_patch_5.180228.2/installfw_exadata_ssh
      creating: dbserver_patch_5.180228.2/linux.db.rpms/
      inflating: dbserver_patch_5.180228.2/md5sum_files.lst
      inflating: dbserver_patch_5.180228.2/patchmgr
      inflating: dbserver_patch_5.180228.2/xcp
      inflating: dbserver_patch_5.180228.2/ExadataSendNotification.pm
      inflating: dbserver_patch_5.180228.2/ExadataImageNotification.pl
      inflating: dbserver_patch_5.180228.2/kernelupgrade_oldbios.sh
      inflating: dbserver_patch_5.180228.2/cellboot_usb_pci_path
      inflating: dbserver_patch_5.180228.2/exadata.img.env
      inflating: dbserver_patch_5.180228.2/README.txt
      inflating: dbserver_patch_5.180228.2/exadataLogger.pm
      inflating: dbserver_patch_5.180228.2/patch_bug_26678971
      inflating: dbserver_patch_5.180228.2/dcli
      inflating: dbserver_patch_5.180228.2/patchReport.py
      extracting: dbserver_patch_5.180228.2/dbnodeupdate.zip
      creating: dbserver_patch_5.180228.2/plugins/
      inflating: dbserver_patch_5.180228.2/plugins/010-check_17854520.sh
      inflating: dbserver_patch_5.180228.2/plugins/020-check_22468216.sh
      inflating: dbserver_patch_5.180228.2/plugins/040-check_22896791.sh
      inflating: dbserver_patch_5.180228.2/plugins/000-check_dummy_bash
      inflating: dbserver_patch_5.180228.2/plugins/050-check_22651315.sh
      inflating: dbserver_patch_5.180228.2/plugins/005-check_22909764.sh
      inflating: dbserver_patch_5.180228.2/plugins/000-check_dummy_perl
      inflating: dbserver_patch_5.180228.2/plugins/030-check_24625612.sh
      inflating: dbserver_patch_5.180228.2/patchmgr_functions
      inflating: dbserver_patch_5.180228.2/exadata.img.hw
      inflating: dbserver_patch_5.180228.2/libxcp.so.1
      inflating: dbserver_patch_5.180228.2/imageLogger
      inflating: dbserver_patch_5.180228.2/ExaXMLNode.pm
      inflating: dbserver_patch_5.180228.2/fwverify
    6. In the directory that contains the patchmgr utility, create the dbs_group file, which contains the list of compute nodes to update. Include the nodes listed after running the olsnodes command in step 1, except for the driving system. In this example, dbs_group only contains node2.
      [root@node1 patch]# cd /root/patch/dbserver_patch_5.180228
      [root@node1 dbserver_patch_5.180228]# cat dbs_group
      node2
  3. Run a patching precheck operation.
    [root@node1 dbserver_patch_5.180228]# ./patchmgr -dbnodes dbs_group -precheck -yum_repo yum-repository -target_version target-version -nomodify_at_prereq
    Note

    Run the precheck operation with the -nomodify_at_prereq option to prevent any changes to the system that could impact the backup you take in the next step. Otherwise, the backup might not be able to roll the system back to its original state, should it be necessary.

    The output should look similar to the following example:

    [root@node1 dbserver_patch_5.180228]# ./patchmgr -dbnodes dbs_group -precheck -yum_repo http://yum.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3 -nomodify_at_prereq
     
    ************************************************************************************************************
    NOTE    patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
    NOTE
    WARNING Do not interrupt the patchmgr session.
    WARNING Do not resize the screen. It may disturb the screen layout.
    WARNING Do not reboot database nodes during update or rollback.
    WARNING Do not open logfiles in write mode and do not try to alter them.
    ************************************************************************************************************
    2018-02-28 21:22:45 +0000        :Working: DO: Initiate precheck on 1 node(s)
    2018-02-28 21:24:57 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:26:15 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:26:47 +0000        :Working: DO: dbnodeupdate.sh running a precheck on node(s).
    2018-02-28 21:28:23 +0000        :SUCCESS: DONE: Initiate precheck on node(s).
  4. Back up the current system.
    [root@node1 dbserver_patch_5.180228]# ./patchmgr -dbnodes dbs_group -backup -yum_repo yum-repository -target_version target-version -allow_active_network_mounts
    Note

    Ensure that you take the backup at this point, before any modifications are made to the system.

    The output should look similar to the following example:

    [root@node1 dbserver_patch_5.180228]# ./patchmgr -dbnodes dbs_group -backup -yum_repo http://yum.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3 -allow_active_network_mounts
     
    ************************************************************************************************************
    NOTE    patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
    NOTE
    WARNING Do not interrupt the patchmgr session.
    WARNING Do not resize the screen. It may disturb the screen layout.
    WARNING Do not reboot database nodes during update or rollback.
    WARNING Do not open logfiles in write mode and do not try to alter them.
    ************************************************************************************************************
    2018-02-28 21:29:00 +0000        :Working: DO: Initiate backup on 1 node(s).
    2018-02-28 21:29:00 +0000        :Working: DO: Initiate backup on node(s)
    2018-02-28 21:29:01 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:30:18 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:30:51 +0000        :Working: DO: dbnodeupdate.sh running a backup on node(s).
    2018-02-28 21:35:50 +0000        :SUCCESS: DONE: Initiate backup on node(s).
    2018-02-28 21:35:50 +0000        :SUCCESS: DONE: Initiate backup on 1 node(s).
  5. Remove all custom RPMs from the target compute nodes. Custom RPMs are reported in precheck results. They include RPMs that were manually installed after the system was provisioned.
    • If you are updating the system from version 12.1.2.3.4.170111, and the precheck results include krb5-workstation-1.10.3-57.el6.x86_64, then remove it. This item is considered a custom RPM for this version.
    • Do not remove exadata-sun-vm-computenode-exact or oracle-ofed-release-guest. These two RPMs are handled automatically during the update process.
  6. Perform the update. To ensure that the update process is not interrupted, use the nohup command. For example:
    [root@node1 dbserver_patch_5.180228]# nohup ./patchmgr -dbnodes dbs_group -upgrade -nobackup -yum_repo yum-repository -target_version target-version -allow_active_network_mounts &

    The output should look similar to the following example:

    [root@node1 dbserver_patch_5.180228]# nohup ./patchmgr -dbnodes dbs_group -upgrade -nobackup -yum_repo http://yum.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3 -allow_active_network_mounts &
     
    ************************************************************************************************************
    NOTE    patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
    NOTE
    NOTE    Database nodes will reboot during the update process.
    NOTE
    WARNING Do not interrupt the patchmgr session.
    WARNING Do not resize the screen. It may disturb the screen layout.
    WARNING Do not reboot database nodes during update or rollback.
    WARNING Do not open logfiles in write mode and do not try to alter them.
    *********************************************************************************************************
    2018-02-28 21:36:26 +0000        :Working: DO: Initiate prepare steps on node(s).
    2018-02-28 21:36:26 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:37:44 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:38:43 +0000        :SUCCESS: DONE: Initiate prepare steps on node(s).
    2018-02-28 21:38:43 +0000        :Working: DO: Initiate update on 1 node(s).
    2018-02-28 21:38:43 +0000        :Working: DO: Initiate update on node(s)
    2018-02-28 21:38:49 +0000        :Working: DO: Get information about any required OS upgrades from node(s).
    2018-02-28 21:38:59 +0000        :SUCCESS: DONE: Get information about any required OS upgrades from node(s).
    2018-02-28 21:38:59 +0000        :Working: DO: dbnodeupdate.sh running an update step on all nodes.
    2018-02-28 21:48:41 +0000        :INFO   : node2 is ready to reboot.
    2018-02-28 21:48:41 +0000        :SUCCESS: DONE: dbnodeupdate.sh running an update step on all nodes.
    2018-02-28 21:48:41 +0000        :Working: DO: Initiate reboot on node(s)
    2018-02-28 21:48:57 +0000        :SUCCESS: DONE: Initiate reboot on node(s)
    2018-02-28 21:48:57 +0000        :Working: DO: Waiting to ensure node2 is down before reboot.
    2018-02-28 21:56:18 +0000        :Working: DO: Initiate prepare steps on node(s).
    2018-02-28 21:56:19 +0000        :Working: DO: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:57:37 +0000        :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
    2018-02-28 21:57:42 +0000        :SEEMS ALREADY UP TO DATE: node2
    2018-02-28 21:57:43 +0000        :SUCCESS: DONE: Initiate update on node(s)
  7. After the update operation completes, verify the version of the Exadata software on the compute node that was updated.
    [root@node2 ~]# imageinfo -ver
    18.1.4.0.0.180125.3
  8. Repeat steps 2 through 7 of this procedure using the updated compute node as the driving system to update the remaining compute node. In this example update, you would now use node2 to update node1.
  9. As the root user on each compute node, run the uptrack-install command to install the available Ksplice updates.
    [root@node1 ~]# uptrack-install --all -y
    [root@node2 ~]# uptrack-install --all -y

Installing Additional Operating System Packages

Review these guidelines before you install additional operating system packages for Oracle Exadata Cloud@Customer.

You are permitted to install and update operating system packages on Oracle Exadata Cloud@Customer as long as you do not modify the kernel or InfiniBand-specific packages. However, Oracle technical support, including installation, testing, certification and error resolution, does not apply to any non-Oracle software that you install.

Also be aware that if you add or update packages separate from an Oracle Exadata software update, then these package additions or updates can introduce problems when you apply an Oracle Exadata software update. Problems can occur because additional software packages add new dependencies that can interrupt an Oracle Exadata update. For this reason, Oracle recommends that you minimize customization.

If you install additional packages, then Oracle recommends that you have scripts to automate the removal and reinstallation of those packages. After an Oracle Exadata update, if you install additional packages, then verify that the additional packages are still compatible, and that you still need these packages.

For more information, refer to Oracle Exadata Database Machine Maintenance Guide.

Cloud Tooling Updates

You are responsible for updating the cloud-specific tooling included on the Exadata Cloud@Customer compute nodes.

Note

You can update the cloud-specific tooling by downloading and applying a software package containing the updated tools.

Checking the Installed Cloud Tooling Release for Updates

To check the installed cloud tooling release for Oracle Exadata Cloud@Customer, complete this procedure.

To check the installed cloud tooling release for updates:
  1. Connect to a compute node as the opc user, and start a command shell as the root user.
  2. Use the following command to display information about the installed cloud tooling, and to list the available updates:
    # dbaascli patch tools list

    The command output displays:

    • The version of the cloud tooling that is installed on the compute node.
    • The list of available updates.
    • The cloud tooling version that is installed on the other compute nodes in the VM cluster.

Updating the Cloud Tooling Release

To update the cloud tooling release for Oracle Exadata Cloud@Customer, complete this procedure.

To update the cloud tooling release:
  1. Connect to a compute node as the opc user, and start a command shell as the root user.
  2. Download and apply the cloud tooling update:
    • To update to the latest available cloud tooling release, use the following command:
      # dbaascli patch tools apply --patchid LATEST
    • To update to a specific cloud tooling release, use the following command:
      # dbaascli patch tools apply --patchid patchid

      In the preceding command, patchid is a cloud tooling patch identifier, as reported in the output of the dbaascli patch tools list command.

    The cloud tooling update is applied to all nodes in the VM cluster.