Managing VM Clusters on Exadata Cloud@Customer

Learn to manage virtual machine (VM) clusters on Exadata Cloud@Customer.

About Managing VM Clusters on Exadata Cloud@Customer

The VM cluster provides a link between your Exadata Cloud@Customer infrastructure and Oracle Database.

Before you can create any databases on your Exadata Cloud@Customer infrastructure, you must create a VM cluster network, and you must associate it with a VM cluster. Each Exadata Cloud@Customer infrastructure deployment can support one VM cluster network and associated VM cluster.

The VM cluster network specifies network resources, such as IP addresses and host names, that reside in your corporate data center and are allocated to Exadata Cloud@Customer. The VM cluster network includes definitions for the Exadata client network and the Exadata backup network. The client network and backup network contain the network interfaces that you use to connect to the VM cluster compute nodes, and ultimately the databases that reside on those compute nodes.

The VM cluster provides a link between your Exadata Cloud@Customer infrastructure and the Oracle Databases that you deploy. The VM cluster contains an installation of Oracle Clusterware, which supports the databases in the cluster. In the VM cluster definition, you also specify the number of enabled CPU cores, which determines the amount of CPU resources that are available to your databases.

Note

Avoid entering confidential information when assigning descriptions, tags, or friendly names to your cloud resources through the Oracle Cloud Infrastructure Console, API, or CLI.

Required IAM Policy for Managing VM Clusters

Review the identity and access management (IAM) policy for managing virtual machine (VM) clusters on Oracle Exadata Cloud@Customer systems.

A policy is an IAM document that specifies who has what type of access to your resources. The term is used in different ways: to mean an individual statement written in the policy language; to mean a collection of statements in a single, named "policy" document (which has an Oracle Cloud ID (OCID) assigned to it); and to mean the overall body of policies your organization uses to control access to resources.

A compartment is a collection of related resources that can be accessed only by certain groups that have been given permission by an administrator in your organization.

To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy written by an administrator, whether you're using the Console, or the REST API with a software development kit (SDK), a command-line interface (CLI), or some other tool. If you try to perform an action, and receive a message that you don’t have permission, or are unauthorized, then confirm with your administrator the type of access you've been granted, and which compartment you should work in.

For administrators: The policy in "Let database admins manage DB systems" lets the specified group do everything with databases, and related database resources.

If you're new to policies, then see "Getting Started with Policies" and "Common Policies". If you want to dig deeper into writing policies for databases, then see "Details for the Database Service".

Prerequisites for VM Clusters on Exadata Cloud@Customer

To connect to the VM cluster compute node, you use an SSH public key.

The public key is in OpenSSH format, from the key pair that you plan to use for connecting to the VM cluster compute nodes through SSH. The following shows an example of a public key, which is abbreviated for readability.
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAA....lo/gKMLVM2xzc1xJr/Hc26biw3TXWGEakrK1OQ== rsa-key-20160304
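A quick structural check can catch a malformed key before you paste it into the Console. The following Python sketch is a hypothetical helper (not part of any Oracle tooling) that verifies only the single-line OpenSSH layout, not the cryptographic validity of the key material:

```python
import base64
import struct

def looks_like_openssh_public_key(key_line: str) -> bool:
    """Structural sanity check for a single-line OpenSSH public key.

    Verifies the "<type> <base64-blob> [comment]" layout and that the
    algorithm name embedded in the blob matches the declared type.
    """
    parts = key_line.strip().split()
    if len(parts) < 2:
        return False
    key_type, blob_b64 = parts[0], parts[1]
    if key_type not in ("ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256"):
        return False
    try:
        blob = base64.b64decode(blob_b64, validate=True)
    except Exception:
        return False
    if len(blob) < 4:
        return False
    # The decoded blob begins with a length-prefixed algorithm name.
    (name_len,) = struct.unpack(">I", blob[:4])
    return blob[4:4 + name_len].decode("ascii", "replace") == key_type
```

Remember that each pasted key must be on a single, continuous line; a key that was wrapped by a text editor will fail a check like this.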

Using the Console for VM Clusters on Exadata Cloud@Customer

Learn how to use the console to create, edit, validate, and terminate a VM cluster network, download its configuration file, and manage VM clusters for Oracle Exadata Cloud@Customer.

Using the Console to Create a VM Cluster Network

To create your VM cluster network with the Console, be prepared to provide values for the fields required for configuring the infrastructure.

  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region and Compartment that contains the Exadata infrastructure for which you want to create a VM cluster network.
  3. Click Exadata Infrastructure.
  4. Click the name of the Exadata infrastructure for which you want to create a VM cluster network.

    The Infrastructure Details page displays information about the selected Exadata infrastructure.

  5. Click Create VM Cluster Network.
  6. Provide the requested information in the Data Center Network Details page:
    1. Provide the display name.

      The display name is a user-friendly name that you can use to identify the VM cluster network. The name doesn't need to be unique because an Oracle Cloud Identifier (OCID) uniquely identifies the VM cluster network.

    2. Provide client network details.
      The client network is the primary channel for application connectivity to Exadata Cloud@Customer resources. The following settings define the required network parameters:
      • VLAN ID: Provide a virtual LAN identifier (VLAN ID) for the client network between 1 and 4094, inclusive. To specify no VLAN tagging, enter "1". (This is equivalent to a "NULL" VLAN ID tag value.)
        Note

        The values "0" and "4095" are reserved and cannot be entered.
      • CIDR Block: Using CIDR notation, provide the IP address range for the client network.

        The following table specifies the maximum and recommended CIDR block prefix lengths for each Exadata system shape. The maximum CIDR block prefix length defines the smallest block of IP addresses that are required for the network. To allow for possible future expansion within Exadata Cloud@Customer, a smaller CIDR block prefix length is recommended, which reserves more IP addresses for the network.

        Exadata System Shape                      Maximum prefix length   Recommended prefix length
        Base System, Quarter Rack, or Half Rack   /28                     /27
        Full Rack                                 /27                     /26

      • Netmask: Specify the IP netmask for the client network.
      • Gateway: Specify the IP address of the client network gateway.
      • Hostname Prefix: Specify the prefix that is used to generate the hostnames in the client network.
      • Domain Name: Specify the domain name for the client network.
    3. Provide backup network details.
      The backup network is the secondary channel for connectivity to Exadata Cloud@Customer resources. It is typically used to segregate application connections on the client network from other network traffic. The following settings define the required network parameters:
      • VLAN ID: Provide a virtual LAN identifier (VLAN ID) for the backup network between 1 and 4094, inclusive. To specify no VLAN tagging, enter "1". (This is equivalent to a "NULL" VLAN ID tag value.)
        Note

        The values "0" and "4095" are reserved, and cannot be entered.
      • CIDR Block: Using CIDR notation, provide the IP address range for the backup network.

        The following table specifies the maximum and recommended CIDR block prefix lengths for each Exadata system shape. The maximum CIDR block prefix length defines the smallest block of IP addresses that are required for the network. To allow for possible future expansion within Exadata Cloud@Customer, a smaller CIDR block prefix length is recommended, which reserves more IP addresses for the network.

        Exadata System Shape                      Maximum prefix length   Recommended prefix length
        Base System, Quarter Rack, or Half Rack   /29                     /28
        Full Rack                                 /28                     /27

      • Netmask: Specify the IP netmask for the backup network.
      • Gateway: Specify the IP address of the backup network gateway.
      • Hostname Prefix: Specify the prefix that is used to generate the hostnames in the backup network.
      • Domain Name: Specify the domain name for the backup network.
    4. Provide DNS and NTP server details.
      The VM cluster network requires access to Domain Name System (DNS) and Network Time Protocol (NTP) services. The following settings specify the servers that provide these services:
      • DNS Servers: Provide the IP address of a DNS server that is accessible using the client network. You may specify up to three DNS servers.
      • NTP Servers: Provide the IP address of an NTP server that is accessible using the client network. You may specify up to three NTP servers.
    5. Configure Advanced Options.

      Tags: (Optional) You can choose to apply tags. If you have permissions to create a resource, then you also have permissions to apply free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information about tagging, refer to information about resource tags. If you are not sure if you should apply tags, then skip this option (you can apply tags later) or ask your administrator.

  7. Click Review Configuration.

    The Review Configuration page displays detailed information about the VM cluster network, including the hostname and IP address allocations. These allocations are initially system-generated, and are based on your inputs to the Specify Parameters page.

  8. (Optional) You can choose to adjust the system-generated network definitions on the Review Configuration page.
    1. Click Edit IP Allocation.
    2. Use the Edit dialog to adjust the system-generated network definitions to meet your requirements.
    3. Click Save Changes.
  9. Click Create VM Cluster Network.

    The VM Cluster Network Details page is now displayed. Initially after creation, the state of the VM cluster network is Requires Validation.
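The VLAN ID and CIDR rules from the procedure above can be sanity-checked before you fill in the form. This Python sketch (hypothetical helper names; the prefix lengths come from the sizing tables above) uses the standard ipaddress module:

```python
import ipaddress

def vlan_id_ok(vlan_id: int) -> bool:
    # VLAN IDs must be between 1 and 4094 inclusive; 0 and 4095 are reserved.
    return 1 <= vlan_id <= 4094

def addresses_in_prefix(prefix_length: int) -> int:
    # Number of IP addresses in a CIDR block of the given prefix length.
    # The network used here is an arbitrary RFC 1918 example.
    return ipaddress.ip_network(f"10.0.0.0/{prefix_length}").num_addresses
```

For a Base System, Quarter Rack, or Half Rack client network, the maximum /28 prefix yields 16 addresses, while the recommended /27 reserves 32, leaving room for future expansion.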

Using the Console to Edit a VM Cluster Network

You can only edit a VM cluster network that is not associated with a VM cluster.

  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region and Compartment that contains the Exadata infrastructure that is associated with the VM cluster network that you want to edit.
  3. Click Exadata Infrastructure.
  4. Click the name of the Exadata infrastructure that is associated with the VM cluster network that you are interested in.

    The Infrastructure Details page displays information about the selected Exadata infrastructure.

  5. Click the name of the VM cluster network that you want to edit.

    The VM Cluster Network Details page displays information about the selected VM cluster network.

  6. Click Edit VM Cluster Network.
  7. Use the Edit dialog to edit the VM cluster network attributes:
    1. Client Network
      The client network is the primary channel for application connectivity to Exadata Cloud@Customer resources. You can edit the following client network settings:
      • VLAN ID: Provide a virtual LAN identifier (VLAN ID) for the client network between 1 and 4094, inclusive. To specify no VLAN tagging, enter "1". (This is equivalent to a "NULL" VLAN ID tag value.)
        Note

        The values "0" and "4095" are reserved and cannot be entered.
      • Netmask: Specify the IP netmask for the client network.
      • Gateway: Specify the IP address of the client network gateway.
      • Hostname: Specify the hostname for each address in the client network.
      • IP Address: Specify the IP address for each address in the client network.
    2. Backup Network
      The backup network is the secondary channel for connectivity to Exadata Cloud@Customer resources. It is typically used to segregate application connections on the client network from other network traffic. You can edit the following backup network settings:
      • VLAN ID: Provide a virtual LAN identifier (VLAN ID) for the backup network between 1 and 4094, inclusive. To specify no VLAN tagging, enter "1". (This is equivalent to a "NULL" VLAN ID tag value.)
        Note

        The values "0" and "4095" are reserved and cannot be entered.
      • Hostname: Specify the hostname for each address in the backup network.
      • IP Address: Specify the IP address for each address in the backup network.
    3. Configure DNS and NTP Servers
      The VM cluster network requires access to Domain Name System (DNS) and Network Time Protocol (NTP) services. You can edit the following settings:
      • DNS Servers: Provide the IP address of a DNS server that is accessible using the client network. You may specify up to three DNS servers.
      • NTP Servers: Provide the IP address of an NTP server that is accessible using the client network. You may specify up to three NTP servers.
  8. Click Save Changes.

    After editing, the state of the VM cluster network is Requires Validation.

Using the Console to Download a File Containing the VM Cluster Network Configuration Details

To provide VM cluster network information to your network administrator, you can download and supply a file containing the network configuration.

Use this procedure to download a configuration file that you can supply to your network administrator. The file contains the information needed to configure your corporate DNS and other network devices to work along with Exadata Cloud@Customer.
  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region and Compartment that contains the Exadata infrastructure that is associated with the VM cluster network that you are interested in.
  3. Click Exadata Infrastructure.
  4. Click the name of the Exadata infrastructure that is associated with the VM cluster network that you are interested in.

    The Infrastructure Details page displays information about the selected Exadata infrastructure.

  5. Click the name of the VM cluster network for which you want to download a file containing the VM cluster network configuration details.

    The VM Cluster Network Details page displays information about the selected VM cluster network.

  6. Click Download Network Configuration.

    Your browser downloads a file containing the VM cluster network configuration details.

Using the Console to Validate a VM Cluster Network

You can only validate a VM cluster network if its current state is Requires Validation, and if the underlying Exadata infrastructure is activated.

  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region and Compartment that contains the Exadata infrastructure that is associated with the VM cluster network that you want to validate.
  3. Click Exadata Infrastructure.
  4. Click the name of the Exadata infrastructure that is associated with the VM cluster network that you are interested in.

    The Infrastructure Details page displays information about the selected Exadata infrastructure.

  5. Click the name of the VM cluster network that you want to validate.

    The VM Cluster Network Details page displays information about the selected VM cluster network.

  6. Click Validate VM Cluster Network.

    Validation performs a series of automated checks on the VM cluster network. The Validate VM Cluster Network button is only available if the VM cluster network requires validation.

  7. In the resulting dialog, click Validate to confirm the action.

    After successful validation, the state of the VM cluster network changes to Validated and the VM cluster network is ready to use. If validation fails for any reason, examine the error message and resolve the issue before repeating validation.

Using the Console to Terminate a VM Cluster Network

Before you can terminate a VM cluster network, you must first terminate the associated VM cluster, if one exists, and all the databases it contains.

Terminating a VM cluster network removes it from the Cloud Control Plane.

  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region and Compartment that contains the Exadata infrastructure that is associated with the VM cluster network that you want to terminate.
  3. Click Exadata Infrastructure.
  4. Click the name of the Exadata infrastructure that is associated with the VM cluster network that you are interested in.

    The Infrastructure Details page displays information about the selected Exadata infrastructure.

  5. Click the name of the VM cluster network that you want to terminate.

    The VM Cluster Network Details page displays information about the selected VM cluster network.

  6. Click Terminate.
  7. In the resulting dialog, enter the name of the VM cluster network, and click Terminate VM Cluster Network to confirm the action.

Using the Console to Create a VM Cluster

To create your VM cluster, be prepared to provide values for the fields required for configuring the infrastructure.

To create a VM cluster, ensure that you have:

  • Active Exadata infrastructure available to host the VM cluster.
  • A validated VM cluster network available for the VM cluster to use.
  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region that contains your Exadata infrastructure.
  3. Click VM Clusters.
  4. Click Create VM Cluster.
  5. Provide the requested information in the Create VM Cluster page:
    1. Choose a compartment: From the list of available compartments, choose the compartment that you want to contain the VM cluster.
    2. Provide the display name: The display name is a user-friendly name that you can use to identify the VM cluster. The name doesn't need to be unique because an Oracle Cloud Identifier (OCID) uniquely identifies the VM cluster.
    3. Select Exadata Cloud@Customer Infrastructure: From the list, choose the Exadata infrastructure to host the VM cluster. You are not able to create a VM cluster without available and active Exadata infrastructure.
    4. Select a VM Cluster Network: From the list, choose a VM cluster network definition to use for the VM cluster. You must have an available and validated VM cluster network before you can create a VM cluster.
    5. Choose the Oracle Grid Infrastructure version: From the list, choose the Oracle Grid Infrastructure release that you want to install on the VM cluster.

      The Oracle Grid Infrastructure release determines the Oracle Database releases that can be supported on the VM cluster. You cannot run an Oracle Database release that is later than the Oracle Grid Infrastructure software release.

    6. Specify the OCPU count per VM: Specify the OCPU count for each individual VM. The count must be zero, or a value of at least 2, up to the number of remaining unallocated CPU cores.

      If you specify a value of zero, then the VM cluster compute nodes are all shut down at the end of the cluster creation process. In this case, you can later start the compute nodes by scaling the CPU resources. See Using the Console to Scale the Resources on a VM Cluster.

      Otherwise, this value must be a multiple of the number of compute nodes so that every compute node has the same number of CPU cores enabled.

    7. Requested OCPU count for the VM Cluster: Displays the total number of CPU cores that are allocated to the VM cluster based on the value you specified in the Specify the OCPU count per VM field.
    8. Specify the memory per VM (GB): Specify the memory for each individual VM. The value must be a multiple of 1 GB and is limited by the available memory on the Exadata infrastructure.
    9. Requested memory for the VM Cluster (GB): Displays the total amount of memory that is allocated to the VM cluster based on the value you specified in the Specify the memory per VM (GB) field.
    10. Specify the local file system size per VM (GB): Specify the size for each individual VM. The value must be a multiple of 1 GB and is limited by the available size of the file system on the X8-2 and X7-2 infrastructures.

      Note that the minimum size of local system storage must be 60 GB. Each time you create a new VM cluster, it is allocated from the space remaining out of the total available space.

      For more information and instructions to specify the size for each individual VM, see Introduction to Scale Up or Scale Down Operations.

    11. Configure the Exadata Storage: The following settings define how the Exadata storage is configured for use with the VM cluster. These settings cannot be changed after creating the VM cluster. See also Storage Configuration.
      • Specify Usable Exadata Storage: Specify the total amount of Exadata storage that is allocated to the VM cluster. The minimum recommended size is 2 TB.
      • Allocate Storage for Exadata Snapshots: Check this option to create a sparse disk group, which is required to support Exadata snapshot functionality. Exadata snapshots enable space-efficient clones of Oracle databases that can be created and destroyed very quickly and easily.
      • Allocate Storage for Local Backups: Check this option to configure the Exadata storage to enable local database backups. If you select this option, more space is allocated to the RECO disk group to accommodate the backups. If you do not select this option, you cannot use local Exadata storage as a backup destination for any databases in the VM cluster.
    12. Add SSH Key: Specify the public key portion of an SSH key pair that you want to use to access the VM cluster compute nodes. You can upload a file containing the key, or paste the SSH key string.

      To provide multiple keys, upload multiple key files or paste each key into a separate field. For pasted keys, ensure that each key is on a single, continuous line. The length of the combined keys cannot exceed 10,000 characters.

    13. Choose a license type:
      • Bring Your Own License (BYOL): Select this option if your organization already owns Oracle Database software licenses that you want to use on the VM cluster.
      • License Included: Select this option to subscribe to Oracle Database software licenses as part of Exadata Cloud@Customer.
    14. Show Advanced Options:
      • Time zone: The default time zone for the Exadata Infrastructure is UTC, but you can specify a different time zone. The time zone options are those supported in both the java.util.TimeZone class and the Oracle Linux operating system.
        Note

        If you want to set a time zone other than UTC or the browser-detected time zone, then select the Select another time zone option, select a Region or country, and then select the corresponding Time zone.

        If you do not see the region or country you want, then select Miscellaneous, and then select an appropriate Time zone.

      • Tags: Optionally, you can apply tags. If you have permissions to create a resource, you also have permissions to apply free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information about tagging, see Resource Tags. If you are not sure if you should apply tags, skip this option (you can apply tags later) or ask your administrator.
  6. Click Create VM Cluster.

    The VM Cluster Details page is now displayed. While the creation process is running, the state of the VM cluster is Pending. When the VM cluster creation process completes, the state of the VM cluster changes to Available.
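Several of the constraints in the procedure above can be mirrored as local pre-flight checks. The following Python sketch is a hypothetical helper for illustration only; the Console and API perform the authoritative validation:

```python
def preflight_vm_cluster(ocpus_per_vm: int, memory_gb_per_vm: int,
                         ssh_public_keys: list) -> list:
    """Return a list of rule violations for a planned VM cluster.

    Mirrors the Console-side rules described above: OCPU count per VM is
    zero (nodes shut down after creation) or at least 2; memory per VM is
    a whole number of gigabytes; combined SSH key length is capped.
    """
    errors = []
    if ocpus_per_vm != 0 and ocpus_per_vm < 2:
        errors.append("OCPU count per VM must be 0 or at least 2")
    if memory_gb_per_vm < 1:
        errors.append("memory per VM must be at least 1 GB")
    if sum(len(key) for key in ssh_public_keys) > 10000:
        errors.append("combined SSH keys exceed 10,000 characters")
    return errors
```

An empty list means the request passes these particular checks; remember that upper bounds (unallocated cores, available memory) depend on the state of your Exadata infrastructure and are not modeled here.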

Using the Console to Scale the Resources on a VM Cluster

Starting in Exadata Cloud@Customer Gen2, you can scale up or down multiple resources at the same time. You can also scale up or down resources one at a time.

Why should I scale down the resources?
  • Use Case 1: If you have allocated all of the resources to one virtual machine, then there are no resources left to allocate to a new virtual machine. Scale down the existing resources as needed before you create any additional virtual machines.
  • Use Case 2: If you want to allocate different resources based on the workload, then scale down or scale up accordingly. For example, you may want to run nightly batch jobs for reporting/ETL and scale down the VM once the job is over.

How long does it take to scale down the resources?

As part of scale down, you can scale down any combinations of the following resources:
  • OCPU
  • Memory
  • Local storage
  • Exadata storage

Each individual operation can take approximately 15 minutes, and if you request multiple scale-down operations (for example, scaling down both memory and local storage from the Console), the operations run in series. In general, local storage and memory scale-downs take more time than the other two.

  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region and Compartment that contains the VM cluster for which you want to scale the CPU resources.
  3. Click VM Clusters.
  4. Click the name of the VM cluster for which you want to scale the CPU resources.

    The VM Cluster Details page displays information about the selected VM cluster.

  5. Click Scale Up/Down.
  6. In the dialog box, adjust any or all of the following:
    • OCPU Count:

      The OCPU Count value must be a multiple of the number of compute nodes so that every compute node has the same number of CPU cores enabled.

      If you set the OCPU Count to zero, then the VM cluster compute nodes are all shut down. If you change from a zero setting, then the VM cluster compute nodes are all started. Otherwise, modifying the number of enabled CPU cores is an online operation, and compute nodes are not rebooted because of this operation. See also System Configuration.

      Note

      If you have explicitly set the CPU_COUNT database initialization parameter, that setting is not affected by modifying the number of CPU cores that are allocated to the VM cluster. Therefore, if you have enabled the Oracle Database instance caging feature, the database instance does not use extra CPU cores until you alter the CPU_COUNT setting. If CPU_COUNT is set to 0 (the default setting), then Oracle Database continuously monitors the number of CPUs reported by the operating system and uses the current count.
    • Memory:

      Specify the memory for each individual VM. The value must be a multiple of 1 GB and is limited by the available memory on the Exadata infrastructure.

      When you scale up or down the memory, the associated compute nodes are rebooted in a rolling manner one compute node at a time to minimize the impact on the VM cluster.

    • Local file system size:

      Specify the size for each individual VM. The value must be a multiple of 1 GB and is limited by the available size of the file system on the Exadata infrastructure.

      When you scale up or down the local file system size, the associated compute nodes are rebooted in a rolling manner one compute node at a time to minimize the impact on the VM cluster.

    • Usable Exadata storage size:

      Specify the total amount of Exadata storage that is allocated to the VM cluster. This storage is allocated evenly from all of the Exadata Storage Servers. The minimum recommended size is 2 TB.

      You may reduce the Exadata storage allocation for a VM cluster. However, you must ensure that the new amount covers the existing contents, and you should also allow for anticipated data growth.

      Note

      When you downsize, the new size must be at least 15% more than the currently used size.

      Modifying the Exadata storage allocated to the VM cluster is an online operation. Compute nodes are not rebooted because of this operation.

  7. Click Save Changes.
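The downsizing note above implies a simple lower bound on the new Exadata storage size. This arithmetic sketch (a hypothetical helper, not Oracle tooling) computes it:

```python
def minimum_new_storage_tb(currently_used_tb: float) -> float:
    """Smallest Exadata storage size you can scale down to.

    Per the note above, the new size must be at least 15% more than
    the currently used size.
    """
    return round(currently_used_tb * 1.15, 2)
```

For example, with 40 TB currently used, you cannot request a new allocation below 46 TB; also allow for anticipated data growth beyond this floor.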

Using the Console to Stop, Start, or Reboot a VM Cluster Compute Node

Use the console or API calls to stop, start, or reboot a compute node.

  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region and Compartment that is associated with the VM cluster that contains the compute node that you want to stop, start, or reboot.
  3. Click VM Clusters.
  4. Click the name of the VM cluster that contains the compute node that you want to stop, start, or reboot.

    The VM Cluster Details page displays information about the selected VM cluster.

  5. In the Resources list, click Nodes.

    The list of compute nodes is displayed.

  6. In the list of nodes, click the Actions icon (three dots) for a node, and then click one of the following actions:
    1. Start: Restarts a stopped node. After the node is restarted, the Stop action is enabled.
    2. Stop: Shuts down the node. After the node is stopped, the Start action is enabled.
    3. Reboot: Shuts down the node, and then restarts it.

Using the Console to Check the Status of a VM Cluster Compute Node

Review the health status of a VM cluster compute node.

  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region and Compartment that is associated with the VM cluster that contains the compute node that you are interested in.
  3. Click VM Clusters.
  4. Click the name of the VM cluster that contains the compute node that you are interested in.

    The VM Cluster Details page displays information about the selected VM cluster.

  5. In the Resources list, click Nodes.

    The list of compute nodes displays. For each compute node in the VM cluster, the name, state, and client IP address are displayed.

  6. In the node list, find the compute node that you are interested in and check its state.

    The color of the icon and the associated text indicate the status of the node.

    • Available: Green icon. The node is operational.
    • Starting: Yellow icon. The node is starting because of a start or reboot action in the Console or API.
    • Stopping: Yellow icon. The node is stopping because of a stop or reboot action in the Console or API.
    • Stopped: Yellow icon. The node is stopped.
    • Failed: Red icon. An error condition prevents the continued operation of the compute node.

Using the Console to Update the License Type on a VM Cluster

To modify licensing, be prepared to provide values for the fields required for modifying the licensing information.

  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region and Compartment that contains the VM cluster for which you want to update the license type.
  3. Click VM Clusters.
  4. Click the name of the VM cluster for which you want to update the license type.

    The VM Cluster Details page displays information about the selected VM cluster.

  5. Click Update License Type.
  6. In the dialog box, choose one of the following license types and then click Save Changes.
    • Bring Your Own License (BYOL): Select this option if your organization already owns Oracle Database software licenses that you want to use on the VM cluster.
    • License Included: Select this option to subscribe to Oracle Database software licenses as part of Exadata Cloud@Customer.

    Updating the license type does not change the functionality or interrupt the operation of the VM cluster.

Using the Console to Move a VM Cluster to Another Compartment

To change the compartment that contains your VM cluster on Exadata Cloud@Customer, use this procedure.

When you move a VM cluster, the compartment change is also applied to the compute nodes and databases that are associated with the VM cluster. However, the compartment change does not affect any other associated resources, such as the Exadata infrastructure, which remains in its current compartment.

  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region and Compartment that contains the VM cluster that you want to move.
  3. Click VM Clusters.
  4. Click the name of the VM cluster that you want to move.

    The VM Cluster Details page displays information about the selected VM cluster.

  5. Click Move Resource.
  6. In the resulting dialog, choose the new compartment for the VM cluster, and click Move Resource.

Using the Console to Terminate a VM Cluster

Before you can terminate a VM cluster, you must first terminate the databases that it contains.

Terminating a VM cluster removes it from the Cloud Control Plane. In the process, the compute node VMs and their contents are destroyed.
  1. Open the navigation menu. Under Database, click Exadata Cloud@Customer.
  2. Choose the Region and Compartment that contains the VM cluster that you want to terminate.
  3. Click VM Clusters.
  4. Click the name of the VM cluster that you want to terminate.

    The VM Cluster Details page displays information about the selected VM cluster.

  5. Click Terminate.
  6. In the resulting dialog, enter the name of the VM cluster, and click Terminate VM Cluster to confirm the action.

Using the API for VM Clusters on Exadata Cloud@Customer

Review the list of API calls to manage your Exadata Cloud@Customer VM cluster networks and VM clusters.

For information about using the API and signing requests, see "REST APIs" and "Security Credentials". For information about SDKs, see "Software Development Kits and Command Line Interface".

Use these API operations to manage Exadata Cloud@Customer VM cluster networks and VM clusters:

VM cluster networks:
  • GenerateRecommendedVmClusterNetwork
  • CreateVmClusterNetwork
  • DeleteVmClusterNetwork
  • GetVmClusterNetwork
  • ListVmClusterNetworks
  • UpdateVmClusterNetwork
  • ValidateVmClusterNetwork
VM clusters:
  • CreateVmCluster
  • DeleteVmCluster
  • GetVmCluster
  • ListVmClusters
  • UpdateVmCluster

For the complete list of APIs, see "Database Service API".

Introduction to Scale Up or Scale Down Operations

With the Multiple VMs per Exadata system (MultiVM) feature release, you can scale up or scale down your VM cluster resources.

Scaling Up or Scaling Down the VM Cluster Resources

You can scale up or scale down the memory, local disk size (/u02), ASM storage, and CPUs. Scaling these resources up or down requires thorough auditing of existing usage and capacity management by the customer DB administrator. Review the existing usage to avoid failures during or after a scale down operation. When scaling up, consider how much of these resources will be left for the next VM cluster that you plan to create. Exadata Cloud@Customer cloud tooling calculates the current usage of memory, local disk, and ASM storage in the VM cluster, adds headroom to it, and arrives at a minimum value below which you cannot scale down; you must specify a value above this minimum.

Note

For memory and /u02 scale up or scale down operations, if the difference between the current value and the new value is less than 2%, then no change is made to the VM. This is because a memory change requires rebooting the VM, and a /u02 change requires bringing down the Oracle Grid Infrastructure stack and unmounting /u02. Production customers typically do not resize for such a small increase or decrease, so such requests are treated as a no-op.
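As a quick sanity check before submitting a resize request, the 2% threshold can be expressed as a simple predicate. This is a hypothetical helper for illustration, not part of the Oracle cloud tooling:

```python
def is_resize_noop(current_size_gb, new_size_gb, threshold=0.02):
    """Return True when the requested change is below the ~2% threshold,
    in which case the resize request is treated as a no-op.
    Hypothetical helper for illustration; not part of the Oracle tooling."""
    return abs(new_size_gb - current_size_gb) / current_size_gb < threshold

# A 7 GB change on a 720 GB VM (~0.97%) would be skipped;
# an 80 GB change (~11%) would proceed.
print(is_resize_noop(720, 727))  # True
print(is_resize_noop(720, 800))  # False
```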

Calculating the Minimum Required Memory

Cloud tooling provides the dbaasapi utility to identify the minimum required memory. As the root user, run dbaasapi and pass a JSON file with content similar to the following sample. The only parameter that you need to update in input.json is new_mem_size, which is the new memory size to which you want to resize the VM cluster.

# cat input.json
{
  "object": "db",
  "action": "get",
  "operation": "precheck_memory_resize",
  "params": {
    "dbname": "grid",
    "new_mem_size": "30 gb",
    "infofile": "/tmp/result.json"
  },
  "outputfile": "/tmp/info.out",
  "FLAGS": ""
}
# dbaasapi -i input.json
# cat /tmp/result.json
{
  "is_new_mem_sz_allowed": 0,
  "min_req_mem": 167
}

The result indicates that 30 GB is not sufficient: the minimum required memory is 167 GB, and that is the lowest value to which you can scale down. To be safe, choose a value somewhat greater than 167 GB, because usage can fluctuate by that order between this calculation and the next resize attempt.
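The request and result shapes above can also be handled programmatically. The sketch below builds the input.json payload and interprets the result fields; dbaasapi itself is only available on the VM cluster nodes, so this is illustration only:

```python
import json

def build_memory_precheck_request(new_mem_size_gb):
    """Build the dbaasapi input.json payload shown above; only
    new_mem_size normally needs to change between runs."""
    return {
        "object": "db",
        "action": "get",
        "operation": "precheck_memory_resize",
        "params": {
            "dbname": "grid",
            "new_mem_size": "%d gb" % new_mem_size_gb,
            "infofile": "/tmp/result.json",
        },
        "outputfile": "/tmp/info.out",
        "FLAGS": "",
    }

def interpret_precheck_result(result_json):
    """Parse the infofile that dbaasapi writes; returns (allowed, min_req_mem)."""
    result = json.loads(result_json)
    return bool(result["is_new_mem_sz_allowed"]), result["min_req_mem"]

# The sample result above: 30 GB is rejected, minimum is 167 GB.
allowed, minimum = interpret_precheck_result(
    '{"is_new_mem_sz_allowed": 0, "min_req_mem": 167}')
print(allowed, minimum)  # False 167
```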

Calculating the ASM Storage

Use the following procedure to calculate the minimum required ASM storage:

  • For each disk group, for example, DATA, RECO, note the total size and free size by running the asmcmd lsdg command on any domU of the VM cluster.
  • Calculate the used size as (Total size - Free size) / 3 for each disk group. The /3 is used because the disk groups are triple mirrored.
  • DATA:RECO ratio is:

    80:20 if Local Backups option was NOT selected in the user interface.

    40:60 if Local Backups option was selected in the user interface.

  • Ensure that the new total size as given in the user interface passes the following conditions:

    Used size for DATA * 1.15 <= (New Total size * DATA % )

    Used size for RECO * 1.15 <= (New Total size * RECO % )
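The two conditions above can be checked with a small helper. The function below is a sketch, assuming the DATA/RECO percentages described above (80:20 without local backups, 40:60 with):

```python
def asm_scale_precheck(used_data_gb, used_reco_gb, new_total_gb,
                       local_backups=False):
    """Return True when the requested total ASM size passes both
    precheck conditions above (15% headroom over current used space).
    Sketch only; the actual cloud tooling may apply additional checks."""
    data_pct, reco_pct = (0.40, 0.60) if local_backups else (0.80, 0.20)
    return (used_data_gb * 1.15 <= new_total_gb * data_pct
            and used_reco_gb * 1.15 <= new_total_gb * reco_pct)

# The Example 4-1 figures (no local backups) with a 5 TB (5120 GB) request:
print(asm_scale_precheck(704.98, 32.26, 5120))  # True
# A 900 GB request would fail the DATA condition (810.7 > 720):
print(asm_scale_precheck(704.98, 32.26, 900))   # False
```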

Example 4-1 Calculating the ASM Storage

  1. Run the asmcmd lsdg command in the domU:
    • Without SPARSE:
      [root@scaqak01dv0305 ~]# /u01/app/19.0.0.0/grid/bin/asmcmd lsdg
      ASMCMD>
      State   Type Rebal Sector Logical_Sector Block AU     Total_MB   Free_MB    Req_mir_free_MB   Usable_file_MB   Offline_disks    Voting_files   Name
      MOUNTED HIGH N        512     512        4096 4194304 12591936   10426224   1399104           3009040           0                       Y      DATAC5/
      MOUNTED HIGH N        512     512        4096 4194304 3135456    3036336    348384            895984            0                       N      RECOC5/
      ASMCMD>
    • With SPARSE:
      [root@scaqak01dv0305 ~]# /u01/app/19.0.0.0/grid/bin/asmcmd lsdg
      ASMCMD>
      State   Type Rebal Sector Logical_Sector Block AU       Total_MB   Free_MB   Req_mir_free_MB   Usable_file_MB   Offline_disks    Voting_files   Name
      MOUNTED HIGH N        512     512        4096 4194304   12591936   10426224  1399104           3009040            0                       Y     DATAC5/
      MOUNTED HIGH N        512     512        4096 4194304   3135456    3036336   348384            895984             0                       N     RECOC5/
      MOUNTED HIGH N        512     512        4096 4194304   31354560   31354500  3483840           8959840            0                       N     SPRC5/
      ASMCMD>
    Note

    The listed values of all attributes for the SPARSE disk group (SPRC5) represent the virtual size. In Exadata DB Systems and Exadata Cloud@Customer, a ratio of 1:10 is used for physicalSize:virtualSize. Therefore, for these calculations, use 1/10th of the displayed values for the SPARSE disk group attributes.

  2. Used size for a disk group = (Total_MB - Free_MB) /3
    • Without SPARSE:

      Used size for DATAC5 = (12591936 - 10426224 ) / 3 = 704.98 GB

      Used size for RECOC5 = (3135456 - 3036336 ) / 3 = 32.26 GB

    • With SPARSE:

      Used size for DATAC5 = (12591936 - 10426224 ) / 3 ~= 704.98 GB

      Used size for RECOC5 = (3135456 - 3036336 ) / 3 ~= 32.26 GB

      Used size for SPRC5 = (1/10 * (31354560 - 31354500)) / 3 ~= 0 GB

  3. Storage distribution among diskgroups
    • Without SPARSE:

      DATA:RECO ratio is 80:20 in this example.

    • With SPARSE:

      DATA:RECO:SPARSE ratio is 60:20:20 in this example.

  4. New requested size should pass the following conditions:
    • Without SPARSE: (For example, 5 TB in user interface.)

      5 TB = 5120 GB ; 5120 *.8 = 4096 GB; 5120 *.2 = 1024 GB

      For DATA: (704.98 * 1.15 ) <= 4096 GB

      For RECO: (32.26 * 1.15) <= 1024 GB

    • With SPARSE: (For example, 8 TB in the user interface.)

      8 TB = 8192 GB; 8192 *.6 = 4915 GB; 8192 *.2 = 1638 GB; 8192 *.2 = 1638 GB

      For DATA: (704.98 * 1.15 ) <= 4915 GB

      For RECO: (32.26 * 1.15) <= 1638 GB

      For SPR: (0 * 1.15) <= 1638 GB

If the new size meets the conditions above, the resize operation proceeds. If it does not, the resize fails the precheck.

Estimating How Much Local Storage You Can Provision to Your VMs

X8-2 and X7-2 Systems

You specify how much space is provisioned from local storage to each VM. This space is mounted at location /u02, and is used primarily for Oracle Database homes. The amount of local storage available will vary with the number of virtual machines running on each physical node, as each VM requires a fixed amount of storage (137 GB) for the root file systems, GI homes, and diagnostic log space. Refer to the table below to see the maximum amount of space available to provision to local storage (/u02) across all VMs.

Table 4-1 Space allocated to VMs

#VMs | Space Consumed by VM Image or GI (GB) | X8-2 Space for ALL /u02 (GB) | X7-2 Space for ALL /u02 (GB)
1 | 137 | 900 | 1100
2 | 274 | 763 | 963
3 | 411 | 626 | 826
4 | 548 | 489 | 689
5 | 685 | 352 | 552
6 | 822 | N/A | 415

For an X8-2, to get the maximum space available for the nth VM, take the number in the table above and subtract anything previously allocated for /u02 to the other VMs. For example, if you allocated 60 GB to VM1, 70 GB to VM2, 80 GB to VM3, and 60 GB to VM4 (270 GB total) in an X8-2, the maximum available for VM5 would be 352 - 270 = 82 GB.
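The lookup-and-subtract rule can be sketched directly from Table 4-1 (X8-2 column values hard-coded from the table above):

```python
# X8-2 total /u02 space available across all VMs, by VM count (Table 4-1).
X8_2_TOTAL_U02_GB = {1: 900, 2: 763, 3: 626, 4: 489, 5: 352}

def x8_2_max_u02_for_next_vm(vm_count, already_allocated_gb):
    """Maximum /u02 for the nth VM = the table value for n VMs minus the
    space already allocated to the other VMs."""
    return X8_2_TOTAL_U02_GB[vm_count] - already_allocated_gb

# The example above: 60 + 70 + 80 + 60 = 270 GB already allocated.
print(x8_2_max_u02_for_next_vm(5, 270))  # 82
```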

In ExaCC Gen 2, a minimum of 60 GB per /u02 is required, so at that minimum size the maximum is 5 VMs on an X8-2 and 6 VMs on an X7-2.

X8M-2 Systems

The maximum number of VMs for an X8M-2 is 8, regardless of whether local disk space or other resources remain available.

For an X8M-2 system, the fixed consumption per VM is 160 GB.

Table 4-2 Space allocated to VMs

#VMs | Space Consumed by VM Image or GI (GB) | X8M-2 Base System Space for All /u02 (GB) | X8M-2 Quarter/Half/Full Rack Space for All /u02 (GB)*
1 | 160 | 900 | 900
2 | 320 | 740 | 1800
3 | 480 | 580 | 2020
4 | 640 | 420 | 1860
5 | 800 | N/A | 1700
6 | 960 | N/A | 1540
7 | 1120 | N/A | 1380
8 | 1280 | N/A | 1220

*Max 900 GB per VM

For an X8M-2, to get the maximum space available for the nth VM, take the number in the table above and subtract anything previously allocated for /u02 to the other VMs. For example, on a quarter or larger rack, if you allocated 60 GB to VM1, 70 GB to VM2, 80 GB to VM3, and 60 GB to VM4 (270 GB total) in an X8M-2, the maximum available for VM5 would be 1700 - 270 = 1430 GB. However, the per-VM maximum is 900 GB, so that takes precedence and limits VM5 to 900 GB.
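The same rule with the 900 GB per-VM cap applied, using the quarter/half/full rack column of Table 4-2:

```python
# X8M-2 quarter/half/full rack total /u02 by VM count (Table 4-2).
X8M_2_RACK_TOTAL_U02_GB = {1: 900, 2: 1800, 3: 2020, 4: 1860,
                           5: 1700, 6: 1540, 7: 1380, 8: 1220}
PER_VM_MAX_GB = 900  # the "*Max 900 GB per VM" footnote

def x8m_2_max_u02_for_next_vm(vm_count, already_allocated_gb):
    """Remaining /u02 for the nth VM, capped at the per-VM maximum."""
    remaining = X8M_2_RACK_TOTAL_U02_GB[vm_count] - already_allocated_gb
    return min(remaining, PER_VM_MAX_GB)

# The example above: 1700 - 270 = 1430 GB remains, capped at 900 GB.
print(x8m_2_max_u02_for_next_vm(5, 270))  # 900
```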

For ExaCC Gen 2, a minimum of 60 GB per /u02 is required, so at that minimum size the maximum is 4 VMs on a base system.

Scaling Local Storage Down

Scale Down Local Space Operation Guidelines

The scale down operation requires you to specify the local space value that you want each node to scale down to.

  • Resource Limit Based On Recommended Minimums

    The scale down operation must meet the 60 GB recommended minimum size requirement for local storage.

  • Resource Limit Based On Current Utilization

    The scale down operation must leave a 15% buffer on top of the highest local space utilization across all nodes in the cluster.

The lowest local space per node allowed is the higher of the above two limits.

Run the df -kh command on each node to find the node with the highest local storage utilization.

You can also use a utility like cssh to issue the same command on all hosts in a cluster by typing it just once.

The lowest value of local storage to which each node can be scaled down = 1.15 x (highest value of local space used among all nodes).
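Combining the two limits, the scale-down floor can be computed as follows (a sketch of the rule above):

```python
def min_u02_scale_down_gb(highest_used_gb, recommended_min_gb=60):
    """Lowest /u02 size per node = the higher of the 60 GB recommended
    minimum and 1.15 x the highest local space used on any node."""
    return max(recommended_min_gb, 1.15 * highest_used_gb)

# 40 GB peak usage: the 60 GB floor applies (1.15 * 40 = 46 GB).
print(min_u02_scale_down_gb(40))          # 60
# 100 GB peak usage: the 15% utilization buffer applies.
print(round(min_u02_scale_down_gb(100)))  # 115
```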