Known Issues

The following lists describe the known issues with Oracle Cloud Infrastructure.

Announcements

Currently, there are no known Announcements issues.

API Gateway

API gateways do not inherit custom DNS servers from subnets

Details: The default Oracle Cloud Infrastructure Resolver resolves public URL endpoints (and URL endpoints with public hostnames) to IP addresses. Additionally, a subnet can be configured with a custom DNS server that resolves internal hostname (and private hostname) URL endpoints to IP addresses. However, API gateways you create with the API Gateway service do not inherit custom DNS servers from subnets. Instead, API gateways use the default Oracle Cloud Infrastructure Resolver, which does not resolve internal/private hostname URL endpoints.

Due to this restriction, if you create an API gateway that has an internal/private hostname URL endpoint as the HTTP or HTTPS URL back end, calls to the API will fail because the hostname cannot be resolved to an IP address.

Workaround: We are aware of the issue and working on a resolution. In the meantime, if you want to create an API gateway that has an internal/private URL endpoint as the HTTP or HTTPS URL back end, you must specify the host's IP address in the URL rather than the hostname. In addition, if the back end is an HTTPS URL, you must also select the Disable SSL Verification option in the Console (or include isSSLVerifyDisabled: true in the API deployment specification JSON file).
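
For reference, the relevant fragment of an API deployment specification might look like the following (a sketch: the route path, IP address, and port are illustrative placeholders; isSSLVerifyDisabled is needed only for HTTPS back ends):

{
  "routes": [
    {
      "path": "/hello",
      "methods": ["GET"],
      "backend": {
        "type": "HTTP_BACKEND",
        "url": "https://10.0.0.25:8080/hello",
        "isSSLVerifyDisabled": true
      }
    }
  ]
}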

Direct link to this issue: API gateways do not inherit custom DNS servers from subnets

Application Migration

Migration fails for Oracle Java Cloud Service applications that have long names

Details: Application Migration does not support the migration of Oracle Java Cloud Service applications that have names longer than 28 characters.

Workaround: We are aware of the issue and working on a resolution. Before you start migrating an Oracle Java Cloud Service application, rename the application so that the name has fewer than 28 characters.

Direct link to this issue: Migration fails for Oracle Java Cloud Service applications that have long names

Unsupported attributes for Oracle Analytics Cloud - Classic application

Details: When you create a migration for the Oracle Analytics Cloud - Classic application using the API or CLI, it is mandatory to provide values for the serviceInstanceUser and serviceInstancePassword attributes. However, Application Migration ignores these values.

Workaround: We are aware of the issue and working on a resolution. You can enter any value for these attributes, such as "unused."

Direct link to this issue: Unsupported attributes for Oracle Analytics Cloud - Classic application

Unsupported query parameters in the listSourceApplications command

Details: The listSourceApplications command does not support the following query parameters: limit, page, sortOrder, and sortBy.

Workaround: We are aware of the issue and working on a resolution. Do not use these query parameters to filter your search results.

Direct link to this issue: Unsupported query parameters in the listSourceApplications command

Unsupported query parameter in the listMigrations command

Details: The listMigrations command does not support the lifecycleState query parameter.

Workaround: We are aware of the issue and working on a resolution. Do not use this query parameter to filter your search results.

Direct link to this issue: Unsupported query parameter in the listMigrations command

Audit

Currently, there are no known Audit issues.

Block Volume

Change compartment end event not emitted for block volumes and boot volumes

Details: The com.oraclecloud.blockvolumes.changevolumecompartment.end and com.oraclecloud.blockvolumes.changebootvolumecompartment.end events are not emitted after their corresponding begin events by the Block Volume service, even when the operations complete successfully.

Workaround: We are aware of the issue and working on a resolution. Verify directly that your resource was moved to the new compartment.

Direct link to this issue: Change compartment end event not emitted for block volumes and boot volumes

updatevolumekmskey and updatebootvolumekmskey events missing information for block volumes and boot volumes

Details: The com.oraclecloud.blockvolumes.updatevolumekmskey.begin and com.oraclecloud.blockvolumes.updatebootvolumekmskey.begin events are missing the current field, which should contain the KMS key ID of the new key to configure for the volume. Instead, the previous field contains this value, when the previous field should contain the previous KMS key ID.

Workaround: We are aware of the issue and working on a resolution. Verify that your resource has the expected KMS key ID after the update.
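
For example, you can check which key a volume currently uses with the CLI (a sketch; kms-key-id is the CLI's rendering of the kmsKeyId attribute):

oci bv volume get --volume-id <volume_OCID> --query 'data."kms-key-id"' --raw-output
oci bv boot-volume get --boot-volume-id <boot_volume_OCID> --query 'data."kms-key-id"' --raw-output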

Direct link to this issue: updatevolumekmskey and updatebootvolumekmskey events missing information for block volumes and boot volumes

volumeId field format is incorrect in create event with manual volume and boot volume backups

Details: The volumeId field in additionalDetails for the com.oraclecloud.blockvolumes.createvolumebackup.end and com.oraclecloud.blockvolumes.createbootvolumebackup.end events is formatted as an object and not as a string for manually created backups. This means that rules set to trigger on this field will not be triggered for manually created backups. This field is formatted correctly as a string for scheduled backups.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: volumeId field format is incorrect in create event with manual volume and boot volume backups

additionalDetails information missing for copyvolumebackup.begin and copyvolumebackup.end events

Details: The sourceBackupId field and the destinationRegion field are missing in additionalDetails for the com.oraclecloud.blockvolumes.copyvolumebackup.begin and com.oraclecloud.blockvolumes.copyvolumebackup.end events, so rules set to trigger based on these fields will not be triggered.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: additionalDetails information missing for copyvolumebackup.begin and copyvolumebackup.end events

Device path option not available for instances launched before January 11, 2019
409 error occurs when cloning a volume

Details: When you clone a volume that is still attached to an instance, delete the clone, and then clone the volume again, you may encounter the following error:

Volume <volume-OCID> cannot be cloned in parallel while attached

This error may also return with a 409 response code.

Workaround: If you're using the API, CLI, SDK, or Terraform, monitor the isHydrated attribute of the deleted clone and do not create the second clone until the attribute value is true. If you're using the Console, monitor the Hydrated field on the Block Volume Details page for the deleted clone and do not create the second clone until the field value is true.
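
For example, the following sketch polls the deleted clone with the CLI until hydration completes (the 60-second interval is arbitrary; is-hydrated is the CLI's rendering of isHydrated):

until [ "$(oci bv volume get --volume-id <deleted_clone_OCID> --query 'data."is-hydrated"' --raw-output)" = "true" ]; do
  sleep 60
done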

Direct link to this issue: 409 error occurs when cloning a volume

Attaching a Windows boot volume as a data volume to another instance fails

Details: When you attach a Windows boot volume as a data volume to another instance and then try to connect to the volume using the steps described in Connecting to a Volume, the volume fails to attach and you may encounter the following error:

Connect-IscsiTarget : The target has already been logged in via an iSCSI session.

Workaround: You need to append the following to the Connect-IscsiTarget command copied from the Console:

-IsMultipathEnabled $True
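
Assembled, the command might look like the following (a sketch; the node address and portal address are placeholders for the values in the command copied from the Console):

Connect-IscsiTarget -NodeAddress <target_IQN> -TargetPortalAddress <portal_IP_address> -IsPersistent $True -IsMultipathEnabled $True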

Direct link to this issue: Attaching a Windows boot volume as a data volume to another instance fails

volume-group create operation fails on Windows instances using the CLI

Details: When you use the CLI on Windows to create a volume group and supply inline JSON input for the source-details parameter, the operation fails.

Workaround: We are aware of the issue and working on a resolution. To work around this issue, wrap the inline JSON in double quotes instead of single quotes. You also need to escape the double quotes within the JSON itself. For example, the following code excerpt works on Linux instances:

--source-details '{"type": "volumeIds", 

To get it to work on Windows instances, modify it to:

--source-details "{\"type\": \"volumeIds\", 
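
Put together, a complete Windows command might look like the following (a sketch; the availability domain and OCIDs are placeholders):

oci bv volume-group create --availability-domain <AD_name> --compartment-id <compartment_OCID> --source-details "{\"type\": \"volumeIds\", \"volumeIds\": [\"<volume_OCID>\"]}"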

Direct link to this issue: volume-group create operation fails on Windows instances using the CLI

Boot Volume resize fails for clone and restore from backup using the CLI

Details: When you use the CLI to clone a boot volume or restore a boot volume from a backup, you cannot resize the volume.

Workaround: We are aware of the issue and working on a resolution. To work around this issue, clone the boot volume or restore it from a backup without resizing it and then you can resize the volume after the clone or restore operation is complete.

Direct link to this issue: Boot Volume resize fails for clone and restore from backup using the CLI

CLI help text is incorrect for Volume and Boot Volume create commands

Details: The help text for the size-in-gbs and size-in-mbs options is incorrect for the oci bv volume create and oci bv boot-volume create CLI commands. It incorrectly states that these options cannot be supplied when cloning a volume or restoring a volume from a backup. In fact, you can specify these options when cloning a volume or restoring a volume from a backup to create a volume larger than the original source volume. You cannot specify a value smaller than the size of the original source volume.

Workaround: We are aware of the issue and working on a resolution. You can ignore the help text for these command options.

Direct link to this issue: CLI help text is incorrect for Volume and Boot Volume create commands

bootVolumeSizeInGBs attribute is null

Blockchain Platform

For known issues with Blockchain Platform, see Known Issues.

Cloud Guard

Reporting region cannot be changed

Details: Reporting region is assigned during Cloud Guard enablement. Once assigned, this setting cannot be changed, even upon disable and enable of Cloud Guard.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: Reporting region cannot be changed

No value checking for conditional groups

Details: Detector and responder rules apply to a particular resource type. Conditional groups allow you to specify particular resources of that type to include in, or exclude from, the application of a rule.

Scenario 1: You can provide resource OCIDs to a conditional group as custom values or in a managed list. Cloud Guard does not check the validity of these values.

Scenario 2: When you add a country or region as a conditional group parameter to an activity detector, Cloud Guard does not check the validity of these values.

Workaround: In both scenarios above, ensure that you provide valid values. For a list of valid country and region values, see Using Conditional Groups with Detectors in Modifying a Cloned Detector Recipe.

Direct link to this issue: No value checking for conditional groups

Compliance Documents

Currently, there are no known Compliance Documents issues.

Compute

Out of host capacity error when creating compute instances

Details: When you try to create an instance, the instance launch fails with the error "InternalError: Out of host capacity". This happens because of a lack of capacity for the shape in the requested fault domain and availability domain.

Workaround: Capacity usually becomes available soon for most shapes. To work around this issue, do the following things:

  • If you’re using a legacy shape, launch the instance using a current generation shape instead. Capacity is limited for legacy shapes.
  • Launch the instance in a different fault domain or availability domain.
  • Launch the instance using a smaller shape, or using a shape in a different series.
  • Wait a few minutes and try again.

Direct link to this issue: Out of host capacity error when creating compute instances

Incorrect storage size is displayed for the BM.GPU4.8 shape

Details: For the BM.GPU4.8 compute shape, an incorrect value for the size of the NVMe drives is displayed in the Console and returned by the ListShapes API operation. The value that is shown is 25.6 TB NVMe SSD (4 drives). The correct value is 27.2 TB NVMe SSD (4 drives).

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: Incorrect storage size is displayed for the BM.GPU4.8 shape

CentOS 6 instances lose network connectivity under sustained heavy load

Details: A race condition has been discovered in the CentOS 6 kernel that affects VM instances operating under sustained heavy load. When such a race occurs, instances might lose network connectivity.

Workaround: We are aware of the issue and working on a resolution. As a workaround, remove irqpoll from the kernel command line by running the following command on the instance:

sudo sed -i.backup 's/irqpoll//g' /etc/grub.conf /boot/efi/EFI/redhat/grub.conf

This will modify the relevant files, leaving a backup copy of the original state. After the files have been modified, reboot the instance.

Direct link to this issue: CentOS 6 instances lose network connectivity under sustained heavy load

In-transit encryption for a boot volume attachment can be edited when unsupported by the image
Monitoring and OS Management are not available on domain controllers

Details: When you use a Windows Server instance as a domain controller, the Monitoring service and the OS Management service are not available. This happens because the services installed by Oracle Cloud Agent on Windows run with virtual accounts, but virtual accounts are not supported in the domain controller scope.

Workaround: Use the following workarounds:

  • Monitoring service: The Oracle Cloud Agent NT service (including Monitoring) runs as NT Service\OCA. Using services.msc, change the account that the NT service runs as to a domain service account or a domain user account. Then, add that user to the domain local group Performance Monitoring Groups.
  • OS Management and Oracle Cloud Agent updater service:

    • The OS Management NT service runs as NT Service\OCAOSMS. Using services.msc, change the account that the NT service runs as to either a domain service account or a domain user account that has local administrative privileges. Then, add that user to the domain local Administrators Group.
    • The Oracle Cloud Agent Updater NT service runs as NT Service\OCAU. Using services.msc, change the account that the NT service runs as to either a domain service account or a domain user account that has local administrative privileges. Then, add that user to the domain local Administrators Group.

Direct link to this issue: Monitoring and OS Management are not available on domain controllers

Boot volume backup size larger than expected

Details: Due to a recent change in how the Compute service handles images, when you create a boot volume backup, the backup is larger than expected. In some cases the boot volume backup may be larger than the boot volume size.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: Boot volume backup size larger than expected

Intermittent issues with SSH access, DNS lookups, and access to the metadata service

Details:

You may experience intermittent errors with any of the following for your Compute instance:

  • Connecting to the instance using SSH.

  • Performing a DNS lookup.

  • Accessing the metadata service at http://169.254.169.254/*.

Workaround: We are aware of the issue and working on a resolution.

To temporarily work around this issue, run the following command on the instance:

sudo ethtool -G ens3 tx 513 && sudo ethtool -G ens3 tx 512

Direct link to this issue: Intermittent issues with SSH access, DNS lookups, and access to the metadata service

iSCSI-attached volumes do not connect on reboot

Details: If you performed a yum update on your instance using the Oracle Linux 7 yum repos between March 22, 2019 and April 9, 2019, you may encounter an issue where iSCSI-attached block volumes are not available after you reboot the instance.

Workaround: This occurs when the instance is not configured to automatically login to iSCSI nodes on reboot. To configure automatic login, update the version of the iscsi-initiator-utils package by running the following command:

sudo yum update -y iscsi-initiator-utils-6.2.0.874-10.0.7.el7

Direct link to this issue: iSCSI-attached volumes do not connect on reboot

iscsid service should be configured to restart automatically

Details: Oracle Cloud Infrastructure supports attaching remote boot and block volumes to Compute instances using iSCSI. These iSCSI-attached volumes are managed by the iscsid service. If this service is stopped for any reason, such as the service crashing or a system administrator inadvertently stopping it, it's important that the iscsid service is automatically restarted to increase the stability of your infrastructure.

Workaround: See Updating the Linux iSCSI Service to Restart Automatically for steps on how to configure the iscsid service to restart automatically.

Direct link to this issue: iscsid service should be configured to restart automatically

Virtual machine (VM) DenseIO instances launch with an iSCSI attached boot volume

Details: When you create an instance using one of the following shapes:

  • VM.DenseIO1.4

  • VM.DenseIO1.8

  • VM.DenseIO1.16

  • VM.DenseIO2.8

  • VM.DenseIO2.16

  • VM.DenseIO2.24

the instance launches with an iSCSI attachment for the boot volume instead of a paravirtualized attachment. This means that features that require paravirtualized boot volume attachments, such as in-transit encryption, won't be available.

Direct link to this issue: Virtual machine (VM) DenseIO instances launch with an iSCSI attached boot volume

Virtual machine (VM) instances launch with an iSCSI attached boot volume when you specify a value for the ipxeScript attribute
Instances experience system hang after running firewall-cmd --reload

Details: A Compute instance may experience a system hang after you run the following command to reload the firewall:

firewall-cmd --reload

Reloading the firewall using this command on a running instance may cause the instance’s boot volume to lose its iSCSI connection and crash, based on the order in which firewall rules are reloaded.

Workaround: To prevent this from happening, do not use the --reload parameter with firewall-cmd. Instead, run the firewall-cmd command twice, using the --permanent parameter the first time you call it, to ensure that you do not lose iSCSI connectivity.

For example:

firewall-cmd --permanent <rule_options>
firewall-cmd <rule_options>
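
For instance, to open TCP port 8080 (an illustrative rule, not one required by Oracle Cloud Infrastructure):

firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --add-port=8080/tcp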

Direct link to this issue: Instances experience system hang after running firewall-cmd --reload

Network icon on Windows 2016 instances displays incorrect status

Details: On instances running Windows 2016, a red "x" is displayed on the network connection icon in the taskbar even though there is no issue with the instance's network connectivity.

Workaround: We are aware of the issue and working on a resolution. If you recycle the explorer.exe process, the icon displays the correct status. However, this is not a permanent fix; the red "x" reappears when you reboot the instance.

Direct link to this issue: Network icon on Windows 2016 instances displays incorrect status

Instances running October 2018 release of Ubuntu 18.04 experience system hang

Details: The iscsid service is disabled by default in the October release of the Oracle-provided Ubuntu 18.04 image, so instances using this operating system may experience a system hang if there is a momentary break in iSCSI communication.

Workaround: To work around this issue, run the following command to enable iSCSId on the instance:

sudo systemctl enable iscsid && sudo systemctl start iscsid

Direct link to this issue: Instances running October 2018 release of Ubuntu 18.04 experience system hang

kmsKeyId attribute is null
Ubuntu instance fails to reboot after enabling Uncomplicated Firewall (UFW)

Details: After you enable UFW on a Compute instance running Ubuntu, the instance fails to reboot successfully.

Workaround: Do not use UFW to edit firewall rules. Oracle-provided images are preconfigured with firewall rules to enable instances to make outgoing connections to the instance's boot and block volumes. For more information, see Essential Firewall Rules. UFW may remove these rules so that during a reboot the instance is not able to connect to the boot and block volumes.

To modify or add new firewall rules, update the /etc/iptables/rules.v4 file instead. Modifications to firewall rules here will take effect after a reboot. To have the rules take effect immediately, run the following:

$ sudo su -
# iptables-restore < /etc/iptables/rules.v4

Direct link to this issue: Ubuntu instance fails to reboot after enabling Uncomplicated Firewall (UFW)

Unable to log in to instance launched from new generalized Windows custom image
Custom image created from Windows instance may cause Windows to boot into safe mode

Details: After creating a Windows custom image, the initial instance or instances launched from the image may boot into safe mode or recovery mode. Instances booted into either mode will not respond to RDP. This can occur when the instance is not able to fully shut down prior to the custom image being taken. You can still access the instance by connecting to the VNC console, using the steps described in Connecting to the VNC Console.

Workaround: To work around this issue, prior to creating the custom image, connect and log in to the instance using RDP and initiate the shutdown from there.

Direct link to this issue: Custom image created from Windows instance may cause Windows to boot into safe mode

Instances launched from Ubuntu 16 custom images require custom network configuration

Details: When importing Ubuntu 16 LTS and newer releases of Ubuntu, DHCP fails to get the gateway configuration, and thus fails to set up a default route to the gateway on the VNIC.

Workaround: We are aware of the issue and working on a resolution. To work around this issue, statically configure the default route after import. To do this:

  1. Create the following script:

    #!/bin/bash -e
    ROUTER_IP=$(/usr/bin/curl --silent http://169.254.169.254/opc/v1/vnics/ | grep "virtualRouterIp" | grep -oP "\d+\.\d+\.\d+\.\d+" | head -n 1)
    echo "Found Router IP $ROUTER_IP"
    ip route add default via $ROUTER_IP

    and save it to: /usr/local/bin/configure_default_route.sh

  2. Run the following command to make the script executable:

    sudo chmod +x /usr/local/bin/configure_default_route.sh
  3. Add the following to /etc/network/interfaces so that it is launched each time the system boots up:

    # OCI Emulated boot network interface
    auto ens3
    iface ens3 inet dhcp
    post-up /usr/local/bin/configure_default_route.sh

Direct link to this issue: Instances launched from Ubuntu 16 custom images require custom network configuration

Secondary VNIC detachment times out for some instances launched from imported custom images

Details: When you detach a secondary VNIC from instances launched from imported custom images, the operation may time out.

Workaround: The hot plug module, acpiphp, needs to be loaded for secondary VNICs to detach correctly in Linux. If a VNIC fails to detach, run the lsmod command to display the list of loaded modules, and check the list for acpiphp. If you don't see it in the list, load the module by running the following command:

modprobe acpiphp

Retry the detachment operation for the secondary VNIC. You might need to reboot the system for the operation to complete successfully.
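
To load the module automatically at boot so that future detachments succeed without manual intervention, you can register it with the module loader (a sketch assuming a systemd-based image):

echo acpiphp | sudo tee /etc/modules-load.d/acpiphp.conf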

Direct link to this issue: Secondary VNIC detachment times out for some instances launched from imported custom images

Secondary VNIC may be non-functional for older CentOS, Oracle Linux, and RHEL images
Invalid image error when exporting an image

Details: When you try to export an image, the export fails with an error indicating that the image is invalid. This error only occurs in the US West (Phoenix) region.

Workaround: We are aware of the issue and working on a resolution. To work around this issue:

  1. Launch a new instance based on the image you're trying to export, and specify one of the following shapes:

    • BM.Standard1.36

    • BM.DenseIO1.36

    • VM.DenseIO1.4

    • VM.DenseIO1.8

    • VM.DenseIO1.16

  2. Create a custom image using the steps described in To create a custom image.

After you have created the custom image, you can export this new image.

Direct link to this issue: Invalid image error when exporting an image

CentOS 6.x instances experience delays or crashes when creating an ext2, ext3, or ext4 file system

Details: You experience delays or crashes when creating an ext2, ext3, or ext4 file system on locally attached NVMe drives for CentOS 6.x instances with the following shapes:

  • VM.DenseIO1.4
  • VM.DenseIO1.8
  • VM.DenseIO1.16
  • BM.DenseIO1.36

Workaround: The ext2, ext3, and ext4 file systems are not supported on CentOS 6.x instances with the shapes listed in the Details section. We recommend that you use a different file system.

Direct link to this issue: CentOS 6.x instances experience delays or crashes when creating an ext2, ext3, or ext4 file system

Authentication error occurs when connecting to the serial console for a bare metal instance

Details: When establishing an SSH connection to a bare metal instance, your SSH client must send the correct key the first time. If you have more than one SSH key configured under ~/.ssh or in your ~/.ssh/config file, your client may not send the correct key on the first authorization attempt, and you may encounter the following error message:

Received disconnect from UNKNOWN port 65535:2: Too many authentication failures.

Workaround: We are aware of the issue and working on a resolution. To work around this issue, modify the SSH command to use the configuration file flag (-F) to override the default configuration file, the -o IdentitiesOnly=yes option to force the SSH client to use the specified key, and the identity file flag (-i) to specify the SSH key to use, as shown in the following example:

ssh -F /dev/null -o IdentitiesOnly=yes -i /<path>/<ssh_key> -o ProxyCommand='ssh -i /<path>/<ssh_key> -W %h:%p -p 443...

Direct link to this issue: Authentication error occurs when connecting to the serial console for a bare metal instance

Incorrect system time on Windows VM instances when you change the default time zone

Details: If you change the time zone from the default setting on Windows VM instances, the system time reverts to the time for the default time zone when the instance reboots or syncs with the hardware clock. However, the time zone setting stays set to the new time zone, so the system clock is incorrect.

You will also see events in the event log indicating that the system time was changed with the following details:

Change Reason: System time synchronized with the hardware clock.

Workaround: We are aware of the issue and working on a resolution. To work around this issue:

  1. Open Registry Editor and navigate to:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation
  2. Create a new DWORD value named RealTimeIsUniversal and set it to 1.

  3. Reboot the instance.
  4. Reset the time and time zone manually.
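
Steps 1 and 2 can also be performed from an elevated command prompt; the following is an equivalent sketch of the same registry change:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f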

Direct link to this issue: Incorrect system time on Windows VM instances when you change the default time zone

Serial console connections do not work for older instances

VM instance details: You can only create serial console connections to virtual machine (VM) instances launched on August 26, 2017 or later.

Bare metal instance details: You can only create serial console connections to bare metal instances launched on October 21, 2017 or later.

Workaround: If you need serial console access to an instance launched prior to the dates specified for VM and bare metal instances, you can work around this issue by creating a custom image of the instance. When you launch a new instance based on the custom image, the new instance will have serial console access. For details on creating a custom image, see Managing Custom Images.

Direct link to this issue: Serial console connections do not work for older instances

Inactive listImage parameters and missing Image response fields

Details: The Compute API ListImages operation includes parameters for server-side filtering on operatingSystem and operatingSystemVersion. However, these parameters are currently inactive. Also, the Image response object documentation includes the operatingSystem and operatingSystemVersion attributes, but the object currently does not return these fields.

Workaround: The display name for Oracle-provided images includes the operating system and operating system version, for example "Oracle-Linux-7.2-2016.09.18-0". "Oracle Linux" is the operating system and the version is "7.2".

We are aware of the omission and plan to support these parameters and attributes.

Direct link to this issue: Inactive listImage parameters and missing Image response fields

Instance reboot fails if the Network Manager service is installed

Details: If the Network Manager service is installed, an instance can fail to reboot.

Workaround: If the Network Manager service is not required, you can uninstall it. If the Network Manager service is required, modify the network interface configuration file before you reboot the instance. Set the NM_CONTROLLED configuration key to "no":

NM_CONTROLLED="no"

Usually, the network interface configuration file is located in:

/etc/sysconfig/network-scripts/ifcfg-<interface_name>
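
For example, the following sketch sets the key on a typical emulated boot interface (the interface name ens3 is an assumption; if the NM_CONTROLLED line is absent, append it instead):

sudo sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED="no"/' /etc/sysconfig/network-scripts/ifcfg-ens3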

Direct link to this issue: Instance reboot fails if the Network Manager service is installed

Non-ASCII characters in the instance name can cause Windows launch failures

Details: When the name of a Windows instance includes non-ASCII characters, the instance might fail to launch. This happens because the instance name is used to set the Windows computer name during instance creation. Windows restricts the characters that are allowed in computer names, and non-ASCII characters can cause Windows instance creation failures.

Workaround: We are aware of the issue and working on a resolution. To temporarily work around this issue, name Windows instances using only these ASCII characters: uppercase letters (A-Z), lowercase letters (a-z), numbers (0-9), and hyphens (-).

Direct link to this issue: Non-ASCII characters in the instance name can cause Windows launch failures

Instance pools and cluster networks fail to launch when the associated instance configuration or load balancer includes defined tags

Details: When you attempt to launch an instance pool or cluster network from an instance configuration that includes defined tags, the instances can't be launched. Instance launch also fails when you attempt to attach instances in a pool or cluster network to a load balancer that includes defined tags. The instance launch fails with any of these error messages:

The following tag namespaces / keys are not authorized or not found: Policies not set for the tenancy.
Failed to launch instance in pool <instance_pool_OCID> because the defined tags could not be operated on.
Failed to create instance pool in cluster network <cluster_network_OCID> because the defined tags could not be operated on.
Failed to create backend in load balancer: <load_balancer_OCID> because the defined tags could not be operated on.

For instance configurations, this happens because the Compute service fails to propagate the defined tags to the instances in the pool or cluster network. For load balancers, this happens because the Compute service fails to apply the tags to the instances.

Workaround: We are aware of the issue and working on a resolution. As a workaround, you must authorize the Compute service to manage tag namespaces on your behalf. Add either of the following policies:

  • To authorize the service to use the tag namespace in all compartments in the tenancy:

    Allow service compute_management to use tag-namespace in tenancy
  • To reduce the scope of access by compartment, use the following statement:

    Allow service compute_management to use tag-namespace in compartment <compartment_name>

Direct link to this issue: Instance pools and cluster networks fail to launch when the associated instance configuration or load balancer includes defined tags

Automatic updates using Oracle Ksplice fail with some FastConnect networking setups
Oracle Autonomous Linux images cannot be managed by the OS Management service
Missing flag is required for the OS Management service for instances created before September 2019

Details: When using the OS Management service on Oracle Linux instances that were created before September 2019, the Instance Details page might incorrectly indicate that the OS Management service is enabled (Oracle Cloud Management Agent: Enabled) when the service is not enabled.

This issue affects instances that were created before the isManagementDisabled flag was defined in the metadata for Compute instances. Because this flag is not present, the metadata for these instances is not set properly for the OS Management service.

Workaround: To resolve this issue, set the isManagementDisabled flag to false:

  1. In the agent configuration for the instance, set the isManagementDisabled option to false:

    oci compute instance update --instance-id <instance_OCID> --agent-config '{"isManagementDisabled": false, "isMonitoringDisabled": false}'
  2. Use the CLI to verify that the flag has been updated:

    oci compute instance get --instance-id <instance_OCID>

    In the output, the updated flag appears as "is-management-disabled": false.

    {
      "data":
        "agent-config": {
          "is-management-disabled": false,
          "is-monitoring-disabled": false
        },
    ...
    }
  3. Connect to the instance using SSH, and then use cURL to call the instance metadata service and verify that the flag has been updated within the Compute instance:

    curl http://169.254.169.254/opc/v1/instance/

    In the output, the updated flag appears as "managementDisabled" : false.

    {
      ...
      "agentConfig" : {
        "monitoringDisabled" : false,
        "managementDisabled" : false
      }
    }

Direct link to this issue: Missing flag is required for the OS Management service for instances created before September 2019

Console

Bug in the Firefox browser can cause the Console not to load

Details: When you try to access the Console using Firefox, the Console page never loads in the browser. This problem is likely caused by a corrupted Firefox user profile.

Workaround: Create a new Firefox user profile as follows:

  1. Ensure that you are on the latest version of Firefox. If not, update to the latest version.
  2. Create a new user profile and remove your old user profile. See Mozilla Support for instructions to create and remove user profiles: https://support.mozilla.org/en-US/kb/profile-manager-create-and-remove-firefox-profiles.
  3. Open Firefox with the new profile.

Alternatively, you can use one of the other Supported Browsers.

Direct link to this issue: Bug in the Firefox browser can cause the Console not to load

Container Engine for Kubernetes

Worker node properties out-of-sync with updated node pool properties

Details: The properties of new worker nodes starting in a node pool do not reflect the latest changes to the node pool's properties. The likely cause is use of the deprecated quantityPerSubnet and subnetIds attributes when using the UpdateNodePoolDetails API operation to update node pool properties.

Workarounds: Do one of the following:
  • Start using the nodeConfigDetails attribute when using the UpdateNodePoolDetails API operation. First, scale the node pool to 0 using quantityPerSubnet. Then stop using the subnetIds and quantityPerSubnet attributes, and use the nodeConfigDetails attribute instead (see the sketch after this list).
  • Contact Oracle Support to restart the back-end component responsible for synchronization (the tenant-agent component).
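
For the first workaround, the two update request bodies might look like the following (a sketch; the attribute names come from the UpdateNodePoolDetails model, and the size and placement values are placeholders). First, scale the node pool to 0 with the deprecated attribute:

{ "quantityPerSubnet": 0 }

Then switch to the nodeConfigDetails attribute:

{
  "nodeConfigDetails": {
    "size": 3,
    "placementConfigs": [
      { "availabilityDomain": "<AD_name>", "subnetId": "<subnet_OCID>" }
    ]
  }
}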

Direct link to this issue: Worker node properties out-of-sync with updated node pool properties

Unable to launch Kubernetes Dashboard

Details: When you launch the Kubernetes Dashboard, in some situations you might encounter "net/http: TLS handshake timeout" and "connection reset by peer" error messages in your web browser. This issue has only been observed in newly created clusters running Kubernetes version 1.11. For details about a related Kubernetes issue, see https://github.com/kubernetes/dashboard/issues/3038.

Workaround:

  1. In a terminal window, enter:

    $ kubectl -n kube-system port-forward svc/kubernetes-dashboard 8443:443
  2. In your web browser, go to https://localhost:8443

Direct link to this issue: Unable to launch Kubernetes Dashboard

Unable to access in-cluster Helm

Details: When you use a Kubeconfig token version 2.0.0 to access Helm/Tiller versions prior to version 2.11, you will receive one of the following errors:

  • Error: Unauthorized
  • Error: could not get Kubernetes client: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1beta1"

Workaround: Upgrade Helm/Tiller as follows:

  1. In a terminal window, download a Kubeconfig token version 1.0.0 by entering the following command:

    $ oci ce cluster create-kubeconfig --token-version=1.0.0 --cluster-id=<cluster_ocid>
  2. Identify the region key to use to specify the Oracle Cloud Infrastructure Registry registry in the cluster's region (see Availability by Region). For example, if the cluster is in US East (Ashburn), iad is the region key to use to specify the registry in that region.

  3. Upgrade Tiller by entering the following command:

    $ helm init --upgrade -i <region-key>.ocir.io/odx-oke/oke-public/tiller:v2.14.3

    where <region-key> is the key that you identified in the previous step.

  4. In a browser, navigate to https://helm.sh/docs/using_helm/#installing-the-helm-client and follow the instructions to download and install the Helm client binary.

  5. Having upgraded Helm/Tiller, download a Kubeconfig token version 2.0.0 by entering the following command:

    $ oci ce cluster create-kubeconfig --token-version=2.0.0 --cluster-id=<cluster_ocid>

Direct link to this issue: Unable to access in-cluster Helm

Some Kubernetes features (for example, the Metrics Server) cannot communicate with the kubelet via http/2

Details: The Container Engine for Kubernetes 1.8.0 release included a security improvement to strengthen the ciphers used by the kubelet running on customer worker nodes. New worker nodes created between August 20, 2019 and September 16, 2019 include this configuration. The new set of ciphers does not allow connections to the kubelet via http/2. This restriction impacts the Metrics Server, and also the Horizontal Pod Autoscaler, which depends on the Metrics Server.

Workaround:

For each existing worker node in turn:

  1. Prevent new pods from starting and delete existing pods on the worker node by entering kubectl drain <node_name>.

    Recommended: Leverage pod disruption budgets as appropriate for your application to ensure that there's a sufficient number of replica pods running throughout the drain operation.

  2. Delete the worker node (for example, by terminating it in the Console).
  3. Wait for a replacement worker node to start.

The replacement worker nodes include new settings to enable communication with the kubelet.

Direct link to this issue: Some Kubernetes features (for example, the Metrics Server) cannot communicate with the kubelet via http/2

Kubernetes pods fail to mount volumes due to timeouts

Details: When a new pod starts on a worker node in a cluster, in some situations the pod fails to mount all volumes attached to the node due to timeouts and you see a message similar to the following:

Unable to mount volumes for pod "<pod_name>(<pod_uid>)": timeout expired waiting for volumes to attach or mount for pod "<namespace>"/"<pod_name>". list of unmounted volumes=[<failed_volume>]. list of unattached volumes=[<… list of volumes >]

One possible cause identified for this issue is if the pod spec includes an fsGroup field in the securityContext field. If the container is running on a worker node as a non-root user, setting the fsGroup field in the securityContext can cause timeouts due to the number of files to which Kubernetes must make ownership changes (see https://github.com/kubernetes/kubernetes/issues/67014).

If the pod spec does not include an fsGroup field in the securityContext, the cause is unknown.

Workarounds:

If the pod spec includes the fsGroup field in the securityContext and the container is running as a non-root user, consider the following workarounds:

  • Remove the fsGroup field from the securityContext.
  • Use the supplementalGroups field in the securityContext (instead of fsGroup), and set supplementalGroups to the volume identifier.
  • Change the pod spec so that the container runs as root.

If the pod spec does not include the fsGroup field in the securityContext, or if the container is already running as root, you have to restart or replace the worker node, for example, by stopping and starting the instance, by rebooting the instance, or by terminating the instance so that a new instance is started. Follow the instructions in Stopping and Starting an Instance or Terminating an Instance as appropriate to use the Console or the API. Alternatively, you can use CLI commands, such as the following example to terminate an instance:

$ INSTANCE_OCID=$(kubectl get node <name> -ojsonpath='{.spec.providerID}')
$ oci compute instance terminate --instance-id $INSTANCE_OCID

where <name> is the worker node name, derived from the Private IP Address property of the instance (for example, 10.0.10.5).

Direct link to this issue: Kubernetes pods fail to mount volumes due to timeouts

Data Catalog

Rich-Text formatting lost while exporting a glossary
Partial deletion of large data assets

Details: When deleting data assets with a large number of data entities, you receive an error notification.

Workaround: We are aware of the issue and working on a resolution. Retry the delete operation until you receive a notification that the data asset was successfully deleted.

Direct link to this issue: Partial deletion of large data assets

Incremental harvest of Autonomous Database data assets

Details: When using the incremental harvest option for harvesting Autonomous Database data assets, changes to the comments column in the Oracle Database are not identified by Data Catalog.

Workaround: Re-harvest the data asset without selecting the incremental harvest option. This ensures that the latest state of the data asset is reflected in the Data Catalog.

Direct link to this issue: Incremental harvest of Autonomous Database data assets

RESOLVED: Unable to import a large business glossary file

Details: When importing a large glossary, the import operation fails.

Workaround: We are aware of the issue and working on a resolution. Split your glossary file into smaller files with fewer than 100 terms each, and then import the individual files into Data Catalog.

Direct link to this issue: Unable to import a large business glossary file

RESOLVED: Issues in incremental harvest of data assets

Details: When using the incremental harvest option for harvesting data assets, the option does not work correctly for certain data entities, especially Excel file sources. Additionally, when re-harvesting data assets, data entities that were deleted in the data source are still present in Data Catalog.

Workaround: We are aware of the issue and working on a resolution. Re-harvest the data asset without selecting the incremental harvest option. This ensures that the latest state of the data asset is reflected in the Data Catalog.

Direct link to this issue: Issues in incremental harvest of data assets

Data Flow

Files required for each application should be in the same region where the application is created.

Details: Applications must be created in the same region as the Object Storage bucket that contains all related files, JARs, and configuration files required for a successful run of the application. Cross-region scenarios are not supported.

Workaround: We are aware of this issue. There is no workaround; ensure that all files, JARs, configuration files, and so on are in the same region as the application.

Direct link to this issue: Files required for each application should be in the same region where the application is created.

Stream processing of high throughput data is currently not supported.
Spark UI errors

Details: You might encounter an error accessing the Spark UI. The typical cause for this error is that the Spark application is ending.

Workaround: If you encounter an error, wait for one minute before accessing the Spark UI again.

Direct link to this issue: Spark UI errors

Data Integration

The node resulting from extract transformation with regex mismatch shows blank values in Data Xplorer.
Logs for failed tasks are not displayed.

Details: When a task fails, the Log Messages panel for the task run does not display the error messages.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: Logs for failed tasks are not displayed.

Target Attributes for Object Storage fail to populate the sum if you specify a scale and length, and the resultant simple aggregation values exceed the character limit.

Details: In Object Storage, if you specify a scale and length for simple aggregation, the resultant sum must fit within that specified length and scale. If the sum exceeds the specified length and scale, Data Xplorer returns a null value and nothing is shown in the target output.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: Target Attributes for Object Storage fail to populate the sum if you specify a scale and length, and the resultant simple aggregation values exceed the character limit.

The error message incorrectly describes a failure caused by unsupported data types in a task as an issue in fetching Data Xplorer data.

Details: If you select a data entity that uses unsupported data types while using Data Xplorer or running a task, the error message incorrectly shows it as a problem in fetching Data Xplorer data. The following Oracle Database data types are not supported:

  • ROWID
  • UROWID
  • BFILE
  • TIMESTAMP WITH LOCAL TIMEZONE
  • INTERVAL DAY TO SECOND
  • INTERVAL YEAR TO MONTH
  • XMLTYPE
  • SDO_GEOMETRY

Additionally, BINARY data types in Hive or MySQL data sources are not supported.

Workaround: We are aware of the issue and working on a resolution. Meanwhile, do not select a data entity that has unsupported data types.

Direct link to this issue: The error message incorrectly describes a failure caused by unsupported data types in a task as an issue in fetching Data Xplorer data.

Only delimiter value appears after applying a merge attribute transformation, when using one or more attributes containing null values.

Details: Suppose you create a data flow or data loader task with an Oracle database as the source, and you perform a merge attribute transformation on two or more attributes, at least one having a null value and another a non-null value. The result contains only the specified delimiter and none of the values.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: Only delimiter value appears after applying a merge attribute transformation, when using one or more attributes containing null values.

Data Xplorer fails when data source has Numeric or Float data types and decimal scale (x) is greater than precision (y).

Details: When you create a data flow and select a data entity for the source operator that has Numeric or Float data types, and the decimal scale (x) is greater than the precision (y), Data Xplorer fails with the following error message:

error in fetching data from data grid

Workaround: We are aware of the issue and working on a resolution. Meanwhile, make sure that for the Numeric or Float data types, the decimal scale (x) is less than the precision (y).

Direct link to this issue: Data Xplorer fails when data source has Numeric or Float data types and decimal scale (x) is greater than precision (y).

Task execution fails when source data entity has Integer data type.

Details: In a data flow, when you select a data entity for the source operator that has Integer data types and you select Create New Data Entity, task execution fails.

Workaround: We are aware of the issue and working on a resolution. Meanwhile, make sure that your source operator does not have Integer data type attributes when you select the Create New Data Entity option.

Direct link to this issue: Task execution fails when source data entity has Integer data type.

When two sources have the same attribute name, then the target attribute is loaded with null values.

Details: In a data flow, when you have two sources with the same attribute name, that attribute in the target is loaded with null values and the attribute data from both sources is lost.

Workaround: We are aware of the issue and working on a resolution. Meanwhile, make sure that your source operators do not have the same attribute names. You can use the Rename transformation to rename the attributes.

Direct link to this issue: When two sources have the same attribute name, then the target attribute is loaded with null values.

When change case transformation is applied to multiple attributes, the transformed columns are not available in the target.
Directories created in Object Storage and ending with a colon (:) are not supported.

Details: While working in a data flow, when you try to select an Object Storage directory that ends with a colon (:) for the target operator, you get the following error message:

Select a Directory

Workaround: We are aware of the issue and working on a resolution. Currently, Data Integration only supports directories that end with a slash (/), but Object Storage now creates directories ending with a colon (:). You can either use the Create New Data Entity option to create a new directory in Object Storage and specify a directory name ending with a slash (for example, mynewdir/), or create a directory ending with a slash (/) in Object Storage using the SDK.
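
For example, using the CLI instead of an SDK (a sketch; the bucket name and directory name are placeholders), you can create the directory as a zero-byte object whose name ends with a slash:

oci os object put --bucket-name <bucket_name> --name mynewdir/ --file /dev/null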

Direct link to this issue: Directories created in Object Storage and ending with a colon (:) are not supported.

Spaces in attribute name are not supported.

Details: If there are spaces in an attribute name, then Data Xplorer fails.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: Spaces in attribute name are not supported.

Data Science

Currently, there are no known issues with the Data Science service.

Database

All DB Systems

Billing issue when changing license type

Details: When you change the license type of your Database or DB system from BYOL to license included, or the other way around, you are billed for both types of licenses for the first hour. After that, you are billed according to your updated license type.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: Billing issue when changing license type

RESOLVED: Service gateway does not currently support OS updates

Details: If you configure your VCN with a service gateway, the private subnet blocks access to the YUM repositories needed to update the OS. This issue affects all types of DB systems.

Workaround: This issue is now resolved. Here is the workaround that was recommended before the issue's resolution:

The service gateway enables access to the Oracle YUM repos if you use the Available Service CIDR Labels called All <region> Services in Oracle Services Network. However, you still might have issues accessing the YUM services through the service gateway. There's a solution to the issue. For details, see Issues with access to Oracle yum services through service gateway.

Direct link to this issue: Service gateway does not currently support OS updates

Bare Metal and Virtual Machine DB Systems Only

Backing up to Object Storage using dbcli or RMAN fails due to certificate change

Details: Unmanaged backups to Object Storage using the database CLI (dbcli) or RMAN fail with the following errors:

-> Oracle Error Codes found:
-> ORA-19554: error allocating device, device type: SBT_TAPE, device name:
-> ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
-> KBHS-00712: ORA-29024 received from local HTTP service
-> ORA-27023: skgfqsbi: media manager protocol error

In response to policies implemented by two common web browsers regarding Symantec certificates, Oracle recently changed the certificate authority used for Oracle Cloud Infrastructure. The resulting change in SSL certificates can cause backups to Object Storage to fail if the Oracle Database Cloud Backup Module still points to the old certificate.

Workaround for dbcli: Check the log files for the errors listed and, if found, update the backup module.

Review the RMAN backup log files for the errors listed above:

  1. Determine the ID of the failed backup job.

    dbcli list-jobs

    In this example output, the failed backup job ID is "f59d8470-6c37-49e4-a372-4788c984ea59".

    root@<node name> ~]# dbcli list-jobs
     
    ID                                       Description                                                                 Created                             Status
    ---------------------------------------- --------------------------------------------------------------------------- ----------------------------------- ----------
    cbe852de-c0f3-4807-85e8-7523647ec78c     Authentication key update for DCS_ADMIN                                     March 30, 2018 4:10:21 AM UTC       Success
    db83fdc4-5245-4307-88a7-178f8a0efa48     Provisioning service creation                                               March 30, 2018 4:12:01 AM UTC       Success
    c1511a7a-3c2e-4e42-9520-f156b1b4cf0e     SSH keys update                                                             March 30, 2018 4:48:24 AM UTC       Success
    22adf146-9779-4a2c-8682-7fd04d7520b2     SSH key delete                                                              March 30, 2018 4:50:02 AM UTC       Success
    6f2be750-9823-4ed5-b5ff-8e49f136dd22     create object store:bV0wqIaoLA4xLT4dGjOu                                    March 30, 2018 5:33:38 AM UTC       Success
    0716f464-1a10-40df-a303-cadee0302b1b     create backup config:bV0wqIaoLA4xLT4dGjOu_BC                                March 30, 2018 5:33:49 AM UTC       Success
    e08b21c3-cd09-4e3a-944c-d1da96cb21d8     update database : hfdb1                                                     March 30, 2018 5:34:04 AM UTC       Success
    1c3d7c58-79c3-4039-8f48-787057ce7c6e     Create Longterm Backup with TAG-DBTLongterm<identity number> for Db:<dbname>    March 30, 2018 5:37:11 AM UTC       Success
    f59d8470-6c37-49e4-a372-4788c984ea59     Create Longterm Backup with TAG-DBTLongterm<identity number> for Db:<dbname>    March 30, 2018 5:43:45 AM UTC       Failure
  2. Use the ID of the failed job to obtain the location of the log file to review.

    
    dbcli describe-job -i <failed_job_ID>

    Relevant output from the describe-job command should look like this:

    Message: DCS-10001:Internal error encountered: Failed to run Rman statement.
    Refer log in Node <node_name>: /opt/oracle/dcs/log/<node_name>/rman/bkup/<db_unique_name>/rman_backup/<date>/rman_backup_<date>.log.

Update the Oracle Database Cloud Backup Module:

  1. Determine the Swift object store ID and user the database is using for backups.

    1. Run the dbcli list-databases command to determine the ID of the database.

    2. Use the database ID to determine the backup configuration ID (backupConfigId).

      dbcli list-databases
      dbcli describe-database -i <database_ID> -j
    3. Using the backup configuration ID you noted from the previous step, determine the object store ID (objectStoreId).

      dbcli list-backupconfigs
      dbcli describe-backupconfig -i <backupconfig_ID> -j
    4. Using the object store ID you noted from the previous step, determine the object store user (userName).

      dbcli list-objectstoreswifts
      dbcli describe-objectstoreswift -i <objectstore_ID> -j
  2. Using the object store credentials you obtained from step 1, update the backup module.

    dbcli update-objectstoreswift -i <objectstore_ID> -p -u <user_name>

Workaround for RMAN: Check the RMAN log files for the error messages listed. If found, log on to the host as the oracle user, and use your Swift credentials to reinstall the backup module.

Note

Swift passwords are now called "Auth tokens." For details, see Using an Auth Token with Swift.
java -jar <opc_install.jar_path> -opcId '<swift_user_ID>' -opcPass '<auth_token>' -container <objectstore_container> -walletDir <wallet_directory> -configfile <config_file> -host https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace> -import-all-trustcerts

For a multi-node DB system, perform the workaround on all nodes in the cluster.

See Oracle Database Cloud Backup Module documentation for details on using this command.

Direct link to this issue: Backing up to Object Storage using dbcli or RMAN fails due to certificate change

Breaking changes in Database service SDKs

Details: The SDKs released on October 18, 2018 introduce code-breaking changes to the database size and the database edition attributes in the database backup APIs.

Workaround: Refer to the language-specific SDK documentation for more details about the breaking changes, and update your existing code as applicable.
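
As an illustration only, the following sketch shows one defensive way to read a backup's size across SDK versions in Python. The attribute names are assumptions based on our reading of the October 2018 change (older SDKs exposed the size in MBs, newer ones in GBs); verify them against the release notes for your SDK version. The backup OCID is a hypothetical placeholder, and a configured ~/.oci/config file is assumed.

import oci

config = oci.config.from_file()  # assumes a configured ~/.oci/config
db_client = oci.database.DatabaseClient(config)

# "ocid1.dbbackup.oc1..exampleuniqueID" is a hypothetical placeholder.
backup = db_client.get_backup("ocid1.dbbackup.oc1..exampleuniqueID").data

# Assumed names: newer SDKs expose database_size_in_gbs (float), older
# SDKs exposed db_data_size_in_mbs (int). Fall back from one to the other.
size_gbs = getattr(backup, "database_size_in_gbs", None)
if size_gbs is None:
    size_mbs = getattr(backup, "db_data_size_in_mbs", None)
    size_gbs = size_mbs / 1024.0 if size_mbs is not None else None
print(size_gbs)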

Direct link to this issue: Breaking changes in Database service SDKs

Unable to use Managed Backups in your DB system

Details: Backup and restore operations might not work in your DB system when you use the Console or the API.

Workaround: Install the Oracle Database Cloud Backup Module, and then contact Oracle Support Services for further instructions.

To install the Oracle Database Cloud Backup Module:

  1. SSH to the DB system, and log in as opc.

    
    ssh -i <SSH_key> opc@<DB_system_IP_address>
    login as: opc

    Alternatively, you can use opc@<DB_system_hostname> to log in.

  2. Download the Oracle Database Cloud Backup Module from http://www.oracle.com/technetwork/database/availability/oracle-cloud-backup-2162729.html.
  3. Extract the contents of opc_installer.zip to a target directory, for example, /home/opc.
  4. In your tenancy, create a temporary user, and grant them privileges to access the tenancy's Object Storage.
  5. For this temporary user, create an auth token (see Working with Auth Tokens) and note down the token.
  6. Verify that credentials work by running the following curl command:

    Note

    Swift passwords are now called "Auth tokens." For details, see Using an Auth Token with Swift.
    curl -v -I -u <user_id>:'<auth_token>' https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace>

    See https://cloud.oracle.com/infrastructure/storage/object-storage/faq for the correct region to use.

    The command should return an HTTP 200 or HTTP 204 No Content success response code. Any other status code indicates a problem connecting to Object Storage.

  7. Run the following command:

    java -jar opc_install.jar -opcId <user_id> -opcPass '<auth_token>' -libDir <target_dir> -walletDir <target_dir> -host https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace> -configFile config.txt

    Note that <target_dir> is the directory to which you extracted opc_installer.zip in step 3.

    This command might take a few minutes to complete because it downloads libopc.so and other files. Once the command completes, you should see several files (including libopc.so) in your target directory.

  8. Change directory to your target directory, and copy the libopc.so and opc_install.jar files into the /opt/oracle/oak/pkgrepos/oss/odbcs directory.

    cp libopc.so /opt/oracle/oak/pkgrepos/oss/odbcs
    
    
    cp opc_install.jar /opt/oracle/oak/pkgrepos/oss/odbcs

    (You might have to use sudo with the copy commands to run them as root.)

  9. Run the following command to check whether the directory indicated exists:

    
    
    ls /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs

    If this directory exists, perform the following steps:

    1. Back up the files in the /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs directory.
    2. Run these two commands to replace the existing libopc.so and opc_install.jar files in that directory:

      
      cp libopc.so /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs
      cp opc_install.jar /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs
  10. Verify the version of opc_install.jar.

    
    java -jar /opt/oracle/oak/pkgrepos/oss/odbcs/opc_install.jar |grep -i build
    

    If /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs exists, also run the following command:

    
    java -jar /opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs/opc_install.jar |grep -i build

    Both commands should return the following output:

    Oracle Database Cloud Backup Module Install Tool, build MAIN_2017-08-16.
  11. (Optional) Delete the temporary user and the target directory you used to install the backup module.

After you complete the procedure, contact Oracle Support or your tenant administrator for further instructions. You must provide the OCID of the DB system for which you would like to enable backups.

Direct link to this issue: Unable to use Managed Backups in your DB System

Managed Automatic Backups fail on the VM.Standard1.1 shape due to a process crash

Details: Memory limitations of host machines running the VM.Standard1.1 shape can cause failures for automatic database backup jobs managed by Oracle Cloud Infrastructure (jobs managed by using either the Console or the API). You can change the system's memory parameters to resolve this issue.

Workaround: Change the system's memory parameters as follows:

  1. Switch to the oracle user in the operating system.

    [opc@hostname ~]$ sudo su - oracle
  2. Set the environment variables needed to log in to the database instance. For example:

    
    [oracle@hostname ~]$ . oraenv
     ORACLE_SID = [oracle] ? orcl
    				
  3. Start SQL*Plus.

    [oracle@hostname ~]$ sqlplus / as sysdba
  4. Change the initial memory parameters as follows:

    
    SQL> ALTER SYSTEM SET SGA_TARGET = 1228M scope=spfile;
    SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 1228M;
    SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 2457M;
    SQL> exit
    							
  5. Restart the database instance.

    
    [oracle@hostname ~]$ srvctl stop database -d db_unique_name -o immediate
    [oracle@hostname ~]$ srvctl start database -d db_unique_name -o open								

Direct link to this issue: Managed Automatic Backups fail on the VM.Standard1.1 shape due to a process crash

Oracle Data Pump operations return "ORA-00439: feature not enabled"

Details: On High Performance and Extreme Performance DB systems, Data Pump utility operations that use compression and/or parallelism might fail and return the error ORA-00439: feature not enabled. This issue affects database versions 12.1.0.2.161018 and 12.1.0.2.170117.

Workaround: Apply patch 25579568 or 25891266 to Oracle Database homes for database versions 12.1.0.2.161018 or 12.1.0.2.170117, respectively. Alternatively, use the Console to apply the April 2017 patch to the DB system and database home.

Note

Determining the Version of a Database in a Database Home

To determine the version of a database in a database home, run either $ORACLE_HOME/OPatch/opatch lspatches as the oracle user or dbcli list-dbhomes as the root user.

Direct link to this issue: Oracle Data Pump operations return "ORA-00439: feature not enabled"

Unable to connect to the EM Express console from your 1-node DB system

Details: You might get a "Secure Connection Failed" error message when you try to connect to the EM Express console from your 1-node DB system because the correct permissions were not applied automatically.

Workaround: Add read permissions for the asmadmin group on the wallet directory of the DB system, and then retry the connection:

  1. SSH to the DB system host, log in as opc, and then switch to the grid user.

    [opc@dbsysHost ~]$ sudo su - grid
    [grid@dbsysHost ~]$ . oraenv
    ORACLE_SID = [+ASM1] ?
    The Oracle base has been set to /u01/app/grid
    
  2. Get the location of the wallet directory, shown in the my_wallet_directory value in the following command output.

    [grid@dbsysHost ~]$ lsnrctl status | grep xdb_wallet
    
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=dbsysHost.sub04061528182.dbsysapril6.oraclevcn.com)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet))(Presentation=HTTP)(Session=RAW))
  3. Return to the opc user, switch to the oracle user, and change to the wallet directory.

    [opc@dbsysHost ~]$ sudo su - oracle
    [oracle@dbsysHost ~]$ cd /u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet
  4. List the directory contents and note the permissions.

    
    [oracle@dbsysHost xdb_wallet]$ ls -ltr
    total 8
    -rw------- 1 oracle asmadmin 3881 Apr  6 16:32 ewallet.p12
    -rw------- 1 oracle asmadmin 3926 Apr  6 16:32 cwallet.sso
    
  5. Change the permissions:

    
    [oracle@dbsysHost xdb_wallet]$ chmod 640 /u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet/*
  6. Verify that read permissions were added.

    [oracle@dbsysHost xdb_wallet]$ ls -ltr
    total 8
    -rw-r----- 1 oracle asmadmin 3881 Apr  6 16:32 ewallet.p12
    -rw-r----- 1 oracle asmadmin 3926 Apr  6 16:32 cwallet.sso
    

Direct link to this issue: Unable to connect to the EM Express console from your 1-node DB system

Exadata DB Systems Only

Backing up to Object Storage using bkup_api or RMAN fails due to certificate change

Details: Backup operations to Object Storage using the Exadata backup utility (bkup_api) or RMAN fail with the following errors:

* DBaaS Error trace:
-> API::ERROR -> KBHS-00715: HTTP error occurred 'oracle-error'
-> API::ERROR -> ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
-> API::ERROR -> ORA-19554: error allocating device, device type: SBT_TAPE, device name:
-> API::ERROR -> ORA-27023: skgfqsbi: media manager protocol error
-> API::ERROR Unable to verify the backup pieces
-> Oracle Error Codes found:
-> ORA-19554: error allocating device, device type: SBT_TAPE, device name:
-> ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
-> KBHS-00712: ORA-29024 received from local HTTP service
-> ORA-27023: skgfqsbi: media manager protocol error

In response to policies implemented by two common web browsers regarding Symantec certificates, Oracle recently changed the certificate authority used for Oracle Cloud Infrastructure. The resulting change in SSL certificates can cause backups to Object Storage to fail if the Oracle Database Cloud Backup Module still points to the old certificate.

Important

Before using the applicable workaround in this section, follow the steps in Updating the Cloud Tooling on Each Compute Node Manually to ensure the latest version of dbaastools_exa is installed on the system.

Workaround for bkup_api: Check the log files for the errors listed above, and if found, reinstall the backup module.

Use the following command to check the status of the failed backup:

/var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=<database_name>

Run the following command to reinstall the backup module:

/var/opt/oracle/ocde/assistants/bkup/bkup -dbname=<database_name>

Workaround for RMAN: Check the RMAN log files for the error messages listed. If found, log on to your host as the oracle user, and reinstall the backup module using your Swift credentials.

Note

Swift passwords are now called "Auth tokens." For details, see Using an Auth Token with Swift.
java -jar <opc_install.jar_path> -opcId '<Swift_user_ID>' -opcPass '<auth_token>' -container <objectstore_container> -walletDir <wallet_directory> -configfile <config_file> -host https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace> -import-all-trustcerts

Perform this workaround on all nodes in the cluster.

See Oracle Database Cloud Backup Module documentation for details on using this command.

Direct link to this issue: Backing up to Object Storage using bkup_api or RMAN fails due to certificate change

Console information not synced for Data Guard enabled databases when using dbaascli

Details: With the release of the shared Database Home feature for Exadata DB systems, the Console now also synchronizes and displays information about databases that are created and managed by using the dbaasapi and dbaascli utilities. However, databases with Data Guard configured do not display correct information in the Console under the following conditions:

  • If Data Guard was enabled by using the Console, and then a change is made to the primary or standby database by using dbaascli (such as moving the database to a different home), the result is not reflected in the Console.
  • If Data Guard was configured manually, the Console does not show a Data Guard association between the two databases.

Workaround: We are aware of the issue and working on a resolution. In the meantime, Oracle recommends that you manage your Data Guard enabled databases by using either only the Console or only command line utilities.

Direct link to this issue: Console information not synced for Data Guard enabled databases when using dbaascli

Grid Infrastructure does not start after offlining and onlining a disk

Details: This is a clusterware issue that occurs only when the Oracle GI version is 12.2.0.1 without any bundle patch. The problem is caused by corruption of a voting disk after you offline then online the disk.

Workaround: Determine the version of the GI, and whether the voting disk is corrupted. Repair the disk, if applicable, and then apply the latest GI bundle.

  1. Verify the GI version is 12.2.0.1 without any bundle patch applied:

    
    [root@rmstest-udaau1 ~]# su - grid
    [grid@rmstest-udaau1 ~]$ . oraenv
    ORACLE_SID = [+ASM1] ? +ASM1
    The Oracle base has been set to /u01/app/grid
    [grid@rmstest-udaau1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory
    Oracle Interim Patch Installer version 12.2.0.1.6
    Copyright (c) 2018, Oracle Corporation.  All rights reserved.
    
    
    Oracle Home       : /u01/app/12.2.0.1/grid
    Central Inventory : /u01/app/oraInventory
       from           : /u01/app/12.2.0.1/grid/oraInst.loc
    OPatch version    : 12.2.0.1.6
    OUI version       : 12.2.0.1.4
    Log file location : /u01/app/12.2.0.1/grid/cfgtoollogs/opatch/opatch2018-01-15_22-11-10PM_1.log
    
    Lsinventory Output file location : /u01/app/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-01-15_22-11-10PM.txt
    
    --------------------------------------------------------------------------------
    Local Machine Information::
    Hostname: rmstest-udaau1.exaagclient.sretest.oraclevcn.com
    ARU platform id: 226
    ARU platform description:: Linux x86-64
    
    Installed Top-level Products (1):
    
    Oracle Grid Infrastructure 12c                                       12.2.0.1.0
    There are 1 products installed in this Oracle Home.
    
    
    There are no Interim patches installed in this Oracle Home.
    
    
    --------------------------------------------------------------------------------
    
    OPatch succeeded.
  2. Check the /u01/app/grid/diag/crs/<hostname>/crs/trace/ocssd.trc file for evidence that the GI failed to start due to voting disk corruption:

    ocssd.trc
     
    2017-01-17 23:45:11.955 :    CSSD:3807860480: clssnmvDiskCheck:: configured 
    Sites = 1, Incative sites = 1, Mininum Sites required = 1 
    2017-01-17 23:45:11.955 :    CSSD:3807860480: (:CSSNM00018:)clssnmvDiskCheck: 
    Aborting, 2 of 5 configured voting disks available, need 3 
    ...... 
    . 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: clssnmCheckForNetworkFailure: 
    skipping 31 defined 0 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: clssnmRemoveNodeInTerm: node 4, 
    slcc05db08 terminated. Removing from its own member and connected bitmaps 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: 
    ################################### 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: clssscExit: CSSD aborting from 
    thread clssnmvDiskPingMonitorThread 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: 
    ################################### 
    2017-01-17 23:45:11.956 :    CSSD:3807860480: (:CSSSC00012:)clssscExit: A 
    fatal error occurred and the CSS daemon is terminating abnormally 
     
    ------------
     
    2017-01-19 19:00:32.689 :    CSSD:3469420288: clssnmFindVF: Duplicate voting disk found in the queue of previously configured disks 
    queued(o/192.168.10.18/PCW_CD_02_slcc05cel10|[66223efc-29254fbb-bf901601-21009 
    cbd]), 
    found(o/192.168.10.18/PCW_CD_02_slcc05cel10|[66223efc-29254fbb-bf901601-21009c 
    bd]), is not corrupted 
    2017-01-19 19:01:06.467 :    CSSD:3452057344: clssnmvVoteDiskValidation: 
    Voting disk(o/192.168.10.19/PCW_CD_02_slcc05cel11) is corrupted
  3. You can also use SQL*Plus to confirm that the voting disks are corrupted:

    1. Log in as the grid user, and set the environment to ASM.

      [root@rmstest-udaau1 ~]# su - grid
      [grid@rmstest-udaau1 ~]$ . oraenv
      ORACLE_SID = [+ASM1] ? +ASM1
      The Oracle base has been set to /u01/app/grid
    2. Log in to SQL*Plus as SYSASM.

      $ORACLE_HOME/bin/sqlplus / as sysasm
    3. Run the following two queries:

      SQL> select name, voting_file from v$asm_disk where VOTING_FILE='Y' and group_number !=0;
      SQL> select  CC.name, count(*) from x$kfdat AA JOIN (select disk_number, name from v$asm_disk where VOTING_FILE='Y' and group_number !=0) CC ON CC.disk_number = AA.NUMBER_KFDAT where AA.FNUM_KFDAT= 1048572 group by CC.name;

      If the system is healthy, the results should look like the following example.

      Query 1 Results

      NAME                           VOTING_FILE
      ------------------------------ ---------------
      DBFSC3_CD_02_SLCLCX0788        Y
      DBFSC3_CD_09_SLCLCX0787        Y
      DBFSC3_CD_04_SLCLCX0786        Y

      Query 2 Results

      NAME                           COUNT(*)
      ------------------------------ ---------------
      DBFSC3_CD_02_SLCLCX0788        8
      DBFSC3_CD_09_SLCLCX0787        8
      DBFSC3_CD_04_SLCLCX0786        8

      In a healthy system, every voting disk returned in the first query should also be returned in the second query and the counts for all the disks should be non-zero. Otherwise, one or more of your voting disks are corrupted.

  4. If a voting disk is corrupted, offline the grid disk that contains the voting disk. The cells will automatically move the bad voting disk to the other grid disk and online that voting disk.

    1. The following command offlines a grid disk named DATAC01_CD_05_SCAQAE08CELADM13.

      SQL> alter diskgroup DATAC01 offline disk DATAC01_CD_05_SCAQAE08CELADM13;
           Diskgroup altered.
    2. Wait 30 seconds and then rerun the two queries in step 3c to verify that the voting disk migrated to the new grid disk and that it is healthy.

    3. Verify the grid disk you offlined is now online:

      SQL> select name, mode_status, voting_file from v$asm_disk where name='DATAC01_CD_05_SCAQAE08CELADM13';

      The mode_status should be ONLINE, and the voting_file should NOT be Y.

    Repeat steps 4a through 4c for each remaining grid disk that contains a corrupt voting disk.
    Note

    If the CRS does not start because of the voting disk corruption, start it using Exclusive mode before you execute the command in step 4.

    crsctl start crs -excl
     
  5. If you are using Oracle GI version 12.2.0.1 without any bundle patch, you must upgrade the GI version to the latest GI bundle, whether or not a voting disk was corrupted.

    See Patching an Exadata Cloud Service Instance Manually for instructions on how to use the exadbcpatchmulti utility to perform patching operations for Oracle Grid Infrastructure and Oracle Database on an Exadata DB system.

Direct link to this issue: Grid Infrastructure does not start after offlining and onlining a disk

Managed features not enabled for systems provisioned before June 15, 2018

Details: Exadata DB systems launched on June 15, 2018 or later automatically include the ability to create, list, and delete databases by using the Console, API, or Oracle Cloud Infrastructure CLI. However, systems provisioned before this date require extra steps to enable this functionality.

Attempts to use this functionality without the extra steps result in the following error messages:

  • On creating a database - "Create Database is not supported on this Exadata DB system. To enable this feature, please contact Oracle Support."
  • On terminating a database - "DeleteDbHome is not supported on this Exadata DB system. To enable this feature, please contact Oracle Support."

Workaround: You need to install the Exadata agent on each node of the Exadata DB system.

First, create a service request for assistance from Oracle Support Services. Oracle Support will respond by providing you with a preauthenticated URL for an Oracle Cloud Infrastructure Object Storage location where you can obtain the agent.

Before you install the Exadata agent:

  • Upgrade the tooling (dbaastools rpm) to the latest version on all nodes of the Exadata DB system. See Updating Tooling on an Exadata Cloud Service Instance.
  • Ensure that the system is configured to access Oracle Cloud Infrastructure Object Storage with the required security lists for the region in which the DB system was created. For more information about connectivity to Oracle Cloud Infrastructure Object Storage, see Prerequisites.

To install the Exadata agent:

  1. Log on to the node as root.
  2. Run the following commands to install the agent:

    [root@<node_n>~]# cd /tmp
    [root@<node_n>~]# wget https://objectstorage.<region_name>.oraclecloud.com/p/1q523eOkAOYBJVP9RYji3V5APlMFHIv1_6bAMmxsS4E/n/dbaaspatchstore/b/dbaasexadatacustomersea1/o/backfill_agent_package_iwwva.tar
    [root@<node_n>~]# tar -xvf /tmp/backfill_agent_package_*.tar -C /tmp
    [root@<node_n>~]# rpm -ivh /tmp/dbcs-agent-2.5-3.x86_64.rpm

    Example output:

    [root@<node_n>~]# rpm -ivh dbcs-agent-2.5-3.x86_64.rpm
    Preparing...                ########################################### [100%]
    Checking for dbaastools_exa rpm on the system
    Current dbaastools_exa version = dbaastools_exa-1.0-1+18.1.4.1.0_180725.0000.x86_64
    dbaastools_exa version dbaastools_exa-1.0-1+18.1.4.1.0_180725.0000.x86_64 is good. Continuing with dbcs-agent installation
       1:dbcs-agent             ########################################### [100%]
    initctl: Unknown instance:
    initctl: Unknown instance:
    initzookeeper start/running, process 85821
    initdbcsagent stop/waiting
    initdbcsadmin stop/waiting
    initdbcsagent start/running, process 85833
    initdbcsadmin start/running, process 85836
    
  3. Confirm that the agent is installed and running.

    [root@<node_n>~]# rpm -qa | grep dbcs-agent
    dbcs-agent-2.5-0.x86_64
    [root@<node_n>~]# initctl status initdbcsagent
    initdbcsagent start/running, process 97832
  4. Repeat steps 1 through 3 on the remaining nodes.

After the agent is installed on all nodes, allow up to 30 minutes for Oracle to complete additional workflow tasks such as upgrading the agent to the latest version, rotating the agent credentials, and so on. When the process is complete, you should be able to use the Exadata managed features in the Console, API, or Oracle Cloud Infrastructure CLI.

Direct link to this issue: Managed features not enabled for systems provisioned before June 15, 2018

Patching configuration file points to wrong region

Details: The patching configuration file (/var/opt/oracle/exapatch/exadbcpatch.cfg) points to the object store of the us-phoenix-1 region, even if the Exadata DB system is deployed in another region.

This problem occurs if the release version of the database tooling package (dbaastools_exa) is 17430 or lower.

Workaround: Follow the instructions in Updating the Cloud Tooling on Each Compute Node Manually to confirm that the release version of the tooling package is 17430 or lower, and then update it to the latest version.

Direct link to this issue: Patching configuration file points to wrong region

Various database workflow failures due to Oracle Linux 7 removal of required temporary files

Details: A change in how Oracle Linux 7 handles temporary files can result in the removal of required socket files from the /var/tmp/.oracle directory. This issue affects only Exadata DB systems running the version 19.1.2 operating system image.

Workaround: Run sudo /usr/local/bin/imageinfo as the opc user to determine your operating system image version. If your image version is 19.1.2.0.0.190306, follow the instructions in Doc ID 2498572.1 to fix the issue.

Direct link to this issue: Various database workflow failures due to Oracle Linux 7 removal of required temporary files

Developer Tools

Potential data corruption issue with OCI Java SDK on binary data upload with RefreshableOnNotAuthenticatedProvider

Details: If you use version 1.25.1 or earlier of the OCI Java SDK with clients that upload streams of data (for example, ObjectStorageClient or FunctionsInvokeClient), whether synchronously or asynchronously, and you use a RefreshableOnNotAuthenticatedProvider (for example, for Resource Principals or Instance Principals), you might be affected by silent data corruption.

Workaround: Update the OCI Java SDK client to version 1.25.2 or later. For more information about this issue and workarounds, see Potential data corruption issue for OCI Java SDK on binary data upload with RefreshableOnNotAuthenticatedProvider.

Direct link to this issue: Potential data corruption issue with OCI Java SDK on binary data upload with RefreshableOnNotAuthenticatedProvider

Potential data corruption issue with OCI HDFS Connector on binary data upload with RefreshableOnNotAuthenticatedProvider

Details: If you are using version 3.2.1.1 or earlier of the OCI HDFS Connector clients and you use a RefreshableOnNotAuthenticatedProvider (for example, InstancePrincipalsCustomAuthenticator, or generally for Resource Principals or Instance Principals), you might be affected by silent data corruption.

Workaround: Update the OCI HDFS Connector client to version 3.2.1.3 or later. For more information about this issue and workarounds, see Potential data corruption issue for OCI HDFS Connector with RefreshableOnNotAuthenticatedProvider.

Direct link to this issue: Potential data corruption issue with OCI HDFS Connector on binary data upload with RefreshableOnNotAuthenticatedProvider

Potential data corruption with SDK for Python on binary upload

DNS

Currently, there are no known DNS issues.

Email Delivery

Unable to access SMTP credentials for older federated tenancies

Details: Federated users are supported for Email Delivery, with the exception of older tenancies that do not use System for Cross-domain Identity Management (SCIM). SCIM will be the standard for all identity information access. All tenancies created after December 2018 use SCIM.

Workaround: Ask an administrator in your Oracle Cloud Infrastructure tenancy to create a new user in the Console to be used with the Email Delivery service. Logging in to the Console directly (not federated) allows access to User Settings and SMTP Credentials.

Direct link to this issue: Unable to access SMTP credentials for older federated tenancies

Error occurs when attempting to add a suppression from a compartment other than root

Details: In the Console, if you choose a compartment other than root and then navigate to the Email Suppression list, the following error will occur when you attempt to add a suppression:

Error: The required compartmentId ocid1.compartment.oc1..aaaaaaaacq3ztcbrxvgfb35zj6wztdpwlkmzfh4rnsq63sugge624qr5cdla must be the root compartment for suppressions

Workaround: Navigate to the Approved Senders page, choose the root compartment, and then return to the Email Suppression list.
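
Alternatively, you can create the suppression programmatically against the root compartment. The following is a minimal sketch using the OCI Python SDK; the email address is a hypothetical placeholder, a configured ~/.oci/config file is assumed, and compartment_id must be the tenancy (root compartment) OCID, which is exactly what the error above requires.

import oci

config = oci.config.from_file()  # assumes a configured ~/.oci/config
email_client = oci.email.EmailClient(config)

details = oci.email.models.CreateSuppressionDetails(
    compartment_id=config["tenancy"],       # must be the root compartment OCID
    email_address="recipient@example.com",  # hypothetical address to suppress
)
suppression = email_client.create_suppression(details).data
print(suppression.id)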

Direct link to this issue: Error occurs when attempting to add a suppression from a compartment other than root

Events

Currently, there are no known Events issues.

File Storage

File Storage does not currently support Access Control Lists (ACLs)

Details: File Storage does not support file-level Access Control Lists (ACLs). Only user, group, and world permissions are supported. File Storage uses the NFSv3 protocol, which doesn't include support for ACLs. setfacl fails on mounted file systems; getfacl returns only standard permissions.
Note

Some implementations might extend the NFSv3 protocol and add support for ACLs as part of a separate rpc program.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: File Storage does not currently support Access Control Lists (ACLs)

Semaphore timeout error when creating a snapshot with the Windows command line

Details: When using the mkdir command in Windows CMD to create a snapshot of a mounted file system, an error appears. For example: 

C:\>mkdir X:\.snapshot\snapshot1

The semaphore timeout period has expired.

Although the error appears, the snapshot is successfully created.

Workaround: Use the Console, API, or CLI to create snapshots.
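
If you prefer to script this, the following is a minimal sketch that creates the same snapshot through the OCI Python SDK instead of mkdir. The file system OCID is a hypothetical placeholder, and a configured ~/.oci/config file is assumed.

import oci

config = oci.config.from_file()  # assumes a configured ~/.oci/config
fs_client = oci.file_storage.FileStorageClient(config)

details = oci.file_storage.models.CreateSnapshotDetails(
    file_system_id="ocid1.filesystem.oc1..exampleuniqueID",  # hypothetical OCID
    name="snapshot1",
)
snapshot = fs_client.create_snapshot(details).data
print(snapshot.lifecycle_state)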

Direct link to this issue: Semaphore timeout error when creating a snapshot with the Windows command line

Unable to move file storage resources to a different compartment

Details: Moving a file system or mount target from one compartment to another fails unless the user performing the move is a member of the Administrators group.

Workaround: We are aware of the issue and working on a resolution. To work around this problem, make sure the user is a member of the Administrators group. For more information, see Managing Groups.

Direct link to this issue: Unable to move file storage resources to a different compartment

409 error occurs when creating or moving a file system or mount target

Details: When creating or moving a file system or mount target from one compartment to another, you might encounter one of the following 409 API errors:

Create File System:

oci.exceptions.ServiceError: {'opc-request-id': <<OPC REQUEST ID>>, 'code': 'Conflict', 'message': 'Another filesystem is currently being provisioned, try again later', 'status': 409}

Move File System:

oci.exceptions.ServiceError: {'opc-request-id': <<OPC REQUEST ID>>, 'code': 'Conflict', 'message': 'filesystem <<FILE SYSTEM OCID>> is currently being modified, try again later', 'status': 409}

Create Mount Target:

oci.exceptions.ServiceError: {'opc-request-id': <<OPC REQUEST ID>>, 'code': 'Conflict', 'message': 'Another mount target is currently being provisioned, try again later', 'status': 409}

Move Mount Target:

oci.exceptions.ServiceError: {'opc-request-id': <<OPC REQUEST ID>>, 'code': 'Conflict', 'message': 'mount target<<MOUNT TARGET OCID>> is currently being modified, try again later', 'status': 409}

The Compartment Quotas feature introduces constraints that limit the number of concurrent operations that a tenancy can perform on file system and mount target resources in a region:

  • Each tenancy in a region can have 1 CreateFileSystem or ChangeFilesystemCompartment operation in progress at a time.
  • Each tenancy in a region can have 1 CreateMountTarget or ChangeMountTargetCompartment operation in progress at a time.

If a tenancy attempts to do more than one simultaneous operation, one operation succeeds and the others receive the 409 error response code. The default retry strategy for the OCI SDK is to not retry 409 conflicts. See SDK Behaviors - Retries.

Workaround: We are aware of the issue and working on a resolution. To work around this problem, create a custom retry strategy that retries on 409. Several examples of building a custom retry strategy are provided at https://github.com/oracle/oci-python-sdk/blob/master/examples/retries.py.
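
As a minimal sketch of such a strategy (not the only way to build one), the following uses the Python SDK's RetryStrategyBuilder to retry a CreateFileSystem call when the service returns the 409 Conflict error shown above. The compartment OCID and availability domain are hypothetical placeholders, and a configured ~/.oci/config file is assumed; see the linked retries.py for more options.

import oci

config = oci.config.from_file()  # assumes a configured ~/.oci/config
fs_client = oci.file_storage.FileStorageClient(config)

# Retry up to 7 times when the service returns HTTP 409 with code 'Conflict'.
retry_strategy = oci.retry.RetryStrategyBuilder(
    max_attempts_check=True,
    max_attempts=7,
    service_error_check=True,
    service_error_retry_config={409: ['Conflict']},
).get_retry_strategy()

details = oci.file_storage.models.CreateFileSystemDetails(
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",  # hypothetical OCID
    availability_domain="Uocm:PHX-AD-1",                      # hypothetical AD
    display_name="my-file-system",
)
response = fs_client.create_file_system(details, retry_strategy=retry_strategy)
print(response.data.id)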

Direct link to this issue: 409 error occurs when creating or moving a file system or mount target

Functions

Currently, there are no known Functions issues.

Health Checks

Currently, there are no known Health Checks issues.

IAM

Unable to set up new federations with Microsoft Active Directory
Deleted compartments continue to count against service limits

Details: Deleted compartments continue to count against the compartment service limit for your tenancy. A deleted compartment is removed from the count after 365 days. This is also the time period during which deleted compartments remain displayed in the Console.

Workaround: Until this issue is resolved, you can request to have your service limit increased for compartments. See Requesting a Service Limit Increase.

Direct link to this issue: Deleted compartments continue to count against service limits

Load Balancing

Load Balancer displays "Unknown" on Console Backend Sets Health indicator

Details: When performing load balancer checks in the Console, the backend set health status might appear as "Unknown" even when the load balancer is performing properly. The "Unknown" status does not impact the data path.

Workaround: Refer to the public telemetry graphs in the Oracle Cloud Infrastructure Console for an accurate indication of the load balancer's backend set health.

Direct link to this issue: Load Balancer displays "Unknown" on Console Backend Sets Health indicator

Logging

Some agent warnings can be ignored

Details: Benign warnings may occur for the Oracle fluentd-based agent, similar to the following:

Sep 22 05:47:43 ociutv3mgftp02 ruby[1278962]: /opt/unified-monitoring-agent/embedded/lib/ruby/gems/2.6.0/gems/oci-2.9.0.1125/lib/oci/identity/models/base_tag_definition_validator.rb:23: warning: already initialized constant OCI::Identity::Models::BaseTagDefinitionValidator::VALIDATOR_TYPE_ENUM
Sep 22 05:47:43 ociutv3mgftp02 ruby[1278962]: /opt/unified-monitoring-agent/embedded/lib/ruby/gems/2.6.0/gems/oci-2.9.0.1125/lib/oci/identity/models/base_tag_definition_validator.rb:24: warning: previous definition of VALIDATOR_TYPE_ENUM was here

You can safely ignore these warnings; they have no impact on agent functionality.

Direct link to this issue: Some agent warnings can be ignored

Logging Analytics

Special handling when monitoring logs in large folders

Details: Folders containing more than 10,000 files can cause log collection issues (as well as operating system issues).

When large folders are encountered by the Management Agent Logging Analytics plug-in, a message similar to the following example message is added to the Management Agent mgmt_agent.log file:

2020-07-30 14:46:51,653 [LOG.Executor.2388 (LA_TASK_os_file)-61850] INFO - ignore large dir /u01/service/database/logs. set property loganalytics.enable_large_dir to enable.

Resolution: We recommend avoiding large folders.

However, if you want to continue monitoring logs in large folders, then you can enable the property indicated in the mgmt_agent.log file by performing the following action:

echo "loganalytics.enable_large_dir=true" | sudo -u mgmt_agent tee -a INSTALL_DIRECTORY/agent_inst/config/emd.properties

Replace INSTALL_DIRECTORY with the path to the agent_inst folder.

Direct link to this issue: Special handling when monitoring logs in large folders

Management Agent

Currently, there are no known Management Agent issues.

Marketplace

Currently, there are no known Marketplace issues.

Monitoring

Currently, there are no known Monitoring issues.

Networking

Configuring Secondary VNICs in Linux

Details: The script Oracle makes available at https://docs.cloud.oracle.com/iaas/Content/Resources/Assets/secondary_vnic_all_configure.sh is intended for use in situations where non-hypervisor compute instances need to be assigned an additional VNIC and IP address. The script is not useful for Kernel-based Virtual Machine (KVM) applications on a bare metal instance.

Workaround: For KVM applications on a bare metal instance, refer to the white paper "Installing and Configuring KVM on Bare Metal Instances with Multi-VNIC."

Direct link to this issue: Configuring Secondary VNICs in Linux

CPE Configuration Helper: Specifying the CPE vendor

Details: If all of the following are true, the Oracle Console displays an error that says "The CPE is missing the vendor information (the device type). Update the CPE and add the vendor information":

  • You have a CPE that existed before the CPE Configuration Helper feature was released.
  • You have not yet edited the CPE in the Oracle Console and specified which vendor makes your CPE.
  • You try to generate the Helper content for the CPE or any IPSec connections that use that CPE.

Workaround: Perform the following actions:

  1. In the Oracle Console, view the CPE.
  2. Click Edit.
  3. In the CPE Vendor Information section, select the vendor that makes your CPE. If you're not sure which vendor makes your CPE, or it's not in the list, select Other.
  4. If prompted, select a value for Platform/Version. Here are guidelines:

    • Oracle recommends using a route-based configuration if possible.
    • If you do not see your specific CPE platform or version in the list, choose the closest platform/version that predates your CPE version.
  5. Click Save Changes. It's important to click this even if you did not change the value for the vendor.

You can then generate the Helper content successfully for the CPE or any IPSec connections that use that CPE.

Direct link to this issue: CPE Configuration Helper: Specifying the CPE vendor

RESOLVED: VPN Connect: US East (Ashburn) Support for NAT-T
VPN Connect: Incorrect data in several Monitoring charts

Details: Several of the Monitoring charts for VPN Connect tunnels show incorrect data and should not be used to determine recent traffic levels in the tunnel.

To summarize, these are the available Monitoring charts for VPN Connect tunnels:

  • IPSec tunnel state: This chart is accurate and correctly shows the up or down state of the tunnel.
  • Bytes or packets sent or received: These four charts are inaccurate and do not show the correct level of bytes or packets sent or received through the tunnel.
  • Packets with errors: This chart is inaccurate and does not show the correct number of packets dropped with errors.

For more information about the charts, see VPN Connect Metrics.

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: VPN Connect: Incorrect data in several Monitoring charts

RESOLVED: VPN Connect: Issue with regional NAT-T availability and Libreswan

Details: If all of the following are true:

  • You're using Libreswan as your CPE for VPN Connect
  • Your CPE is behind a NAT device
  • You're connecting to one of these Oracle Cloud Infrastructure regions: US East (Ashburn), South Korea Central (Seoul), Japan East (Tokyo), or Canada Southeast (Toronto)

Then you might have connectivity issues with VPN Connect because of a NAT traversal (NAT-T) interoperability issue with the current software on the Oracle routers in those regions.

Oracle is aware of the issue and is in the process of updating the software to remove that issue.

Background: Oracle enabled NAT traversal (NAT-T) for some VPN Connect routers in US East (Ashburn), and for all routers in South Korea Central (Seoul), Japan East (Tokyo), and Canada Southeast (Toronto). However, the current version of software on those routers has an interoperability issue with NAT-T. If you're following Oracle's current documentation, which says to NOT enable NAT-T on your CPE, then you should experience NO problems related to this issue.

However, if you're using Libreswan for your CPE, with the CPE behind a NAT device, and you're connecting to one of those regions, you might experience connectivity issues during the period when Oracle is updating the router software. Specifically, the tunnel might come up but not pass traffic.

Workaround: This issue is now resolved. Here is the workaround that was recommended before the issue's resolution:

For applicable Libreswan users in the affected regions, if you have connectivity issues:

A. Enable NAT-T on your CPE:

  1. In your ipsec.conf file for the relevant connection, change the value of encapsulation from no to yes.
  2. Restart the Libreswan service.

If you continue to have connectivity issues:

B. Set up the connection again:

  1. Recreate the IPSec connection in the Oracle Console.
  2. Recreate the Libreswan configuration with the information for the new IPSec connection.
  3. Reload the Libreswan service.

If you continue to have connectivity issues:

C. Contact My Oracle Support for further assistance. For instructions, see Open a support service request.

Direct link to this issue: RESOLVED: VPN Connect: Issue with regional NAT-T availability and Libreswan

Issues with private access to Oracle Analytics Cloud through a service gateway for your on-premises network

Details: Asymmetric routing can occur for the traffic between your on-premises network and Oracle Analytics Cloud (the conditions under which it occurs are summarized after the workaround options below). Asymmetric routing means that the request traffic and response traffic go over different paths.

Here are more details about why asymmetric routing can occur: when Oracle Analytics Cloud initiates connections to clients in your on-premises network, the connection requests must go over a public path (either the internet or FastConnect public peering). However, the response travels over a private path, based on the recommendation in Routing Preferences for Traffic from Your On-Premises Network to Oracle.

Workaround: You have two options:

  • Option 1 (preferred): With Oracle Analytics Cloud, switch from using a Remote Data Connector to a Data Gateway.
  • Option 2: Configure your customer-premises equipment (CPE) to prefer either an internet or FastConnect public peering path by adding static routes for the regional source IP address for Oracle Analytics Cloud. That way, any response traffic to Oracle Analytics Cloud will return on the same path as the incoming connection request.

A workaround is required only if Oracle Analytics Cloud initiates connections to clients in your on-premises network and you are not yet using a Data Gateway in your network.

Direct link to this issue: Issues with private access to Oracle Analytics Cloud through a service gateway for your on-premises network

Issues with access from Oracle services through a service gateway to your public instances

Details: If the route table associated with your public subnet in a VCN includes the following two conflicting route rules, Oracle services might be unable to access your public instances in that subnet:

  1. A route rule with the Target Type set as internet gateway.
  2. A route rule with the Destination Service set as All <region> Services in Oracle Services Network and the Target Type set as service gateway.

These two route rules can lead to asymmetric routing when Oracle services initiate connections to public instances in your VCN. Oracle Cloud Infrastructure does not support these rules simultaneously within the same route table. Oracle has updated the service APIs and the Console to disable support for this configuration.

Workaround: We recommend that you remove the route rule that has the Destination Service set as All <region> Services in Oracle Services Network and the Target Type set as service gateway. Revert to the configuration you used before adopting the service gateway for Oracle Services Network. With this change, your public instances retain access to all Oracle services through the internet gateway. Oracle services can continue to access your public instances.

However, your instances in the public subnet can continue to access Object Storage through the service gateway. Update the subnet's route table to include a route rule with Destination Service set as OCI <region> Object Storage and the Target set to the VCN's service gateway.

This known issue applies only to public subnets that have access to an internet gateway. Regarding private subnets: you can still configure a private subnet's route table to provide access to All <region> Services in Oracle Services Network or to OCI <region> Object Storage through the VCN's service gateway.
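
The following is a minimal sketch of the recommended end state using the OCI Python SDK: a default route through the internet gateway plus an Object Storage route through the service gateway. All OCIDs are hypothetical placeholders, the service label shown is the one used in the Phoenix region (substitute your region's label), and a configured ~/.oci/config file is assumed.

import oci

config = oci.config.from_file()  # assumes a configured ~/.oci/config
vcn_client = oci.core.VirtualNetworkClient(config)

route_rules = [
    # Default route for internet-bound traffic through the internet gateway.
    oci.core.models.RouteRule(
        destination="0.0.0.0/0",
        destination_type="CIDR_BLOCK",
        network_entity_id="ocid1.internetgateway.oc1..exampleuniqueID",
    ),
    # Object Storage only (not "All Services") through the service gateway.
    oci.core.models.RouteRule(
        destination="oci-phx-objectstorage",  # Phoenix label; use your region's
        destination_type="SERVICE_CIDR_BLOCK",
        network_entity_id="ocid1.servicegateway.oc1..exampleuniqueID",
    ),
]
details = oci.core.models.UpdateRouteTableDetails(route_rules=route_rules)
vcn_client.update_route_table("ocid1.routetable.oc1..exampleuniqueID", details)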

Direct link to this issue: Issues with access from Oracle services through a service gateway to your public instances

RESOLVED: Service gateway route rules and Console restriction

Details: If you use the Console to set up a route rule that uses a service gateway as a target, the rule's destination service must match the service CIDR label that is enabled for that gateway.

For example: let's say you use the Console to enable the label called All <region> Services in Oracle Services Network for the service gateway. Next, you use the Console to set up a route rule and choose OCI <region> Object Storage for the destination service instead of the service CIDR label you specified for the service gateway. When you try to set the target for the route rule, your service gateway does not appear in the list of service gateways to choose from. This is because the Console takes the destination service you specified for the rule and shows only service gateways with that service CIDR label enabled. Your VCN can have only one service gateway, and, in this case, it does not match that logic.

This restriction exists only when you set up the route rule in the Console. Other interfaces (SDKs, CLI, Terraform) do not have this restriction. Oracle intends to remove this restriction from the Console interface.

Workaround: This issue is now resolved. Here is the workaround that was recommended before the issue's resolution:

Use the same service CIDR label for the service gateway and the route rule's destination service. Or if you like, use a different interface that doesn't have the restriction (for example, the CLI). Also remember that you can filter traffic to and from instances by using security lists, and you can use any service CIDR label (or a specific CIDR) in a security list rule.

Direct link to this issue: Service gateway route rules and Console restriction

Issues with access to Oracle yum services through service gateway

Details: If you want to use a service gateway with your VCN without also using an internet gateway or NAT gateway for internet access, your instances might not have access to the applicable regional Oracle yum server. There are two possible issues:

  • Instances created before November 2018 might have their repos pointed to URLs that are not accessible through the service gateway.
  • Instances that were not able to contact their local region's yum server before might have fallen back to using yum.oracle.com, which is not accessible through the service gateway.

Prerequisite: To use either of the following mitigation strategies, you must have one of the following gateways configured so you can reach the region's yum server: service gateway, NAT gateway, or internet gateway.

Automated mitigation:

Try the following automated mitigation. If it fails for some reason, use the manual mitigation method that follows.

Copy the following script to the local system and run it. The script disables existing repos and downloads the repo file, which directs the system to the region's local yum servers accessible through the service gateway.

#!/bin/bash
REPODIR='/etc/yum.repos.d'
REPOS=$REPODIR/*
REGION=$(curl -sfm 3 http://169.254.169.254/opc/v1/instance/ | jq -r '.region' | cut -d '-' -f 2)
VERSION=$(egrep '^VERSION_ID' /etc/os-release | cut -d '"' -f 2 | cut -d '.' -f 1)
REPOURL="http://yum-${REGION}.oracle.com/yum-${REGION}-ol${VERSION}.repo"

echo "Disabling existing repos"
for i in $REPOS
do
  if [[ "$i" != *".disabled" ]]; then
    mv $i $i.disabled
    echo "$i disabled"
  else
    echo "$i repofile already disabled"
  fi
done
yum clean all
echo "Pulling new regional repository file"
wget -q $REPOURL -O "$REPODIR/yum-${REGION}-ol${VERSION}.repo"
retval=$?
if [[ "$retval" -ne 0 ]]; then
  echo "Unable to pull repo file, please run manual steps"
  exit 1
fi
yum makecache fast

Manual mitigation:

If the automated mitigation fails, you can manually mitigate the issue. Here you disable the existing repo files and pull down the latest repo file from your region's yum server. To identify your instance's region key, look at the region list in Regions and Availability Domains.

To disable the existing repo files, navigate to the /etc/yum.repos.d directory and rename all files present to include .disabled at the end of the file name.

Example:

ls /etc/yum.repos.d
ksplice-uptrack.repo.disabled  public-yum-ol7.repo.disabled

Download the repo file for your region to the local system. The following example uses Ashburn (with region key iad). Replace iad with the region key applicable to your instance.

cd /etc/yum.repos.d/
wget http://yum-iad.oracle.com/yum-iad-ol7.repo
chown root:root yum-iad-ol7.repo
yum makecache fast

Direct link to this issue: Issues with access to Oracle yum services through service gateway

RESOLVED: Existing instances in a subnet don't get updated list of DNS servers in DHCP options

Details: If you update the list of DNS servers in a subnet's set of DHCP options, new instances/VNICs you later create in that subnet get that updated list of DNS servers, but existing instances/VNICs in the subnet do not.

Workaround: This issue is now resolved. Here is the workaround that was recommended before the issue's resolution:

You can create new instances in the subnet to replace the existing instances. Another option is to contact My Oracle Support with the following information:

  • VCN's OCID
  • Subnet's OCID
  • Affected instance's OCID

That specific instance's issue can then be resolved, typically within one day.

Direct link to this issue: Existing instances in a subnet don't get updated list of DNS servers in DHCP options

Notifications

Currently, there are no known Notifications issues.

Object Storage

Currently, there are no known Object Storage issues.

Operations Insights

Currently, there are no known Operations Insights issues.

OS Management

Unable to apply all Windows updates categorized as Other to a managed instance group

Details: OS Management does not currently provide an action for applying all Windows updates categorized as Other to a managed instance group.

Workaround: We are aware of the issue and working on a resolution. To apply all updates on a managed instance group, including Windows updates categorized as Other, select All for the update type. For more information, see Installing Updates on Windows Instances.

Direct link to this issue: Unable to apply all Windows updates categorized as Other to a managed instance group

Unable to manage AppStreams in Oracle Linux 8

Details: For Oracle Linux 8, the OS Management service does not currently support AppStreams, also known as modules or module streams.

Workaround: We are aware of the issue and working on a resolution. You can still update individual RPM packages, but the versioning associated with modules is ignored when you use this method. For more information about AppStreams, see Oracle Linux 8: Managing Software on Oracle Linux.

Direct link to this issue: Unable to manage AppStreams in Oracle Linux 8

Discrepancy in Windows updates displayed in Control Panel compared to the OS Management Console and API

Details: You may notice a discrepancy in the Windows updates shown in the Control Panel compared with the OS Management Console and API.

The OS Management service depends on the data it receives from the Windows Update Agent (WUA) API. When using the WUA API, the OS Management service does not have the full access to update management that the Windows Server Update Services (WSUS) API provides. The policies controlling what you are allowed to do differ when you access the upstream Microsoft update service directly as opposed to when you create your own policies using WSUS.

Workaround: This behavior is expected at this point. We are aware of the issue and investigating improvements.

Direct link to this issue: Discrepancy in Windows updates displayed in Control Panel compared to the OS Management Console and API

Software sources can take several minutes to initially load in the Console
Unable to use the OS Management service with instances found in ManagedCompartmentForPaaS compartments

Registry

Registry API not available

Details: Registry functionality to create and manage repositories is not exposed via the API.

Workaround: We are aware of the issue and working on a resolution. To work around this issue, use the Console.

Direct link to this issue: Registry API not available

Use Tenancy Namespace instead of Tenancy Name in image tags and Docker login credentials on or before September 30, 2019

Details: Up to now, you might have been using either the tenancy name or the tenancy namespace when logging in to Oracle Cloud Infrastructure Registry and when performing operations on images in the Registry.

After September 30, 2019, you will have to use the tenancy namespace rather than the tenancy name when using Oracle Cloud Infrastructure Registry.

Background: After September 30, 2019, you will not be able to:

  • Specify the tenancy name when logging in to Oracle Cloud Infrastructure Registry.
  • Perform operations on images that include tenancy name in the repository path.

Instead, you will have to use the tenancy namespace rather than the tenancy name when using Oracle Cloud Infrastructure Registry.

A tenancy namespace is an auto-generated and immutable random string of alphanumeric characters. For example, the namespace of the acme-dev tenancy might be ansh81vru1zp. You can see the tenancy namespace on the Registry page of the Console.

Note that for some older tenancies, the tenancy namespace might be the same as the tenancy name. If that is the case, no action is required.

On or before September 30, 2019, if the tenancy namespace and the tenancy name are different, you must:

  • Start specifying the tenancy namespace when logging in to Oracle Cloud Infrastructure Registry, instead of the tenancy name.
  • Start specifying the tenancy namespace when pushing new images to Oracle Cloud Infrastructure Registry, instead of the tenancy name.
  • Migrate any existing images in Oracle Cloud Infrastructure Registry that include the tenancy name in the path.

The following workarounds and examples assume:

  • tenancy name is acme-dev
  • tenancy namespace is ansh81vru1zp
  • username is jdoe@acme.com

Workaround for logging into Oracle Cloud Infrastructure Registry: Previously, when you logged in to Oracle Cloud Infrastructure Registry and were prompted for a username, you could have entered it in the format <tenancy-name>/<username>.

For example:

$ docker login phx.ocir.io

Username: acme-dev/jdoe@acme.com
Password:

On or before September 30, 2019, you must start using the tenancy namespace instead of the tenancy name when logging in to Oracle Cloud Infrastructure Registry. When you are prompted for username, enter it in the format <tenancy-namespace>/<username>.

For example:

$ docker login phx.ocir.io

Username: ansh81vru1zp/jdoe@acme.com
Password:

Workaround for pushing new images to Oracle Cloud Infrastructure Registry: Previously, when you pushed a new image to Oracle Cloud Infrastructure Registry, you could have specified the tenancy name as part of the repository path in the docker push command. You could have entered the command in the format:

$ docker push <region-key>.ocir.io/<tenancy-name>/<image-name>:<tag>

For example:

$ docker push phx.ocir.io/acme-dev/helloworld:latest

On or before September 30, 2019, you must start using the tenancy namespace instead of the tenancy name in the docker push command when you push new images. Enter the command in the format:

$ docker push <region-key>.ocir.io/<tenancy-namespace>/<image-name>:<tag>

For example:

$ docker push phx.ocir.io/ansh81vru1zp/helloworld:latest

Workaround for existing images in Oracle Cloud Infrastructure Registry that include the tenancy name in the repository path: If you have previously pushed images to Oracle Cloud Infrastructure Registry, those existing images could have included the tenancy name as part of the repository path. For example, phx.ocir.io/acme-dev/helloworld:latest.

After September 30, 2019, you will not be able to perform operations on existing images in the Registry that include the tenancy name in the repository path.

So on or before September 30, 2019, for every existing image that contains the tenancy name in the repository path, you must replace tenancy name with tenancy namespace.

To replace tenancy name with tenancy namespace in the repository path of an existing image:

  1. Pull the image by entering:

    $ docker pull <region-key>.ocir.io/<tenancy-name>/<image-name>:<tag>

    For example:

    $ docker pull phx.ocir.io/acme-dev/helloworld:latest
  2. Use the docker tag command to change the repository path by entering:

    $ docker tag <region-key>.ocir.io/<tenancy-name>/<image-name>:<tag> <region-key>.ocir.io/<tenancy-namespace>/<image-name>:<tag>

    For example:

    $ docker tag phx.ocir.io/acme-dev/helloworld:latest phx.ocir.io/ansh81vru1zp/helloworld:latest
  3. Push the image with the new repository path to the Registry by entering:

    $ docker push <region-key>.ocir.io/<tenancy-namespace>/<image-name>:<tag>

    For example:

    $ docker push phx.ocir.io/ansh81vru1zp/helloworld:latest
  4. Repeat the preceding steps for every existing image that has the tenancy name in the repository path (a scripted sketch of these steps follows).
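
If you have many images to migrate, a minimal shell sketch such as the following can automate steps 1 through 3. The region key, tenancy name, tenancy namespace, and image list are assumptions taken from the examples above; substitute your own values:

#!/bin/sh
# Assumed example values from this section; replace with your own.
REGION_KEY=phx
TENANCY_NAME=acme-dev
TENANCY_NAMESPACE=ansh81vru1zp

# List every <image-name>:<tag> that still uses the tenancy name.
for IMAGE in helloworld:latest; do
  # Step 1: pull the image under the old (tenancy name) path.
  docker pull ${REGION_KEY}.ocir.io/${TENANCY_NAME}/${IMAGE}
  # Step 2: retag it under the new (tenancy namespace) path.
  docker tag ${REGION_KEY}.ocir.io/${TENANCY_NAME}/${IMAGE} \
    ${REGION_KEY}.ocir.io/${TENANCY_NAMESPACE}/${IMAGE}
  # Step 3: push the image with the new repository path.
  docker push ${REGION_KEY}.ocir.io/${TENANCY_NAMESPACE}/${IMAGE}
done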

Direct link to this issue: Use Tenancy Namespace instead of Tenancy Name in image tags and Docker login credentials on or before September 30, 2019

Resource Manager

RESOLVED: The type oci:core:image:id does not populate

Resource Discovery fails

Details: When using Resource Discovery to create a stack from a compartment, the work request fails.

Workaround: To work around this issue, make sure that the user who is creating the stack has permissions to inspect compartments for the tenancy. For the group that the user belongs to, create the following policy:

Allow group <group-name> to inspect compartments in tenancy
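
If you prefer to create the policy with the OCI CLI instead of the Console, a minimal sketch follows. The policy name, description, and group name Stack-Creators are hypothetical; supply your own tenancy OCID:

$ oci iam policy create \
    --compartment-id <tenancy-ocid> \
    --name resource-discovery-inspect \
    --description "Let stack creators inspect compartments for Resource Discovery" \
    --statements '["Allow group Stack-Creators to inspect compartments in tenancy"]'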

Direct link to this issue: Resource Discovery fails

Missing attributes in some discovered resources

Details: Attributes are missing from some supported resources captured using Resource Discovery.

The following list shows the missing fields for each service and resource type. Field names match the oci Terraform provider documentation.

  • Big Data - Instances: cluster_admin_password, cluster_public_key
  • Block Volume (core) - Volumes: volume_backup_id
  • Compute (core) - Images: instance_id, image_source_details
  • Compute (core) - Instance Configurations: instance_id, source
  • Compute (core) - Instance Console Connections: public_key
  • Compute (core) - Instances: hostname_label (deprecated), is_pv_encryption_in_transit_enabled, subnet_id (deprecated)
  • Compute (core) - Volume Attachments: use_chap
  • Container Engine for Kubernetes - Node Pools: node_source_details
  • Data Catalog - Connections: enc_properties
  • Database - Autonomous Container Databases: maintenance_window_details
  • Database - Autonomous Databases: admin_password, autonomous_database_backup_id, autonomous_database_id, clone_type, is_preview_version_with_service_terms_accepted, source, source_id, timestamp
  • Database - Autonomous Exadata Infrastructures: maintenance_window_details
  • Database - Databases: admin_password, backup_id, backup_tde_password, db_version, source
  • Database - Db Homes: admin_password, backup_id, backup_tde_password, source
  • Database - Db Systems: admin_password, backup_id, backup_tde_password, maintenance_window_details
  • IAM - Identity Providers: metadata
  • Load Balancing - Load Balancers: ip_mode
  • Marketplace - Accepted Agreements: signature
  • Networking (core) - Cross Connects: far_cross_connect_or_cross_connect_group_id, near_cross_connect_or_cross_connect_group_id
  • NoSQL Database Cloud - Indexes: is_if_not_exists
  • Object Storage - Objects: cache_control, content, content_disposition, content_encoding, content_language, source, source_uri_details
  • Web Application Acceleration and Security - Certificates: certificate_data, is_trust_verification_disabled, private_key_data
  • Web Application Acceleration and Security - Policies: are_redirects_challenged, is_case_sensitive, is_nat_enabled (human_interaction_challenge), is_nat_enabled (js_challenge)

Workaround: We are aware of the issue and working on a resolution.

Direct link to this issue: Missing attributes in some discovered resources

RESOLVED: Error running drift detection

Details: Error occurs when attempting to run drift detection on a stack.

Workaround: This issue is now resolved. Here is the workaround that was recommended before the issue's resolution:

To work around this issue, update your stack to include a region variable and then retry. Example region variable: {"region": "eu-frankfurt-1"}

Direct link to this issue: RESOLVED: Error running drift detection

Service Connector Hub

Currently, there are no known Service Connector Hub issues.

Storage Gateway

Exceptions to POSIX compliance

Details: The following file-to-object translations are not supported:

  • ACLs
  • Symlinks, hard links, named pipes, and special devices
  • Sticky bits

Workaround: If you need to copy special files to Object Storage, create a tar archive of the files.
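
For example, a minimal sketch that bundles special files into one archive and copies it to a mounted Storage Gateway file system; both paths are hypothetical:

# tar preserves symlinks, hard links, and special devices that the
# file-to-object translation does not support.
$ tar -cvf special-files.tar /path/to/special-files

# Copy the single archive file to the Storage Gateway mount point.
$ cp special-files.tar /mnt/storagegateway/mybucket/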

Direct link to this issue: Exceptions to POSIX compliance

df command cannot report accurate size and capacity

Details: If you run the df command on a filesystem from an NFS client, df reports a filesystem size of 0 (zero) bytes and a capacity of 8 EB (the maximum). Because Object Storage has no quotas and can store an unlimited amount of data, there is no way to report filesystem size. Because the Object Storage bucket does not report storage usage, there is no way to report capacity.

Workaround: You can run the du command to get usage; however, du is metadata intensive and takes longer to report usage. You can also list all objects in Object Storage and sum the object sizes to determine current Object Storage usage, although this method does not account for the data stored in the filesystem cache. Alternatively, you can explore out-of-band mechanisms that approximate storage usage.
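
For example, a minimal sketch of the first two approaches; the mount point and bucket name are hypothetical, and the CLI total ignores data still in the filesystem cache:

# Metadata intensive, but reports usage as seen through the filesystem.
$ du -sh /mnt/storagegateway/mybucket

# Sum the sizes (in bytes) of all objects in the bucket with the OCI CLI,
# using a JMESPath query over the listing.
$ oci os object list --bucket-name mybucket --all --query 'sum(data[*].size)'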

Direct link to this issue: df command cannot report accurate size and capacity

Tagging

Creating a tag default fails under specific conditions

Details: The creation of a tag default fails when both of the following conditions are met:

  • The tag namespace contains only one active tag key definition.
  • The tag namespace contains multiple retired tag key definitions.

Workaround: To work around this issue, you can create another tag key definition in the tag namespace.
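
For example, a minimal sketch using the OCI CLI; the key name extra-key and the namespace OCID placeholder are hypothetical:

$ oci iam tag create \
    --tag-namespace-id <tag-namespace-ocid> \
    --name extra-key \
    --description "Second active key definition to work around tag default creation failure"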

Direct link to this issue: Creating a tag default fails under specific conditions

Deleting a tag default fails when the tag is retired

Terraform state drift with tag defaults and tags for secondary resources

Details: In some Terraform builds, tags are not created for secondary resources, and default tags configured for a compartment are not automatically applied to resources. This can result in missing default tags and in secondary resources whose tags do not match those of their primary resources. In some cases, Terraform can enter an infinite loop.

Workaround: Evaluate your Terraform script and each compartment in the tenancy for potential tagging issues.

  1. Evaluate your Terraform script.

    • For any primary resource that contains tags, copy the free-form or defined tags to its secondary resources. For example, if your Terraform configuration has a primary resource such as a compute instance and a nested secondary resource such as an attached VNIC, copy the tags on the compute instance to the VNIC. VCNs and instance pools are also primary resources that can create secondary resources.

  2. Evaluate the compartment tree in the tenancy where you will create resources.

    1. Start at the root compartment and determine if that compartment has any default tags configured. Although tag values are optional, tag defaults can specify that a tag requires a value.

    2. Tag defaults are defined for a specific compartment; in the Console, you manage them on the Compartment Details page. (A CLI sketch for listing a compartment's tag defaults follows these steps.)


    3. If the compartment has any default tags for resources created in this compartment, apply the tags and required tag values that would be created by these defaults to all the resources you will create with your Terraform script. Because of tag inheritance, the default tag is applied to all resources that get created in the compartment, including child compartments and the resources created in the child compartments. See Tag Inheritance.
    4. Repeat these steps for all child compartments, updating your Terraform script to account for any default tags.
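
As referenced in step 2, a minimal sketch for checking a compartment's tag defaults with the OCI CLI, and for confirming afterwards that the updated Terraform script produces no further drift; the compartment OCID is a placeholder:

# List the tag defaults that apply to resources created in the compartment.
$ oci iam tag-default list --compartment-id <compartment-ocid>

# After updating the script, exit code 2 from -detailed-exitcode means
# changes (drift) are still pending; 0 means the state matches.
$ terraform plan -detailed-exitcode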

Direct link to this issue: Terraform state drift with tag defaults and tags for secondary resources

Traffic Management Steering Policies

Currently, there are no known Traffic Management Steering Policies issues.

Vault

Currently, there are no known Vault service issues.

Web Application Firewall (WAF)

Global DNS change will cause service disruption if new subnets are not whitelisted

Details: Global DNS changes will be made for all Oracle Web Application Firewall (WAF) customers beginning in December 2019. Customers that lock down their origins with explicit IP whitelisting and do not whitelist the new subnets will experience downtime and service degradation.

Workaround: (Action Required) Customers must whitelist the new subnets to avoid service disruption. For the API documentation, see ListEdgeSubnets.
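
For example, a minimal sketch that retrieves the current edge subnets with the OCI CLI; this assumes the waas edge-subnet list command available in recent CLI versions:

$ oci waas edge-subnet list --all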

OCI WAF Expansion Whitelist

130.35.0.0/20
130.35.128.0/20
130.35.240.0/20
138.1.32.0/21
138.1.128.0/19
147.154.96.0/19
192.29.96.0/20
130.35.16.0/20
130.35.48.0/20
130.35.64.0/19
130.35.96.0/20
130.35.120.0/21
130.35.144.0/20
130.35.176.0/20
130.35.192.0/19
130.35.224.0/22
130.35.232.0/21
138.1.48.0/21
147.154.0.0/18
147.154.64.0/20
147.154.80.0/21
130.35.112.0/22
138.1.16.0/20
138.1.80.0/20
138.1.208.0/20
138.1.224.0/19
147.154.224.0/19
138.1.0.0/20
138.1.40.0/21
138.1.64.0/20
138.1.96.0/21
138.1.104.0/22
138.1.160.0/19
138.1.192.0/20
147.154.128.0/18
147.154.192.0/20
147.154.208.0/21
192.29.0.0/20
192.29.64.0/20
192.29.128.0/21
192.29.144.0/21
192.29.16.0/21
192.29.32.0/21
192.29.48.0/21
192.29.56.0/21

Direct link to this issue: Global DNS change will cause service disruption if new subnets are not whitelisted