Details: In Synthetic Monitoring, the Browser and Scripted Browser monitors might fail to run against applications that use frames.
Workaround: We are aware of the issue and working on a resolution. For Scripted Browser monitors, you can work around this issue by replacing index=<frame-index> with either id=<id-of-frame> or name=<name-of-frame> in the .side script.
For example, a selectFrame target of index=<frame-index> in the original script would be changed to id=<id-of-frame> or name=<name-of-frame>, as sketched below.
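A hedged illustration (the file name, frame index, and frame id below are placeholders, not values from a real script):
# Replace an index-based frame target with an id-based target in the exported .side script.
sed -i 's/"target": "index=0"/"target": "id=content-frame"/' my-monitor.side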
Details: Authorization policies based on the apm-domains resource tags do not work for the Trace Explorer and Synthetic Monitoring APIs, causing authorization failures.
Workaround: We are aware of the issue and working on a resolution.
Details: When you try to enable cross-region replication for a volume configured
to use a Vault encryption key, the following error message occurs: Edit Volume
Error: You cannot enable cross-region replication for volume
<volume_ID> as it uses a Vault encryption
key.
Workaround: We're working on a resolution.
Cross-region replication is not supported for volumes encrypted with a customer-managed
key. As a workaround to enable replication, unassign the Vault encryption key from the
volume. In this scenario, the volume is encrypted with an Oracle-managed key.
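If you use the CLI, a minimal sketch of unassigning the key (the volume OCID is a placeholder; confirm the command against your installed CLI version):
# Remove the assigned Vault (customer-managed) encryption key from the volume;
# afterward the volume is encrypted with an Oracle-managed key.
oci bv volume-kms-key remove --volume-id ocid1.volume.oc1..<unique_ID>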
Details: To achieve the optimal performance level for volumes configured for ultra high performance, the volume attachment must be multipath-enabled. Multipath-enabled attachments to VM instances are only supported for instances based on shapes with 16 or greater OCPUs.
If you have an instance with fewer than 16 OCPUs, you can resize it so that it has 16 or
more OCPUs to support multipath-enabled attachments. This step will not work for
instances where the original number of OCPUs was less than 8 and the volume attachment
is paravirtualized. In this scenario, after the volume is detached and reattached, the
volume attachment will still not be multipath-enabled even though the instance now
supports multipath-enabled attachments.
Workaround: As a workaround, we recommend that you create a new instance based on a shape with 16 or more OCPUs, and then attach the volume to the new instance.
Details: When you attempt to attach the maximum number of block volumes to a smaller VM.Standard.A1.Flex instance, in some cases, the volumes might fail to attach. This happens because of limitations with the underlying physical host configuration.
Workaround: We're working on a resolution. In the meantime, we recommend that you resize the VM to a larger size, and then try attaching the volumes again.
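For a flexible shape such as VM.Standard.A1.Flex, a hedged sketch of the resize with the CLI (the instance OCID, OCPU count, and memory value are placeholders):
# Increase the size of the flexible-shape instance before retrying the attachments.
oci compute instance update --instance-id ocid1.instance.oc1..<unique_ID> --shape-config '{"ocpus": 4, "memoryInGBs": 24}'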
Details: When you schedule volume and volume group backups using a backup policy
that is enabled for cross-region copy for volumes that are encrypted using Vault service
encryption keys, the encryption keys are not copied with the volume backup to the
destination region. The volume backup copies in the destination region are instead
encrypted using Oracle-provided keys.
Workaround: We're working on a resolution. As a workaround, you can copy volume backups and volume group backups across regions yourself, either manually or using a script, and specify the key management key ID in the target region for the copy operation. For more information about manual cross-region copy, see Copying a Volume Backup Between Regions.
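If you script the copy with the CLI, a hedged sketch for a single volume backup (the OCIDs and destination region are placeholders):
# Copy a volume backup to another region, encrypting the copy with a Vault key
# that exists in the destination region.
oci bv backup copy --volume-backup-id ocid1.volumebackup.oc1..<unique_ID> --destination-region us-ashburn-1 --kms-key-id ocid1.key.oc1..<unique_ID>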
Details: When you attach a Windows boot volume as a data volume to another instance and then try to connect to the volume using the steps described in Connecting to a Block Volume, the volume fails to attach and you might encounter the following error:
Connect-IscsiTarget : The target has already been logged in via an iSCSI session.
Workaround: You need to append the following to the Connect-IscsiTarget command copied from the Console:
Details: When you try to access the Console using
Firefox, the Console page never loads in the browser.
This problem is likely caused by a corrupted Firefox user profile.
Workaround: Create a new Firefox user profile as follows:
Ensure that you are on the latest version of Firefox. If not, update to the latest version.
Details: Existing PDBs do not appear in a newly created database and it may take
up to a few hours before they appear in the Console. This includes
the default PDB for a new database and existing PDBs for cloned or restored databases.
In the case of an in-place restore to an older version, the PDB list is updated in the same way and may be subject to a similar delay.
Details: Using the Database Service API to migrate a file-based TDE wallet to a
customer-managed key-based TDE wallet on Oracle Database 12c release 1 (12.1.0.2)
fails with the following error:
[FATAL] [DBAAS-11014] - Required patches (30128047) are not present in the Oracle home <ORACLE_HOME> ACTION: Apply the required patches (30128047) and re-try the operation
Workaround: Use the DBAASCLI utility with the --skip_patch_check
true flag to skip the validation of the patch for bug 30128047. Ensure that
you have applied the patch for bug 31527103 in the Oracle home and then run the
following dbaascli
command:
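Before running the dbaascli command, you can confirm that the prerequisite patch is present in the Oracle home; a minimal check using the OPatch utility shipped with the home, run as the oracle user:
# List installed patches and confirm that patch 31527103 appears.
$ORACLE_HOME/OPatch/opatch lspatches | grep 31527103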
Details: Using the Database Service API to migrate a customer-managed key-based
TDE wallet to a file-based TDE wallet on Oracle Database 12c release 1 (12.1.0.2)
fails with the following error:
[FATAL] [DBAAS-11014] - Required patches (30128047) are not present in the Oracle home <ORACLE_HOME> ACTION: Apply the required patches (30128047) and re-try the operation
Workaround: Use the DBAASCLI utility with the --skip_patch_check
true flag to skip the validation of the patch for bug 30128047. Ensure that
you have applied the patch for bug 29667994 in the Oracle home and then run the following dbaascli command:
Details: Using the Database Service API to migrate a file-based TDE wallet to
a customer-managed key-based TDE wallet on Oracle Database 12c release 2 (12.2.0.1)
fails with the following error:
[FATAL] [DBAAS-11014] - Required patches (30128047) are not present in the Oracle home <ORACLE_HOME> ACTION: Apply the required patches (30128047) and re-try the operation
Workaround: Migrate a file-based TDE wallet to a customer-managed key-based TDE
wallet, as follows:
Determine whether the database has encrypted UNDO or TEMP tablespaces in any of the
pluggable databases or in CDB$ROOT, as follows:
Run the following query from CDB$ROOT to list all encrypted tablespaces contained within
all pluggable databases:
SQL> select tstab.con_id, tstab.name from v$tablespace tstab, v$encrypted_tablespaces enctstab where tstab.ts# = enctstab.ts# and encryptedts = 'YES';
In the NAME column of the query result, search for the names of UNDO and TEMP
tablespaces. If there are encrypted UNDO or TEMP tablespaces, proceed to the next step.
Unencrypt UNDO or TEMP tablespaces, as follows:
If an UNDO tablespace is encrypted:
Unencrypt the existing UNDO tablespaces, as follows:
SQL> alter tablespace <undo_tablespace_name> encryption online decrypt;
Repeat
this procedure for all encrypted UNDO tablespaces.
If a TEMP tablespace is encrypted:
Check the default TEMP tablespace, as follows:
SQL> select property_value from database_properties where property_name = 'DEFAULT_TEMP_TABLESPACE';
If
the default TEMP tablespace is not encrypted but other TEMP
tablespaces are encrypted, then drop the other TEMP tablespaces, as
follows:
SQL> drop tablespace <temp_tablespace_name>;
Skip
the remainder of the steps in this procedure.
If the default
TEMP tablespace is encrypted, then proceed with the remaining steps
to create and set an unencrypted default TEMP tablespace.
Set the encrypt_new_tablespaces parameter to DDL, as
follows:
SQL> alter system set "encrypt_new_tablespaces" = DDL scope = memory;
Create a TEMP tablespace with the specifications of the current TEMP
tablespace, as
follows:
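A minimal sketch, assuming Oracle Managed Files; the tablespace name and size are placeholders, and because encrypt_new_tablespaces is set to DDL and no ENCRYPTION clause is used, the new tablespace is created unencrypted:
SQL> create temporary tablespace TEMP2 tempfile size 10G;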
Set the new TEMP tablespace as the default TEMP tablespace for the
database, as
follows:
SQL> alter database default temporary tablespace <temp_tablespace_name>;
Drop existing TEMP tablespaces, as
follows:
SQL> drop tablespace <temp_tablespace_name>;
Repeat this procedure for all encrypted TEMP tablespaces.
The
database is now running with default UNDO and TEMP tablespaces that are not
encrypted and any older TEMP and UNDO tablespaces are also decrypted.
Set
encrypt_new_tablespaces to its original value, as
follows:
SQL> alter system set "encrypt_new_tablespaces" = cloud_only;
Proceed
with keystore migration to customer-managed keys.
Once you confirm that there are no UNDO or TEMP tablespaces encrypted in any of the
pluggable databases or in CDB$ROOT, use the DBAASCLI utility with the
--skip_patch_check true flag to skip the validation of the
patch for bug 30128047. Ensure that you have applied the patch for bug 31527103 in
the Oracle home and then run the following dbaascli command:
Details: Using the Database Service API to migrate a customer-managed key-based
TDE wallet to a file-based TDE wallet on Oracle Database 12c release 2 (12.2.0.1)
fails with the following error:
[FATAL] [DBAAS-11014] - Required patches (30128047) are not present in the Oracle home <ORACLE_HOME> ACTION: Apply the required patches (30128047) and re-try the operation
Workaround: Migrate a customer-managed key-based TDE wallet to a file-based TDE
wallet, as follows:
Determine whether the database has encrypted UNDO or TEMP tablespaces in any of the
pluggable databases or in CDB$ROOT, as follows:
Run the following query from CDB$ROOT to list all encrypted tablespaces contained within
all pluggable databases:
SQL> select tstab.con_id, tstab.name from v$tablespace tstab, v$encrypted_tablespaces enctstab where tstab.ts# = enctstab.ts# and encryptedts = 'YES';
In the NAME column of the query result, search for the names of UNDO and TEMP
tablespaces. If there are encrypted UNDO or TEMP tablespaces, proceed to the next step.
Unencrypt UNDO or TEMP tablespaces, as follows:
If an UNDO tablespace is encrypted:
Unencrypt the existing UNDO tablespaces, as follows:
SQL> alter tablespace <undo_tablespace_name> encryption online decrypt;
Repeat
this procedure for all encrypted UNDO tablespaces.
If a TEMP tablespace is encrypted:
Check the default TEMP tablespace, as follows:
SQL> select property_value from database_properties where property_name = 'DEFAULT_TEMP_TABLESPACE';
If
the default TEMP tablespace is not encrypted but other TEMP
tablespaces are encrypted, then drop the other TEMP tablespaces, as
follows:
SQL> drop tablespace <temp_tablespace_name>;
Skip
the remainder of the steps in this procedure.
If the default
TEMP tablespace is encrypted, then proceed with the remaining steps
to create and set an unencrypted default TEMP tablespace.
Set the encrypt_new_tablespaces parameter to DDL, as
follows:
SQL> alter system set "encrypt_new_tablespaces" = DDL scope = memory;
Create a TEMP tablespace with the specifications of the current TEMP
tablespace, as
follows:
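A minimal sketch, assuming Oracle Managed Files; the tablespace name and size are placeholders, and because encrypt_new_tablespaces is set to DDL and no ENCRYPTION clause is used, the new tablespace is created unencrypted:
SQL> create temporary tablespace TEMP2 tempfile size 10G;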
Set the new TEMP tablespace as the default TEMP tablespace for the
database, as
follows:
SQL> alter database default temporary tablespace <temp_tablespace_name>;
Drop existing TEMP tablespaces, as
follows:
SQL> drop tablespace <temp_tablespace_name>;
Repeat this procedure for all encrypted TEMP tablespaces.
The
database is now running with default UNDO and TEMP tablespaces that are not
encrypted and any older TEMP and UNDO tablespaces are also decrypted.
Set
encrypt_new_tablespaces to its original value, as
follows:
SQL> alter system set "encrypt_new_tablespaces" = cloud_only;
Proceed with the keystore migration to a file-based TDE wallet.
Once you confirm that there are no UNDO or TEMP tablespaces encrypted in any of the
pluggable databases or in CDB$ROOT, use the DBAASCLI utility with the
--skip_patch_check true flag to skip the validation of the
patch for bug 30128047. Ensure that you have applied the patch for bug 29667994 in
the Oracle home and then run the following dbaascli command:
Details: When you change the license type of your Database or DB system from BYOL to license included, or the other way around, you are billed for both types of licenses for the first hour. After that, you are billed according to your updated license type.
Details: If you configure your VCN with a service gateway, the private subnet blocks access to the YUM repositories needed to update the OS. This issue affects all types of DB systems.
Workaround: This issue is now resolved. Here is the workaround that was recommended before the issue's resolution:
Details: Unmanaged backups to Object Storage using
the database CLI (dbcli) or RMAN fail with the following errors:
-> Oracle Error Codes found:
-> ORA-19554: error allocating device, device type: SBT_TAPE, device name:
-> ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
-> KBHS-00712: ORA-29024 received from local HTTP service
-> ORA-27023: skgfqsbi: media manager protocol error
In response to policies implemented by two common web browsers regarding Symantec
certificates, Oracle recently changed the certificate authority used for Oracle Cloud Infrastructure. The resulting change in SSL certificates
can cause backups to Object Storage to fail if the
Oracle Database Cloud Backup Module still points to the old certificate.
Workaround for dbcli: Check the log files for the errors listed and, if found, update the backup module.
Review the RMAN backup log files for the errors listed above:
Determine the ID of the failed backup job.
dbcli list-jobs
In this example output, the failed backup job ID is "f59d8470-6c37-49e4-a372-4788c984ea59".
[root@<node name> ~]# dbcli list-jobs
ID Description Created Status
---------------------------------------- --------------------------------------------------------------------------- ----------------------------------- ----------
cbe852de-c0f3-4807-85e8-7523647ec78c Authentication key update for DCS_ADMIN March 30, 2018 4:10:21 AM UTC Success
db83fdc4-5245-4307-88a7-178f8a0efa48 Provisioning service creation March 30, 2018 4:12:01 AM UTC Success
c1511a7a-3c2e-4e42-9520-f156b1b4cf0e SSH keys update March 30, 2018 4:48:24 AM UTC Success
22adf146-9779-4a2c-8682-7fd04d7520b2 SSH key delete March 30, 2018 4:50:02 AM UTC Success
6f2be750-9823-4ed5-b5ff-8e49f136dd22 create object store:bV0wqIaoLA4xLT4dGjOu March 30, 2018 5:33:38 AM UTC Success
0716f464-1a10-40df-a303-cadee0302b1b create backup config:bV0wqIaoLA4xLT4dGjOu_BC March 30, 2018 5:33:49 AM UTC Success
e08b21c3-cd09-4e3a-944c-d1da96cb21d8 update database : hfdb1 March 30, 2018 5:34:04 AM UTC Success
1c3d7c58-79c3-4039-8f48-787057ce7c6e Create Longterm Backup with TAG-DBTLongterm<identity number> for Db:<dbname> March 30, 2018 5:37:11 AM UTC Success
f59d8470-6c37-49e4-a372-4788c984ea59 Create Longterm Backup with TAG-DBTLongterm<identity number> for Db:<dbname> March 30, 2018 5:43:45 AM UTC Failure
Use the ID of the failed job to obtain the location of the log file to review.
dbcli describe-job -i <failed_job_ID>
Relevant output from the describe-job command should look like this:
Message: DCS-10001:Internal error encountered: Failed to run Rman statement.
Refer log in Node <node_name>: /opt/oracle/dcs/log/<node_name>/rman/bkup/<db_unique_name>/rman_backup/<date>/rman_backup_<date>.log.
Update the Oracle Database Cloud Backup Module:
Determine the Swift object store ID and user the database is using for backups.
Run the dbcli list-databases command to determine the ID of the database.
Use the database ID to determine the backup configuration ID (backupConfigId).
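A hedged sketch of the two lookups (the IDs are placeholders); the describe-database output should include the backup configuration ID:
[root@<node name> ~]# dbcli list-databases
[root@<node name> ~]# dbcli describe-database -i <database_ID>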
Workaround for RMAN: Check the RMAN log files for the error messages listed. If found, log on to the host as the oracle user, and use your Swift credentials to reinstall the backup module.
Details: The SDKs released on October 18, 2018 introduce code-breaking changes to the database size and the database edition attributes in the database backup APIs.
Workaround: Refer to the following language-specific documentation for more details about the breaking changes, and update your existing code as applicable:
The command should return either an HTTP 200 or an HTTP 204 No Content success status
response code. Any other status code indicates a problem connecting to Object Storage.
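For reference, a hedged example of such a connectivity check against the Object Storage Swift endpoint (the region, namespace, user name, and auth token are placeholders):
# Prints only the HTTP status code; expect 200 or 204.
curl -s -o /dev/null -w "%{http_code}\n" -u '<user_name>:<auth_token>' https://swiftobjectstorage.<region>.oraclecloud.com/v1/<object_storage_namespace>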
Note that <target_dir> is the directory to which you extracted opc_installer.zip in step 3.
This command might take a few minutes to complete because it downloads libopc.so and other files. Once the command completes, you should see several files (including libopc.so) in your target directory.
Change directory to your target directory, and copy the libopc.so and opc_install.jar files into the /opt/oracle/oak/pkgrepos/oss/odbcs directory.
(Optional) Delete the temporary user and the target directory you used to install the backup module.
After you complete the procedure, contact Oracle Support or your tenant administrator for further instructions. You must provide the OCID of the DB system for which you would like to enable backups.
Details: Memory limitations of host machines running the VM.Standard1.1 shape can cause failures for automatic database backup jobs managed by Oracle Cloud Infrastructure (jobs managed by using either the Console or the API). You can change the systems' memory parameters to resolve this issue.
Workaround: Change the systems' memory parameters as follows:
Switch to the oracle user in the operating system.
[opc@hostname ~]$ sudo su - oracle
Set the environment variable to log in to the database instance. For example:
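A minimal sketch, assuming the standard oraenv script and a placeholder SID, followed by a SQL*Plus connection for the statements below:
[oracle@hostname ~]$ . oraenv
ORACLE_SID = [oracle] ? <database_SID>
[oracle@hostname ~]$ sqlplus / as sysdba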
SQL> ALTER SYSTEM SET SGA_TARGET = 1228M scope=spfile;
SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 1228M;
SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 2457M;
SQL> exit
Details: On High Performance and Extreme Performance DB systems, Data Pump utility operations that use compression and/or parallelism might fail and return the error ORA-00439: feature not enabled. This issue affects database versions 12.1.0.2.161018 and 12.1.0.2.170117.
Workaround: Apply patch 25579568 or 25891266 to Oracle Database homes for database versions 12.1.0.2.161018 or 12.1.0.2.170117, respectively. Alternatively, use the Console to apply the April 2017 patch to the DB system and database home.
Note
Determining the Version of a Database in a Database Home
To determine the version of a database in a database home, run either $ORACLE_HOME/OPatch/opatch lspatches as the oracle user or dbcli list-dbhomes as the root user.
Details: You might get a "Secure Connection Failed" error message when you try to connect to the EM Express console from your 1-node DB system because the correct permissions were not applied automatically.
Workaround: Add read permissions for the asmadmin group on the wallet directory of the DB system, and then retry the connection:
SSH to the DB system host, log in as opc, and then switch to the grid user.
[opc@dbsysHost ~]$ sudo su - grid
[grid@dbsysHost ~]$ . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base has been set to /u01/app/grid
Get the location of the wallet directory, shown in the following command output.
[grid@dbsysHost ~]$ lsnrctl status | grep xdb_wallet
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=dbsysHost.sub04061528182.dbsysapril6.oraclevcn.com)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Return to the opc user, switch to the oracle user, and change to the wallet directory.
[opc@dbsysHost ~]$ sudo su - oracle
[oracle@dbsysHost ~]$ cd /u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet
List the directory contents and note the permissions.
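For example (the file names are illustrative; apply the change to the wallet files actually present), grant group read access where it is missing:
[oracle@dbsysHost xdb_wallet]$ ls -ltr
[oracle@dbsysHost xdb_wallet]$ chmod 640 ewallet.p12 cwallet.sso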
Details: Backup operations to Object Storage using
the Exadata backup utility (bkup_api) or RMAN fail with the following errors:
* DBaaS Error trace:
-> API::ERROR -> KBHS-00715: HTTP error occurred 'oracle-error'
-> API::ERROR -> ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
-> API::ERROR -> ORA-19554: error allocating device, device type: SBT_TAPE, device name:
-> API::ERROR -> ORA-27023: skgfqsbi: media manager protocol error
-> API::ERROR Unable to verify the backup pieces
-> Oracle Error Codes found:
-> ORA-19554: error allocating device, device type: SBT_TAPE, device name:
-> ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
-> KBHS-00712: ORA-29024 received from local HTTP service
-> ORA-27023: skgfqsbi: media manager protocol error
In response to policies implemented by two common web browsers regarding Symantec
certificates, Oracle recently changed the certificate authority used for Oracle Cloud Infrastructure. The resulting change in SSL certificates
can cause backups to Object Storage to fail if the
Oracle Database Cloud Backup Module still points to the old certificate.
Workaround for RMAN: Check the RMAN log files for the error messages listed. If found, log on to your host as the oracle user, and reinstall the backup module using your Swift credentials.
Details: With the release of the shared Database Home feature for Exadata DB systems, the Console now also synchronizes and displays information about databases that are created and managed by using the dbaasapi and dbaascli utilities. However, databases with Data Guard configured do not display correct information in the Console under the following conditions:
If Data Guard was enabled by using the Console, and then a change is made to the primary or standby database by using dbaascli (such as moving the database to a different home), the result is not reflected in the Console.
If Data Guard was configured manually, the Console does not show a Data Guard association between the two databases.
Workaround: We are aware of the issue and working on a resolution. In the meantime, Oracle recommends that you manage your Data Guard enabled databases by using either only the Console or only command line utilities.
Details: This is a clusterware issue that occurs only when the Oracle GI version is 12.2.0.1 without any bundle patch. The problem is caused by corruption of a voting disk after you offline then online the disk.
Workaround: Determine the version of the GI, and whether the voting disk is corrupted. Repair the disk, if applicable, and then apply the latest GI bundle.
Verify the GI version is 12.2.0.1 without any bundle patch applied:
[root@rmstest-udaau1 ~]# su - grid
[grid@rmstest-udaau1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@rmstest-udaau1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/12.2.0.1/grid
Central Inventory : /u01/app/oraInventory
from : /u01/app/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.6
OUI version : 12.2.0.1.4
Log file location : /u01/app/12.2.0.1/grid/cfgtoollogs/opatch/opatch2018-01-15_22-11-10PM_1.log
Lsinventory Output file location : /u01/app/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-01-15_22-11-10PM.txt
--------------------------------------------------------------------------------
Local Machine Information::
Hostname: rmstest-udaau1.exaagclient.sretest.oraclevcn.com
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c 12.2.0.1.0
There are 1 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
--------------------------------------------------------------------------------
OPatch succeeded.
Check the /u01/app/grid/diag/crs/<hostname>/crs/trace/ocssd.trc file for evidence that the GI failed to start due to voting disk corruption:
ocssd.trc
2017-01-17 23:45:11.955 : CSSD:3807860480: clssnmvDiskCheck:: configured
Sites = 1, Incative sites = 1, Mininum Sites required = 1
2017-01-17 23:45:11.955 : CSSD:3807860480: (:CSSNM00018:)clssnmvDiskCheck:
Aborting, 2 of 5 configured voting disks available, need 3
......
.
2017-01-17 23:45:11.956 : CSSD:3807860480: clssnmCheckForNetworkFailure:
skipping 31 defined 0
2017-01-17 23:45:11.956 : CSSD:3807860480: clssnmRemoveNodeInTerm: node 4,
slcc05db08 terminated. Removing from its own member and connected bitmaps
2017-01-17 23:45:11.956 : CSSD:3807860480:
###################################
2017-01-17 23:45:11.956 : CSSD:3807860480: clssscExit: CSSD aborting from
thread clssnmvDiskPingMonitorThread
2017-01-17 23:45:11.956 : CSSD:3807860480:
###################################
2017-01-17 23:45:11.956 : CSSD:3807860480: (:CSSSC00012:)clssscExit: A
fatal error occurred and the CSS daemon is terminating abnormally
------------
2017-01-19 19:00:32.689 : CSSD:3469420288: clssnmFindVF: Duplicate voting disk found in the queue of previously configured disks
queued(o/192.168.10.18/PCW_CD_02_slcc05cel10|[66223efc-29254fbb-bf901601-21009
cbd]),
found(o/192.168.10.18/PCW_CD_02_slcc05cel10|[66223efc-29254fbb-bf901601-21009c
bd]), is not corrupted
2017-01-19 19:01:06.467 : CSSD:3452057344: clssnmvVoteDiskValidation:
Voting disk(o/192.168.10.19/PCW_CD_02_slcc05cel11) is corrupted
You can also use SQL*Plus to confirm that the voting disks are corrupted:
Log in as the grid user, and set the environment to ASM.
[root@rmstest-udaau1 ~]# su - grid
[grid@rmstest-udaau1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base has been set to /u01/app/grid
Log in to SQL*Plus as SYSASM.
$ORACLE_HOME/bin/sqlplus / as sysasm
Run the following two queries:
SQL> select name, voting_file from v$asm_disk where VOTING_FILE='Y' and group_number !=0;
SQL> select CC.name, count(*) from x$kfdat AA JOIN (select disk_number, name from v$asm_disk where VOTING_FILE='Y' and group_number !=0) CC ON CC.disk_number = AA.NUMBER_KFDAT where AA.FNUM_KFDAT= 1048572 group by CC.name;
If the system is healthy, the results should look like the following example.
Query 1 Results
NAME VOTING_FILE
------------------------------ ---------------
DBFSC3_CD_02_SLCLCX0788 Y
DBFSC3_CD_09_SLCLCX0787 Y
DBFSC3_CD_04_SLCLCX0786 Y
Query 2 Results
NAME COUNT(*)
------------------------------ ---------------
DBFSC3_CD_02_SLCLCX0788 8
DBFSC3_CD_09_SLCLCX0787 8
DBFSC3_CD_04_SLCLCX0786 8
In a healthy system, every voting disk returned in the first query should also be returned in the second query and the counts for all the disks should be non-zero. Otherwise, one or more of your voting disks are corrupted.
If a voting disk is corrupted, offline the grid disk that contains the voting disk. The cells will automatically move the bad voting disk to the other grid disk and online that voting disk.
The following command offlines a grid disk named DATAC01_CD_05_SCAQAE08CELADM13.
SQL> alter diskgroup DATAC01 offline disk DATAC01_CD_05_SCAQAE08CELADM13;
Diskgroup altered.
Wait 30 seconds and then rerun the two queries in step 3c to verify that the voting disk migrated to the new grid disk and that it is healthy.
Verify the grid disk you offlined is now online:
SQL> select name, mode_status, voting_file from v$asm_disk where name='DATAC01_CD_05_SCAQAE08CELADM13';
The mode_status should be ONLINE, and the voting_file should NOT be Y.
Repeat steps 4a through 4c for each remaining grid disk that contains a corrupt voting disk.
Note
If the CRS does not start because of the voting disk corruption, start it using Exclusive mode before you execute the command in step 4.
crsctl start crs -excl
If you are using Oracle GI version 12.2.0.1 without any bundle patch, you must upgrade the GI version to the latest GI bundle, whether or not a voting disk was corrupted.
Details: Exadata DB systems launched on June 15, 2018 or later automatically
include the ability to create, list, and delete databases by using the Console, API, or Oracle Cloud Infrastructure CLI. However, systems provisioned before
this date require extra steps to enable this functionality.
Attempts to use this functionality without the extra steps result in the following error messages:
On creating a database - "Create Database is not supported on this Exadata DB
system. To enable this feature, please contact Oracle Support."
On terminating a database - "DeleteDbHome is not supported on this Exadata DB system. To enable this feature, please contact Oracle Support."
Workaround: You need to install the Exadata agent on each node of the Exadata DB system.
First, create a service request for assistance from Oracle Support Services. Oracle
Support will respond by providing you with a preauthenticated URL for an Oracle Cloud Infrastructure
Object Storage location where you can obtain the
agent.
Ensure that the system is configured to access Oracle Cloud Infrastructure
Object Storage with the required security lists for
the region in which the DB system was created. For more information about
connectivity to Oracle Cloud Infrastructure
Object Storage, see Prerequisites for Backups on Exadata Cloud
Service.
To install the Exadata agent:
Log on to the node as root.
Run the following commands to install the agent:
[root@<node_n>~]# cd /tmp
[root@<node_n>~]# wget https://objectstorage.<region_name>.oraclecloud.com/p/1q523eOkAOYBJVP9RYji3V5APlMFHIv1_6bAMmxsS4E/n/dbaaspatchstore/b/dbaasexadatacustomersea1/o/backfill_agent_package_iwwva.tar
[root@<node_n>~]# tar -xvf /tmp/backfill_agent_package_*.tar -C /tmp
[root@<node_n>~]# rpm -ivh /tmp/dbcs-agent-2.5-3.x86_64.rpm
Example output:
[root@<node_n>~]# rpm -ivh dbcs-agent-2.5-3.x86_64.rpm
Preparing... ########################################### [100%]
Checking for dbaastools_exa rpm on the system
Current dbaastools_exa version = dbaastools_exa-1.0-1+18.1.4.1.0_180725.0000.x86_64
dbaastools_exa version dbaastools_exa-1.0-1+18.1.4.1.0_180725.0000.x86_64 is good. Continuing with dbcs-agent installation
1:dbcs-agent ########################################### [100%]
initctl: Unknown instance:
initctl: Unknown instance:
initzookeeper start/running, process 85821
initdbcsagent stop/waiting
initdbcsadmin stop/waiting
initdbcsagent start/running, process 85833
initdbcsadmin start/running, process 85836
Confirm that the agent is installed and running.
[root@<node_n>~]# rpm -qa | grep dbcs-agent
dbcs-agent-2.5-0.x86_64
[root@<node_n>~]# initctl status initdbcsagent
initdbcsagent start/running, process 97832
Repeat steps 1 through 3 on the remaining nodes.
After the agent is installed on all nodes, allow up to 30 minutes for Oracle to complete
additional workflow tasks such as upgrading the agent to the latest version, rotating
the agent credentials, and so on. When the process is complete, you should be able to
use the Exadata managed features in the Console, API, or
Oracle Cloud Infrastructure CLI.
Details: The patching configuration file (/var/opt/oracle/exapatch/exadbcpatch.cfg) points to the object store of the us-phoenix-1 region, even if the Exadata DB system is deployed in another region.
This problem occurs if the release version of the database tooling package (dbaastools_exa) is 17430 or lower.
Workaround: Follow the instructions in Updating Tooling on an Exadata Cloud Service
Instance to confirm that the release version of the tooling package is 17430
or lower, and then update it to the latest version.
Details: A change in how Oracle Linux 7 handles temporary files can result in the removal of required socket files from the /var/tmp/.oracle directory. This issue affects only Exadata DB systems running the version 19.1.2 operating system image.
Workaround: Run sudo /usr/local/bin/imageinfo as the opc user to determine your operating system image version. If your image version is 19.1.2.0.0.190306, follow the instructions in Doc ID 2498572.1 to fix the issue.
If you are scaling either regular data storage or recovery area (RECO) storage from a
value less than 10,240 GB (10 TB) to a value exceeding 10,240 GB, perform the scaling in
two operations. First, scale the system to 10,240 GB. After this first scaling operation
is complete and the system is in the "available" state, perform a second scaling
operation, specifying your target storage value above 10,240 GB. Attempting to scale
from a value less than 10,240 GB to a value higher than 10,240 GB in a single operation
can lead to a failure of the scaling operation. For instructions on scaling, see Scale the DB System.
Details: When scaling a virtual machine DB system to use a larger system shape, the scaling operation fails if a DB_Cache_nX parameter is not set to 0 (zero).
Workaround: When scaling a virtual DB system, ensure that all DB_Cache_nX parameters (for example, DB_nK_CACHE_SIZE) are set to 0.
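For example, a hedged sketch of zeroing one of these parameters before scaling (the 16K cache is shown; repeat for each nK cache size that is set):
SQL> alter system set db_16k_cache_size = 0 scope=both;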
Details: If you use volume group backups when performing DR operations for compute and storage across different ADs within the same region, back and forth DR transitions will cause the compute and associated block storage (which uses volume group backups) to end up in a different AD each time.
Workaround: This issue does not affect block storage that is replicated using volume group replication.
Details: Auto-tune performance settings for block storage volumes are not carried over during DR operations.
Workaround: For block storage volumes that have auto-tuned performance enabled, you must re-enable these settings after Full Stack DR transitions the volumes to another region.
Details: If you perform a failover operation immediately after modifying a Full Stack DR-protected resource, then the resource recovery may fail, or the resource may not be recovered properly. For example, if you change the replication target or other properties for a volume group that you added to a DR protection group, and the primary region suffers an immediate outage thereafter, Full Stack DR may not detect the changes you made to the volume group replication settings, and this will affect recovery of that volume group.
Workaround: Perform a switchover precheck immediately after making any changes to any resources under DR protection.
Details: Full Stack DR uses the Oracle Cloud Agent (OCA) Run Command utility to run local scripts on instances. When you configure a user-defined step to run a local script on a Microsoft Windows instance, you can't use the Full Stack DR Run As User feature that allows you to specify a different userid to run local scripts that reside on instances.
Workaround: On Microsoft Windows instances, the script can only run as the default ocarun userid used by the Oracle Cloud Agent Run Command utility. This limitation does not affect Oracle Linux instances.
Details: Full Stack DR uses the Oracle Cloud Agent (OCA) Run Command utility to run local scripts on instances. By default, these scripts are run as the ocarun user.
Workaround: On a Microsoft Windows instance, any local script that you configure to run as a user-defined step in a DR plan must be accessible and executable by this ocarun userid.
Details: When running a local script using a user-defined step in a DR plan, if you do not provide full paths to script interpreters or scripts, then Full Stack DR will throw errors.
Workaround: When you configure a user-defined step in a DR plan to run a local script that resides on an instance, ensure that you provide the full path to any interpreter that may precede the script name, as well as the full path to the script.
For example, specify /bin/sh /path/to/myscript.sh arg1 arg2 instead of sh myscript.sh arg1 arg2.
Details: During DR operations, Full Stack DR attempts to reassign the original private IP assigned to an instance if the CIDR-block of the destination subnet matches the CIDR-block of the source subnet, and if the original private IP of the instance is not already assigned.
If you use Full Stack DR to relocate all the nodes in an OCFS2 cluster, and the private IP for any of the cluster nodes can't be reassigned, those cluster nodes will detach from the OCFS2 cluster after the nodes are launched in the standby region.
Workaround: Ensure that the destination subnet's CIDR-block matches the CIDR-block of the source subnet and all private IP addresses required for cluster nodes are available in the destination subnet.
Details: After Full Stack DR relocates an instance to a different region, the resource page of the instance may display the following message for Instance Access:
We are not quite sure how to connect to an instance that uses this image
Workaround: Ignore this message. SSH connections to the instance will work normally if you use the original SSH keyfile to connect to and authenticate the instance.
Details: After Full Stack DR relocates an instance to a different region, the resource page of the instance may display incorrect information for the Image portion of its boot volume.
For example, the Image information column may display the following message: Error loading data
Workaround: Ignore this message; it affects only the display of the image name and does not affect the operation of the instance or its boot volume.
Details: During a DR transition, when the block volumes are moved to a different region, the performance settings (IOPS and Throughput) are not replicated and restored automatically. You may need to reconfigure these performance settings.
Workaround: After executing a DR plan, configure the performance settings manually.
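For example, a hedged sketch of restoring a volume's performance setting with the CLI (the OCID and VPU value are placeholders; 20 VPUs/GB corresponds to the Higher Performance level):
oci bv volume update --volume-id ocid1.volume.oc1..<unique_ID> --vpus-per-gb 20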
Details: The on-demand upload of a zip file which is created on a Windows machine
might sometimes fail to upload the log content. The reason for the failure is that the
zip created on Windows has the same last modification time as the file's creation time.
So, when the file is unzipped, the file's last modification time is set to the file's
creation time, which might be older than the timestamps of the log entries in the log
file. In such a case, log entries with timestamps more recent than the file's last
modification time are not uploaded.
An example of the issue:
Timestamp on the log entry: 2020-10-12 21:12:06
File last modification time of the log file: 2020-10-10 08:00:00
Workaround: Copy the log files to a new folder and create a zip file. This action
makes the file's last modification time more recent than the timestamp of the log
entries. Use this new zip file for on-demand upload.
Using the previous example, after the workaround is implemented:
Timestamp on the log entry: 2020-10-12 21:12:06
File last modification time of the log file: 2021-03-31 08:00:00
Details: Folders containing more than 10,000 files may cause high resource (memory, storage, CPU) usage by the Management Agent, which may lead to slow log collection, affect other functionality of the Management Agent, and slow down the host machine.
When large folders are encountered by the Management Agent Logging Analytics plug-in, a message similar to the following example message is added to the Management Agent
mgmt_agent_logan.log file:
2020-07-30 14:46:51,653 [LOG.Executor.2388 (LA_TASK_os_file)-61850] INFO - ignore large dir /u01/service/database/logs. set property loganalytics.enable_large_dir to enable.
Resolution: We recommend avoiding large folders. Use a cleanup mechanism to remove files soon after they are collected so that the folder stays small and the Management Agent has sufficient time to collect them.
However, if you want to continue monitoring logs in large folders, then you can enable the support by performing the following action:
Replace INSTALL_DIRECTORY with the path to the agent_inst folder and restart the agent.
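A hedged sketch, assuming the property is added to the agent's emd.properties file under agent_inst and that the agent runs as the mgmt_agent service; verify the file location and service name for your installation:
# Enable collection from large directories, then restart the Management Agent.
echo "loganalytics.enable_large_dir=true" | sudo tee -a INSTALL_DIRECTORY/agent_inst/config/emd.properties
sudo systemctl restart mgmt_agent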
You may have to make some configuration changes on your host agent to enable this support. Try the new settings in a development or test environment before making them in production. Determine the increase needed for the following factors by testing in a representative environment. The actual required increase depends on factors such as the number of files, the rate of file creation, and the other types of collection that the Management Agent is doing.
Increase the heap size of the Management Agent. For directories with a large number of files, the required heap size increases with the number of files. See Management Agent documentation.
Ensure that sufficient disk space and inodes are available for handling the large number of state files that the Management Agent may have to keep. This depends on the type of log source and parser used. If your parser uses the Header-Detail function, then the agent creates and stores the header in a cache file as long as the original log file exists.
Ensure that the operating system setting for the number of open files can support the Management Agent reading the large folder and potentially large number of state files.