The source doesn't contain data matching the query in the connector's source configuration.
To find out if data exists at the source, do one of the following:
Get service logs for the connector. (If needed, enable logs first.) Following is an example log message that indicates a successful connector run, including the amount of data moved:
Service connector run succeeded - <number> messages (<number> bytes) written to target
For source logs, search the logs using the query from the connector's source configuration.
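As a rough illustration, the success message shown above can be parsed programmatically to pull out the message and byte counts, for example when scanning exported service logs. This is a sketch only: the sample line and the parsing pattern are assumptions based on the message format shown above, not an official log schema.

```python
import re

# Hypothetical sample line, following the success-message format shown above.
# A real line comes from the connector's service logs.
sample = "Service connector run succeeded - 1250 messages (524288 bytes) written to target"

# Extract the message count and byte count from the success message.
pattern = re.compile(r"run succeeded - (\d+) messages \((\d+) bytes\) written to target")
match = pattern.search(sample)
if match:
    messages, bytes_written = int(match.group(1)), int(match.group(2))
    print(f"messages={messages}, bytes={bytes_written}")
```

A count of zero messages or bytes over several runs suggests the source query isn't matching any data.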
You don't have authorization to write to the target service.
To find out if authorization is missing, get service logs for the connector. (If needed, enable logs first.) Following is an example log message that indicates missing authorization:
Connector run failed due to <type> error, Error Code : 404 NotAuthorizedOrNotFound
Remedy: Get authorization
Ensure you have authorization, either through the default policy offered when creating or updating the connector or through a group-based policy. See Access to Source, Task, and Target Services.
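For illustration, a group-based policy might look like the following. The group name and compartment name are placeholders, and the exact statements and resource-types your connector needs depend on its source and target services; see Access to Source, Task, and Target Services for the statements that apply to your configuration.

```
Allow group ConnectorAdmins to manage serviceconnectors in compartment example-compartment
Allow group ConnectorAdmins to read log-content in compartment example-compartment
```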
Note
Your accepted default policies might take a few minutes to propagate to regions that aren't your home region. The connector doesn't move data until the policies are propagated.
Deactivation for Unknown Reasons
Troubleshoot a deactivated connector.
A connector's status is Deactivated and you didn't deactivate it.
Someone Else Deactivated the Connector
The connector was deactivated by someone else:
Another user at your organization
Oracle Cloud Infrastructure
For certain failure conditions, a connector that continuously fails is automatically deactivated by the service team at Oracle Cloud Infrastructure. Such a long-term continuous failure can indicate invalid configuration of the connector's source or target.
Look for the following indicators of issues with connectors.
Data freshness for a single connector: Look for unexpected gaps in data movement.
Open the navigation menu and select Analytics & AI. Under Messaging, select Connector Hub.
Choose a Compartment.
Select the name of the connector that you want.
Under Resources, select Metrics.
Review the Data freshness metric chart.
Data freshness across connectors: Look for unexpected gaps in data movement.
Open the navigation menu and select Observability & Management. Under Monitoring, select Service Metrics.
Choose the Compartment that contains the connectors you want to view data freshness for.
For Metric namespace, select oci_service_connector_hub.
Review the following metric charts:
Data freshness
Logging source: If the connector retrieves data from a log, then it might be exceeding the maximum hourly data retrieval per connector (1 GB). If this condition persists for more than 24 hours (the maximum window for the connector to catch up on missed data from previous transmissions), log data isn't delivered to the target. To determine whether this issue is occurring, create alarms to monitor the following indicators.
The value 43200000 is the number of milliseconds in 12 hours.
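As a sketch, an alarm for stale data might use a Monitoring Query Language (MQL) expression like the following. The metric name DataFreshness, and milliseconds as its unit, are assumptions here; confirm both against the metric charts in the oci_service_connector_hub namespace before creating the alarm.

```
DataFreshness[15m].max() > 43200000
```

This fires when the newest data moved by the connector is more than 12 hours old, leaving time to intervene before the 24-hour catch-up window is exhausted.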
Ignore occasional failures. We recommend setting the alarm trigger delay to 30 minutes or more. With this configuration, the alarm only alerts you when multiple consecutive failures occur over the specified time period.
Results are grouped by error code and connector.
Internal errors at source that don't resolve after 15 minutes (5xx) (Errors at source)
Internal errors might indicate an issue at the source, which could delay delivery of data.
To trigger the alarm at shorter intervals, change the interval ([15m]).
Ignore occasional failures. We recommend setting the alarm trigger delay to 30 minutes or more. With this configuration, the alarm only alerts you when multiple consecutive failures occur over the specified time period.
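A grouped alarm query for this indicator might be sketched in MQL as follows. The metric name ErrorsAtSource and the dimension names errorCode and connectorId are assumptions based on the chart labels in this topic; verify the actual names in the oci_service_connector_hub namespace. The same shape applies to the throttling (429) and service communication (-1) indicators by changing the errorCode value.

```
ErrorsAtSource[15m]{errorCode = "5xx"}.groupBy(errorCode, connectorId).sum() > 0
```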
Throttling errors at source (429) (Errors at source)
For more information on throttling errors, see documented limits for the relevant service.
For example, for throttling errors related to the Streaming source, see Limits on Streaming Resources. Throttling at the Streaming source occurs when a connector attempts to read a stream from a partition while other calls to the same partition are also occurring, and the total number of calls exceeds service limits.
Ignore occasional failures. We recommend setting the alarm trigger delay to 30 minutes or more. With this configuration, the alarm only alerts you when multiple consecutive failures occur over the specified time period.
Service communication errors at source (-1) (Errors at source)
Ignore occasional failures. We recommend setting the alarm trigger delay to 30 minutes or more. With this configuration, the alarm only alerts you when multiple consecutive failures occur over the specified time period.
Zero (0) bytes read (when data is expected) (Bytes read from source)
If errors aren't occurring at source, target, or task, then the log might not exist. Confirm that the specified log exists by searching for it in Logging.
Ignore occasional failures. We recommend setting the alarm trigger delay to 30 minutes or more. With this configuration, the alarm only alerts you when multiple consecutive failures occur over the specified time period.
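The zero-bytes indicator can likewise be sketched as an MQL alarm query. The metric name BytesReadFromSource is an assumption based on the chart label above; also note that this alarm is only meaningful during periods when data is expected at the source.

```
BytesReadFromSource[15m].sum() == 0
```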