Use the Connector Hub service to transfer data between services in Oracle Cloud Infrastructure.
Connector Hub is a cloud message bus platform that offers a single pane of glass for describing, executing, and monitoring interactions when moving data between Oracle Cloud Infrastructure services.
Connector Hub was formerly known as Service Connector Hub.
How Connector Hub Works 🔗
Connector Hub orchestrates data movement between services in Oracle Cloud Infrastructure.
Data is moved using connectors. A connector specifies the source service that contains the data to be moved, optional tasks, and the target service for delivery of data when tasks are complete. An optional task might be a function task to process data from the source or a log filter task to filter log data from the source.
Speed of Data Movement 🔗
The connector reads data as soon as it's available. Aggregation or buffering might delay delivery to the target service. For example, metric data points need to be aggregated first.
While connectors run continuously, data is moved sequentially by individual move operations. The amount of data and the speed of each move operation depend on the connector configuration (source, task, and target) and the relevant service limits. Relevant service limits are determined by the services selected for the source, task, and target, in addition to limits for Connector Hub.
Example 1: Logging source, Notifications target (no task) 🔗
Callouts for Example 1:
1. Connector Hub reads log data from Logging.
2. Connector Hub writes the log data to the Notifications target service.
3. Notifications sends messages to all subscriptions in the configured topic.
Each move operation moves data from the log sources to the topic, within service limits, at a speed affected by the types of subscriptions in the selected topic. The time required for a single move operation to move log sources in this scenario is up to a few minutes.
Example 2: Streaming source, Functions task, Object Storage target 🔗
Callouts for Example 2:
1. Connector Hub reads stream data from Streaming.
2. Connector Hub triggers the Functions task for custom processing of stream data.
3. The task returns processed data to Connector Hub.
4. Connector Hub writes the stream data to the Object Storage target service.
5. Object Storage writes the stream data to a bucket.
Each move operation moves data from the selected stream to the function task and then to the bucket, within service limits and according to the size of each batch. Batch size is configured in the task and target settings for this scenario. After Connector Hub receives the stream data from the Streaming service, a single move operation moves a batch of this stream data to the function task according to the task's batch size configuration, and finally moves the processed batch to the bucket according to the target's batch size configuration. The time required to receive, process, and move a stream in this scenario is up to 17 minutes, depending on the task and target batch size configurations.
Connector Hub follows "at least once" delivery. That is, when moving data to targets, connectors deliver each batch of data at least once.
If a move operation fails, then the connector retries that operation. The connector doesn't move subsequent batches of data until the retried operation succeeds.
If the move operation continues to fail beyond the source's retention period, then that batch of data isn't delivered.
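The retry behavior described above can be sketched as a small model. This is an illustrative simulation, not the real service: batches move sequentially, a failed batch is retried (blocking later batches) until it succeeds, and a batch that ages past the source's retention period is dropped. All names and the hour-based timeline are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Batch:
    data: str
    age_hours: float  # time since the batch was produced at the source

def deliver(batches, send, retention_hours=24):
    """Toy model of at-least-once, in-order delivery with retries."""
    delivered, dropped = [], []
    for batch in batches:              # batches move sequentially
        while True:
            if batch.age_hours > retention_hours:
                dropped.append(batch.data)   # past retention: not delivered
                break
            if send(batch):                  # send may run more than once
                delivered.append(batch.data)
                break
            batch.age_hours += 1             # retry on a later attempt
    return delivered, dropped
```

A transiently failing `send` still delivers every in-retention batch exactly because the connector keeps retrying the same batch before moving on.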
The maximum message size for the Notifications target is 128 KB. Any message that exceeds the maximum size is dropped.
For certain failure conditions, a connector that continuously fails is automatically deactivated by the service team at Oracle Cloud Infrastructure. Such long-term continuous failure can indicate an invalid configuration of the connector's source or target. For more information, see Deactivation for Unknown Reasons.
Batch Settings 🔗
A batch is a list of entries received from the connector source or task service.
Batch settings are only available for a connector's Functions task, Functions target, or Object Storage target.
When batch settings are available, you can limit batches by both size and time. Whichever limit is reached first triggers a flush of data to the target. For example, if you set the batch size limit to 1 MB and the batch time limit to 10 seconds, then the connector sends a batch of data to the target as soon as either 1 MB has been pulled from the source or 10 seconds have elapsed.
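The size-or-time flush rule can be sketched as follows. This is a hypothetical, self-contained model of the behavior described above, not Connector Hub's implementation; the class name, limits, and callback are illustrative.

```python
import time

class Batcher:
    """Flush a batch when EITHER the size limit or the time limit is
    reached, whichever comes first (the rule described above)."""

    def __init__(self, size_limit_bytes, time_limit_s, flush):
        self.size_limit = size_limit_bytes
        self.time_limit = time_limit_s
        self.flush = flush                 # callback receiving a full batch
        self.entries, self.bytes = [], 0
        self.started = None

    def add(self, entry, now=None):
        now = time.monotonic() if now is None else now
        if not self.entries:
            self.started = now             # batch clock starts on first entry
        self.entries.append(entry)
        self.bytes += len(entry)
        # the first limit reached triggers a flush at the target
        if self.bytes >= self.size_limit or now - self.started >= self.time_limit:
            self.flush(self.entries)
            self.entries, self.bytes = [], 0
```

With a 10-byte size limit and a 5-second time limit, a batch flushes as soon as either threshold is crossed, mirroring the 1 MB / 10 second example above.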
Long Failing Connectors are Deactivated 🔗
Warning announcements, followed by automatic deactivation, occur for connectors that continuously fail because of certain failure conditions.
After four consecutive days of these failure conditions, Connector Hub sends a warning announcement indicating the possibility of a future deactivation and providing troubleshooting information.
After seven consecutive days of these failure conditions, Connector Hub automatically deactivates the connector and sends an announcement indicating the deactivation.
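The escalation schedule above reduces to a simple mapping from consecutive failure days to connector status. This is a toy restatement of the documented thresholds, with an assumed status naming:

```python
def connector_status(consecutive_failure_days: int) -> str:
    """Map days of continuous failure to the documented escalation:
    warning announcement at day 4, automatic deactivation at day 7."""
    if consecutive_failure_days >= 7:
        return "deactivated"
    if consecutive_failure_days >= 4:
        return "warning announced"
    return "active"
```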
You can troubleshoot a deactivated connector, update it to a valid configuration, and then reactivate it. Confirm that the newly reactivated connector is moving data as expected by checking the target service. To get details on the data flow from a connector's source to its target, enable logs for the connector.
Connector Hub Concepts 🔗
The following concepts are essential to working with Connector Hub.
connector
The definition of the data to be moved. A connector specifies a source service, target service, and optional tasks.
source
The service that contains the data to be moved according to specified tasks—for example, Logging.
target
The service that receives data from the source, according to specified tasks. A given target service processes, stores, or delivers received data—the Functions service processes the received data; the Logging Analytics, Monitoring, Object Storage, and Streaming services store the data; and the Notifications service delivers the data.
task
Optional filtering to apply to the data before moving it from the source service to the target service.
trigger
The condition that must be met for a connector to run. Currently, the trigger is continuous; that is, connectors run continuously.
Flow of Data 🔗
When a connector runs, it receives data from the source service, completes optional tasks on the data (such as filtering), and then moves the data to the target service.
Following are the supported targets and optional tasks for each available source, along with a description of the targets.
Logging Source 🔗
The retention period for the Logging source in Connector Hub is 24 hours. For more information about delivery, see Delivery Details.
If the first run of a new connector is successful, then it moves log data from the connector's creation time. If the first run fails (such as with missing policies), then after resolution the connector moves log data from the connector creation time or 24 hours before the current time, whichever is later.
Each later run moves the next log data. If a later run fails and resolution occurs within the 24-hour retention period, then the connector moves the next log data. If a later run fails and resolution occurs outside the 24-hour retention period, then the connector moves the latest log data, and any data generated between the failed run and that latest log data isn't delivered.
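The "whichever is later" rule for the Logging source can be expressed as a one-line calculation. This is an illustrative sketch on an hours-based timeline (all names assumed): after a failed first run is resolved, the connector starts from the creation time or from the start of the 24-hour retention window, whichever is later.

```python
RETENTION_HOURS = 24

def log_move_start(creation_time_h, resolution_time_h, retention_h=RETENTION_HOURS):
    """Hour at which the first successful run starts moving log data:
    the connector's creation time, or retention_h hours before the
    current (resolution) time, whichever is later."""
    return max(creation_time_h, resolution_time_h - retention_h)
```

If resolution happens within 24 hours of creation, no log data is lost; beyond that, the window start wins and the earlier data falls outside retention.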
Monitoring Source 🔗
Select a Monitoring source to transfer metric data points from the Monitoring service.
The following targets are supported by a connector that's defined with a Monitoring source and (optional) Functions task: Functions, Object Storage, and Streaming.
Callouts for Monitoring source:
1. Connector Hub reads metric data from Monitoring.
2. Optional: If configured, Connector Hub triggers the Functions task for custom processing of metric data.
3. The task returns processed data to Connector Hub.
4. Connector Hub writes the metric data to a target service.
The retention period for the Monitoring source in Connector Hub is 24 hours. For more information about delivery, see Delivery Details.
Queue Source 🔗
Select the Queue source to transfer messages from the Queue service.
The following targets are supported by a connector that's defined with a Queue source and (optional) Functions task:
Callouts for Queue source:
1. Connector Hub reads messages from Queue.
2. Optional: If configured, Connector Hub triggers the Functions task for custom processing of messages.
3. The task returns processed data to Connector Hub.
4. Connector Hub writes the messages to a target service, then automatically deletes the transferred messages from the queue.
The retention period for the Queue source in Connector Hub depends on the queue configuration. See Creating a Queue. For more information about delivery, see Delivery Details.
Streaming Source 🔗
Select the Streaming source to transfer stream data from the Streaming service.
The following targets are supported by a connector that's defined with a Streaming source and (optional) Functions task:
Functions
Logging Analytics
Notifications*
Object Storage
Streaming
*The Notifications target (asterisked in the illustration) is supported except when using the Functions task.
Callouts for Streaming source:
1. Connector Hub reads stream data from Streaming.
2. Optional: If configured, Connector Hub triggers the Functions task for custom processing of stream data.
3. The task returns processed data to Connector Hub.
4. Connector Hub writes the stream data to a target service.
Together with the retention period, the Streaming source's read position determines where in the stream to start moving data.
Latest read position: Starts reading messages published after creating the connector.
If the first run of a new connector with this configuration is successful, then it moves data from the connector's creation time. If the first run fails (such as with missing policies), then after resolution the connector either moves data from the connector's creation time or, if the creation time is outside the retention period, the oldest available data in the stream. For example, consider a connector created at 10 a.m. for a stream with a two-hour retention period. If failed runs are resolved at 11 a.m., then the connector moves data from 10 a.m. If failed runs are resolved at 1 p.m., then the connector moves the oldest available data in the stream.
Later runs move data from the next position in the stream. If a later run fails, then after resolution the connector moves data from the next position in the stream or the oldest available data in the stream, depending on the stream's retention period.
Trim Horizon read position: Starts reading from the oldest available message in the stream.
If the first run of a new connector with this configuration is successful, then it moves data from the oldest available data in the stream. If the first run fails (such as with missing policies), then after resolution the connector moves the oldest available data in the stream, regardless of the stream's retention period.
Later runs move data from the next position in the stream. If a later run fails, then after resolution the connector moves data from the next position in the stream or the oldest available data in the stream, depending on the stream's retention period.
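The two read positions above differ only in how the starting point is chosen relative to the retention window. The following sketch models that choice on an hours-based timeline; it is an illustration of the documented rules, not the Streaming API, and all names are assumed.

```python
def start_position(read_position, creation_h, resolution_h, retention_h):
    """Hour in the stream where the connector starts reading.

    TRIM_HORIZON: always the oldest available message.
    LATEST: the connector's creation time, unless that has aged out of
    the retention window, in which case the oldest available message.
    """
    oldest_available = resolution_h - retention_h
    if read_position == "TRIM_HORIZON":
        return oldest_available
    return max(creation_h, oldest_available)
```

This reproduces the worked example above: a connector created at hour 10 on a stream with a two-hour retention period starts at hour 10 if resolved at hour 11, but at hour 11 (the oldest available data) if resolved at hour 13.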
The Connector Hub service is available in all Oracle Cloud Infrastructure commercial regions. See About Regions and Availability Domains for the list of available regions, along with associated locations, region identifiers, region keys, and availability domains.
Resource Identifiers 🔗
Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle Cloud ID (OCID). For information about the OCID format and other ways to identify your resources, see Resource Identifiers.
Ways to Access Connector Hub 🔗
You can access Oracle Cloud Infrastructure (OCI) by using the Console (a browser-based interface), REST API, or OCI CLI. Instructions for using the Console, API, and CLI are included in topics throughout this documentation. For a list of available SDKs, see Software Development Kits and Command Line Interface.
Console: To access Connector Hub using the Console, you must use a supported browser. To go to the Console sign-in page, open the navigation menu at the top of this page and select Infrastructure Console. You are prompted to enter your cloud tenant, your user name, and your password.
Open the navigation menu and select Analytics & AI. Under Messaging, select Connector Hub.
You can also access Connector Hub from the following services in the Console:
Logging: Open the navigation menu and select Observability & Management. Under Logging, select Logs.
Streaming: Open the navigation menu and select Analytics & AI. Under Messaging, select Streaming.
Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all interfaces (the Console, SDK or CLI, and REST API).
An administrator in your organization needs to set up groups, compartments, and policies that control which users can access which services and resources, and the type of access. For example, policies control who can create new users, create and manage the cloud network, create instances, create buckets, download objects, and so on. For more information, see Managing Identity Domains. For specific details about writing policies for each of the different services, see Policy Reference.
If you're a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that the company owns, contact an administrator to set up a user ID for you. The administrator can confirm which compartment or compartments you can use.
Administrators: For common policies providing access to Connector Hub, see IAM Policies.
Access to Source, Task, and Target Services 🔗
Note
Ensure that any policy you create complies with your company guidelines. Automatically created policies remain when connectors are deleted. As a best practice, delete associated policies when deleting the connector.
To move data, your connector must have authorization to access the specified resources in the source, task, and target services. Some resources are accessible without policies.
Default policies providing the required authorization are offered when you use the Console to define a connector. These policies are limited to the context of the connector. You can either accept the default policies or ensure that you have the proper authorizations in custom policies for user and service access.
Default Policies 🔗
This section details the default policies offered when you create or update a connector in the Console.
Note
To accept default policies for an existing connector, simply edit the connector. The default policies are offered whenever you create or edit a connector. The only exception is when the exact policy already exists in IAM, in which case the default policy is not offered.
Applies when the connector specifies a function task or selects Functions as its target service.
Where this policy is created: The compartment where the function resides. The function is selected for the task or target when you create or update the connector.
Allow any-user to use fn-function in compartment id <target_function_compartment_ocid> where all {request.principal.type='serviceconnector', request.principal.compartment.id='<serviceconnector_compartment_ocid>'} Allow any-user to use fn-invocation in compartment id <target_function_compartment_ocid> where all {request.principal.type='serviceconnector', request.principal.compartment.id='<serviceconnector_compartment_ocid>'}
Following is the policy with line breaks added for clarity.
Allow any-user to use fn-function in compartment id <target_function_compartment_ocid>
where all {
request.principal.type='serviceconnector',
request.principal.compartment.id='<serviceconnector_compartment_ocid>'
}
Allow any-user to use fn-invocation in compartment id <target_function_compartment_ocid>
where all {
request.principal.type='serviceconnector',
request.principal.compartment.id='<serviceconnector_compartment_ocid>'
}
No default policies are offered. To create or edit a connector that specifies logs for the source or task, you must have read access to the specified logs. For more information, see Required Permissions for Working with Logs and Log Groups.
Applies when the connector specifies Logging Analytics as its target service.
Where this policy is created: The compartment where the log group resides. The log group is selected or entered for the target when you create or update the connector.
Allow any-user to use loganalytics-log-group in compartment id <target_log_group_compartment_OCID> where all {request.principal.type='serviceconnector', target.loganalytics-log-group.id=<log_group_OCID>, request.principal.compartment.id=<serviceconnector_compartment_OCID>}
Following is the policy with line breaks added for clarity.
Allow any-user to use loganalytics-log-group in compartment id <target_log_group_compartment_OCID>
where all {
request.principal.type='serviceconnector',
target.loganalytics-log-group.id=<log_group_OCID>,
request.principal.compartment.id=<serviceconnector_compartment_OCID>
}
Applies when the connector specifies Monitoring as its source service.
Where this policy is created: The compartment where the metric namespace resides. The metric namespace is selected or entered for the source when you create or update the connector.
Allow any-user to read metrics in tenancy where all {request.principal.type = 'serviceconnector', request.principal.compartment.id = '<compartment_OCID>', target.compartment.id in ('<compartment1_OCID>', '<compartment2_OCID>', '<compartment3_OCID>')}
Following is the policy with line breaks added for clarity.
Allow any-user to read metrics in tenancy
where all {
request.principal.type = 'serviceconnector',
request.principal.compartment.id = '<compartment_OCID>',
target.compartment.id in ('<compartment1_OCID>', '<compartment2_OCID>', '<compartment3_OCID>')
}
Applies when the connector specifies Monitoring as its target service.
Where this policy is created: The compartment where the metric namespace resides. The metric namespace is selected or entered for the target when you create or update the connector.
Allow any-user to use metrics in compartment id <target_metric_compartment_OCID> where all {request.principal.type='serviceconnector', target.metrics.namespace='<metric_namespace>', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}
Following is the policy with line breaks added for clarity.
Allow any-user to use metrics in compartment id <target_metric_compartment_OCID>
where all {
request.principal.type='serviceconnector',
target.metrics.namespace='<metric_namespace>',
request.principal.compartment.id='<serviceconnector_compartment_OCID>'
}
Applies when the connector specifies Notifications as its target service.
Where this policy is created: The compartment where the topic resides. The topic is selected for the target when you create or update the connector.
Allow any-user to use ons-topics in compartment id <target_topic_compartment_OCID> where all {request.principal.type='serviceconnector', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}
Following is the policy with line breaks added for clarity.
Allow any-user to use ons-topics in compartment id <target_topic_compartment_OCID>
where all {
request.principal.type='serviceconnector',
request.principal.compartment.id='<serviceconnector_compartment_OCID>'
}
Applies when the connector specifies Object Storage as its target service.
Where this policy is created: The compartment where the bucket resides. The bucket is selected for the target when you create or update the connector.
Allow any-user to manage objects in compartment id <target_bucket_compartment_OCID> where all {request.principal.type='serviceconnector', target.bucket.name='<bucket_name>', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}
Following is the policy with line breaks added for clarity.
Allow any-user to manage objects in compartment id <target_bucket_compartment_OCID>
where all {
request.principal.type='serviceconnector',
target.bucket.name='<bucket_name>',
request.principal.compartment.id='<serviceconnector_compartment_OCID>'
}
Applies when the connector specifies Queue as its source service.
Where this policy is created: The compartment where the queue resides. The queue is selected for the source when you create or edit a connector.
Allow any-user to {QUEUE_READ, QUEUE_CONSUME} in compartment id <queue_compartment_OCID> where all {request.principal.type='serviceconnector', target.queue.id='<queue_OCID>', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}
Following is the policy with line breaks added for clarity.
Allow any-user to {QUEUE_READ, QUEUE_CONSUME} in compartment id <queue_compartment_OCID>
where all {
request.principal.type='serviceconnector',
target.queue.id='<queue_OCID>',
request.principal.compartment.id='<serviceconnector_compartment_OCID>'
}
Applies when the connector specifies Streaming as its source service.
Where this policy is created: The compartment where the stream resides. The stream is selected for the source when you create or update the connector.
Allow any-user to {STREAM_READ, STREAM_CONSUME} in compartment id <source_stream_compartment_OCID> where all {request.principal.type='serviceconnector', target.stream.id='<stream_OCID>', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}
Following is the policy with line breaks added for clarity.
Allow any-user to {STREAM_READ, STREAM_CONSUME} in compartment id <source_stream_compartment_OCID>
where all {
request.principal.type='serviceconnector',
target.stream.id='<stream_OCID>',
request.principal.compartment.id='<serviceconnector_compartment_OCID>'
}
Applies when the connector specifies Streaming as its target service.
Where this policy is created: The compartment where the stream resides. The stream is selected for the target when you create or update the connector.
Allow any-user to use stream-push in compartment id <target_stream_compartment_OCID> where all {request.principal.type='serviceconnector', target.stream.id='<stream_OCID>', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}
Following is the policy with line breaks added for clarity.
Allow any-user to use stream-push in compartment id <target_stream_compartment_OCID>
where all {
request.principal.type='serviceconnector',
target.stream.id='<stream_OCID>',
request.principal.compartment.id='<serviceconnector_compartment_OCID>'
}
When reviewing group-based policies for the authorization required to access a resource (service) in a connector, refer to the default policy offered for that service in that context (see the previous section), or see the policy details for the service in the Policy Reference.
Allow dynamic-group <dynamic-group-name> to use loganalytics-log-group in compartment id <log_group_compartment_ocid> where target.loganalytics-log-group.id='<log_group_ocid>'
Following is the policy with line breaks added for clarity.
Allow dynamic-group <dynamic-group-name> to use loganalytics-log-group in compartment id <log_group_compartment_ocid>
where target.loganalytics-log-group.id='<log_group_ocid>'
Allow dynamic-group <dynamic-group-name> to read metrics in compartment id <metric_compartment_ocid> where target.compartment.id in ('<compartment1_OCID>', '<compartment2_OCID>', '<compartment3_OCID>')
Following is the policy with line breaks added for clarity.
Allow dynamic-group <dynamic-group-name> to read metrics in compartment id <metric_compartment_ocid>
where target.compartment.id in ('<compartment1_OCID>', '<compartment2_OCID>', '<compartment3_OCID>')
Allow dynamic-group <dynamic-group-name> to use metrics in compartment id <metric_compartment_ocid> where target.metrics.namespace='<metric_namespace>'
Following is the policy with line breaks added for clarity.
Allow dynamic-group <dynamic-group-name> to use metrics in compartment id <metric_compartment_ocid>
where target.metrics.namespace='<metric_namespace>'
Allow dynamic-group <dynamic-group-name> to {QUEUE_READ, QUEUE_CONSUME} in compartment id <queue_compartment_ocid> where target.queue.id='<queue_ocid>'
Following is the policy with line breaks added for clarity.
Allow dynamic-group <dynamic-group-name> to {QUEUE_READ, QUEUE_CONSUME} in compartment id <queue_compartment_ocid>
where target.queue.id='<queue_ocid>'
Allow dynamic-group <dynamic-group-name> to {STREAM_READ, STREAM_CONSUME} in compartment id <stream_compartment_ocid> where target.stream.id='<stream_ocid>'
Following is the policy with line breaks added for clarity.
Allow dynamic-group <dynamic-group-name> to {STREAM_READ, STREAM_CONSUME} in compartment id <stream_compartment_ocid>
where target.stream.id='<stream_ocid>'