This topic describes best practices for working with your alarms. An alarm is the trigger rule and query to evaluate, together with related configuration such as the notification details to use when the trigger is breached. Alarms passively monitor your cloud resources using metrics in Monitoring.
Create a Set of Alarms for Each Metric
For each metric emitted by your resources, create alarms that define the following resource behaviors:
- At risk. The resource is at risk of becoming inoperable, as indicated by metric values.
- Non-optimal. The resource is performing at non-optimal levels, as indicated by metric values.
- Resource is up or down. The resource is either not reachable or not operating.
The following examples use the CpuUtilization metric emitted by the oci_computeagent metric namespace. This metric reports the utilization of the Compute instance, reflecting the activity level of any services and applications running on the instance. CpuUtilization is a key performance metric for a cloud service because it indicates CPU usage for the Compute instance and can be used to investigate performance issues. To learn more about CPU usage, see https://en.wikipedia.org/wiki/CPU_time.
A typical at-risk threshold for the CpuUtilization metric is any value greater than 80 percent. A Compute instance breaching this threshold is at risk of becoming inoperable. Often the cause of this behavior is one or more applications consuming a high percentage of the CPU.
In this example, you decide to notify the operations team immediately, setting the severity of the alarm as “Critical” because repair is required to bring the instances back to optimal operational levels. You configure alarm notifications to the responsible team by both PagerDuty and email, requesting an investigation and appropriate fixes before the instances go into an inoperable state. You set repeat notifications every minute. When someone responds to the alarm notifications, you temporarily stop notifications using the best practice of suppressing the alarm. Once metrics return to optimal values, you remove the suppression.
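As a sketch, the at-risk trigger rule described above might be written in the Monitoring Query Language (MQL); the one-minute interval and the mean() statistic here are illustrative choices, not requirements, so verify the exact syntax against the MQL reference:

```
CpuUtilization[1m].mean() > 80
```

With this rule, the alarm fires whenever the mean CPU utilization over a one-minute interval exceeds 80 percent.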
A typical non-optimal range for the CpuUtilization metric is 60 to 80 percent. When CpuUtilization values for a Compute instance fall within this range, the instance is operating above its optimal range.
In this example, you decide to notify the appropriate individual or team that an application or process is consuming more CPU than usual. You configure a threshold alarm to notify the appropriate contacts, setting the severity of the alarm as “Warning,” as no immediate actions are required to investigate and reduce the CPU. You set notification to email only, directed to the appropriate developer or team, with repeat notifications every 24 hours to reduce email notification noise.
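A trigger rule for this band might look like the following sketch, assuming the query language supports a range test (MQL provides an in operator); if yours does not, a simple lower-bound comparison paired with the separate 80 percent at-risk alarm covers the same ground:

```
CpuUtilization[1m].mean() in (60, 80)
```

As before, the one-minute interval and mean() statistic are illustrative; tune them to your resource's behavior.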
A typical indicator of resource availability is a five-minute absence of the CpuUtilization metric. A Compute instance breaching this threshold is either not reachable or not operating. The resource may have stopped responding, or it may be unavailable due to connectivity issues.
In this example, you decide to notify the operations team immediately, setting the severity of your absence alarm as “Critical” because repair is required to bring the instances online. You configure alarm notifications to the responsible team by both PagerDuty and email, requesting an investigation and a move of the workloads to another available resource. You set repeat notifications every minute. When someone responds to the alarm notifications, you temporarily stop notifications using the best practice of suppressing the alarm. When the CpuUtilization metric is once again detected from the resource, you remove the suppression.
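MQL also supports absence detection, so an availability alarm of the kind described above might be sketched as follows; check the reference for the default detection period and whether it matches the five-minute window you want:

```
CpuUtilization[1m].absent()
```

This rule fires when the CpuUtilization metric stops arriving from the resource, rather than when its value crosses a threshold.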
Suppress Alarms During Investigations
Once a team member responds to an alarm, suppress its notifications for the duration of the effort to investigate or mitigate the issue. Temporarily stopping notifications helps to avoid distractions during the investigation and mitigation. Remove the suppression when the issue has been resolved. For instructions, see To suppress alarms.
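For illustration, a suppression window in an alarm's configuration might look like the following fragment. The field names follow the Monitoring API's Suppression object; the description text and timestamps are hypothetical:

```json
{
  "suppression": {
    "description": "Operations team investigating high CPU utilization",
    "timeSuppressFrom": "2024-05-01T09:00:00Z",
    "timeSuppressUntil": "2024-05-01T12:00:00Z"
  }
}
```

Notifications resume automatically after timeSuppressUntil, so a bounded window is a safety net even if you forget to remove the suppression manually.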
Routinely Tune Your Alarms
On a regular basis, such as weekly, review your alarms to ensure optimal configuration. Calibrate each alarm's threshold, severity, and notification details, including method, frequency, and targeted audience.
Optimal alarm configuration addresses the following factors:
- Criticality of the resource.
- Appropriate resource behavior. Assess behavior both individually and within the context of the service ecosystem. Review metric value fluctuations over a given period of time, and then adjust thresholds as needed.
- Acceptable notification noise. Assess the notification method (for example, email or PagerDuty), the appropriate recipients, and the frequency of repeated notifications.
For instructions, see To update an alarm.