[DOCS] Add Kibana alerts to Stack Monitoring (#73762)

Co-authored-by: gchaps <33642766+gchaps@users.noreply.github.com>
Lisa Cawley 2020-08-05 15:08:38 -07:00 committed by GitHub
parent b001301f5a
commit 4b3326d6fb
3 changed files with 37 additions and 0 deletions

New binary image added: monitoring-kibana-alerts.png (111 KiB)


@@ -2,6 +2,7 @@ include::xpack-monitoring.asciidoc[]
include::beats-details.asciidoc[leveloffset=+1]
include::cluster-alerts.asciidoc[leveloffset=+1]
include::elasticsearch-details.asciidoc[leveloffset=+1]
include::kibana-alerts.asciidoc[leveloffset=+1]
include::kibana-details.asciidoc[leveloffset=+1]
include::logstash-details.asciidoc[leveloffset=+1]
include::monitoring-troubleshooting.asciidoc[leveloffset=+1]


@@ -0,0 +1,36 @@
[role="xpack"]
[[kibana-alerts]]
= {kib} Alerts

The {stack} {monitor-features} provide
<<alerting-getting-started,{kib} alerts>> out of the box to notify you of
potential issues in the {stack}. These alerts are preconfigured based on the
best practices recommended by Elastic. However, you can tailor them to meet your
specific needs.

When you open *{stack-monitor-app}*, the preconfigured {kib} alerts are
created automatically. If you collect monitoring data from multiple clusters,
these alerts can search, detect, and notify on various conditions across the
clusters. The alerts are visible alongside your existing {watcher} cluster
alerts. You can view details about active alerts, view health and performance
data for {es}, {ls}, and Beats in real time, and analyze past performance. You
can also modify active alerts.

[role="screenshot"]
image::user/monitoring/images/monitoring-kibana-alerts.png["Kibana alerts in the Stack Monitoring app"]

To review and modify all the available alerts, use
<<managing-alerts-and-actions,*{alerts-ui}*>> in *{stack-manage-app}*.
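
If you prefer to inspect these alerts programmatically, a rough sketch like the
following lists them through the {kib} alerts HTTP API. The endpoint, query
parameters, and credentials shown here are assumptions for illustration only;
*{alerts-ui}* is the recommended workflow.

[source,sh]
----
# Hypothetical example: find alerts whose name matches "CPU" with the Kibana
# alerts "find" API. Adjust the host, credentials, and parameters for your
# deployment.
curl -s -u elastic:changeme -H 'kbn-xsrf: true' \
  "http://localhost:5601/api/alerts/_find?search=CPU&search_fields=name"
----
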
[discrete]
[[kibana-alerts-cpu-threshold]]
== CPU threshold

This alert is triggered when a node runs a consistently high CPU load. By
default, the trigger condition is set at 85% or more averaged over the last 5
minutes. The alert is grouped across all the nodes of the cluster by running
checks on a 1-minute schedule with a re-notify interval of 1 day.
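
For illustration only, the default condition could be expressed as an alert
definition along the following lines. This is a minimal sketch that assumes the
{kib} alerts HTTP API, a `monitoring_alert_cpu_usage` alert type, and `threshold`
and `duration` parameters; because the alert is created for you automatically,
treat these names and values as assumptions rather than a supported
configuration format.

[source,sh]
----
# Hypothetical sketch of the default CPU usage alert: check every minute,
# trigger at 85% CPU averaged over the last 5 minutes, and re-notify at most
# once per day. The alert type ID and parameter names are assumptions.
curl -s -u elastic:changeme -H 'kbn-xsrf: true' \
  -H 'Content-Type: application/json' \
  -X POST "http://localhost:5601/api/alerts/alert" -d '
{
  "name": "CPU usage",
  "alertTypeId": "monitoring_alert_cpu_usage",
  "consumer": "monitoring",
  "schedule": { "interval": "1m" },
  "throttle": "1d",
  "params": { "threshold": 85, "duration": "5m" },
  "actions": []
}'
----
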
NOTE: Some action types are subscription features, while others are free.
For a comparison of the Elastic subscription levels, see the alerting section of
the {subscriptions}[Subscriptions page].