[role="xpack"]
[[task-manager-settings-kb]]
=== Task Manager settings in {kib}
++++
<titleabbrev>Task Manager settings</titleabbrev>
++++
Task Manager runs background tasks by polling for work on an interval. You can configure its behavior to tune for performance and throughput; a sample `kibana.yml` sketch follows the settings table below.

[float]
[[task-manager-settings]]
==== Task Manager settings
[cols="2*<"]
|===
| `xpack.task_manager.max_attempts`
| The maximum number of times a task will be attempted before being abandoned as failed. Defaults to 3.
| `xpack.task_manager.poll_interval`
| How often, in milliseconds, the task manager will look for more work. Defaults to 3000 and cannot be lower than 100.
| `xpack.task_manager.request_capacity`
| How many requests can Task Manager buffer before it rejects new requests. Defaults to 1000.
| `xpack.task_manager.max_workers`
| The maximum number of tasks that this {kib} instance will run simultaneously. Defaults to 10.
Starting in 8.0, it will not be possible to set a value greater than 100.
| `xpack.task_manager.`
`monitored_stats_health_verbose_log.enabled`
| Enables automatic warning and error logging when Task Manager detects a performance issue in its own behavior, such as an excessive delay between when a task is scheduled to execute and when it actually executes. Defaults to false.
| `xpack.task_manager.`
`monitored_stats_health_verbose_log.`
`warn_delayed_task_start_in_seconds`
| The number of seconds a task execution can be delayed before a warning is written to the server log. Defaults to 60.
| `xpack.task_manager.ephemeral_tasks.enabled`
| Enables an experimental feature that executes a limited (and configurable) number of actions in the same task as the alert that triggered them.
These action tasks reduce the latency between when an action is triggered and when it runs, but they are not persisted as saved objects.
Because they are not persisted, these action tasks risk not running at all if the {kib} instance running them exits unexpectedly. Defaults to false.
| `xpack.task_manager.ephemeral_tasks.request_capacity`
| Sets the size of the ephemeral queue defined above. Defaults to 10.
|===
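
For example, the following `kibana.yml` sketch shows how these settings might be combined to tune a single {kib} instance. The values are illustrative assumptions rather than recommendations, and the defaults shown in the table apply whenever a setting is omitted.

[source,yml]
----
# Illustrative kibana.yml sketch; the values are example assumptions, not tuning advice.
xpack.task_manager.max_attempts: 3          # retry a failed task up to 3 times
xpack.task_manager.poll_interval: 3000      # poll for new work every 3000 ms (minimum 100)
xpack.task_manager.request_capacity: 1000   # buffer up to 1000 requests before rejecting new ones
xpack.task_manager.max_workers: 20          # run up to 20 tasks at once (cannot exceed 100 starting in 8.0)
xpack.task_manager.monitored_stats_health_verbose_log.enabled: true
xpack.task_manager.monitored_stats_health_verbose_log.warn_delayed_task_start_in_seconds: 60
xpack.task_manager.ephemeral_tasks.enabled: false   # experimental; see the description above
xpack.task_manager.ephemeral_tasks.request_capacity: 10
----
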
[float]
[[task-manager-health-settings]]
==== Task Manager Health settings
Settings that configure the <<task-manager-health-monitoring>> endpoint.

[cols="2*<"]
|===
| `xpack.task_manager.`
`monitored_task_execution_thresholds`
| Configures the threshold of failed task executions at which point the `warn` or `error` health status is set under each task type execution status (under `stats.runtime.value.execution.result_frequency_percent_as_number[${task type}].status`). This setting allows configuration of both a default level and a custom level per task type. By default, the health of every task type is marked as `warning` when more than 80% of its executions fail, and as `error` when more than 90% fail. Custom configurations allow you to lower these thresholds to catch failures sooner for task types that you consider critical, such as alerting tasks. Each threshold can be set to any number between 0 and 100, and it is hit when the value *exceeds* that number. This means that you can avoid setting the status to `error` by setting the threshold at 100, or hit `error` the moment any task fails by setting the threshold to 0 (as it will exceed 0 once a single failure occurs). A configuration sketch follows this table.
|===
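
As an illustration, the following `kibana.yml` sketch keeps the default thresholds for most task types but tightens them for one task type treated as critical. The task type name and the threshold values are assumptions chosen for the example, not recommendations.

[source,yml]
----
# Illustrative sketch; the task type name and the values are assumptions.
xpack.task_manager.monitored_task_execution_thresholds:
  default:                        # applies to any task type without a custom entry
    error_threshold: 90           # report `error` when more than 90% of executions fail
    warn_threshold: 80            # report `warning` when more than 80% of executions fail
  custom:
    "alerting:.index-threshold":  # hypothetical task type considered critical
      error_threshold: 50
      warn_threshold: 0           # warn as soon as a single execution fails
----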