[role="xpack"]
[[data-rollups]]
== Rollup Jobs
A rollup job is a periodic task that aggregates data from indices specified
by an index pattern, and then rolls it into a new index. Rollup indices are a good way to
compactly store months or years of historical
data for use in visualizations and reports.
You'll find *Rollup Jobs* under *Management > Elasticsearch*. With this UI,
you can:

* <<create-and-manage-rollup-job, Create a rollup job>>
* <<manage-rollup-job, Start&comma; stop&comma; and delete rollup jobs>>
[role="screenshot"]
image::images/management_rollup_list.png[List of currently active rollup jobs]
Before using this feature, you should be familiar with how rollups work.
{ref}/xpack-rollup.html[Rolling up historical data] is a good source for more detailed information.
[float]
[[create-and-manage-rollup-job]]
=== Create a rollup job
{kib} makes it easy for you to create a rollup job by walking you through
the process. You fill in the name, data flow, and how often you want to roll
up the data. Then you define a date histogram aggregation for the rollup job
and optionally define terms, histogram, and metrics aggregations.
When defining the index pattern, you must enter a name that is different from
the output rollup index. Otherwise, the job
will attempt to capture the data in the rollup index itself. For example, if your index pattern is `metricbeat-*`,
you can name your rollup index `rollup-metricbeat`, but not `metricbeat-rollup`.
[role="screenshot"]
image::images/management_create_rollup_job.png[Wizard that walks you through creation of a rollup job]
[float]
[[manage-rollup-job]]
=== Start, stop, and delete rollup jobs
Once you've saved a rollup job, you'll see it in the *Rollup Jobs* overview page,
where you can drill down for further investigation. The *Manage* menu enables
you to start, stop, and delete the rollup job.
You must first stop a rollup job before deleting it.
[role="screenshot"]
image::images/management_rollup_job_details.png[Rollup job details]
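
If you manage rollup jobs outside of {kib}, the same actions are available
through the {es} rollup job APIs. The following requests are a minimal sketch,
using a job named `logs_job` as an example:

[source,console]
----
// Start the job
POST _rollup/job/logs_job/_start

// Stop the job (a job must be stopped before it can be deleted)
POST _rollup/job/logs_job/_stop

// Delete the stopped job
DELETE _rollup/job/logs_job
----
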
You can't change a rollup job after you've created it. To select additional fields
or redefine terms, you must delete the existing job, and then create a new one
with the updated specifications. Be sure to use a different name for the new rollup
job; reusing the same name can lead to problems with mismatched job configurations.
You can read more at {ref}/rollup-job-config.html[rollup job configuration].
[float]
[[rollup-data-tutorial]]
=== Try it: Create and visualize rolled up data
This example creates a rollup job to capture log data from sample web logs.
To follow along, add the <<get-data-in, sample web logs data set>>.
In this example, you want data that is older than 7 days in the target index pattern `kibana_sample_data_logs`
to roll up once a day into the index `rollup_logstash`. You'll bucket the
rolled up data on an hourly basis, using 60m for the time bucket configuration.
This allows for more granular queries, such as 2h and 12h.
[float]
==== Create the rollup job
As you walk through the *Create rollup job* UI, enter the data:
|===
|*Field* |*Value*
|Name
|logs_job
|Index pattern
|`kibana_sample_data_logs`
|Rollup index name
|`rollup_logstash`
|Frequency
|Every day at midnight
|Page size
|1000
|Delay (latency buffer)
|7d
|Date field
|@timestamp
|Time bucket size
|60m
|Time zone
|UTC
|Terms
|geo.src, machine.os.keyword
|Histogram
|bytes, memory
|Histogram interval
|1000
|Metrics
|bytes (average)
|===
The terms, histogram, and metrics fields reflect
the key information to retain in the rolled up data: where visitors are from (geo.src),
what operating system they are using (machine.os.keyword),
and how much data is being sent (bytes).
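
If you prefer to create the job with the {es} rollup API instead of the UI,
the table above corresponds roughly to the following request. This is a sketch
only: the cron expression `0 0 0 * * ?` means every day at midnight, and
depending on your {es} version the date histogram interval key may be
`interval` rather than `fixed_interval`. See
{ref}/rollup-job-config.html[rollup job configuration] for the full reference.

[source,console]
----
// Create (but do not start) the rollup job described in the table above
PUT _rollup/job/logs_job
{
  "index_pattern": "kibana_sample_data_logs",
  "rollup_index": "rollup_logstash",
  "cron": "0 0 0 * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "@timestamp",
      "fixed_interval": "60m",
      "delay": "7d",
      "time_zone": "UTC"
    },
    "terms": {
      "fields": ["geo.src", "machine.os.keyword"]
    },
    "histogram": {
      "fields": ["bytes", "memory"],
      "interval": 1000
    }
  },
  "metrics": [
    { "field": "bytes", "metrics": ["avg"] }
  ]
}
----
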
You can now use the rolled up data for analysis at a fraction of the storage cost
of the original index. The original data can live side by side with the new
rollup index, or you can remove or archive it using <<creating-index-lifecycle-policies,Index Lifecycle Management>>.
[float]
==== Visualize the rolled up data
Your next step is to visualize your rolled up data in a vertical bar chart.
Most visualizations support rolled up data, with the exception of Timelion and Vega visualizations.
. Create the rollup index pattern in *Management > Index Patterns* so you can
select your rolled up data for visualizations. Click *Create index pattern*, and select *Rollup index pattern* from the dropdown.
+
[role="screenshot"]
image::images/management-rollup-index-pattern.png[Create rollup index pattern]
. Enter *rollup_logstash,kibana_sample_data_logs* as your *Index Pattern* and `@timestamp`
as the *Time Filter field name*.
+
The notation for a combination index pattern with both raw and rolled up data
is `rollup_logstash,kibana_sample_data_logs`. In this index pattern, `rollup_logstash`
matches the rollup index and `kibana_sample_data_logs` matches the index
pattern for the raw data. A sketch of a search request against this combination pattern appears after these steps.
. Go to *Visualize* and create a vertical bar chart. Choose `rollup_logstash,kibana_sample_data_logs`
as your source to see both the raw and rolled up data.
+
[role="screenshot"]
image::images/management-create-rollup-bar-chart.png[Create visualization of rolled up data]
. Look at the data in your visualization.
+
[role="screenshot"]
image::images/management_rollup_job_vis.png[Visualization of rolled up data]
. Optionally, create a dashboard that contains visualizations of the rolled up
data, raw data, or both.
+
[role="screenshot"]
image::images/management_rollup_job_dashboard.png[Dashboard with rolled up data]
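
Behind the scenes, rolled up data is queried through the {es} rollup search
endpoint, which can search rolled up and raw indices together. If you want to
explore the combination pattern outside of a visualization, the following
request is a minimal sketch; the aggregation name `avg_bytes` is only
illustrative.

[source,console]
----
// Search the rollup index and the raw index in a single request
GET rollup_logstash,kibana_sample_data_logs/_rollup_search
{
  "size": 0,
  "aggregations": {
    "avg_bytes": {
      "avg": { "field": "bytes" }
    }
  }
}
----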