
# Index Lifecycle Management

## Testing

### Quick steps for testing ILM in Index Management

You can test that the `Frozen` badge, phase filtering, and lifecycle information are surfaced in Index Management by running this series of requests in Console:

```
PUT /_ilm/policy/full
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_docs": 1
          }
        }
      },
      "warm": {
        "min_age": "15s",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1
          },
          "shrink": {
            "number_of_shards": 1
          }
        }
      },
      "cold": {
        "min_age": "30s",
        "actions": {
          "freeze": {}
        }
      },
      "delete": {
        "min_age": "1d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```
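
Optionally, you can confirm the policy was stored as expected before continuing (a quick check, not part of the original steps):

```
# Fetch the policy back to verify it
GET /_ilm/policy/full
```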

```
PUT _template/test
{
  "index_patterns": ["test-*"],
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 0,
    "index.lifecycle.name": "full",
    "index.lifecycle.rollover_alias": "test-alias"
  }
}
```

```
PUT /test-000001
{
  "aliases": {
    "test-alias": {
      "is_write_index": true
    }
  }
}
```

```
PUT test-alias/_doc/1
{
  "a": "a"
}
```

```
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.xpack.core.indexlifecycle": "TRACE",
    "logger.org.elasticsearch.xpack.indexlifecycle": "TRACE",
    "logger.org.elasticsearch.xpack.core.ilm": "TRACE",
    "logger.org.elasticsearch.xpack.ilm": "TRACE",
    "indices.lifecycle.poll_interval": "10s"
  }
}
```

Then go into Index Management. After about a minute, you'll see a frozen index, and you'll be able to filter by the various lifecycle phases and statuses.
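
If you'd rather watch the progression from Console than from the UI, the ILM explain API reports each index's current phase, action, and step (a quick check against the `test-*` indices created above):

```
# Inspect the lifecycle state of the test indices
GET test-*/_ilm/explain
```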


Next, add the Kibana sample data and attach the `full` policy to the index that gets created. After about a minute, there should be an error on this index. When you click the index, you'll see ILM information in the detail panel as well as an error. You can dismiss the error by clicking Manage > Retry lifecycle step.
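
The retry is also available as an API call; for example, assuming you loaded the sample web logs data, so the affected index is `kibana_sample_data_logs`:

```
# Retry the failed lifecycle step for the sample data index
POST kibana_sample_data_logs/_ilm/retry
```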


### Data tier notifications

When creating or editing an ILM policy, the UI should notify users that under certain conditions their data will not be moved to a tier corresponding to a phase; for instance, when a cluster only has hot-tier nodes. We can test the UI against this cluster state by starting an ES node with the `data_hot` role, using this command:

```
yarn es snapshot --license=trial -E node.roles=data_hot,master,data_content
```

This creates a cluster with a single node that belongs to the hot tier. In the data allocation section of both the warm and cold phases, you should see a notice like the following:
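
To verify which roles the running node actually has, you can query the nodes API from Console (a quick sanity check, not part of the original steps):

```
# Show only the roles of each node
GET _nodes?filter_path=nodes.*.roles
```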


By default, a node belongs to all tiers, in which case you should not see this notice. Test this by running:

```
yarn es snapshot --license=trial
```
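
To confirm that the node now carries every data tier role, so the warm and cold allocation notices no longer appear, you can list node roles in Console (again, a quick sanity check, not part of the original steps):

```
# The node.role column lists an abbreviation for each role the node holds
GET _cat/nodes?v&h=name,node.role
```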