# Index Lifecycle Management

## Testing

### Quick steps for testing ILM in Index Management

You can test that the Frozen badge, phase filtering, and lifecycle information are surfaced in Index Management by running this series of requests in Console:

```
PUT /_ilm/policy/full
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_docs": 1
          }
        }
      },
      "warm": {
        "min_age": "15s",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1
          },
          "shrink": {
            "number_of_shards": 1
          }
        }
      },
      "cold": {
        "min_age": "30s",
        "actions": {
          "freeze": {}
        }
      },
      "delete": {
        "min_age": "1d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

PUT _template/test
{
  "index_patterns": ["test-*"],
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 0,
    "index.lifecycle.name": "full",
    "index.lifecycle.rollover_alias": "test-alias"
  }
}

PUT /test-000001
{
  "aliases": {
    "test-alias": {
      "is_write_index": true
    }
  }
}

PUT test-alias/_doc/1
{
  "a": "a"
}

PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.xpack.core.indexlifecycle": "TRACE",
    "logger.org.elasticsearch.xpack.indexlifecycle": "TRACE",
    "logger.org.elasticsearch.xpack.core.ilm": "TRACE",
    "logger.org.elasticsearch.xpack.ilm": "TRACE",
    "indices.lifecycle.poll_interval": "10s"
  }
}
```
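
The TRACE loggers surface ILM's step-by-step decisions in the Elasticsearch logs, and the 10s poll interval (the default is 10 minutes) makes ILM evaluate phase conditions often enough to watch the test indices move through the policy.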

Then go into Index Management and, after about 1 minute, you'll see a frozen index and you'll be able to filter by the various lifecycle phases and statuses.
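
If you prefer to watch progress from Console instead of the UI, the ILM explain API reports each index's current phase, action, and step; the `test-*` pattern matches the indices created above:

```
GET test-*/_ilm/explain
```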


Next, add the Kibana sample data and attach the full policy to the index that gets created. After about a minute, there should be an error on this index. When you click the index, you'll see ILM information in the detail panel, as well as the error. You can dismiss the error by clicking Manage > Retry lifecycle step.
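
Both steps can also be done from Console. This is a sketch assuming you added the ecommerce sample data set, which creates the kibana_sample_data_ecommerce index; substitute the index name for whichever sample data set you actually installed:

```
PUT kibana_sample_data_ecommerce/_settings
{
  "index.lifecycle.name": "full"
}

POST kibana_sample_data_ecommerce/_ilm/retry
```

Note that the retry request only succeeds while the index is in the ERROR step, which is the state described above.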


## Data tier notifications

When creating or editing an ILM policy, the UI should notify users that under certain conditions their data will not be moved to the tier corresponding to a phase, for instance when a cluster has only hot-tier nodes. We test the UI against this cluster state by starting an ES node with the data_hot role, using this command:

```
yarn es snapshot --license=trial -E node.roles=data_hot,master,data_content
```

This creates a cluster with a single node that belongs to the hot tier. In the data allocation section of both the warm and cold phases, you should see a notice that data will not be allocated to the corresponding tier.
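
To double-check which roles the node actually has, the cat nodes API gives a quick answer from Console (node.role is the built-in column of one-letter role abbreviations):

```
GET _cat/nodes?v&h=name,node.role
```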


By default a node belongs to all tiers, in which case you should not see this notice. Test this by running:

```
yarn es snapshot --license=trial
```
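
Running the same GET _cat/nodes?v&h=name,node.role check against this cluster should show the node carrying the full set of roles, and the warm and cold phase notices should no longer appear.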