Ingest Node Pipelines UI

Summary

The ingest_pipelines plugin provides Kibana support for Elasticsearch's ingest nodes. Please refer to the Elasticsearch documentation for more details.

This plugin allows Kibana to create, edit, clone, and delete ingest node pipelines. It also supports simulating a pipeline against sample documents.

It requires a Basic license and the following cluster privileges: manage_pipeline and cluster:monitor/nodes/info.
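To try the UI with a non-superuser account, you can create a role that grants these privileges. The following Console request is a minimal sketch: the role name is made up, and the built-in monitor cluster privilege is used here because it covers the cluster:monitor/nodes/info action. Depending on your setup, the user may also need a role granting access to Kibana itself.

PUT _security/role/ingest_pipelines_manager
{
  "cluster": ["manage_pipeline", "monitor"]
}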


Development

A new app called Ingest Node Pipelines is registered in the Management section and follows a typical CRUD UI pattern. The client-side portion of this app lives in public/application and uses endpoints registered in server/routes/api. For more information on the pipeline processors editor component, check out the component readme.

See the Kibana contributing guide for instructions on setting up your development environment.

Test coverage

The app has the following test coverage:

  • API integration tests
  • Smoke-level functional test
  • Client-integration tests

Quick steps for manual testing

You can run the following request in Console to create an ingest node pipeline:

PUT _ingest/pipeline/test_pipeline
{
  "description": "_description",
  "processors": [
    {
      "set": {
        "field": "field1",
        "value": "value1"
      }
    },
    {
      "rename": {
        "field": "dont_exist",
        "target_field": "field1",
        "ignore_failure": true
      }
    },
    {
      "rename": {
        "field": "foofield",
        "target_field": "new_field",
        "on_failure": [
          {
            "set": {
              "field": "field2",
              "value": "value2"
            }
          }
        ]
      }
    },
    {
      "drop": {
        "if": "false"
      }
    },
    {
      "drop": {
        "if": "true"
      }
    }
  ]
}
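To confirm from Console that the pipeline was created, you can fetch it by name:

GET _ingest/pipeline/test_pipeline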

Then, go to the Ingest Node Pipelines UI to edit, delete, clone, or view details of the pipeline.

To simulate a pipeline, go to the "Edit" page of your pipeline. Click the "Add documents" link under the "Processors" section. You may add the following sample documents to test the pipeline:

// The first document in this example should trigger the on_failure processor in the pipeline, while the second one should succeed.
[
  {
    "_index": "my_index",
    "_id": "id1",
    "_source": {
      "foo": "bar"
    }
  },
  {
    "_index": "my_index",
    "_id": "id2",
    "_source": {
      "foo": "baz",
      "foofield": "bar"
    }
  }
]

Alternatively, you can add a document from an existing index, or create some sample data of your own. Afterward, click the "Run the pipeline" button to view the output.
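The UI's simulation can also be reproduced directly in Console with Elasticsearch's simulate pipeline API. The request below reuses the pipeline and sample documents from above; append ?verbose=true to see the outcome of each individual processor:

POST _ingest/pipeline/test_pipeline/_simulate
{
  "docs": [
    {
      "_index": "my_index",
      "_id": "id1",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_index": "my_index",
      "_id": "id2",
      "_source": {
        "foo": "baz",
        "foofield": "bar"
      }
    }
  ]
}

When you are finished testing, clean up by deleting the pipeline:

DELETE _ingest/pipeline/test_pipeline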