kibana/x-pack/plugins/ingest_pipelines/README.md

# Ingest Node Pipelines UI
## Summary
The `ingest_pipelines` plugin provides Kibana support for [Elasticsearch's ingest nodes](https://www.elastic.co/guide/en/elasticsearch/reference/master/ingest.html). Please refer to the Elasticsearch documentation for more details.

This plugin allows Kibana to create, edit, clone, and delete ingest node pipelines. It also supports simulating a pipeline.

It requires a Basic license and the following cluster privileges: `manage_pipeline` and `cluster:monitor/nodes/info`.

---
## Development
A new app called Ingest Node Pipelines is registered in the Management section and follows a typical CRUD UI pattern. The client-side portion of this app lives in [public/application](public/application) and uses endpoints registered in [server/routes/api](server/routes/api). For more information on the pipeline processors editor component, check out the [component readme](public/application/components/pipeline_processors_editor/README.md).

See the [kibana contributing guide](https://github.com/elastic/kibana/blob/master/CONTRIBUTING.md) for instructions on setting up your development environment.
### Test coverage
The app has the following test coverage:

- API integration tests
- Smoke-level functional test
- Client-integration tests
### Quick steps for manual testing
You can run the following request in Console to create an ingest node pipeline:

```
PUT _ingest/pipeline/test_pipeline
{
  "description": "_description",
  "processors": [
    {
      "set": {
        "field": "field1",
        "value": "value1"
      }
    },
    {
      "rename": {
        "field": "dont_exist",
        "target_field": "field1",
        "ignore_failure": true
      }
    },
    {
      "rename": {
        "field": "foofield",
        "target_field": "new_field",
        "on_failure": [
          {
            "set": {
              "field": "field2",
              "value": "value2"
            }
          }
        ]
      }
    },
    {
      "drop": {
        "if": "false"
      }
    },
    {
      "drop": {
        "if": "true"
      }
    }
  ]
}
```
Then, go to the Ingest Node Pipelines UI to edit, delete, clone, or view details of the pipeline.

To simulate a pipeline, go to the "Edit" page of your pipeline. Click the "Add documents" link under the "Processors" section. You may add the following sample documents to test the pipeline:
```
// The first document in this example should trigger the on_failure processor in the pipeline, while the second one should succeed.
[
  {
    "_index": "my_index",
    "_id": "id1",
    "_source": {
      "foo": "bar"
    }
  },
  {
    "_index": "my_index",
    "_id": "id2",
    "_source": {
      "foo": "baz",
      "foofield": "bar"
    }
  }
]
```
Alternatively, you can add a document from an existing index, or create some sample data of your own. Afterward, click the "Run the pipeline" button to view the output.
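For reference, the behavior of the pipeline above can be approximated with a short Python sketch. This is a simplified illustration only (the real processor semantics are implemented by Elasticsearch's ingest node), but it shows why the first sample document triggers the `on_failure` handler while the second one does not:

```python
def run_test_pipeline(source):
    """Rough local emulation of test_pipeline. Returns (source, dropped)."""
    src = dict(source)

    # set: always writes field1 = "value1".
    src["field1"] = "value1"

    # rename dont_exist -> field1: the source field is missing, but
    # "ignore_failure": true silences the error and the pipeline continues.
    if "dont_exist" in src:
        src["field1"] = src.pop("dont_exist")

    # rename foofield -> new_field: if foofield is missing, the on_failure
    # handler runs instead and sets field2 = "value2".
    if "foofield" in src:
        src["new_field"] = src.pop("foofield")
    else:
        src["field2"] = "value2"

    # drop with "if": "false" never fires; drop with "if": "true" always
    # fires, so every document is ultimately dropped by this pipeline.
    return src, True


# First sample document: no "foofield", so the on_failure handler fires.
doc1, _ = run_test_pipeline({"foo": "bar"})
assert doc1["field2"] == "value2"

# Second sample document: "foofield" exists, so the rename succeeds.
doc2, _ = run_test_pipeline({"foo": "baz", "foofield": "bar"})
assert doc2["new_field"] == "bar" and "field2" not in doc2
```

Note that because the final `drop` processor uses `"if": "true"`, both sample documents end up dropped, which is worth keeping in mind when reading the simulated output.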