[DOCS] Updates screenshots in Dev Tools docs (#105859)

* [DOCS] Updates screenshots in Dev Tools docs

* [DOCS] Combines all Search Profiler content in one doc
gchaps 2021-07-20 14:33:06 -07:00 committed by GitHub
parent 9c25f8e8aa
commit 646df22ede
21 changed files with 360 additions and 370 deletions


@@ -1,7 +1,7 @@
[[console-kibana]]
== Console
Console enables you to interact with the REST API of {es}. You can:
*Console* enables you to interact with the REST API of {es}. You can:
* Send requests to {es} and view the responses
* View API documentation
@@ -12,13 +12,13 @@ To get started, open the main menu, click *Dev Tools*, then click *Console*.
[role="screenshot"]
image::dev-tools/console/images/console.png["Console"]
NOTE: You are unable to interact with the REST API of {kib} with the Console.
NOTE: You cannot interact with the REST API of {kib} with *Console*.
[float]
[[console-api]]
=== Write requests
Console understands commands in a cURL-like syntax.
*Console* understands commands in a cURL-like syntax.
For example, the following is a `GET` request to the {es} `_search` API.
[source,js]
@@ -43,8 +43,8 @@ curl -XGET "http://localhost:9200/_search" -d'
}'
----------------------------------
When you paste the command into Console, {kib} automatically converts it
to Console syntax. Alternatively, if you want to see Console syntax in cURL,
When you paste the command into *Console*, {kib} automatically converts it
to *Console* syntax. Alternatively, if you want to see *Console* syntax in cURL,
click the action icon (image:dev-tools/console/images/wrench.png[]) and select *Copy as cURL*.
Once copied, you must provide the username and password
for the calls to work from external environments.
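
For example, the copied output might look like this (an illustrative sketch; the
request body depends on your query, and `elastic:your_password` is a placeholder
supplied via curl's `-u` flag):

[source,js]
----------------------------------
curl -XGET "http://localhost:9200/_search" -u elastic:your_password -d'
{
  "query": {
    "match_all": {}
  }
}'
----------------------------------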
@@ -53,7 +53,7 @@ for the calls to work from external environments.
[[console-autocomplete]]
==== Autocomplete
When you're typing a command, Console makes context-sensitive suggestions.
When you're typing a command, *Console* makes context-sensitive suggestions.
These suggestions show you the parameters for each API and speed up your typing.
To configure your preferences for autocomplete, go to
<<configuring-console, Settings>>.
@@ -69,15 +69,16 @@ and then select *Auto indent*.
For example, you might have a request formatted like this:
[role="screenshot"]
image::dev-tools/console/images/copy-curl.png["Console close-up"]
image::dev-tools/console/images/copy-curl.png["Console close-up", width=75%]
Console adjusts the JSON body of the request to apply the indents.
*Console* adjusts the JSON body of the request to apply the indents.
[role="screenshot"]
image::dev-tools/console/images/request.png["Console close-up"]
image::dev-tools/console/images/request.png["Console close-up", width=75%]
If you select *Auto indent* on a request that is already well formatted,
Console collapses the request body to a single line per document.
*Console* collapses the request body to a single line per document.
This is helpful when working with the {es} {ref}/docs-bulk.html[bulk APIs].
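
For instance, after collapsing, a bulk request keeps each action and document on
its own line, which is the format the bulk API expects (an illustrative sketch
with a hypothetical index name):

[source,js]
----------------------------------
POST _bulk
{"index":{"_index":"my-index"}}
{"message":"first document"}
{"index":{"_index":"my-index"}}
{"message":"second document"}
----------------------------------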
@@ -90,8 +91,9 @@ When you're ready to submit the request to {es}, click the
green triangle.
You can select multiple requests and submit them together.
Console sends the requests to {es} one by one and shows the output
in the response pane. Submitting multiple request is helpful when you're debugging an issue or trying query
*Console* sends the requests to {es} one by one and shows the output
in the response pane. Submitting multiple requests is helpful
when you're debugging an issue or trying query
combinations in multiple scenarios.
@@ -107,7 +109,7 @@ the action icon (image:dev-tools/console/images/wrench.png[]) and select
[[console-history]]
=== Get your request history
Console maintains a list of the last 500 requests that {es} successfully executed.
*Console* maintains a list of the last 500 requests that {es} successfully executed.
To view your most recent requests, click *History*. If you select a request
and click *Apply*, {kib} adds it to the editor at the current cursor position.
@@ -115,11 +117,11 @@ and click *Apply*, {kib} adds it to the editor at the current cursor position.
[[configuring-console]]
=== Configure Console settings
You can configure the Console font size, JSON syntax,
You can configure the *Console* font size, JSON syntax,
and autocomplete suggestions in *Settings*.
[role="screenshot"]
image::dev-tools/console/images/console-settings.png["Console Settings"]
image::dev-tools/console/images/console-settings.png["Console Settings", width=60%]
[float]
[[keyboard-shortcuts]]
@@ -132,7 +134,7 @@ shortcuts, click *Help*.
[[console-settings]]
=== Disable Console
If you don't want to use Console, you can disable it by setting `console.enabled`
If you don't want to use *Console*, you can disable it by setting `console.enabled`
to `false` in your `kibana.yml` configuration file. Changing this setting
causes the server to regenerate assets on the next startup,
which might cause a delay before pages start being served.
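
For example, in `kibana.yml`:

[source,yaml]
----------------------------------
console.enabled: false
----------------------------------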


@@ -1,19 +1,19 @@
[role="xpack"]
[[xpack-grokdebugger]]
== Debugging grok expressions
== Debug grok expressions
You can build and debug grok patterns in the {kib} *Grok Debugger*
before you use them in your data processing pipelines. Grok is a pattern
before you use them in your data processing pipelines. Grok is a pattern
matching syntax that you can use to parse arbitrary text and
structure it. Grok is good for parsing syslog, apache, and other
webserver logs, mysql logs, and in general, any log format that is
written for human consumption.
written for human consumption.
Grok patterns are supported in the ingest node
{ref}/grok-processor.html[grok processor] and the Logstash
{logstash-ref}/plugins-filters-grok.html[grok filter]. See
{logstash-ref}/plugins-filters-grok.html[grok filter]. See
{logstash-ref}/plugins-filters-grok.html#_grok_basics[grok basics]
for more information on the syntax for a grok pattern.
for more information on the syntax for a grok pattern.
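
For example, a pattern assembled from the shipped dictionary can pull a
timestamp, log level, and message out of a log line (an illustrative sketch,
not part of the original walkthrough):

[source,txt]
----------------------------------
# Sample log line
2021-07-20T14:33:06 ERROR disk usage exceeds threshold

# Grok pattern
%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:message}
----------------------------------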
The Elastic Stack ships
with more than 120 reusable grok patterns. See
@@ -27,10 +27,10 @@ in ingest node and Logstash.
[float]
[[grokdebugger-getting-started]]
=== Getting started with the Grok Debugger
=== Get started
This example walks you through using the *Grok Debugger*. This tool
is automatically enabled in {kib}.
is automatically enabled in {kib}.
NOTE: If you're using {stack-security-features}, you must have the `manage_pipeline`
permission to use the Grok Debugger.
@@ -66,12 +66,12 @@ image::dev-tools/grokdebugger/images/grok-debugger-overview.png["Grok Debugger"]
[float]
[[grokdebugger-custom-patterns]]
=== Testing custom patterns
=== Test custom patterns
If the default grok pattern dictionary doesn't contain the patterns you need,
you can define, test, and debug custom patterns using the Grok Debugger.
you can define, test, and debug custom patterns using the *Grok Debugger*.
Custom patterns that you enter in the Grok Debugger are not saved. Custom patterns
Custom patterns that you enter in the *Grok Debugger* are not saved. Custom patterns
are only available for the current debugging session and have no side effects.
Follow this example to define a custom pattern.
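
As a quick sketch of what a custom pattern definition looks like (hypothetical
pattern name and regex), you would enter a line such as the following in the
custom patterns area and then reference it in your grok expression as
`%{POSTFIX_QUEUEID:queue_id}`:

[source,txt]
----------------------------------
POSTFIX_QUEUEID [0-9A-F]{10,11}
----------------------------------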


@@ -4,7 +4,7 @@
beta::[]
The Painless Lab is an interactive code editor that lets you test and
The *Painless Lab* is an interactive code editor that lets you test and
debug {ref}/modules-scripting-painless.html[Painless scripts] in real-time.
You can use the Painless scripting
language to create <<scripted-fields, {kib} scripted fields>>,
@@ -12,6 +12,7 @@ process {ref}/docs-reindex.html[reindexed data], define complex
<<watcher-create-advanced-watch, Watcher conditions>>,
and work with data in other contexts.
To get started, open the main menu, click *Dev Tools*, then click *Painless Lab*.
To get started, open the main menu, click *Dev Tools*, and then click *Painless Lab*.
[role="screenshot"]
image::dev-tools/painlesslab/images/painless-lab.png[Painless Lab]
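
For example, you might paste a small script like the following into the editor
to experiment with Painless control flow (an illustrative sketch):

[source,js]
----------------------------------
// Sum the integers from 0 through 9
def total = 0;
for (int i = 0; i < 10; i++) {
  total += i;
}
return total;
----------------------------------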


@@ -1,49 +0,0 @@
[role="xpack"]
[[profiler-getting-started]]
=== Getting Started
The {searchprofiler} is automatically enabled in {kib}. Open the main menu, click *Dev Tools*, then click *{searchprofiler}*
to get started.
{searchprofiler} displays the names of the indices searched, the shards in each index,
and how long it took for the query to complete. To try it out, replace the default `match_all` query
with the query you want to profile and click *Profile*.
The following example shows the results of profiling the `match_all` query.
If we take a closer look at the information for the `.kibana_1` sample index, the
Cumulative Time field shows us that the query took 1.279ms to execute.
[role="screenshot"]
image::dev-tools/searchprofiler/images/overview.png["{searchprofiler} example"]
[NOTE]
====
The Cumulative Time metric is the sum of individual shard times.
It is not necessarily the actual time it took for the query to return (wall clock time).
Because shards might be processed in parallel on multiple nodes, the wall clock time can
be significantly less than the Cumulative Time. However, if shards are colocated on the
same node and executed serially, the wall clock time is closer to the Cumulative Time.
While the Cumulative Time metric is useful for comparing the performance of your
indices and shards, it doesn't necessarily represent the actual physical query times.
====
You can select the name of the shard and then click *View details* to see more profiling information,
including details about the query component(s) that ran on the shard, as well as the timing
breakdown of low-level Lucene methods. For more information, see {ref}/search-profile.html#profiling-queries[Profiling queries].
[float]
=== Index and type filtering
By default, all queries executed by the {searchprofiler} are sent
to `GET /_search`. It searches across your entire cluster (all indices, all types).
If you need to query a specific index or type (or several), you can use the Index
and Type filters.
In the following example, the query is executed against the indices `test` and `kibana_1`
and the type `my_type`. This is equivalent to making a request to `GET /test,kibana_1/my_type/_search`.
[role="screenshot"]
image::dev-tools/searchprofiler/images/filter.png["Filtering by index and type"]


@@ -1,20 +0,0 @@
[role="xpack"]
[[xpack-profiler]]
= Profiling queries and aggregations
[partintro]
--
{es} has a powerful {ref}/search-profile.html[Profile API] which can be used to inspect and analyze
your search queries. The response returns a large JSON blob, which can be
difficult to analyze manually.
The {searchprofiler} tool can transform this JSON output
into a visualization that is easy to navigate, allowing you to diagnose and debug
poorly performing queries much faster.
[role="screenshot"]
image::dev-tools/searchprofiler/images/overview.png["{searchprofiler} Visualization"]
--
include::getting-started.asciidoc[]


@@ -1,20 +1,324 @@
[role="xpack"]
[[xpack-profiler]]
== Profiling queries and aggregations
== Profile queries and aggregations
{es} has a powerful {ref}/search-profile.html[Profile API] which can be used to inspect and analyze
{es} has a powerful {ref}/search-profile.html[Profile API] that you can use to inspect and analyze
your search queries. The response returns a large JSON blob, which can be
difficult to analyze manually.
The {searchprofiler} tool can transform this JSON output
The *{searchprofiler}* tool can transform this JSON output
into a visualization that is easy to navigate, allowing you to diagnose and debug
poorly performing queries much faster.
[float]
[[search-profiler-getting-started]]
=== Get started
image::dev-tools/searchprofiler/images/overview.png["{searchprofiler} Visualization"]
*{searchprofiler}* is automatically enabled in {kib}. Open the main menu,
click *Dev Tools*, and then click *{searchprofiler}*
to get started.
include::getting-started.asciidoc[]
*{searchprofiler}* displays the names of the indices searched, the shards in each index,
and how long it took for the query to complete. To try it out, replace the default `match_all` query
with the query you want to profile, and then click *Profile*.
include::more-complicated.asciidoc[]
The following example shows the results of profiling the `match_all` query.
If you take a closer look at the information for the `.security-7` sample index, the
*Cumulative time* field shows you that the query took 0.028ms to execute.
include::pasting.asciidoc[]
[role="screenshot"]
image::dev-tools/searchprofiler/images/overview.png["{searchprofiler} visualization"]
[NOTE]
====
The cumulative time metric is the sum of individual shard times.
It is not necessarily the actual time it took for the query to return (wall clock time).
Because shards might be processed in parallel on multiple nodes, the wall clock time can
be significantly less than the cumulative time. However, if shards are colocated on the
same node and executed serially, the wall clock time is closer to the cumulative time.
While the cumulative time metric is useful for comparing the performance of your
indices and shards, it doesn't necessarily represent the actual physical query times.
====
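
As a concrete illustration (hypothetical numbers): if a query fans out to five
shards that each take 10ms, the cumulative time is 50ms, but if all five shards
execute in parallel on separate nodes, the wall clock time can be close to 10ms.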
To see more profiling information, click *View details*. You'll
see details about the query components that ran on the shard and the timing
breakdown of low-level methods. For more information, refer to {ref}/search-profile.html#profiling-queries[Profiling queries].
[float]
=== Filter for an index or type
By default, all queries executed by the *{searchprofiler}* are sent
to `GET /_search`. It searches across your entire cluster (all indices, all types).
To query a specific index or type, you can use the *Index* filter.
In the following example, the query is executed against the indices `.security-7` and `kibana_sample_data_ecommerce`.
This is equivalent to making a request to `GET /.security-7,kibana_sample_data_ecommerce/_search`.
[role="screenshot"]
image::dev-tools/searchprofiler/images/filter.png["Filtering by index and type"]
[[profile-complicated-query]]
[float]
=== Profile a more complicated query
To understand how the query trees are displayed inside the *{searchprofiler}*,
take a look at a more complicated query.
. Index the following data via *Console*:
+
--
[source,js]
--------------------------------------------------
POST test/_bulk
{"index":{}}
{"name":"aaron","age":23,"hair":"brown"}
{"index":{}}
{"name":"sue","age":19,"hair":"red"}
{"index":{}}
{"name":"sally","age":19,"hair":"blonde"}
{"index":{}}
{"name":"george","age":19,"hair":"blonde"}
{"index":{}}
{"name":"fred","age":69,"hair":"blonde"}
--------------------------------------------------
// CONSOLE
--
. From the *{searchprofiler}*, enter *test* in the *Index* field to restrict profiled
queries to the `test` index.
. Replace the default `match_all` query in the query editor with a query that has two sub-query
components and includes a simple aggregation:
+
--
[source,js]
--------------------------------------------------
{
"query": {
"bool": {
"should": [
{
"match": {
"name": "fred"
}
},
{
"terms": {
"name": [
"sue",
"sally"
]
}
}
]
}
},
"aggs": {
"stats": {
"stats": {
"field": "age"
}
}
}
}
--------------------------------------------------
// NOTCONSOLE
--
. Click *Profile* to profile the query and visualize the results.
+
[role="screenshot"]
image::dev-tools/searchprofiler/images/gs8.png["Profiling the more complicated query"]
+
- The top `BooleanQuery` component corresponds to the `bool` in the query.
- The second `BooleanQuery` corresponds to the `terms` query, which is internally
converted to a `Boolean` of `should` clauses. It has two child queries that correspond
to "sally" and "sue" from the terms query.
- The `TermQuery` that's labeled with "name:fred" corresponds to `match: fred` in the query.
+
If you look at the time columns, you can see that *Self time* and *Total time* are no longer
identical on all the rows. *Self time* represents how long the query component took to execute.
*Total time* is the time a query component and all its children took to execute.
Therefore, queries like the Boolean queries often have a larger total time than self time.
. Click *Aggregation Profile* to view aggregation profiling statistics.
+
This query includes a `stats` agg on the `"age"` field.
The *Aggregation Profile* tab is only enabled when the query being profiled contains an aggregation.
. Click *View details* to view the timing breakdown.
+
[role="screenshot"]
image::dev-tools/searchprofiler/images/gs10.png["Drilling into the first shard's details"]
+
For more information about how the *{searchprofiler}* works, how timings are calculated, and
how to interpret various results, see
{ref}/search-profile.html#profiling-queries[Profiling queries].
[[profiler-render-JSON]]
[float]
=== Render pre-captured profiler JSON
The *{searchprofiler}* queries the cluster that the {kib} node is attached to.
It does this by executing the query against the cluster and collecting the results.
Sometimes you might want to investigate performance problems that are temporal in nature.
For example, a query might only be slow at certain times of day when many customers are using your system.
You can set up a process to automatically profile slow queries when they occur and then
save those profile responses for later analysis.
The *{searchprofiler}* supports this workflow by allowing you to paste the
pre-captured JSON in the query editor. The *{searchprofiler}* will detect that you
have entered a JSON response (rather than a query) and will render just the visualization,
rather than querying the cluster.
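
One way to capture such a response (a sketch that assumes a hypothetical index
name) is to run the slow query through *Console* with the Profile API enabled
and save the full JSON response:

[source,js]
----------------------------------
GET /my-index/_search
{
  "profile": true,
  "query": {
    "match": { "name": "fred" }
  }
}
----------------------------------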
To see how this works, copy and paste the following profile response into the
query editor and click *Profile*.
[source,js]
--------------------------------------------------
{
"took": 3,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 1.3862944,
"hits": [
{
"_index": "test",
"_type": "test",
"_id": "AVi3aRDmGKWpaS38wV57",
"_score": 1.3862944,
"_source": {
"name": "fred",
"age": 69,
"hair": "blonde"
}
}
]
},
"profile": {
"shards": [
{
"id": "[O-l25nM4QN6Z68UA5rUYqQ][test][0]",
"searches": [
{
"query": [
{
"type": "BooleanQuery",
"description": "+name:fred #(ConstantScore(*:*))^0.0",
"time": "0.5884370000ms",
"breakdown": {
"score": 7243,
"build_scorer_count": 1,
"match_count": 0,
"create_weight": 196239,
"next_doc": 9851,
"match": 0,
"create_weight_count": 1,
"next_doc_count": 2,
"score_count": 1,
"build_scorer": 375099,
"advance": 0,
"advance_count": 0
},
"children": [
{
"type": "TermQuery",
"description": "name:fred",
"time": "0.3016880000ms",
"breakdown": {
"score": 4218,
"build_scorer_count": 1,
"match_count": 0,
"create_weight": 132425,
"next_doc": 2196,
"match": 0,
"create_weight_count": 1,
"next_doc_count": 2,
"score_count": 1,
"build_scorer": 162844,
"advance": 0,
"advance_count": 0
}
},
{
"type": "BoostQuery",
"description": "(ConstantScore(*:*))^0.0",
"time": "0.1223030000ms",
"breakdown": {
"score": 0,
"build_scorer_count": 1,
"match_count": 0,
"create_weight": 17366,
"next_doc": 0,
"match": 0,
"create_weight_count": 1,
"next_doc_count": 0,
"score_count": 0,
"build_scorer": 102329,
"advance": 2604,
"advance_count": 2
},
"children": [
{
"type": "MatchAllDocsQuery",
"description": "*:*",
"time": "0.03307600000ms",
"breakdown": {
"score": 0,
"build_scorer_count": 1,
"match_count": 0,
"create_weight": 6068,
"next_doc": 0,
"match": 0,
"create_weight_count": 1,
"next_doc_count": 0,
"score_count": 0,
"build_scorer": 25615,
"advance": 1389,
"advance_count": 2
}
}
]
}
]
}
],
"rewrite_time": 168640,
"collector": [
{
"name": "CancellableCollector",
"reason": "search_cancelled",
"time": "0.02952900000ms",
"children": [
{
"name": "SimpleTopScoreDocCollector",
"reason": "search_top_hits",
"time": "0.01931700000ms"
}
]
}
]
}
],
"aggregations": []
}
]
}
}
--------------------------------------------------
// NOTCONSOLE
Your output should look similar to this:
[role="screenshot"]
image::dev-tools/searchprofiler/images/search-profiler-json.png["Rendering pre-captured profiler JSON"]


@@ -1,104 +0,0 @@
[role="xpack"]
[[profiler-complicated]]
=== Profiling a more complicated query
To understand how the query trees are displayed inside the {searchprofiler},
let's look at a more complicated query.
. Index the following data via *Console*:
+
--
[source,js]
--------------------------------------------------
POST test/_bulk
{"index":{}}
{"name":"aaron","age":23,"hair":"brown"}
{"index":{}}
{"name":"sue","age":19,"hair":"red"}
{"index":{}}
{"name":"sally","age":19,"hair":"blonde"}
{"index":{}}
{"name":"george","age":19,"hair":"blonde"}
{"index":{}}
{"name":"fred","age":69,"hair":"blonde"}
--------------------------------------------------
// CONSOLE
--
. From the {searchprofiler}, enter "test" in the *Index* field to restrict profiled
queries to the `test` index.
. Replace the default `match_all` query in the query editor with a query that has two sub-query
components and includes a simple aggregation:
+
--
[source,js]
--------------------------------------------------
{
"query": {
"bool": {
"should": [
{
"match": {
"name": "fred"
}
},
{
"terms": {
"name": [
"sue",
"sally"
]
}
}
]
}
},
"aggs": {
"stats": {
"stats": {
"field": "price"
}
}
}
}
--------------------------------------------------
// NOTCONSOLE
--
. Click *Profile* to profile the query and visualize the results.
. Select the shard to view the query details.
+
[role="screenshot"]
image::dev-tools/searchprofiler/images/gs8.png["Profiling the more complicated query"]
The detail view contains a row for each query component:
- The top-level `BooleanQuery` component corresponds to the bool in the query.
- The second `BooleanQuery` corresponds to the terms query, which is internally
converted to a `Boolean` of should clauses. It has two child queries that correspond
to "sue" and "sally" from the terms query.
- The `TermQuery` that's labeled with "name:fred" corresponds to match: fred in the query.
If you look at the time columns, you can see that "Self time" and "Total time" are no longer
identical on all the rows. Self time represents how long the query component took to execute.
Total time is the time a query component and all its children took to execute.
Therefore, queries like the Boolean queries often have a larger total time than self time.
==== Aggregations
This particular query also includes an aggregation (a `stats` agg on the `"age"` field).
Click *Aggregation Profile* to view aggregation profiling statistics (this tab
is only enabled if the query being profiled contains an aggregation).
Select the name of the shard to view the aggregation details and timing breakdown.
[role="screenshot"]
image::dev-tools/searchprofiler/images/gs10.png["Drilling into the first shard's details"]
For more information about how the {searchprofiler} works, how timings are calculated, and
how to interpret various results, see
{ref}/search-profile.html#profiling-queries[Profiling queries].


@@ -1,161 +0,0 @@
[role="xpack"]
[[profiler-render]]
=== Rendering pre-captured profiler JSON
The {searchprofiler} queries the cluster that the Kibana node is attached to.
It does this by executing the query against the cluster and collecting the results.
But sometimes you may want to investigate performance problems that are temporal in nature.
For example, a query might only be slow at certain times of day when many customers are using your system.
You can set up a process to automatically profile slow queries when they occur and then
save those profile responses for later analysis.
The {searchprofiler} supports this workflow by allowing you to paste the
pre-captured JSON in the query editor. The {searchprofiler} will detect that you
have entered a JSON response (rather than a query) and will just render the visualization,
rather than querying the cluster.
To see how this works, copy and paste the following profile response into the
query editor and click *Profile*.
[source,js]
--------------------------------------------------
{
"took": 3,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 1.3862944,
"hits": [
{
"_index": "test",
"_type": "test",
"_id": "AVi3aRDmGKWpaS38wV57",
"_score": 1.3862944,
"_source": {
"name": "fred",
"age": 69,
"hair": "blonde"
}
}
]
},
"profile": {
"shards": [
{
"id": "[O-l25nM4QN6Z68UA5rUYqQ][test][0]",
"searches": [
{
"query": [
{
"type": "BooleanQuery",
"description": "+name:fred #(ConstantScore(*:*))^0.0",
"time": "0.5884370000ms",
"breakdown": {
"score": 7243,
"build_scorer_count": 1,
"match_count": 0,
"create_weight": 196239,
"next_doc": 9851,
"match": 0,
"create_weight_count": 1,
"next_doc_count": 2,
"score_count": 1,
"build_scorer": 375099,
"advance": 0,
"advance_count": 0
},
"children": [
{
"type": "TermQuery",
"description": "name:fred",
"time": "0.3016880000ms",
"breakdown": {
"score": 4218,
"build_scorer_count": 1,
"match_count": 0,
"create_weight": 132425,
"next_doc": 2196,
"match": 0,
"create_weight_count": 1,
"next_doc_count": 2,
"score_count": 1,
"build_scorer": 162844,
"advance": 0,
"advance_count": 0
}
},
{
"type": "BoostQuery",
"description": "(ConstantScore(*:*))^0.0",
"time": "0.1223030000ms",
"breakdown": {
"score": 0,
"build_scorer_count": 1,
"match_count": 0,
"create_weight": 17366,
"next_doc": 0,
"match": 0,
"create_weight_count": 1,
"next_doc_count": 0,
"score_count": 0,
"build_scorer": 102329,
"advance": 2604,
"advance_count": 2
},
"children": [
{
"type": "MatchAllDocsQuery",
"description": "*:*",
"time": "0.03307600000ms",
"breakdown": {
"score": 0,
"build_scorer_count": 1,
"match_count": 0,
"create_weight": 6068,
"next_doc": 0,
"match": 0,
"create_weight_count": 1,
"next_doc_count": 0,
"score_count": 0,
"build_scorer": 25615,
"advance": 1389,
"advance_count": 2
}
}
]
}
]
}
],
"rewrite_time": 168640,
"collector": [
{
"name": "CancellableCollector",
"reason": "search_cancelled",
"time": "0.02952900000ms",
"children": [
{
"name": "SimpleTopScoreDocCollector",
"reason": "search_top_hits",
"time": "0.01931700000ms"
}
]
}
]
}
],
"aggregations": []
}
]
}
}
--------------------------------------------------
// NOTCONSOLE
image::dev-tools/searchprofiler/images/pasting.png["Visualizing pre-collected responses"]


@@ -316,4 +316,21 @@ This content has moved. Refer to <<embedded-content-authentication>> and <<embed
[role="exclude",id="reporting-troubleshooting-system-dependencies"]
== System dependencies
This content has moved. Refer to <<install-reporting-packages>>.
This content has moved. Refer to <<install-reporting-packages>>.
[role="exclude",id="profiler-getting-started"]
== Getting started with Search Profiler
This content has moved. Refer to <<xpack-profiler>>.
[role="exclude",id="profiler-complicated"]
== Profiling a more complicated query
This content has moved. Refer to <<xpack-profiler>>.
[role="exclude",id="profiler-render"]
== Rendering pre-captured profiler JSON
This content has moved. Refer to <<xpack-profiler>>.