* Add Index Management README and quick testing steps for data streams.
* Surface data stream health in Data Streams tab (table and detail panel).
- Extract out DataHealth component for use in both Data Streams and Indices tabs.
- Refactor detail panel to use a data structure and algorithm to build the component.
- Refactor detail panel to use i18n.translate instead of FormattedMessage.
* Render index template name and index lifecycle policy name in the detail panel.
* Render storage size and max timestamp information in table and detail panel.
- Add 'Include stats' switch.
- Add humanizeTimeStamp service, localized to data streams.
* [Setup] DRY out stripTrailingSlash helper
- DRYs out repeated code
- This will be used by an upcoming server/ endpoint change, which is why it lives in common
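A minimal sketch of what the shared helper might look like (the actual implementation lives in common/ so both public/ and server/ code can import it; the exact signature is an assumption):

```typescript
// Shared URL helper: drop a single trailing slash so paths can be joined safely.
export const stripTrailingSlash = (url: string): string =>
  url.endsWith('/') ? url.slice(0, -1) : url;
```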
* [Setup] DRY out initial app data types to common/types
- In preparation for upcoming server logic that will need to reuse these types
+ DRY out and clean up workplace_search types
- remove unused supportEligible
- remove currentUser - unneeded in Kibana
* Update callEnterpriseSearchConfigAPI to parse and fetch new expected data
* Remove /public_url API for /config_data
* Remove getPublicUrl in favor of directly calling the new /config_data API from public/plugin
+ set returned initialData in this.data
* Set up product apps to be passed initial data as props
* Fix for Kea/redux state not resetting between AS<->WS nav
- resetContext at the top level is only called once, on first plugin load, and never again, so navigating between WS and AS crashed when both used Kea - this fixes the issue
- moves redux Provider to top level app as well
* Add very basic Kea logic file to App Search
* Finish AppSearchConfigured tests & set up kea+useEffect mocks
* [Cleanup] DRY out repeated mock initialAppData to a reusable defaults constant
* Server side changes
- removed console_legacy plugin!
- added new es_config endpoint that returns server side es config
at the moment this is just the first value in hosts
- Slight refactor to how routes are registered to bring them more
in line with other ES UI plugins
* Client side update
- Updated the client to not get es host from injected metadata.
Instead use the new endpoint created server side that returns
this value
- Added a small README.md regarding the hooks lib and need to
refactor use of jQuery in console
- Write code to init the es host value on the client once at start
up in a non-blocking way. If this fails we just use the default
value of http://localhost:9200 as this powers non-essential
console functionality (i.e., copy as cURL).
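The non-blocking init with a default fallback could look roughly like this (names and the config-fetch shape are illustrative, not the actual Console plugin code):

```typescript
// Default used when the es_config request fails; this only powers
// non-essential console features such as "copy as cURL".
const DEFAULT_ES_HOST = 'http://localhost:9200';

let esHost = DEFAULT_ES_HOST;

// `fetchEsConfig` stands in for the real call to the new server-side endpoint.
export async function initEsHost(
  fetchEsConfig: () => Promise<{ host: string }>
): Promise<void> {
  try {
    const { host } = await fetchEsConfig();
    if (host) esHost = host;
  } catch (error) {
    // Non-blocking: keep the default so startup never fails on this.
  }
}

export function getEsHost(): string {
  return esHost;
}
```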
* fix type issue and jest tests
* fix another type issue
* simplify proxy assignment in proxy handler mock
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
* refactor: 💡 move timeRange, filters and query to base embeddable
* refactor: 💡 use new base embeddable input in explore data
* feat: 🎸 import types as types
* Remove enableHistogramMode prop from TSVB as it causes bugs with non-stacked bars
* Set the enableHistogramMode property to false only when the user selects the non-stacked option
* small change to be more human readable
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
Introduces a monitor around the Task Manager poller which pipes through all values emitted by the poller and recovers from poller failures or stalls.
This monitor does the following:
1. Catches the poller thrown errors and recovers by proxying the error to a handler and continues listening to the poller.
2. Reacts to the poller `error` (caused by uncaught errors) and `completion` events, by starting a new poller and piping its event through to any previous subscribers (in our case, Task Manager itself).
3. Tracks the rate at which the poller emits events (this can be both work events and `No Task` events, so polling and finding no work still counts as an emitted event) and times out when the gap between events grows too long (suggesting the poller has hung), replacing the poller with a new one.
We're not aware of any clear cases where Task Manager should actually get restarted by the monitor - this is definitely an error case and we have addressed all known cases.
The goal of introducing this monitor is as an insurance policy in case an unexpected error case breaks the poller in a long running production environment.
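The recovery idea in point 1 can be sketched as a loop (illustrative only: the real monitor is observable-based and also handles completion and stall timeouts, which this sketch omits):

```typescript
// Proxy poller errors to a handler and keep polling instead of letting one
// failure kill the loop. `iterations` bounds the loop for demonstration.
export async function monitorPoller<T>(
  poll: () => Promise<T>,
  onResult: (result: T) => void,
  onError: (error: unknown) => void,
  iterations: number
): Promise<void> {
  for (let i = 0; i < iterations; i++) {
    try {
      onResult(await poll());
    } catch (error) {
      onError(error); // recover: report the failure and continue listening
    }
  }
}
```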
* Added the enrich processor form (big one)
* added fail processor form
* Added foreach processor form
* Added geoip processor form and refactored some deserialization
* added grok processor form and updated comments and some i18n ids
* updated existing gsub processor form to be in line with other forms
* added html_strip processor form
* refactored some serialize and deserialize functions and added inference processor form
* fix copy-pasta mistake in inference form and add join processor form
* Added JSON processor field
- Also factored out target_field to a common field (still need to
  update all the instances)
- Built out special logic for handling add_to_root and
target_field combo on JSON processor form
- Created another serializer, for default boolean -> undefined.
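A plausible shape for that "default boolean -> undefined" serializer (the name and currying style are assumptions; the idea is that fields equal to their default get dropped from the serialized processor config):

```typescript
// Serialize a boolean form field: omit it entirely when it equals its default,
// so the processor JSON only carries non-default values.
export const defaultBooleanToUndefined = (defaultValue: boolean) => (
  value?: boolean
): boolean | undefined => (value === defaultValue ? undefined : value);
```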
* remove unused variable
* Refactor to use new shared target_field component, and fix JSON serializer bug
* fix i18n
* address pr feedback
* Fix enrich max fields help text copy
* add link to enrich policy docs in help text
* fix error validation message in enrich policy form and replace space with horizontal rule
* address copy feedback
* fix i18n id typo
* fix i18n
* address additional round of feedback and fix json form help text
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
* Set up navigateToUrl context
* Update RR link helpers to use navigateToUrl
* Update breadcrumbs to use navigateToUrl + refactor
generate_breadcrumbs:
- Change base breadcrumb generator to a custom React hook instead of passing history/context around
- Change use{Product}Breadcrumbs helpers to mainly involve merging arrays
- Update + simplify tests accordingly (test link behavior in main useBreadcrumb suite, not in subsequent helpers)
set_chrome:
- Update to use new breadcrumb hooks (requires pulling out of useEffect, since hooks can't be called inside a useEffect callback)
- Clean up/refactor tests
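A sketch of the navigateToUrl link-helper pattern described above (event shape and function names simplified/assumed): plain clicks are intercepted for SPA navigation, while modified clicks fall through to the browser so new-tab behavior keeps working.

```typescript
// Simplified click event: just the fields the helper inspects.
interface ClickEvent {
  preventDefault(): void;
  metaKey?: boolean;
  ctrlKey?: boolean;
  shiftKey?: boolean;
  button?: number;
}

// Cmd/Ctrl/Shift-clicks and middle-clicks should open in a new tab/window.
export const letBrowserHandle = (e: ClickEvent): boolean =>
  !!(e.metaKey || e.ctrlKey || e.shiftKey) || e.button === 1;

// Build an onClick handler that routes in-app navigation through
// core's navigateToUrl instead of triggering a full page reload.
export const createNavigateHandler = (
  navigateToUrl: (url: string) => void,
  to: string
) => (e: ClickEvent): void => {
  if (letBrowserHandle(e)) return;
  e.preventDefault();
  navigateToUrl(to);
};
```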
* Update route redirects now that navigation works correctly
* add retries for registry requests.
works, afaict. no tests. one TS issue.
* Fix TS issue. Add link to node-fetch error docs
* Restore some accidentally deleted code.
* Add more comments. Remove logging.
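The registry retry wrapper might look roughly like this (attempt count is a made-up value, and the real code has node-fetch-specific error handling; this commit's retry code was also later removed, so treat this purely as illustration):

```typescript
// Retry an async operation a fixed number of times, rethrowing the last
// error if every attempt fails.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error; // e.g. transient registry/network failure: try again
    }
  }
  throw lastError;
}
```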
* Add tests for plugin setup service & handlers
* Add tests for Registry retry logic
* Extract setup retry logic to separate function/file
* Add tests for setup retry logic
```
firstSuccessOrTryAgain
✓ reject/throws is called again & its value returned (18ms)
✓ the first success value is cached (2ms)
```
* More straightforward(?) tests for setup caching
* Revert cached setup. Still limit 1 call at a time
Terrible tests. Committing & pushing to see if it fixes failures like https://github.com/elastic/kibana/pull/74507/checks?check_run_id=980178887 and https://kibana-ci.elastic.co/job/elastic+kibana+pipeline-pull-request/67892/execution/node/663/log/
```
07:36:56 └-> "before all" hook
07:36:56 └-> should not allow to enroll an agent with a invalid enrollment
07:36:56 └-> "before each" hook: global before each
07:36:56 └-> "before each" hook: beforeSetupWithDockerRegistry
07:36:56 │ proc [kibana] error [11:36:56.369] Error: Internal Server Error
07:36:56 │ proc [kibana] at HapiResponseAdapter.toError (/dev/shm/workspace/parallel/5/kibana/build/kibana-build-xpack/src/core/server/http/router/response_adapter.js:132:19)
07:36:56 │ proc [kibana] at HapiResponseAdapter.toHapiResponse (/dev/shm/workspace/parallel/5/kibana/build/kibana-build-xpack/src/core/server/http/router/response_adapter.js:86:19)
07:36:56 │ proc [kibana] at HapiResponseAdapter.handle (/dev/shm/workspace/parallel/5/kibana/build/kibana-build-xpack/src/core/server/http/router/response_adapter.js:81:17)
07:36:56 │ proc [kibana] at Router.handle (/dev/shm/workspace/parallel/5/kibana/build/kibana-build-xpack/src/core/server/http/router/router.js:164:34)
07:36:56 │ proc [kibana] at process._tickCallback (internal/process/next_tick.js:68:7)
07:36:56 │ proc [kibana] log [11:36:56.581] [info][authentication][plugins][security] Authentication attempt failed: [security_exception] missing authentication credentials for REST request [/_security/_authenticate], with { header={ WWW-Authenticate={ 0="ApiKey" & 1="Basic realm=\"security\" charset=\"UTF-8\"" } } }
07:36:56 └- ✓ pass (60ms) "Ingest Manager Endpoints Fleet Endpoints fleet_agents_enroll should not allow to enroll an agent with a invalid enrollment"
07:36:56 └-> should not allow to enroll an agent with a shared id if it already exists
07:36:56 └-> "before each" hook: global before each
07:36:56 └-> "before each" hook: beforeSetupWithDockerRegistry
07:36:56 └- ✓ pass (111ms) "Ingest Manager Endpoints Fleet Endpoints fleet_agents_enroll should not allow to enroll an agent with a shared id if it already exists "
07:36:56 └-> should not allow to enroll an agent with a version > kibana
07:36:56 └-> "before each" hook: global before each
07:36:56 └-> "before each" hook: beforeSetupWithDockerRegistry
07:36:56 └- ✓ pass (58ms) "Ingest Manager Endpoints Fleet Endpoints fleet_agents_enroll should not allow to enroll an agent with a version > kibana"
07:36:56 └-> should allow to enroll an agent with a valid enrollment token
07:36:56 └-> "before each" hook: global before each
07:36:56 └-> "before each" hook: beforeSetupWithDockerRegistry
07:36:56 └- ✖ fail: Ingest Manager Endpoints Fleet Endpoints fleet_agents_enroll should allow to enroll an agent with a valid enrollment token
07:36:56 │ Error: expected 200 "OK", got 500 "Internal Server Error"
07:36:56 │ at Test._assertStatus (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:268:12)
07:36:56 │ at Test._assertFunction (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:283:11)
07:36:56 │ at Test.assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:173:18)
07:36:56 │ at assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:131:12)
07:36:56 │ at /dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:128:5
07:36:56 │ at Test.Request.callback (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:718:3)
07:36:56 │ at parser (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:906:18)
07:36:56 │ at IncomingMessage.res.on (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/parsers/json.js:19:7)
07:36:56 │ at endReadableNT (_stream_readable.js:1145:12)
07:36:56 │ at process._tickCallback (internal/process/next_tick.js:63:19)
07:36:56 │
07:36:56 │
```
* New name & tests for one-at-a-time /setup behavior
`firstPromiseBlocksAndFufills` for "the first promise created blocks others from being created, then fufills all with that first result"
* More (better?) renaming
* Fix name in test description
* Fix spelling typo.
* Remove registry retry code & tests
* Use async fn's .catch to avoid unhandled rejection
Add explicit `isPending` value instead of overloading role of `status`. Could probably do without it, but it makes the intent more clear.
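The "first promise blocks others, then fulfills all with that first result" behavior can be sketched like this (names are hypothetical, and this version clears the pending promise once settled, matching the revert of cached setup):

```typescript
// Wrap an async fn so that while one call is in flight, subsequent callers
// share that same promise instead of starting another call.
export function oncePending<T>(fn: () => Promise<T>): () => Promise<T> {
  let pending: Promise<T> | undefined;
  return () => {
    if (!pending) {
      // `.finally` both avoids a dangling pending reference on rejection
      // and allows a fresh attempt after the first one settles.
      pending = fn().finally(() => {
        pending = undefined;
      });
    }
    return pending;
  };
}
```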
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
* initial version of string extraction by parser
* refine comments and add function description
* - Added parser specific test
- Added 5 megabyte limit to string size for expansion
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
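The 5 megabyte cap could be as simple as a guard like the following (the real parser check may measure bytes rather than characters and use different names; the limit value is from the commit message above):

```typescript
// Skip expansion for strings over the cap so the extractor doesn't blow up
// on pathological inputs. Uses character length as a simple approximation.
const MAX_EXPANSION_CHARS = 5 * 1024 * 1024;

export const withinExpansionLimit = (source: string): boolean =>
  source.length <= MAX_EXPANSION_CHARS;
```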