Fix typos in docs & dev_docs (#113746)

garanews 2021-10-07 20:30:32 +02:00 committed by GitHub
parent 5e18fb1899
commit 58f6d9002a
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
72 changed files with 92 additions and 92 deletions

View file

@@ -58,7 +58,7 @@ type Bar = { id: string };
export type Foo = Bar | string;
```
-`Bar`, in the signature of `Foo`, will not be clickable because it would result in a broken link. `Bar` is not publically exported!
+`Bar`, in the signature of `Foo`, will not be clickable because it would result in a broken link. `Bar` is not publicly exported!
If that isn't the case, please file an issue, it could be a bug with the system.
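As a purely illustrative sketch (not part of this commit), exporting `Bar` alongside `Foo` is what makes the signature linkable:

```typescript
// Hypothetical example: exporting Bar as well makes it part of the public
// API, so a docs system can render `Bar` in Foo's signature as a link.
export type Bar = { id: string };
export type Foo = Bar | string;

// A Bar value is a valid Foo at runtime as well.
export const example: Foo = { id: 'abc' };
```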

View file

@@ -196,7 +196,7 @@ Over-refactoring can be a problem in it's own right, but it's still important to
Try not to put your PR in review mode, or merge large changes, right before Feature Freeze. It's inevitably one of the most volatile times for the
Kibana code base, try not to contribute to this volatility. Doing this can:
-- increase the likelyhood of conflicts from other features being merged at the last minute
+- increase the likelihood of conflicts from other features being merged at the last minute
- means your feature has less QA time
- means your feature gets less careful review as reviewers are often swamped at this time

View file

@@ -44,7 +44,7 @@ Then, install the latest version of yarn using:
npm install -g yarn
```
-Finally, boostrap Kibana and install all of the remaining dependencies:
+Finally, bootstrap Kibana and install all of the remaining dependencies:
```sh
yarn kbn bootstrap

View file

@@ -33,7 +33,7 @@ all the "children" will be automatically included. However, when a "child" is ex
## Migrations and Backward compatibility
-As your plugin evolves, you may need to change your Saved Object type in a breaking way (for example, changing the type of an attribtue, or removing
+As your plugin evolves, you may need to change your Saved Object type in a breaking way (for example, changing the type of an attribute, or removing
an attribute). If that happens, you should write a migration to upgrade the Saved Objects that existed prior to the change.
<DocLink id="kibDevTutorialSavedObject" section="migrations" text="How to write a migration" />.
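To make the migration advice concrete, here is a minimal sketch of such a migration. The types and the `title`→`name` rename are simplified stand-ins for illustration, not Kibana's actual `SavedObjectMigrationFn` signature:

```typescript
// Simplified stand-in types, for illustration only.
interface SavedObjectDoc {
  id: string;
  type: string;
  attributes: Record<string, unknown>;
}

type MigrationFn = (doc: SavedObjectDoc) => SavedObjectDoc;

// Suppose a breaking change renamed the `title` attribute to `name`; this
// migration upgrades documents created before the change.
const migrateTitleToName: MigrationFn = (doc) => {
  const { title, ...rest } = doc.attributes;
  return { ...doc, attributes: { ...rest, name: title } };
};

export const migrated = migrateTitleToName({
  id: '1',
  type: 'my-type',
  attributes: { title: 'old value' },
});
```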

View file

@@ -477,7 +477,7 @@ If you don't call `clear`, you will see a warning in the console while developin
The last step of the integration is restoring an existing search session. The `searchSessionId` parameter and the rest of the restore state are passed into the application via the URL. Non-URL support is planned for future releases.
-If you detect the presense of a `searchSessionId` parameter in the URL, call the `restore` method **instead** of calling `start`. The previous example would now become:
+If you detect the presence of a `searchSessionId` parameter in the URL, call the `restore` method **instead** of calling `start`. The previous example would now become:
```ts
function onSearchSessionConfigChange(searchSessionIdFromUrl?: string) {

View file

@@ -58,7 +58,7 @@ You can request to overwrite any objects that already exist in the target space
NOTE: This cannot be used with the `overwrite` option.
`overwrite`::
-(Optional, boolean) When set to `true`, all conflicts are automatically overidden. When a saved object with a matching `type` and `id`
+(Optional, boolean) When set to `true`, all conflicts are automatically overridden. When a saved object with a matching `type` and `id`
exists in the target space, that version is replaced with the version from the source space. The default value is `false`.
+
NOTE: This cannot be used with the `createNewCopies` option.

View file

@@ -5,7 +5,7 @@
When running {kib} from source, you must have this version installed locally.
The required version of Node.js is listed in several different files throughout the {kib} source code.
-Theses files must be updated when upgrading Node.js:
+These files must be updated when upgrading Node.js:
- {kib-repo}blob/{branch}/.ci/Dockerfile[`.ci/Dockerfile`] - The version is specified in the `NODE_VERSION` constant.
This is used to pull the relevant image from https://hub.docker.com/_/node[Docker Hub].
@@ -29,7 +29,7 @@ The following rules are not set in stone.
Use best judgement when backporting.
Currently version 7.11 and newer run Node.js 14, while 7.10 and older run Node.js 10.
-Hence, upgrades to either Node.js 14 or Node.js 10 shold be done as separate PRs.
+Hence, upgrades to either Node.js 14 or Node.js 10 should be done as separate PRs.
==== Node.js patch upgrades

View file

@@ -104,6 +104,6 @@ Authorization: Basic foo_read_only_user password
}
----------------------------------
-{es} checks if the user is granted a specific action. If the user is assigned a role that grants a privilege, {es} uses the <<development-rbac-privileges, {kib} privileges>> definition to associate this with the actions, which makes authorizing users more intuitive and flexible programatically.
+{es} checks if the user is granted a specific action. If the user is assigned a role that grants a privilege, {es} uses the <<development-rbac-privileges, {kib} privileges>> definition to associate this with the actions, which makes authorizing users more intuitive and flexible programmatically.
Once we have authorized the user to perform a specific action, we can execute the request using `callWithInternalUser`.
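As an illustrative sketch only (the privilege and action names below are invented, not the real {kib} privilege registry), the privilege-to-action association works roughly like this:

```typescript
// Invented names, for illustration: each privilege expands to a set of
// low-level actions, and authorization checks the action, not the privilege.
const privilegeToActions: Record<string, string[]> = {
  dashboard_read: ['saved_object:dashboard/get', 'saved_object:dashboard/find'],
  dashboard_all: [
    'saved_object:dashboard/get',
    'saved_object:dashboard/find',
    'saved_object:dashboard/create',
  ],
};

export function isAuthorized(granted: string[], action: string): boolean {
  return granted.some((p) => (privilegeToActions[p] ?? []).includes(action));
}
```

A read-only user would then pass the `get` check but fail the `create` check.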

View file

@@ -47,7 +47,7 @@ Additionally, in order to migrate into project refs, you also need to make sure
"declarationMap": true
},
"include": [
-// add all the folders containg files to be compiled
+// add all the folders containing files to be compiled
],
"references": [
{ "path": "../../core/tsconfig.json" },

View file

@@ -67,7 +67,7 @@ You can report new metrics by using the `CiStatsReporter` class provided by the
In order to prevent the page load bundles from growing unexpectedly large we limit the `page load asset size` metric for each plugin. When a PR increases this metric beyond the limit defined for that plugin in {kib-repo}blob/{branch}/packages/kbn-optimizer/limits.yml[`limits.yml`] a failed commit status is set and the PR author needs to decide how to resolve this issue before the PR can be merged.
-In most cases the limit should be high enough that PRs shouldn't trigger overages, but when they do make sure it's clear what is cuasing the overage by trying the following:
+In most cases the limit should be high enough that PRs shouldn't trigger overages, but when they do make sure it's clear what is causing the overage by trying the following:
1. Run the optimizer locally with the `--profile` flag to produce webpack `stats.json` files for bundles which can be inspected using a number of different online tools. Focus on the chunk named `{pluginId}.plugin.js`; the `*.chunk.js` chunks make up the `async chunks size` metric which is currently unlimited and is the main way that we <<plugin-performance, reduce the size of page load chunks>>.
+
@@ -107,7 +107,7 @@ prettier -w {pluginDir}/target/public/{pluginId}.plugin.js
Once you've identified the files which were added to the build you likely just need to stick them behind an async import as described in <<plugin-performance, Plugin performance>>.
-In the case that the bundle size is not being bloated by anything obvious, but it's still larger than the limit, you can raise the limit in your PR. Do this either by editting the {kib-repo}blob/{branch}/packages/kbn-optimizer/limits.yml[`limits.yml` file] manually or by running the following to have the limit updated to the current size + 15kb
+In the case that the bundle size is not being bloated by anything obvious, but it's still larger than the limit, you can raise the limit in your PR. Do this either by editing the {kib-repo}blob/{branch}/packages/kbn-optimizer/limits.yml[`limits.yml` file] manually or by running the following to have the limit updated to the current size + 15kb
[source,shell]
-----------

View file

@@ -185,8 +185,8 @@ node scripts/functional_test_runner --config test/functional/config.firefox.js
[discrete]
==== Using the test_user service
-Tests should run at the positive security boundry condition, meaning that they should be run with the mimimum privileges required (and documented) and not as the superuser.
-This prevents the type of regression where additional privleges accidentally become required to perform the same action.
+Tests should run at the positive security boundary condition, meaning that they should be run with the minimum privileges required (and documented) and not as the superuser.
+This prevents the type of regression where additional privileges accidentally become required to perform the same action.
The functional UI tests now default to logging in with a user named `test_user` and the roles of this user can be changed dynamically without logging in and out.
@@ -458,7 +458,7 @@ Bad example: `PageObjects.app.clickButton()`
class AppPage {
// what can people who call this method expect from the
// UI after the promise resolves? Since the reaction to most
-// clicks is asynchronous the behavior is dependant on timing
+// clicks is asynchronous the behavior is dependent on timing
// and likely to cause test that fail unexpectedly
async clickButton () {
await testSubjects.click(menuButton);

View file

@@ -51,7 +51,7 @@ Any additional options supplied to `test:jest` will be passed onto the Jest CLI
----
kibana/src/plugins/dashboard/server$ yarn test:jest --coverage
-# is equivelant to
+# is equivalent to
yarn jest --coverage --verbose --config /home/tyler/elastic/kibana/src/plugins/dashboard/jest.config.js server
----

View file

@@ -486,7 +486,7 @@ to change the application or the navlink state at runtime.
[source,typescript]
----
-// my_plugin has a required dependencie to the `licensing` plugin
+// my_plugin has a required dependency to the `licensing` plugin
interface MyPluginSetupDeps {
licensing: LicensingPluginSetup;
}

View file

@@ -4,7 +4,7 @@
## AppNavOptions.euiIconType property
-A EUI iconType that will be used for the app's icon. This icon takes precendence over the `icon` property.
+A EUI iconType that will be used for the app's icon. This icon takes precedence over the `icon` property.
<b>Signature:</b>

View file

@@ -16,7 +16,7 @@ export interface AppNavOptions
| Property | Type | Description |
| --- | --- | --- |
-| [euiIconType](./kibana-plugin-core-public.appnavoptions.euiicontype.md) | <code>string</code> | A EUI iconType that will be used for the app's icon. This icon takes precendence over the <code>icon</code> property. |
+| [euiIconType](./kibana-plugin-core-public.appnavoptions.euiicontype.md) | <code>string</code> | A EUI iconType that will be used for the app's icon. This icon takes precedence over the <code>icon</code> property. |
| [icon](./kibana-plugin-core-public.appnavoptions.icon.md) | <code>string</code> | A URL to an image file used as an icon. Used as a fallback if <code>euiIconType</code> is not provided. |
| [order](./kibana-plugin-core-public.appnavoptions.order.md) | <code>number</code> | An ordinal used to sort nav links relative to one another for display. |
| [tooltip](./kibana-plugin-core-public.appnavoptions.tooltip.md) | <code>string</code> | A tooltip shown when hovering over app link. |

View file

@@ -4,7 +4,7 @@
## HttpSetup.fetch property
-Makes an HTTP request. Defaults to a GET request unless overriden. See [HttpHandler](./kibana-plugin-core-public.httphandler.md) for options.
+Makes an HTTP request. Defaults to a GET request unless overridden. See [HttpHandler](./kibana-plugin-core-public.httphandler.md) for options.
<b>Signature:</b>

View file

@@ -19,7 +19,7 @@ export interface HttpSetup
| [basePath](./kibana-plugin-core-public.httpsetup.basepath.md) | <code>IBasePath</code> | APIs for manipulating the basePath on URL segments. See [IBasePath](./kibana-plugin-core-public.ibasepath.md) |
| [delete](./kibana-plugin-core-public.httpsetup.delete.md) | <code>HttpHandler</code> | Makes an HTTP request with the DELETE method. See [HttpHandler](./kibana-plugin-core-public.httphandler.md) for options. |
| [externalUrl](./kibana-plugin-core-public.httpsetup.externalurl.md) | <code>IExternalUrl</code> | |
-| [fetch](./kibana-plugin-core-public.httpsetup.fetch.md) | <code>HttpHandler</code> | Makes an HTTP request. Defaults to a GET request unless overriden. See [HttpHandler](./kibana-plugin-core-public.httphandler.md) for options. |
+| [fetch](./kibana-plugin-core-public.httpsetup.fetch.md) | <code>HttpHandler</code> | Makes an HTTP request. Defaults to a GET request unless overridden. See [HttpHandler](./kibana-plugin-core-public.httphandler.md) for options. |
| [get](./kibana-plugin-core-public.httpsetup.get.md) | <code>HttpHandler</code> | Makes an HTTP request with the GET method. See [HttpHandler](./kibana-plugin-core-public.httphandler.md) for options. |
| [head](./kibana-plugin-core-public.httpsetup.head.md) | <code>HttpHandler</code> | Makes an HTTP request with the HEAD method. See [HttpHandler](./kibana-plugin-core-public.httphandler.md) for options. |
| [options](./kibana-plugin-core-public.httpsetup.options.md) | <code>HttpHandler</code> | Makes an HTTP request with the OPTIONS method. See [HttpHandler](./kibana-plugin-core-public.httphandler.md) for options. |

View file

@@ -303,7 +303,7 @@ The plugin integrates with the core system via lifecycle events: `setup`<!-- -->
| [SavedObjectAttributeSingle](./kibana-plugin-core-server.savedobjectattributesingle.md) | Don't use this type, it's simply a helper type for [SavedObjectAttribute](./kibana-plugin-core-server.savedobjectattribute.md) |
| [SavedObjectMigrationFn](./kibana-plugin-core-server.savedobjectmigrationfn.md) | A migration function for a [saved object type](./kibana-plugin-core-server.savedobjectstype.md) used to migrate it to a given version |
| [SavedObjectSanitizedDoc](./kibana-plugin-core-server.savedobjectsanitizeddoc.md) | Describes Saved Object documents that have passed through the migration framework and are guaranteed to have a <code>references</code> root property. |
-| [SavedObjectsClientContract](./kibana-plugin-core-server.savedobjectsclientcontract.md) | Saved Objects is Kibana's data persisentence mechanism allowing plugins to use Elasticsearch for storing plugin state.<!-- -->\#\# SavedObjectsClient errors<!-- -->Since the SavedObjectsClient has its hands in everything we are a little paranoid about the way we present errors back to to application code. Ideally, all errors will be either:<!-- -->1. Caused by bad implementation (ie. undefined is not a function) and as such unpredictable 2. An error that has been classified and decorated appropriately by the decorators in [SavedObjectsErrorHelpers](./kibana-plugin-core-server.savedobjectserrorhelpers.md)<!-- -->Type 1 errors are inevitable, but since all expected/handle-able errors should be Type 2 the <code>isXYZError()</code> helpers exposed at <code>SavedObjectsErrorHelpers</code> should be used to understand and manage error responses from the <code>SavedObjectsClient</code>.<!-- -->Type 2 errors are decorated versions of the source error, so if the elasticsearch client threw an error it will be decorated based on its type. That means that rather than looking for <code>error.body.error.type</code> or doing substring checks on <code>error.body.error.reason</code>, just use the helpers to understand the meaning of the error:<!-- -->\`\`\`<!-- -->js if (SavedObjectsErrorHelpers.isNotFoundError(error)) { // handle 404 }<!-- -->if (SavedObjectsErrorHelpers.isNotAuthorizedError(error)) { // 401 handling should be automatic, but in case you wanted to know }<!-- -->// always rethrow the error unless you handle it throw error; \`\`\`<!-- -->\#\#\# 404s from missing index<!-- -->From the perspective of application code and APIs the SavedObjectsClient is a black box that persists objects. 
One of the internal details that users have no control over is that we use an elasticsearch index for persistance and that index might be missing.<!-- -->At the time of writing we are in the process of transitioning away from the operating assumption that the SavedObjects index is always available. Part of this transition is handling errors resulting from an index missing. These used to trigger a 500 error in most cases, and in others cause 404s with different error messages.<!-- -->From my (Spencer) perspective, a 404 from the SavedObjectsApi is a 404; The object the request/call was targeting could not be found. This is why \#14141 takes special care to ensure that 404 errors are generic and don't distinguish between index missing or document missing.<!-- -->See [SavedObjectsClient](./kibana-plugin-core-server.savedobjectsclient.md) See [SavedObjectsErrorHelpers](./kibana-plugin-core-server.savedobjectserrorhelpers.md) |
+| [SavedObjectsClientContract](./kibana-plugin-core-server.savedobjectsclientcontract.md) | Saved Objects is Kibana's data persisentence mechanism allowing plugins to use Elasticsearch for storing plugin state.<!-- -->\#\# SavedObjectsClient errors<!-- -->Since the SavedObjectsClient has its hands in everything we are a little paranoid about the way we present errors back to to application code. Ideally, all errors will be either:<!-- -->1. Caused by bad implementation (ie. undefined is not a function) and as such unpredictable 2. An error that has been classified and decorated appropriately by the decorators in [SavedObjectsErrorHelpers](./kibana-plugin-core-server.savedobjectserrorhelpers.md)<!-- -->Type 1 errors are inevitable, but since all expected/handle-able errors should be Type 2 the <code>isXYZError()</code> helpers exposed at <code>SavedObjectsErrorHelpers</code> should be used to understand and manage error responses from the <code>SavedObjectsClient</code>.<!-- -->Type 2 errors are decorated versions of the source error, so if the elasticsearch client threw an error it will be decorated based on its type. That means that rather than looking for <code>error.body.error.type</code> or doing substring checks on <code>error.body.error.reason</code>, just use the helpers to understand the meaning of the error:<!-- -->\`\`\`<!-- -->js if (SavedObjectsErrorHelpers.isNotFoundError(error)) { // handle 404 }<!-- -->if (SavedObjectsErrorHelpers.isNotAuthorizedError(error)) { // 401 handling should be automatic, but in case you wanted to know }<!-- -->// always rethrow the error unless you handle it throw error; \`\`\`<!-- -->\#\#\# 404s from missing index<!-- -->From the perspective of application code and APIs the SavedObjectsClient is a black box that persists objects. 
One of the internal details that users have no control over is that we use an elasticsearch index for persistence and that index might be missing.<!-- -->At the time of writing we are in the process of transitioning away from the operating assumption that the SavedObjects index is always available. Part of this transition is handling errors resulting from an index missing. These used to trigger a 500 error in most cases, and in others cause 404s with different error messages.<!-- -->From my (Spencer) perspective, a 404 from the SavedObjectsApi is a 404; The object the request/call was targeting could not be found. This is why \#14141 takes special care to ensure that 404 errors are generic and don't distinguish between index missing or document missing.<!-- -->See [SavedObjectsClient](./kibana-plugin-core-server.savedobjectsclient.md) See [SavedObjectsErrorHelpers](./kibana-plugin-core-server.savedobjectserrorhelpers.md) |
| [SavedObjectsClientFactory](./kibana-plugin-core-server.savedobjectsclientfactory.md) | Describes the factory used to create instances of the Saved Objects Client. |
| [SavedObjectsClientFactoryProvider](./kibana-plugin-core-server.savedobjectsclientfactoryprovider.md) | Provider to invoke to retrieve a [SavedObjectsClientFactory](./kibana-plugin-core-server.savedobjectsclientfactory.md)<!-- -->. |
| [SavedObjectsClientWrapperFactory](./kibana-plugin-core-server.savedobjectsclientwrapperfactory.md) | Describes the factory used to create instances of Saved Objects Client Wrappers. |

View file

@@ -24,7 +24,7 @@ if (SavedObjectsErrorHelpers.isNotAuthorizedError(error)) { // 401 handling shou
\#\#\# 404s from missing index
-From the perspective of application code and APIs the SavedObjectsClient is a black box that persists objects. One of the internal details that users have no control over is that we use an elasticsearch index for persistance and that index might be missing.
+From the perspective of application code and APIs the SavedObjectsClient is a black box that persists objects. One of the internal details that users have no control over is that we use an elasticsearch index for persistence and that index might be missing.
At the time of writing we are in the process of transitioning away from the operating assumption that the SavedObjects index is always available. Part of this transition is handling errors resulting from an index missing. These used to trigger a 500 error in most cases, and in others cause 404s with different error messages.
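The decorated-error pattern described here can be sketched with a minimal stand-in (this is not the real `SavedObjectsErrorHelpers` implementation, just the shape of the contract):

```typescript
// Minimal stand-in: errors are classified once at the source, and callers
// use helpers instead of substring checks on error messages.
class DecoratedError extends Error {
  constructor(
    message: string,
    public readonly code: 'notFound' | 'notAuthorized' | 'unknown'
  ) {
    super(message);
  }
}

const ErrorHelpers = {
  isNotFoundError: (e: unknown): boolean =>
    e instanceof DecoratedError && e.code === 'notFound',
};

export function handle(error: unknown): string {
  if (ErrorHelpers.isNotFoundError(error)) {
    return 'handled 404';
  }
  // always rethrow errors you don't handle
  throw error;
}

export const notFound = new DecoratedError('missing', 'notFound');
```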

View file

@@ -4,7 +4,7 @@
## SavedObjectsExportError.invalidTransformError() method
-Error returned when a [export tranform](./kibana-plugin-core-server.savedobjectsexporttransform.md) performed an invalid operation during the transform, such as removing objects from the export, or changing an object's type or id.
+Error returned when a [export transform](./kibana-plugin-core-server.savedobjectsexporttransform.md) performed an invalid operation during the transform, such as removing objects from the export, or changing an object's type or id.
<b>Signature:</b>

View file

@@ -29,7 +29,7 @@ export declare class SavedObjectsExportError extends Error
| Method | Modifiers | Description |
| --- | --- | --- |
| [exportSizeExceeded(limit)](./kibana-plugin-core-server.savedobjectsexporterror.exportsizeexceeded.md) | <code>static</code> | |
-| [invalidTransformError(objectKeys)](./kibana-plugin-core-server.savedobjectsexporterror.invalidtransformerror.md) | <code>static</code> | Error returned when a [export tranform](./kibana-plugin-core-server.savedobjectsexporttransform.md) performed an invalid operation during the transform, such as removing objects from the export, or changing an object's type or id. |
+| [invalidTransformError(objectKeys)](./kibana-plugin-core-server.savedobjectsexporterror.invalidtransformerror.md) | <code>static</code> | Error returned when a [export transform](./kibana-plugin-core-server.savedobjectsexporttransform.md) performed an invalid operation during the transform, such as removing objects from the export, or changing an object's type or id. |
| [objectFetchError(objects)](./kibana-plugin-core-server.savedobjectsexporterror.objectfetcherror.md) | <code>static</code> | |
-| [objectTransformError(objects, cause)](./kibana-plugin-core-server.savedobjectsexporterror.objecttransformerror.md) | <code>static</code> | Error returned when a [export tranform](./kibana-plugin-core-server.savedobjectsexporttransform.md) threw an error |
+| [objectTransformError(objects, cause)](./kibana-plugin-core-server.savedobjectsexporterror.objecttransformerror.md) | <code>static</code> | Error returned when a [export transform](./kibana-plugin-core-server.savedobjectsexporttransform.md) threw an error |

View file

@@ -4,7 +4,7 @@
## SavedObjectsExportError.objectTransformError() method
-Error returned when a [export tranform](./kibana-plugin-core-server.savedobjectsexporttransform.md) threw an error
+Error returned when a [export transform](./kibana-plugin-core-server.savedobjectsexporttransform.md) threw an error
<b>Signature:</b>

View file

@@ -28,5 +28,5 @@ export declare class SavedObjectsImporter
| Method | Modifiers | Description |
| --- | --- | --- |
| [import({ readStream, createNewCopies, namespace, overwrite, })](./kibana-plugin-core-server.savedobjectsimporter.import.md) | | Import saved objects from given stream. See the [options](./kibana-plugin-core-server.savedobjectsimportoptions.md) | for more detailed information. |
-| [resolveImportErrors({ readStream, createNewCopies, namespace, retries, })](./kibana-plugin-core-server.savedobjectsimporter.resolveimporterrors.md) | | Resolve and return saved object import errors. See the [options](./kibana-plugin-core-server.savedobjectsresolveimporterrorsoptions.md) | for more detailed informations. |
+| [resolveImportErrors({ readStream, createNewCopies, namespace, retries, })](./kibana-plugin-core-server.savedobjectsimporter.resolveimporterrors.md) | | Resolve and return saved object import errors. See the [options](./kibana-plugin-core-server.savedobjectsresolveimporterrorsoptions.md) | for more detailed information. |

View file

@@ -4,7 +4,7 @@
## SavedObjectsImporter.resolveImportErrors() method
-Resolve and return saved object import errors. See the [options](./kibana-plugin-core-server.savedobjectsresolveimporterrorsoptions.md) for more detailed informations.
+Resolve and return saved object import errors. See the [options](./kibana-plugin-core-server.savedobjectsresolveimporterrorsoptions.md) for more detailed information.
<b>Signature:</b>

View file

@@ -16,5 +16,5 @@ derivedStatus$: Observable<ServiceStatus>;
By default, plugins inherit this derived status from their dependencies. Calling overrides this default status.
-This may emit multliple times for a single status change event as propagates through the dependency tree
+This may emit multiple times for a single status change event as propagates through the dependency tree

View file

@@ -1,7 +1,7 @@
You can specify the following types to the `Url` field formatter:
* *Link* &mdash; Converts the contents of the field into an URL. You can specify the width and height of the image, while keeping the aspect ratio.
-When the image is smaller than the specified paramters, the image is unable to upscale.
+When the image is smaller than the specified parameters, the image is unable to upscale.
* *Image* &mdash; Specifies the image directory.
* *Audio* &mdash; Specify the audio directory.

View file

@@ -249,7 +249,7 @@ image::maps/images/asset-tracking-tutorial/top_hits_layer_style.png[]
. Click *Save & close*.
. Open the <<set-time-filter, time filter>>, and set *Refresh every* to 10 seconds, and click *Start*.
-Your map should automatically refresh every 10 seconds to show the lastest bus positions and tracks.
+Your map should automatically refresh every 10 seconds to show the latest bus positions and tracks.
[role="screenshot"]
image::maps/images/asset-tracking-tutorial/tracks_and_top_hits.png[]

View file

@@ -136,7 +136,7 @@ grids with less bytes transferred.
** **Visibility** to the range [0, 9]
** **Opacity** to 100%
. In **Metrics**:
-** Set **Agregation** to **Count**.
+** Set **Aggregation** to **Count**.
** Click **Add metric**.
** Set **Aggregation** to **Sum** with **Field** set to **bytes**.
. In **Layer style**, change **Symbol size**:

View file

@@ -4,7 +4,7 @@
*Maps* comes with https://maps.elastic.co/#file[predefined regions] that allow you to quickly visualize regions by metrics. *Maps* also offers the ability to map your own regions. You can use any region data you'd like, as long as your source data contains an identifier for the corresponding region.
-But how can you map regions when your source data does not contain a region identifier? This is where reverse geocoding comes in. Reverse geocoding is the process of assigning a region identifer to a feature based on its location.
+But how can you map regions when your source data does not contain a region identifier? This is where reverse geocoding comes in. Reverse geocoding is the process of assigning a region identifier to a feature based on its location.
In this tutorial, you'll use reverse geocoding to visualize United States Census Bureau Combined Statistical Area (CSA) regions by web traffic.

View file

@@ -84,7 +84,7 @@ Create filters from your map to focus in on just the data you want. *Maps* provi
==== Filter dashboard by map extent
A map extent shows uniform data across all panels.
-As you pan and zoom your map, all panels will update to only include data that is visable in your map.
+As you pan and zoom your map, all panels will update to only include data that is visible in your map.
To enable filtering your dashboard by map extent:

View file

@@ -10,7 +10,7 @@ For each property, you can specify whether to use a constant or data driven valu
[[maps-vector-style-static]]
==== Static styling
-Use static styling to specificy a constant value for a style property.
+Use static styling to specify a constant value for a style property.
This image shows an example of static styling using the <<add-sample-data, Kibana sample web logs>> data set.
The *kibana_sample_data_logs* layer uses static styling for all properties.

View file

@@ -101,7 +101,7 @@ Optional properties are:
prevent that specific `var` from being edited by the user.
| `xpack.fleet.outputs`
-| List of ouputs that are configured when the {fleet} app starts.
+| List of outputs that are configured when the {fleet} app starts.
Required properties are:
`id`:: Unique ID for this output. The ID should be a string.

View file

@@ -57,6 +57,6 @@ Settings that configure the <<task-manager-health-monitoring>> endpoint.
|===
| `xpack.task_manager.`
`monitored_task_execution_thresholds`
-| Configures the threshold of failed task executions at which point the `warn` or `error` health status is set under each task type execution status (under `stats.runtime.value.excution.result_frequency_percent_as_number[${task type}].status`). This setting allows configuration of both the default level and a custom task type specific level. By default, this setting is configured to mark the health of every task type as `warning` when it exceeds 80% failed executions, and as `error` at 90%. Custom configurations allow you to reduce this threshold to catch failures sooner for task types that you might consider critical, such as alerting tasks. This value can be set to any number between 0 to 100, and a threshold is hit when the value *exceeds* this number. This means that you can avoid setting the status to `error` by setting the threshold at 100, or hit `error` the moment any task fails by setting the threshold to 0 (as it will exceed 0 once a single failure occurs).
+| Configures the threshold of failed task executions at which point the `warn` or `error` health status is set under each task type execution status (under `stats.runtime.value.execution.result_frequency_percent_as_number[${task type}].status`). This setting allows configuration of both the default level and a custom task type specific level. By default, this setting is configured to mark the health of every task type as `warning` when it exceeds 80% failed executions, and as `error` at 90%. Custom configurations allow you to reduce this threshold to catch failures sooner for task types that you might consider critical, such as alerting tasks. This value can be set to any number between 0 to 100, and a threshold is hit when the value *exceeds* this number. This means that you can avoid setting the status to `error` by setting the threshold at 100, or hit `error` the moment any task fails by setting the threshold to 0 (as it will exceed 0 once a single failure occurs).
|===

View file

@ -428,14 +428,14 @@ in a manner that is inconsistent with `/proc/self/cgroup`.
|[[savedObjects-maxImportExportSize]] `savedObjects.maxImportExportSize:`
| The maximum count of saved objects that can be imported or exported.
This setting exists to prevent the {kib} server from runnning out of memory when handling
This setting exists to prevent the {kib} server from running out of memory when handling
large numbers of saved objects. It is recommended to only raise this setting if you are
confident your server can hold this many objects in memory.
*Default: `10000`*
|[[savedObjects-maxImportPayloadBytes]] `savedObjects.maxImportPayloadBytes:`
| The maximum byte size of a saved objects import that the {kib} server will accept.
-This setting exists to prevent the {kib} server from runnning out of memory when handling
+This setting exists to prevent the {kib} server from running out of memory when handling
a large import payload. Note that this setting overrides the more general
<<server-maxPayload, `server.maxPayload`>> for saved object imports only.
*Default: `26214400`*


@@ -43,7 +43,7 @@ on the name of your space, but you can customize the identifier to your liking.
You cannot change the space identifier once you create the space.
{kib} also has an <<spaces-api, API>>
-if you prefer to create spaces programatically.
+if you prefer to create spaces programmatically.
[role="screenshot"]
image::images/edit-space.png["Space management"]
@@ -70,7 +70,7 @@ to specific features on a per-user basis, you must configure
<<xpack-security-authorization, {kib} Security>>.
[role="screenshot"]
-image::images/edit-space-feature-visibility.png["Controlling features visiblity"]
+image::images/edit-space-feature-visibility.png["Controlling features visibility"]
[float]
[[spaces-control-user-access]]
@@ -84,7 +84,7 @@ while analysts or executives might have read-only privileges for *Dashboard* and
Refer to <<adding_kibana_privileges>> for details.
[role="screenshot"]
-image::images/spaces-roles.png["Controlling features visiblity"]
+image::images/spaces-roles.png["Controlling features visibility"]
[float]
[[spaces-moving-objects]]


@@ -41,7 +41,7 @@ Domain rules are registered by *Observability*, *Security*, <<maps, Maps>> and <
| Detect complex conditions in the *Logs*, *Metrics*, and *Uptime* apps.
| {security-guide}/prebuilt-rules.html[Security rules]
-| Detect suspicous source events with pre-built or custom rules and create alerts when a rules conditions are met.
+| Detect suspicious source events with pre-built or custom rules and create alerts when a rules conditions are met.
| <<geo-alerting, Maps rules>>
| Run an {es} query to determine if any documents are currently contained in any boundaries from a specified boundary index and generate alerts when a rule's conditions are met.


@@ -19,7 +19,7 @@ image::user/alerting/images/rule-types-es-query-conditions.png[Five clauses defi
Index:: This clause requires an *index or index pattern* and a *time field* that will be used for the *time window*.
Size:: This clause specifies the number of documents to pass to the configured actions when the the threshold condition is met.
-{es} query:: This clause specifies the ES DSL query to execute. The number of documents that match this query will be evaulated against the threshold
+{es} query:: This clause specifies the ES DSL query to execute. The number of documents that match this query will be evaluated against the threshold
condition. Aggregations are not supported at this time.
Threshold:: This clause defines a threshold value and a comparison operator (`is above`, `is above or equals`, `is below`, `is below or equals`, or `is between`). The number of documents that match the specified query is compared to this threshold.
Time window:: This clause determines how far back to search for documents, using the *time field* set in the *index* clause. Generally this value should be set to a value higher than the *check every* value in the <<defining-rules-general-details, general rule details>>, to avoid gaps in detection.


@@ -208,7 +208,7 @@ For example `dashboards#/view/f193ca90-c9f4-11eb-b038-dd3270053a27`.
. Click *Save and return*.
-. In the toolbar, cick *Save as*, then make sure *Store time with dashboard* is deselected.
+. In the toolbar, click *Save as*, then make sure *Store time with dashboard* is deselected.
====
[discrete]


@@ -52,7 +52,7 @@ Predicting the buffer required to account for actions depends heavily on the rul
[float]
[[event-log-ilm]]
-=== Event log index lifecycle managment
+=== Event log index lifecycle management
experimental[]


@@ -109,7 +109,7 @@ a| Workload
a| Runtime
-| This section tracks excution performance of Task Manager, tracking task _drift_, worker _load_, and execution stats broken down by type, including duration and execution results.
+| This section tracks execution performance of Task Manager, tracking task _drift_, worker _load_, and execution stats broken down by type, including duration and execution results.
a| Capacity Estimation


@@ -68,7 +68,7 @@ This means that you can expect a single {kib} instance to support up to 200 _tas
In practice, a {kib} instance will only achieve the upper bound of `200/tpm` if the duration of task execution is below the polling rate of 3 seconds. For the most part, the duration of tasks is below that threshold, but it can vary greatly as {es} and {kib} usage grow and task complexity increases (such as alerts executing heavy queries across large datasets).
-By <<task-manager-rough-throughput-estimation, estimating a rough throughput requirment>>, you can estimate the number of {kib} instances required to reliably execute tasks in a timely manner. An appropriate number of {kib} instances can be estimated to match the required scale.
+By <<task-manager-rough-throughput-estimation, estimating a rough throughput requirement>>, you can estimate the number of {kib} instances required to reliably execute tasks in a timely manner. An appropriate number of {kib} instances can be estimated to match the required scale.
For details on monitoring the health of {kib} Task Manager, follow the guidance in <<task-manager-health-monitoring>>.
@@ -149,7 +149,7 @@ When evaluating the proposed {kib} instance number under `proposed.provisioned_k
By <<task-manager-health-evaluate-the-workload,evaluating the workload>>, you can make a rough estimate as to the required throughput as a _tasks per minute_ measurement.
-For example, suppose your current workload reveals a required throughput of `440/tpm`. You can address this scale by provisioning 3 {kib} instances, with an upper throughput of `600/tpm`. This scale would provide aproximately 25% additional capacity to handle ad-hoc non-recurring tasks and potential growth in recurring tasks.
+For example, suppose your current workload reveals a required throughput of `440/tpm`. You can address this scale by provisioning 3 {kib} instances, with an upper throughput of `600/tpm`. This scale would provide approximately 25% additional capacity to handle ad-hoc non-recurring tasks and potential growth in recurring tasks.
Given a deployment of 100 recurring tasks, estimating the required throughput depends on the scheduled cadence.
Suppose you expect to run 50 tasks at a cadence of `10s`, the other 50 tasks at `20m`. In addition, you expect a couple dozen non-recurring tasks every minute.


@@ -83,7 +83,7 @@ export interface AppNavOptions {
/**
* A EUI iconType that will be used for the app's icon. This icon
-* takes precendence over the `icon` property.
+* takes precedence over the `icon` property.
*/
euiIconType?: string;


@@ -32,7 +32,7 @@ export interface HttpSetup {
*/
intercept(interceptor: HttpInterceptor): () => void;
-/** Makes an HTTP request. Defaults to a GET request unless overriden. See {@link HttpHandler} for options. */
+/** Makes an HTTP request. Defaults to a GET request unless overridden. See {@link HttpHandler} for options. */
fetch: HttpHandler;
/** Makes an HTTP request with the DELETE method. See {@link HttpHandler} for options. */
delete: HttpHandler;


@@ -58,7 +58,7 @@ export class GlobalToastList extends React.Component<Props, State> {
toasts={this.state.toasts.map(convertToEui)}
dismissToast={({ id }) => this.props.dismissToast(id)}
/**
-* This prop is overriden by the individual toasts that are added.
+* This prop is overridden by the individual toasts that are added.
* Use `Infinity` here so that it's obvious a timeout hasn't been
* provided in development.
*/


@@ -34,7 +34,7 @@ export interface PluginInitializerContext<ConfigSchema extends object = object>
}
/**
-* Provides a plugin-specific context passed to the plugin's construtor. This is currently
+* Provides a plugin-specific context passed to the plugin's constructor. This is currently
* empty but should provide static services in the future, such as config and logging.
*
* @param coreContext


@@ -19,7 +19,7 @@ export type {
export { CoreUsageDataService } from './core_usage_data_service';
export { CoreUsageStatsClient, REPOSITORY_RESOLVE_OUTCOME_STATS } from './core_usage_stats_client';
-// Because of #79265 we need to explicity import, then export these types for
+// Because of #79265 we need to explicitly import, then export these types for
// scripts/telemetry_check.js to work as expected
import {
CoreUsageStats,


@@ -252,7 +252,7 @@ Examples:
> become read-only if disk usage reaches 95%.
## Note on i18n
-All deprecation titles, messsages, and manual steps should be wrapped in `i18n.translate`. This
+All deprecation titles, messages, and manual steps should be wrapped in `i18n.translate`. This
provides a better user experience using different locales. Follow the writing guidelines below for
best practices to writing the i18n messages and ids.


@@ -78,7 +78,7 @@ describe('DeprecationsService', () => {
correctiveActions: {
manualSteps: [
'Using Kibana user management, change all users using the kibana_user role to the kibana_admin role.',
-'Using Kibana role-mapping management, change all role-mappings which assing the kibana_user role to the kibana_admin role.',
+'Using Kibana role-mapping management, change all role-mappings which assign the kibana_user role to the kibana_admin role.',
],
},
},
@@ -103,7 +103,7 @@ describe('DeprecationsService', () => {
"correctiveActions": Object {
"manualSteps": Array [
"Using Kibana user management, change all users using the kibana_user role to the kibana_admin role.",
-"Using Kibana role-mapping management, change all role-mappings which assing the kibana_user role to the kibana_admin role.",
+"Using Kibana role-mapping management, change all role-mappings which assign the kibana_user role to the kibana_admin role.",
],
},
"deprecationType": "config",


@@ -16,7 +16,7 @@ import { Logger } from '../logging';
const FILE_ENCODING = 'utf8';
const FILE_NAME = 'uuid';
/**
-* This UUID was inadvertantly shipped in the 7.6.0 distributable and should be deleted if found.
+* This UUID was inadvertently shipped in the 7.6.0 distributable and should be deleted if found.
* See https://github.com/elastic/kibana/issues/57673 for more info.
*/
export const UUID_7_6_0_BUG = `ce42b997-a913-4d58-be46-bb1937feedd6`;


@@ -115,7 +115,7 @@ describe('KibanaExecutionContext', () => {
expect(value).toEqual(context);
});
-it('returns a context object with registed parent object', () => {
+it('returns a context object with registered parent object', () => {
const parentContext: KibanaExecutionContext = {
type: 'parent-type',
name: 'parent-name',


@@ -132,7 +132,7 @@ describe('trace', () => {
expect(header).toEqual(expect.any(String));
});
-it('can be overriden during Elasticsearch client call', async () => {
+it('can be overridden during Elasticsearch client call', async () => {
const { http } = await root.setup();
const { createRouter } = http;


@@ -266,7 +266,7 @@ describe('customHeaders pre-response handler', () => {
});
});
-it('preserve the kbn-name value from server.name if definied in custom headders ', () => {
+it('preserve the kbn-name value from server.name if defined in custom headders ', () => {
const config = createConfig({
name: 'my-server-name',
customResponseHeaders: {


@@ -144,7 +144,7 @@ export class RollingFileAppender implements DisposableAppender {
// this would cause a second rollover that would not be awaited
// and could result in a race with the newly created appender
// that would also be performing a rollover.
-// so if we are disposed, we just flush the buffer directly to the file instead to avoid loosing the entries.
+// so if we are disposed, we just flush the buffer directly to the file instead to avoid losing the entries.
for (const log of pendingLogs) {
if (this.disposed) {
this._writeToFile(log);


@@ -40,7 +40,7 @@ export class SavedObjectsExportError extends Error {
}
/**
-* Error returned when a {@link SavedObjectsExportTransform | export tranform} threw an error
+* Error returned when a {@link SavedObjectsExportTransform | export transform} threw an error
*/
static objectTransformError(objects: SavedObject[], cause: Error) {
return new SavedObjectsExportError(
@@ -54,7 +54,7 @@ export class SavedObjectsExportError extends Error {
}
/**
-* Error returned when a {@link SavedObjectsExportTransform | export tranform} performed an invalid operation
+* Error returned when a {@link SavedObjectsExportTransform | export transform} performed an invalid operation
* during the transform, such as removing objects from the export, or changing an object's type or id.
*/
static invalidTransformError(objectKeys: string[]) {


@@ -52,7 +52,7 @@ export interface ResolveSavedObjectsImportErrorsOptions {
/**
* Resolve and return saved object import errors.
-* See the {@link SavedObjectsResolveImportErrorsOptions | options} for more detailed informations.
+* See the {@link SavedObjectsResolveImportErrorsOptions | options} for more detailed information.
*
* @public
*/


@@ -81,7 +81,7 @@ export class SavedObjectsImporter {
/**
* Resolve and return saved object import errors.
-* See the {@link SavedObjectsResolveImportErrorsOptions | options} for more detailed informations.
+* See the {@link SavedObjectsResolveImportErrorsOptions | options} for more detailed information.
*
* @throws SavedObjectsImportError
*/


@@ -44,7 +44,7 @@ It might happen that a user modifies their FanciPlugin 1.0 export file to have d
Similarly, Kibana server APIs assume that they are sent up to date documents unless a document specifies a migrationVersion. This means that out-of-date callers of our APIs will send us out-of-date documents, and those documents will be accepted and stored as if they are up-to-date.
-To prevent this from happening, migration authors should _always_ write a [validation](../validation) function that throws an error if a document is not up to date, and this validation function should always be updated any time a new migration is added for the relevent document types.
+To prevent this from happening, migration authors should _always_ write a [validation](../validation) function that throws an error if a document is not up to date, and this validation function should always be updated any time a new migration is added for the relevant document types.
## Document ownership
@@ -92,7 +92,7 @@ Each migration function only needs to be able to handle documents belonging to t
## Disabled plugins
-If a plugin is disbled, all of its documents are retained in the Kibana index. They can be imported and exported. When the plugin is re-enabled, Kibana will migrate any out of date documents that were imported or retained while it was disabled.
+If a plugin is disabled, all of its documents are retained in the Kibana index. They can be imported and exported. When the plugin is re-enabled, Kibana will migrate any out of date documents that were imported or retained while it was disabled.
## Configuration
@@ -116,7 +116,7 @@ Kibana index migrations expose a few config settings which might be tweaked:
To illustrate how migrations work, let's walk through an example, using a fictional plugin: `FanciPlugin`.
-FanciPlugin 1.0 had a mappping that looked like this:
+FanciPlugin 1.0 had a mapping that looked like this:
```js
{


@@ -443,7 +443,7 @@ function buildDocumentTransform({
}
// In order to keep tests a bit more stable, we won't
-// tack on an empy migrationVersion to docs that have
+// tack on an empty migrationVersion to docs that have
// no migrations defined.
if (_.isEmpty(transformedDoc.migrationVersion)) {
delete transformedDoc.migrationVersion;
@@ -740,7 +740,7 @@ function nextUnmigratedProp(doc: SavedObjectUnsanitizedDoc, migrations: ActiveMi
}
/**
-* Applies any relevent migrations to the document for the specified property.
+* Applies any relevant migrations to the document for the specified property.
*/
function migrateProp(
doc: SavedObjectUnsanitizedDoc,


@@ -62,7 +62,7 @@ describe('disableUnknownTypeMappingFields', () => {
properties: {
new_field: { type: 'binary' },
field_1: { type: 'keyword' }, // was type text in source mappings
-// old_field was present in source but ommited in active mappings
+// old_field was present in source but omitted in active mappings
},
});
});


@@ -67,7 +67,7 @@ export const updateAliases =
// The only impact for using `updateAliases` to mark the version index
// as ready is that it could take longer for other Kibana instances to
// see that the version index is ready so they are more likely to
-// perform unecessary duplicate work.
+// perform unnecessary duplicate work.
return Either.right('update_aliases_succeeded' as const);
})
.catch((err: EsErrors.ElasticsearchClientError) => {


@@ -23,7 +23,7 @@ export interface WaitForTaskResponse {
}
/**
-* After waiting for the specificed timeout, the task has not yet completed.
+* After waiting for the specified timeout, the task has not yet completed.
*
* When querying the tasks API we use `wait_for_completion=true` to block the
* request until the task completes. If after the `timeout`, the task still has
@@ -31,7 +31,7 @@ has reached a timeout, Elasticsearch will continue to run the task.
* has reached a timeout, Elasticsearch will continue to run the task.
*/
export interface WaitForTaskCompletionTimeout {
-/** After waiting for the specificed timeout, the task has not yet completed. */
+/** After waiting for the specified timeout, the task has not yet completed. */
readonly type: 'wait_for_task_completion_timeout';
readonly message: string;
readonly error?: Error;


@@ -40,7 +40,7 @@ describe('migration v2 with corrupt saved object documents', () => {
await new Promise((resolve) => setTimeout(resolve, 10000));
});
-it('collects corrupt saved object documents accross batches', async () => {
+it('collects corrupt saved object documents across batches', async () => {
const { startES } = kbnTestServer.createTestServers({
adjustTimeout: (t: number) => jest.setTimeout(t),
settings: {


@@ -40,7 +40,7 @@ describe('migration v2 with corrupt saved object documents', () => {
await new Promise((resolve) => setTimeout(resolve, 10000));
});
-it('collects corrupt saved object documents accross batches', async () => {
+it('collects corrupt saved object documents across batches', async () => {
const { startES } = kbnTestServer.createTestServers({
adjustTimeout: (t: number) => jest.setTimeout(t),
settings: {


@@ -565,7 +565,7 @@ describe('migrations v2 model', () => {
});
// The createIndex action called by LEGACY_CREATE_REINDEX_TARGET never
// returns a left, it will always succeed or timeout. Since timeout
-// failures are always retried we don't explicity test this logic
+// failures are always retried we don't explicitly test this logic
});
describe('LEGACY_REINDEX', () => {


@@ -265,7 +265,7 @@ export const model = (currentState: State, resW: ResponseType<AllActionStates>):
// control state progression and simplify the implementation.
return { ...stateP, controlState: 'LEGACY_DELETE' };
} else if (isLeftTypeof(left, 'wait_for_task_completion_timeout')) {
-// After waiting for the specificed timeout, the task has not yet
+// After waiting for the specified timeout, the task has not yet
// completed. Retry this step to see if the task has completed after an
// exponential delay. We will basically keep polling forever until the
// Elasticeasrch task succeeds or fails.
@@ -854,7 +854,7 @@ export const model = (currentState: State, resW: ResponseType<AllActionStates>):
} else {
// If there are none versionIndexReadyActions another instance
// already completed this migration and we only transformed outdated
-// documents and updated the mappings for incase a new plugin was
+// documents and updated the mappings for in case a new plugin was
// enabled.
return {
...stateP,


@@ -301,7 +301,7 @@ describe('SavedObjectsService', () => {
});
describe('#createScopedRepository', () => {
-it('creates a respository scoped to the user', async () => {
+it('creates a repository scoped to the user', async () => {
const coreContext = createCoreContext({ skipMigration: false });
const soService = new SavedObjectsService(coreContext);
const coreSetup = createSetupDeps();
@@ -321,7 +321,7 @@ expect(includedHiddenTypes).toEqual([]);
expect(includedHiddenTypes).toEqual([]);
});
-it('creates a respository including hidden types when specified', async () => {
+it('creates a repository including hidden types when specified', async () => {
const coreContext = createCoreContext({ skipMigration: false });
const soService = new SavedObjectsService(coreContext);
const coreSetup = createSetupDeps();
@@ -341,7 +341,7 @@ });
});
describe('#createInternalRepository', () => {
-it('creates a respository using the admin user', async () => {
+it('creates a repository using the admin user', async () => {
const coreContext = createCoreContext({ skipMigration: false });
const soService = new SavedObjectsService(coreContext);
const coreSetup = createSetupDeps();
@@ -359,7 +359,7 @@ expect(includedHiddenTypes).toEqual([]);
expect(includedHiddenTypes).toEqual([]);
});
-it('creates a respository including hidden types when specified', async () => {
+it('creates a repository including hidden types when specified', async () => {
const coreContext = createCoreContext({ skipMigration: false });
const soService = new SavedObjectsService(coreContext);
const coreSetup = createSetupDeps();


@@ -243,7 +243,7 @@ describe('internalBulkResolve', () => {
});
it('does not call bulk update in the Default space', async () => {
-// Aliases cannot exist in the Default space, so we skip the alias check part of the alogrithm in that case (e.g., bulk update)
+// Aliases cannot exist in the Default space, so we skip the alias check part of the algorithm in that case (e.g., bulk update)
for (const namespace of [undefined, 'default']) {
const params = setup([{ type: OBJ_TYPE, id: '1' }], { namespace });
mockMgetResults(


@@ -1710,7 +1710,7 @@ export class SavedObjectsRepository {
return this.incrementCounterInternal<T>(type, id, counterFields, options);
}
-/** @internal incrementCounter function that is used interally and bypasses validation checks. */
+/** @internal incrementCounter function that is used internally and bypasses validation checks. */
private async incrementCounterInternal<T = unknown>(
type: string,
id: string,
@@ -2208,7 +2208,7 @@ const errorContent = (error: DecoratedError) => error.output.payload;
const unique = (array: string[]) => [...new Set(array)];
/**
-* Type and type guard function for converting a possibly not existant doc to an existant doc.
+* Type and type guard function for converting a possibly not existent doc to an existent doc.
*/
type GetResponseFound<TDocument = unknown> = estypes.GetResponse<TDocument> &
Required<


@@ -183,7 +183,7 @@ export function getClauseForReference(reference: HasReferenceQueryParams) {
};
}
-// A de-duplicated set of namespaces makes for a more effecient query.
+// A de-duplicated set of namespaces makes for a more efficient query.
const uniqNamespaces = (namespacesToNormalize?: string[]) =>
namespacesToNormalize ? Array.from(new Set(namespacesToNormalize)) : undefined;


@@ -214,7 +214,7 @@ export type MutatingOperationRefreshSetting = boolean | 'wait_for';
*
* From the perspective of application code and APIs the SavedObjectsClient is
* a black box that persists objects. One of the internal details that users have
-* no control over is that we use an elasticsearch index for persistance and that
+* no control over is that we use an elasticsearch index for persistence and that
* index might be missing.
*
* At the time of writing we are in the process of transitioning away from the


@@ -217,7 +217,7 @@ export interface StatusServiceSetup {
* By default, plugins inherit this derived status from their dependencies.
* Calling {@link StatusSetup.set} overrides this default status.
*
-* This may emit multliple times for a single status change event as propagates
+* This may emit multiple times for a single status change event as propagates
* through the dependency tree
*/
derivedStatus$: Observable<ServiceStatus>;


@@ -603,7 +603,7 @@ describe('ui settings', () => {
expect(result).toBe('YYYY-MM-DD');
});
-it('returns the overridden value for an overrided key', async () => {
+it('returns the overridden value for an overriden key', async () => {
const esDocSource = { dateFormat: 'YYYY-MM-DD' };
const overrides = { dateFormat: 'foo' };
const { uiSettings } = setup({ esDocSource, overrides });