[[production]]
= Use {kib} in a production environment
++++
<titleabbrev>Production considerations</titleabbrev>
++++

How you deploy {kib} largely depends on your use case. If you are the only user,
you can run {kib} on your local machine and configure it to point to whatever
{es} instance you want to interact with. Conversely, if you have a large
number of heavy {kib} users, you might need to load balance across multiple
{kib} instances that are all connected to the same {es} instance.

While {kib} isn't terribly resource intensive, we still recommend running {kib}
separate from your {es} data or master nodes. To distribute {kib}
traffic across the nodes in your {es} cluster,
you can configure {kib} to use a list of {es} hosts.

[float]
[[load-balancing-kibana]]
=== Load balancing across multiple {kib} instances

To serve multiple {kib} installations behind a load balancer, you must change the configuration.
See {kibana-ref}/settings.html[Configuring {kib}] for details on each setting.

Settings unique across each {kib} instance:

[source,js]
--------
server.uuid
server.name
--------

Settings unique across each host (for example, running multiple installations on the same virtual machine):

[source,js]
--------
path.data
pid.file
server.port
--------

When using a file appender, the target file must also be unique:

[source,yaml]
--------
logging:
  appenders:
    default:
      type: file
      fileName: /unique/path/per/instance
--------

Settings that must be the same:

[source,js]
--------
xpack.security.encryptionKey //decrypting session information
xpack.reporting.encryptionKey //decrypting reports
xpack.encryptedSavedObjects.encryptionKey // decrypting saved objects
xpack.encryptedSavedObjects.keyRotation.decryptionOnlyKeys // saved objects encryption key rotation, if any
--------

To use separate configuration files from the command line, specify each with the `-c` flag:

[source,js]
--------
bin/kibana -c config/instance1.yml
bin/kibana -c config/instance2.yml
--------

[float]
[[accessing-load-balanced-kibana]]
=== Accessing multiple load-balanced {kib} clusters

To access multiple load-balanced {kib} clusters from the same browser,
set `xpack.security.cookieName` in the configuration.
This avoids conflicts between cookies from the different {kib} instances.

In each cluster, {kib} instances should have the same `cookieName`
value. This achieves seamless high availability and keeps the session
active if the instance currently in use fails.
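
For example, a minimal sketch in `kibana.yml` (the cookie name shown here is a placeholder, not a default value):

[source,yaml]
--------
# Every Kibana instance behind the first cluster's load balancer (example value)
xpack.security.cookieName: sid-cluster-one
--------

Instances behind the second cluster's load balancer would use a different value, such as `sid-cluster-two`.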

[float]
[[high-availability]]
=== High availability across multiple {es} nodes

{kib} can be configured to connect to multiple {es} nodes in the same cluster. In situations where a node becomes unavailable,
{kib} will transparently connect to an available node and continue operating. Requests to available hosts will be routed in a round-robin fashion.

In kibana.yml:

[source,yaml]
--------
elasticsearch.hosts:
  - http://elasticsearch1:9200
  - http://elasticsearch2:9200
--------

Related configurations include `elasticsearch.sniffInterval`, `elasticsearch.sniffOnStart`, and `elasticsearch.sniffOnConnectionFault`.
These can be used to automatically update the list of hosts as a cluster is resized. Parameters can be found on the {kibana-ref}/settings.html[settings page].
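
As an illustration (the values are examples only, not recommendations), sniffing could be enabled in kibana.yml like this:

[source,yaml]
--------
# Query the cluster for its list of nodes when Kibana starts
elasticsearch.sniffOnStart: true
# Refresh the list of nodes after a connection fault
elasticsearch.sniffOnConnectionFault: true
# Also refresh the list every 60 seconds (interval in milliseconds)
elasticsearch.sniffInterval: 60000
--------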

[float]
[[memory]]
=== Memory

{kib} has a default memory limit that scales based on total memory available. In some scenarios, such as large reporting jobs,
it may make sense to tweak limits to meet more specific requirements.

A limit can be defined by setting `--max-old-space-size` in the `node.options` config file found inside the `kibana/config` folder, or in any other folder configured with the environment variable `KBN_PATH_CONF`. For example, on Debian-based systems, the folder is `/etc/kibana`.

The option accepts a limit in MB:

[source,js]
--------
--max-old-space-size=2048
--------