# Reduces field capabilities event loop block times by scaling linearly using hashes (#75718)

## Summary

On the `security_solution` project, we have been getting reports from different customers of scaling issues and excessive NodeJS event loop block times. After in-depth tracing through some of the index and field capabilities calls, we identified two "hot paths" running through `field_capabilities` that use doubly looped arrays (an O(n²) pattern) rather than hashes. By switching these two hot spots to hashes we are able to reduce the event loop block times by an order of magnitude.
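
For intuition, here is a minimal, self-contained sketch (ours, not the Kibana code) of why the swap matters. `dedupeQuadratic` mirrors the old `concatIfUniq` pattern, in which `Array.prototype.includes` rescans the accumulator for every field, and `dedupeWithHash` mirrors the new pattern:

```ts
// Hypothetical micro-benchmark; the field names and counts are made up for illustration.
const fieldNames = Array.from({ length: 10_000 }, (_, i) => `field_${i % 8_000}`);

// Old pattern: includes() rescans the accumulator for every value (O(n^2) overall),
// and concat() copies the array for every unique value on top of that.
function dedupeQuadratic(values: string[]): string[] {
  return values.reduce(
    (arr, value) => (arr.includes(value) ? arr : arr.concat(value)),
    [] as string[]
  );
}

// New pattern: hash membership checks are O(1), so the whole pass is O(n).
function dedupeWithHash(values: string[]): string[] {
  const hash: Record<string, string> = {};
  const unique: string[] = [];
  for (const value of values) {
    if (hash[value] == null) {
      hash[value] = value;
      unique.push(value);
    }
  }
  return unique;
}

console.time('quadratic');
dedupeQuadratic(fieldNames);
console.timeEnd('quadratic');

console.time('hash');
dedupeWithHash(fieldNames);
console.timeEnd('hash');
```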

Before this PR you can see event loop block times as high as:

```txt
field_cap: 575.131ms
```

And after this PR you will see event loop block times drop by an order of magnitude to:

```txt
field_cap: 31.783ms
```

when you're calling into indexes as large as `filebeat-*`. The number climbs even higher if you concatenate several large indexes together and try to get capabilities from each one all at once. We already call `getFieldCapabilities` with only one index at a time to spread out event loop block times.
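
As an illustration of that one-index-at-a-time idea, here is a minimal sketch; `fieldCapsPerIndex` and `fetchFieldCaps` are hypothetical stand-ins, not Kibana APIs:

```ts
// Hypothetical sketch: fetchFieldCaps stands in for a real call into getFieldCapabilities.
async function fieldCapsPerIndex(
  indices: string[],
  fetchFieldCaps: (index: string) => Promise<unknown>
): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const index of indices) {
    // Awaiting each index separately yields back to the event loop between calls,
    // so the synchronous post-processing is broken into smaller chunks.
    results.push(await fetchFieldCaps(index));
  }
  return results;
}
```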

The fix is to use a hash in two key areas within these two files:

```txt
src/plugins/data/server/index_patterns/fetcher/lib/field_capabilities/field_capabilities.ts
src/plugins/data/server/index_patterns/fetcher/lib/field_capabilities/field_caps_response.ts
```

This effect happens during the `SourceQuery`/`IndexFields` queries within `security_solution`, but you should be able to trigger it with any application code that calls into those code paths with a large index such as `filebeat-*` anywhere in Kibana.

## How to see the block times before and after

Add `console.time('field_cap');` and `console.timeEnd('field_cap');` around the synchronous code to test the optimizations before and after.

For example, around line 45 with the original code:
```ts
  const esFieldCaps: FieldCapsResponse = await callFieldCapsApi(callCluster, indices);
  console.time('field_cap'); // <--- start timer
  const fieldsFromFieldCapsByName = keyBy(readFieldCapsResponse(esFieldCaps), 'name');

  const allFieldsUnsorted = Object.keys(fieldsFromFieldCapsByName)
    .filter((name) => !name.startsWith('_'))
    .concat(metaFields)
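    // concatIfUniq checks arr.includes(value) for every field, an O(n^2) scan on large indexes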
    .reduce(concatIfUniq, [] as string[])
    .map<FieldDescriptor>((name) =>
      defaults({}, fieldsFromFieldCapsByName[name], {
        name,
        type: 'string',
        searchable: false,
        aggregatable: false,
        readFromDocValues: false,
      })
    )
    .map(mergeOverrides);

  const sorted = sortBy(allFieldsUnsorted, 'name');
  console.timeEnd('field_cap'); // <--- outputs the end timer
  return sorted;

```

And around line 45 with this pull request:

```ts
  const esFieldCaps: FieldCapsResponse = await callFieldCapsApi(callCluster, indices);
  console.time('field_cap'); // <--- start timer
  const fieldsFromFieldCapsByName = keyBy(readFieldCapsResponse(esFieldCaps), 'name');

  const allFieldsUnsorted = Object.keys(fieldsFromFieldCapsByName)
    .filter((name) => !name.startsWith('_'))
    .concat(metaFields)
    .reduce<{ names: string[]; hash: Record<string, string> }>(
      (agg, value) => {
        // This is intentionally using a "hash" and a "push" to be highly optimized with very large indexes
        if (agg.hash[value] != null) {
          return agg;
        } else {
          agg.hash[value] = value;
          agg.names.push(value);
          return agg;
        }
      },
      { names: [], hash: {} }
    )
    .names.map<FieldDescriptor>((name) =>
      defaults({}, fieldsFromFieldCapsByName[name], {
        name,
        type: 'string',
        searchable: false,
        aggregatable: false,
        readFromDocValues: false,
      })
    )
    .map(mergeOverrides);

  const sorted = sortBy(allFieldsUnsorted, 'name');
  console.timeEnd('field_cap'); // <--- outputs the end timer
  return sorted;

```
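
As a side note on the design choice: the hash-plus-push reduce keeps insertion order while deduplicating with O(1) lookups. A `Set` would express the same intent, since JavaScript Sets also iterate in insertion order; a hypothetical equivalent (not what this PR ships) could look like:

```ts
// Hypothetical Set-based equivalent of the hash-plus-push reduce above.
function uniqueFieldNames(fieldNames: string[], metaFields: string[]): string[] {
  const candidates = fieldNames.filter((name) => !name.startsWith('_')).concat(metaFields);
  // Sets iterate in insertion order, so the result matches the pushed order.
  return Array.from(new Set(candidates));
}
```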

Then reload the Security Solution application web page, or anything else that calls into the `filebeat-*` index or another large index (you can also concatenate several indexes together to test the performance). For Security Solution, just visit any page that uses a `filebeat-*` index, such as the one below:

```
http://localhost:5601/app/security/timelines/
```

Be sure to load it _twice_ when testing, as NodeJS will sometimes report better numbers the second time, after it has optimized code paths it encountered on the first run.

You should see numbers similar to this before the fix:

```txt
field_cap: 575.131ms
```

This indicates that it was blocking the event loop for around half a second before this fix. If an application adds additional indexes on top of `filebeat-*`, or if it tries to execute other code afterwards (which we do in security solutions), then the block times climb even higher.


However, after this fix the O(n²) double loops are changed to use hashing, so the time climbs only by some constant × n, where n is the number of fields. For `filebeat-*` it should be very low, around:

```txt
field_cap: 31.783ms
```
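
As rough arithmetic: with n on the order of 10,000 fields, an O(n²) scan performs around 10⁸ array comparisons, while the hash version performs around 10⁴ constant-time inserts and lookups, which lines up with the roughly 18× drop from 575ms to 32ms measured above.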

### Checklist

Unit tests are already present, so this shouldn't break anything 🤞.
- [x] [Unit or functional tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html) were updated or added to match the most common scenarios

### The diff

For reference, here are the two changed files (39 additions and 15 deletions). First, `field_capabilities.ts`:

```diff
@@ -25,10 +25,6 @@ import { FieldCapsResponse, readFieldCapsResponse } from './field_caps_response'
 import { mergeOverrides } from './overrides';
 import { FieldDescriptor } from '../../index_patterns_fetcher';
 
-export function concatIfUniq<T>(arr: T[], value: T) {
-  return arr.includes(value) ? arr : arr.concat(value);
-}
-
 /**
  * Get the field capabilities for field in `indices`, excluding
  * all internal/underscore-prefixed fields that are not in `metaFields`
@@ -49,8 +45,20 @@ export async function getFieldCapabilities(
   const allFieldsUnsorted = Object.keys(fieldsFromFieldCapsByName)
     .filter((name) => !name.startsWith('_'))
     .concat(metaFields)
-    .reduce(concatIfUniq, [] as string[])
-    .map<FieldDescriptor>((name) =>
+    .reduce<{ names: string[]; hash: Record<string, string> }>(
+      (agg, value) => {
+        // This is intentionally using a "hash" and a "push" to be highly optimized with very large indexes
+        if (agg.hash[value] != null) {
+          return agg;
+        } else {
+          agg.hash[value] = value;
+          agg.names.push(value);
+          return agg;
+        }
+      },
+      { names: [], hash: {} }
+    )
+    .names.map<FieldDescriptor>((name) =>
       defaults({}, fieldsFromFieldCapsByName[name], {
         name,
         type: 'string',
```
And `field_caps_response.ts`:

```diff
@@ -93,8 +93,12 @@ export interface FieldCapsResponse {
  */
 export function readFieldCapsResponse(fieldCapsResponse: FieldCapsResponse): FieldDescriptor[] {
   const capsByNameThenType = fieldCapsResponse.fields;
-  const kibanaFormattedCaps: FieldDescriptor[] = Object.keys(capsByNameThenType).map(
-    (fieldName) => {
+  const kibanaFormattedCaps = Object.keys(capsByNameThenType).reduce<{
+    array: FieldDescriptor[];
+    hash: Record<string, FieldDescriptor>;
+  }>(
+    (agg, fieldName) => {
       const capsByType = capsByNameThenType[fieldName];
       const types = Object.keys(capsByType);
@@ -119,7 +123,7 @@
       // ignore the conflict and carry on (my wayward son)
       const uniqueKibanaTypes = uniq(types.map(castEsToKbnFieldTypeName));
       if (uniqueKibanaTypes.length > 1) {
-        return {
+        const field = {
           name: fieldName,
           type: 'conflict',
           esTypes: types,
@@ -134,10 +138,14 @@
             {}
           ),
         };
+        // This is intentionally using a "hash" and a "push" to be highly optimized with very large indexes
+        agg.array.push(field);
+        agg.hash[fieldName] = field;
+        return agg;
       }
       const esType = types[0];
-      return {
+      const field = {
         name: fieldName,
         type: castEsToKbnFieldTypeName(esType),
         esTypes: types,
@@ -145,11 +153,19 @@
         aggregatable: isAggregatable,
         readFromDocValues: shouldReadFieldFromDocValues(isAggregatable, esType),
       };
+      // This is intentionally using a "hash" and a "push" to be highly optimized with very large indexes
+      agg.array.push(field);
+      agg.hash[fieldName] = field;
+      return agg;
-    }
+    },
+    {
+      array: [],
+      hash: {},
+    }
   );
   // Get all types of sub fields. These could be multi fields or children of nested/object types
-  const subFields = kibanaFormattedCaps.filter((field) => {
+  const subFields = kibanaFormattedCaps.array.filter((field) => {
     return field.name.includes('.');
   });
@@ -161,9 +177,9 @@
         .map((_, index, parentFieldNameParts) => {
           return parentFieldNameParts.slice(0, index + 1).join('.');
         });
-      const parentFieldCaps = parentFieldNames.map((parentFieldName) => {
-        return kibanaFormattedCaps.find((caps) => caps.name === parentFieldName);
-      });
+      const parentFieldCaps = parentFieldNames.map(
+        (parentFieldName) => kibanaFormattedCaps.hash[parentFieldName]
+      );
       const parentFieldCapsAscending = parentFieldCaps.reverse();
       if (parentFieldCaps && parentFieldCaps.length > 0) {
@@ -188,7 +204,7 @@
     }
   });
-  return kibanaFormattedCaps.filter((field) => {
+  return kibanaFormattedCaps.array.filter((field) => {
     return !['object', 'nested'].includes(field.type);
   });
 }
```