Enforce that we use index scans (rather than seq scans), which we also do for state queries. The reason to enforce this is that we can't get PostgreSQL to correctly understand that the distribution of `stream_ordering` depends on `highlight`, so it always defaults (on matrix.org) to sequential scans.
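For illustration, a minimal sketch of the technique, assuming a psycopg2-style cursor inside a transaction; the table and columns here are illustrative, not the exact Synapse query:
```
def fetch_unread_push_actions(cur, user_id, min_stream_ordering):
    # SET LOCAL only affects the current transaction, so other queries keep
    # the planner's normal behaviour.
    cur.execute("SET LOCAL enable_seqscan = off")
    cur.execute(
        """
        SELECT event_id, stream_ordering, highlight
        FROM event_push_actions
        WHERE user_id = %s AND stream_ordering > %s
        """,
        (user_id, min_stream_ordering),
    )
    return cur.fetchall()
```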
Updates the database schema to require a thread_id (by adding a
constraint that the column is non-null) for event_push_actions,
event_push_actions_staging, and event_push_actions_summary.
For PostgreSQL we add the constraint as NOT VALID, then
VALIDATE the constraint in a background job to avoid locking
the table during an upgrade.
For SQLite we simply rebuild the table & copy the data.
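A rough sketch of the two-step PostgreSQL pattern, assuming a psycopg2-style cursor; the constraint name is illustrative and the real VALIDATE step runs via Synapse's background updates:
```
def add_not_null_constraint(cur):
    # Step 1 (schema delta): add the constraint without validating existing
    # rows, which needs only a brief lock on the table.
    cur.execute(
        "ALTER TABLE event_push_actions"
        " ADD CONSTRAINT event_push_actions_thread_id_not_null"
        " CHECK (thread_id IS NOT NULL) NOT VALID"
    )

def validate_constraint(cur):
    # Step 2 (background job): validate existing rows; this scans the table
    # but doesn't block concurrent reads or writes.
    cur.execute(
        "ALTER TABLE event_push_actions"
        " VALIDATE CONSTRAINT event_push_actions_thread_id_not_null"
    )
```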
Pushers tend to make many connections to the same HTTP host
(e.g. a new event comes in, causes events to be pushed, and then
the homeserver connects to the same host many times). Due to this
the per-host HTTP connection pool size was increased, but this does
not make sense for other SimpleHttpClients.
Add a parameter for the connection pool and override it for pushers
(making a separate SimpleHttpClient for pushers with the increased
configuration).
This returns the HTTP connection pool settings to the default Twisted
ones for non-pusher HTTP clients.
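A minimal sketch of the idea using plain Twisted (Synapse's SimpleHttpClient wraps this; the parameter name and the pusher pool size below are assumptions):
```
from twisted.internet import reactor
from twisted.web.client import Agent, HTTPConnectionPool

def make_agent(max_connections_per_host: int = 2) -> Agent:
    pool = HTTPConnectionPool(reactor, persistent=True)
    # Twisted's default is 2 persistent connections per host; pushers get a
    # larger pool because they repeatedly hit the same push gateway.
    pool.maxPersistentPerHost = max_connections_per_host
    return Agent(reactor, pool=pool)

default_agent = make_agent()  # non-pusher clients keep the Twisted default
pusher_agent = make_agent(max_connections_per_host=20)  # size is illustrative
```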
Adds an optional parameter to the /relations API which
will recurse over a limited number of event relationships.
This will cause the API to return not just the events related to the
parent event, but also events related to those related to the parent
event, etc.
This is disabled by default behind an experimental configuration
flag and is currently implemented using prefixed parameters.
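For illustration, a request might look like the sketch below; the unstable prefix and parameter name are placeholders, not the actual ones used by the experimental implementation:
```
import urllib.parse

def relations_url(room_id: str, event_id: str, recurse: bool = False) -> str:
    url = (
        "/_matrix/client/v1/rooms/"
        + urllib.parse.quote(room_id)
        + "/relations/"
        + urllib.parse.quote(event_id)
    )
    if recurse:
        # With recursion enabled the server also walks relations of the
        # related events, up to a server-side depth limit.
        url += "?org.example.recurse=true"  # placeholder prefixed parameter
    return url
```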
MSC3983 provides a way to request multiple OTKs at once from appservices;
this extends that concept to the Client-Server API.
Note that this will likely be split out into a separate MSC, but is currently part of
MSC3983.
Cleans up the schema delta files:
* Removes no-op functions.
* Adds missing type hints to function parameters.
* Fixes any issues with type hints.
This also renames one (very old) schema delta to avoid a conflict
that mypy complains about.
* Docs: Add Nginx loadbalancing example with sticky mxid for workers
Add an example nginx configuration snippet that
* does load balancing for workers
* respects the mxid part of the access token, from both the URL parameter and the auth header
* and handles the `since` parameter
Thanks to @olmari for pushing me to write this and testing the configs
Signed-off-by: Tatu Wikman <tatu.wikman@gmail.com>
* Add changelog entry
Signed-off-by: Tatu Wikman <tatu.wikman@gmail.com>
* Update codeblock formatter
Co-authored-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>
* Remove indirectly related nginx-config
Signed-off-by: Sami Olmari <sami@olmari.fi>
* Proper definition of how to target the username for a worker
Signed-off-by: Sami Olmari <sami@olmari.fi>
* Change "nginx" to the more general "reverse proxy", as that is the concept now.
Signed-off-by: Sami Olmari <sami@olmari.fi>
* Wording in better English
Co-authored-by: Tatu Wikman <tatu.wikman@gmail.com>
* rename changelog entry to have correct extension
---------
Signed-off-by: Tatu Wikman <tatu.wikman@gmail.com>
Signed-off-by: Sami Olmari <sami@olmari.fi>
Co-authored-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>
Co-authored-by: Sami Olmari <sami@olmari.fi>
Co-authored-by: Sami Olmari <sami+github@olmari.fi>
It can be useful to always return the fallback key when attempting to
claim keys. This adds an unstable endpoint for `/keys/claim` which
always returns fallback keys in addition to one-time-keys.
The fallback key(s) are not marked as "used" unless there are no
corresponding OTKs.
This is currently defined in MSC3983 (although likely to be split out
to a separate MSC). The endpoint shape may change or be requested
differently (i.e. a keyword parameter on the current endpoint), but the
core logic should be reasonable.
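For reference, a sketch of the standard `/keys/claim` request body; the unstable endpoint described above returns fallback keys alongside any one-time-keys for the claimed devices (exact paths and response fields are per MSC3983 and may change):
```
# Standard /keys/claim request shape: user_id -> device_id -> key algorithm.
claim_request = {
    "one_time_keys": {
        "@alice:example.com": {"DEVICEID": "signed_curve25519"},
    },
}
# With the unstable endpoint, the response also includes the device's
# fallback key; it is only marked as "used" if the device had no unused
# one-time keys left to hand out.
```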
Before this change:
* `PerspectivesKeyFetcher` and `ServerKeyFetcher` write to `server_keys_json`.
* `PerspectivesKeyFetcher` also writes to `server_signature_keys`.
* `StoreKeyFetcher` reads from `server_signature_keys`.
After this change:
* `PerspectivesKeyFetcher` and `ServerKeyFetcher` write to `server_keys_json`.
* `PerspectivesKeyFetcher` also writes to `server_signature_keys`.
* `StoreKeyFetcher` reads from `server_keys_json`.
This results in `StoreKeyFetcher` now using the results from `ServerKeyFetcher`
in addition to those from `PerspectivesKeyFetcher`, i.e. keys which are directly
fetched from a server will now be pulled from the database instead of refetched.
An additional minor change is included to avoid creating a `PerspectivesKeyFetcher`
(and checking it) if no `trusted_key_servers` are configured.
The overall impact of this should be better usage of cached results:
* If a server has no trusted key servers configured then it should reduce how often keys
are fetched.
* If a server's trusted key server does not have a requested server's keys cached then it
should reduce how often keys are directly fetched.
These two lines:
```
config_obj = HomeServerConfig()
config_obj.parse_config_dict(config, "", "")
```
are called many times with the exact same value for `config`.
As the test suite is CPU-bound and non-negligible time is spent in
`parse_config_dict`, this saves ~5% on the overall runtime of the Trial
test suite (tested with both `-j2` and `-j12` on a 12t CPU).
This is sadly rather limited, as the cache cannot be shared between
processes (it contains at least jinja2.Template and RLock objects which
aren't pickleable), and Trial tends to run tests that are close together in different
processes.
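A minimal sketch of the approach, assuming the cache key can be built by serialising the config dict (the real test helper differs in the details):
```
import json
from typing import Any, Dict

_config_cache: Dict[str, Any] = {}

def cached_parse_config(config: Dict[str, Any]) -> Any:
    # Key on the config contents; `default=repr` papers over any
    # non-JSON-serialisable values.
    key = json.dumps(config, sort_keys=True, default=repr)
    parsed = _config_cache.get(key)
    if parsed is None:
        from synapse.config.homeserver import HomeServerConfig

        parsed = HomeServerConfig()
        parsed.parse_config_dict(config, "", "")
        _config_cache[key] = parsed
    # Callers must treat the result as read-only (or deep-copy it), since the
    # same object is handed out for every identical config.
    return parsed
```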
* Switch InstanceLocationConfig to a pydantic BaseModel, apply Strict* types, and add a few helper methods (that will make more sense in follow-up work); a rough sketch of the shape is below.
Co-authored-by: David Robertson <davidr@element.io>
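A rough sketch of the shape, assuming pydantic's strict types; the actual field names and helper methods in Synapse may differ:
```
from pydantic import BaseModel, StrictBool, StrictInt, StrictStr

class InstanceLocationConfig(BaseModel):
    """Where a worker instance can be reached for HTTP replication."""

    host: StrictStr
    port: StrictInt
    tls: StrictBool = False

    def scheme(self) -> str:
        # Example of the kind of small helper mentioned above.
        return "https" if self.tls else "http"
```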
* More precise type for LoggingTransaction.execute
* Add an annotation for stream_ordering_month_ago
This would have spotted the error that was fixed in "Add comma missing from #15382. (#15429)"
c.f. #15264
The two changes are:
1. Add indexes so that the selects / deletes don't do sequential scans
2. Don't repeatedly call `SELECT count(*)` each iteration, as that's slow
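A sketch of the second change, assuming a psycopg2-style cursor and an illustrative table; the point is to count once up front rather than on every iteration:
```
def purge_in_batches(cur, before_ts: int, batch_size: int = 1000) -> None:
    # Count once, instead of re-running `SELECT count(*)` every iteration.
    cur.execute(
        "SELECT count(*) FROM some_table WHERE received_ts < %s", (before_ts,)
    )
    (remaining,) = cur.fetchone()

    while remaining > 0:
        # An index on received_ts keeps this from being a sequential scan.
        cur.execute(
            "DELETE FROM some_table WHERE ctid IN ("
            " SELECT ctid FROM some_table WHERE received_ts < %s LIMIT %s)",
            (before_ts, batch_size),
        )
        if cur.rowcount == 0:
            break
        remaining -= cur.rowcount
```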
The registration fallback is broken and unspecced. This removes it
since there is no plan to spec it.
Note that this does not modify the login fallback code.
* Change `store_server_verify_keys` to take a `Mapping[(str, str), FKR]`
This is because we already can't handle duplicate keys; they lead to a cardinality violation (a sketch of the new signature follows this entry)
* Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
---------
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
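A sketch of what the new signature looks like, assuming `FKR` stands for `FetchKeyResult`:
```
from typing import Mapping, Tuple

from synapse.storage.keys import FetchKeyResult

async def store_server_verify_keys(
    from_server: str,
    ts_added_ms: int,
    verify_keys: Mapping[Tuple[str, str], FetchKeyResult],
) -> None:
    # Keyed by (server_name, key_id): a given key can appear at most once per
    # batch, so the upsert can't hit a cardinality violation from duplicates.
    ...
```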
This moves `redacts` from being a top-level property to
a `content` property in a new room version.
MSC2176 (which was previously implemented) states that this
property should not be redacted.
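For illustration, the difference in event shape (abbreviated, not complete events):
```
# Older room versions: the redacted event ID is a top-level property.
redaction_old = {
    "type": "m.room.redaction",
    "redacts": "$some_event_id",
    "content": {"reason": "spam"},
}

# New room version: `redacts` lives inside `content`.
redaction_new = {
    "type": "m.room.redaction",
    "content": {"redacts": "$some_event_id", "reason": "spam"},
}
```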
* raise a ConfigError on an invalid app_service_config_files
* changelog
* Move config check to read_config
* Add test
* Ensure list also contains strings
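A rough sketch of the check being described (the error wording and surrounding config class are illustrative):
```
from typing import List

from synapse.config._base import ConfigError

def read_config(config: dict) -> List[str]:
    app_service_config_files = config.get("app_service_config_files", [])
    if not isinstance(app_service_config_files, list) or not all(
        isinstance(f, str) for f in app_service_config_files
    ):
        # Fail at startup with a clear error, rather than crashing later when
        # the individual files are loaded.
        raise ConfigError(
            "Expected 'app_service_config_files' to be a list of strings"
        )
    return app_service_config_files
```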
* Trust dtolnay/rust-toolchain
The author is a big deal in the Rust world and I'm happy to trust them.
I'm also bored of the dependabot updates tbh.
* Changelog
This change fixes a rare bug where initial /syncs would fail with a
`KeyError` under the following circumstances:
1. A user fast joins a remote room.
2. The user is kicked from the room before the room's full state has
been synced.
3. A second local user fast joins the room.
4. Events are backfilled into the room with a higher topological
ordering than the original user's leave. They are assigned a
negative stream ordering. It's not clear how backfill happened here,
since it is expected to be equivalent to syncing the full state.
5. The second local user leaves the room before the room's full state
has been synced. The homeserver does not complete the sync.
6. The original user performs an initial /sync with lazy_load_members
enabled.
* Because they were kicked from the room, the room is included in
the /sync response even though the include_leave option is not
specified.
* To populate the room's timeline, `_load_filtered_recents` /
`get_recent_events_for_room` fetches events with a lower stream
ordering than the leave event and picks the ones with the highest
topological orderings (which are most recent). This captures the
backfilled events after the leave, since they have a negative
stream ordering. These events are filtered out of the timeline,
since the user was not in the room at the time and cannot view
them. The sync code ends up with an empty timeline for the room
that notably does not include the user's leave event.
This seems buggy, but at least we don't disclose events the user
isn't allowed to see.
* Normally, `compute_state_delta` would fetch the state at the
start and end of the room's timeline to generate the sync
response. Since the timeline is empty, it fetches the state at
`min(now, last event in the room)`, which corresponds with the
second user's leave. The state during the entirety of the second
user's membership does not include the membership for the first
user because of partial state.
This part is also questionable, since we are fetching state from
outside the bounds of the user's membership.
* `compute_state_delta` then tries and fails to find the user's
membership in the auth events of timeline events. Because there
is no timeline event whose auth events are expected to contain
the user's membership, a `KeyError` is raised.
Also contains a drive-by fix for a separate unlikely race condition.
Signed-off-by: Sean Quah <seanq@matrix.org>
This uses the specced /_matrix/app/v1/... paths instead of the
"legacy" paths. If the homeserver receives an error it will retry
using the legacy path.
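A sketch of the fallback behaviour, assuming a generic async HTTP helper with a `put_json` method; the real code only retries on particular error responses and is considerably more involved:
```
async def push_transaction(client, as_url: str, txn_id: str, body: dict) -> dict:
    try:
        # Prefer the path specced for application services.
        return await client.put_json(
            f"{as_url}/_matrix/app/v1/transactions/{txn_id}", body
        )
    except Exception:
        # On an error, retry once using the legacy (unprefixed) path.
        return await client.put_json(f"{as_url}/transactions/{txn_id}", body)
```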
* Add IReactorUNIX to ISynapseReactor type hint.
* Create listen_unix().
Two options: 'path' to the socket file and 'mode' for its permissions (not a umask; 666 is recommended as the default,
since nginx/other reverse proxies write to it and are typically set up as user www-data). A minimal Twisted-level sketch appears after this list.
For the moment, leave the option to always create a PID lockfile turned on by default
* Create UnixListenerConfig and wire it up.
Rename ListenerConfig to TCPListenerConfig, then Union them together into ListenerConfig.
This spidered around a bit, but I think I got it all. Metrics and manhole have been placed
behind a conditional in case they are accidentally put onto a unix socket.
Use new helpers to get if a listener is configured for TLS, and to help create a site tag
for logging.
There are 2 TODO things in parse_listener_def() to finish up at a later point.
* Refactor SynapseRequest to handle logging correctly when using a unix socket.
This prevents an exception when an IP address can not be retrieved for a request.
* Make the 'Synapse now listening on Unix socket' log line a little prettier.
* No silent failures on generic workers when trying to use a unix socket with metrics or manhole.
* Inline variables in app/_base.py
* Update docstring for listen_unix() to remove reference to a hardcoded permission of 0o666 and add a few comments saying where the default IS declared.
* Disallow both a unix socket and an ip/port combo on the same listener resource
* Linting
* Changelog
* review: simplify how listen_unix returns (and get rid of a `type: ignore`)
* review: fix typo from ConfigError in app/homeserver.py
* review: roll conditional for http_options.tag into get_site_tag() helper (and add docstring)
* review: enhance the conditionals for checking if a port or path is valid, remove a TODO line
* review: Try updating comment in get_client_ip_if_available to clarify what is being retrieved and why
* Pretty up how 'Synapse now listening on Unix Socket' looks by decoding the byte string.
* review: In parse_listener_def(), raise ConfigError if neither socket_path nor port is declared (and fix a typo)
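The minimal Twisted-level sketch referenced above; the socket path is illustrative, and Synapse's listen_unix() layers config handling, logging and error handling on top of this:
```
from twisted.internet import reactor  # needs a POSIX reactor for listenUNIX
from twisted.web.resource import Resource
from twisted.web.server import Site

site = Site(Resource())

# mode is the permission of the socket file itself (not a umask); 0o666 lets a
# reverse proxy running as another user (e.g. www-data) connect. wantPID=True
# is what the "PID lockfile" option above refers to.
reactor.listenUNIX("/run/synapse/main.sock", site, mode=0o666, wantPID=True)
reactor.run()
```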
* Revert "Fix registering a device on an account with lots of devices (#15348)"
This reverts commit f0d8f66eaa.
* Revert "Delete stale non-e2e devices for users, take 3 (#15183)"
This reverts commit 78cdb72cd6.
Clean-up from adding the thread_id column, which was initially
null but backfilled with values. It is desirable to require it to now
be non-null.
In addition to altering this column to be non-null, we clean up
obsolete background jobs, indexes, and just-in-time updating
code.