
Merge remote-tracking branch 'upstream/release-v1.54'

Tulir Asokan 2022-03-02 13:31:59 +02:00
commit 0b2b774c33
361 changed files with 7042 additions and 3821 deletions


@ -8,7 +8,9 @@ export DEBIAN_FRONTEND=noninteractive
set -ex
apt-get update
apt-get install -y python3 python3-dev python3-pip libxml2-dev libxslt-dev xmlsec1 zlib1g-dev tox libjpeg-dev libwebp-dev
apt-get install -y \
python3 python3-dev python3-pip python3-venv \
libxml2-dev libxslt-dev xmlsec1 zlib1g-dev tox libjpeg-dev libwebp-dev
export LANG="C.UTF-8"

.flake8 (new file)

@ -0,0 +1,11 @@
# TODO: incorporate this into pyproject.toml if flake8 supports it in the future.
# See https://github.com/PyCQA/flake8/issues/234
[flake8]
# see https://pycodestyle.readthedocs.io/en/latest/intro.html#error-codes
# for error codes. The ones we ignore are:
# W503: line break before binary operator
# W504: line break after binary operator
# E203: whitespace before ':' (which is contrary to pep8?)
# E731: do not assign a lambda expression, use a def
# E501: Line too long (black enforces this for us)
ignore=W503,W504,E203,E731,E501


@ -7,7 +7,7 @@ on:
# of things breaking (but only build one set of debs)
pull_request:
push:
branches: ["develop"]
branches: ["develop", "release-*"]
# we do the full build on tags.
tags: ["v*"]
@ -91,17 +91,7 @@ jobs:
build-sdist:
name: "Build pypi distribution files"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- run: pip install wheel
- run: |
python setup.py sdist bdist_wheel
- uses: actions/upload-artifact@v2
with:
name: python-dist
path: dist/*
uses: "matrix-org/backend-meta/.github/workflows/packaging.yml@v1"
# if it's a tag, create a release and attach the artifacts to it
attach-assets:


@ -10,12 +10,19 @@ concurrency:
cancel-in-progress: true
jobs:
check-sampleconfig:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- run: pip install -e .
- run: scripts-dev/generate_sample_config --check
lint:
runs-on: ubuntu-latest
strategy:
matrix:
toxenv:
- "check-sampleconfig"
- "check_codestyle"
- "check_isort"
- "mypy"
@ -43,29 +50,15 @@ jobs:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- uses: actions/setup-python@v2
- run: pip install tox
- run: "pip install 'towncrier>=18.6.0rc1'"
- run: scripts-dev/check-newsfragment
env:
PULL_REQUEST_NUMBER: ${{ github.event.number }}
lint-sdist:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: "3.x"
- run: pip install wheel
- run: python setup.py sdist bdist_wheel
- uses: actions/upload-artifact@v2
with:
name: Python Distributions
path: dist/*
# Dummy step to gate other tests on without repeating the whole list
linting-done:
if: ${{ !cancelled() }} # Run this even if prior jobs were skipped
needs: [lint, lint-crlf, lint-newsfile, lint-sdist]
needs: [lint, lint-crlf, lint-newsfile, check-sampleconfig]
runs-on: ubuntu-latest
steps:
- run: "true"
@ -397,7 +390,6 @@ jobs:
- lint
- lint-crlf
- lint-newsfile
- lint-sdist
- trial
- trial-olddeps
- sytest


@ -1,7 +1,106 @@
Synapse 1.54.0rc1 (2022-03-02)
==============================
Please note that this will be the last release of Synapse that is compatible with Mjolnir 1.3.1 and earlier.
Administrators of servers which have the Mjolnir module installed are advised to upgrade Mjolnir to version 1.3.2 or later.
Features
--------
- Add support for [MSC3202](https://github.com/matrix-org/matrix-doc/pull/3202): sending one-time key counts and fallback key usage states to Application Services. ([\#11617](https://github.com/matrix-org/synapse/issues/11617))
- Improve the preview that is produced when generating URL previews for some web pages. Contributed by @AndrewRyanChama. ([\#11985](https://github.com/matrix-org/synapse/issues/11985))
- Track cache invalidations in Prometheus metrics, as already happens for cache eviction based on size or time. ([\#12000](https://github.com/matrix-org/synapse/issues/12000))
- Implement experimental support for [MSC3720](https://github.com/matrix-org/matrix-doc/pull/3720) (account status endpoints). ([\#12001](https://github.com/matrix-org/synapse/issues/12001), [\#12067](https://github.com/matrix-org/synapse/issues/12067))
- Enable modules to set a custom display name when registering a user. ([\#12009](https://github.com/matrix-org/synapse/issues/12009))
- Advertise Matrix 1.1 and 1.2 support on `/_matrix/client/versions`. ([\#12020](https://github.com/matrix-org/synapse/issues/12020), [\#12022](https://github.com/matrix-org/synapse/issues/12022))
- Support only the stable identifier for [MSC3069](https://github.com/matrix-org/matrix-doc/pull/3069)'s `is_guest` on `/_matrix/client/v3/account/whoami`. ([\#12021](https://github.com/matrix-org/synapse/issues/12021))
- Use room version 9 as the default room version (per [MSC3589](https://github.com/matrix-org/matrix-doc/pull/3589)). ([\#12058](https://github.com/matrix-org/synapse/issues/12058))
- Add module callbacks to react to user deactivation status changes (i.e. deactivations and reactivations) and profile updates. ([\#12062](https://github.com/matrix-org/synapse/issues/12062))
Bugfixes
--------
- Fix a bug introduced in Synapse 1.48.0 where an edit of the latest event in a thread would not be properly applied to the thread summary. ([\#11992](https://github.com/matrix-org/synapse/issues/11992))
- Fix long-standing bug where `get_rooms_for_user` was not correctly invalidated for remote users when the server left a room. ([\#11999](https://github.com/matrix-org/synapse/issues/11999))
- Fix a 500 error with Postgres when looking backwards with the [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) `/timestamp_to_event?dir=b` endpoint. ([\#12024](https://github.com/matrix-org/synapse/issues/12024))
- Properly fix a long-standing bug where wrong data could be inserted into the `event_search` table when using SQLite. This could block running `synapse_port_db` with an `argument of type 'int' is not iterable` error. This bug was partially fixed by a change in Synapse 1.44.0. ([\#12037](https://github.com/matrix-org/synapse/issues/12037))
- Fix slow performance of `/logout` in some cases where refresh tokens are in use. The slowness existed since the initial implementation of refresh tokens in version 1.38.0. ([\#12056](https://github.com/matrix-org/synapse/issues/12056))
- Fix a long-standing bug where Synapse would make additional failing requests over federation for missing data. ([\#12077](https://github.com/matrix-org/synapse/issues/12077))
- Fix occasional `Unhandled error in Deferred` error message. ([\#12089](https://github.com/matrix-org/synapse/issues/12089))
- Fix a bug introduced in Synapse 1.51.0 where incoming federation transactions containing at least one EDU would be dropped if debug logging was enabled for `synapse.8631_debug`. ([\#12098](https://github.com/matrix-org/synapse/issues/12098))
- Fix a long-standing bug which could cause push notifications to malfunction if `use_frozen_dicts` was set in the configuration. ([\#12100](https://github.com/matrix-org/synapse/issues/12100))
- Fix an extremely rare, long-standing bug in `ReadWriteLock` that would cause an error when a newly unblocked writer completes instantly. ([\#12105](https://github.com/matrix-org/synapse/issues/12105))
- Make a `POST` to `/rooms/<room_id>/receipt/m.read/<event_id>` only trigger a push notification if the count of unread messages is different to the one in the last successfully sent push. This reduces server load and load on the receiving device. ([\#11835](https://github.com/matrix-org/synapse/issues/11835))
Updates to the Docker image
---------------------------
- The Docker image no longer automatically creates a temporary volume at `/data`. This is not expected to affect normal usage. ([\#11997](https://github.com/matrix-org/synapse/issues/11997))
- Use Python 3.9 in Docker images by default. ([\#12112](https://github.com/matrix-org/synapse/issues/12112))
Improved Documentation
----------------------
- Document support for the `to_device`, `account_data`, `receipts`, and `presence` stream writers for workers. ([\#11599](https://github.com/matrix-org/synapse/issues/11599))
- Explain the meaning of spam checker callbacks' return values. ([\#12003](https://github.com/matrix-org/synapse/issues/12003))
- Clarify information about external Identity Provider IDs. ([\#12004](https://github.com/matrix-org/synapse/issues/12004))
Deprecations and Removals
-------------------------
- Deprecate using `synctl` with the config option `synctl_cache_factor` and print a warning if a user still uses this option. ([\#11865](https://github.com/matrix-org/synapse/issues/11865))
- Remove support for the legacy structured logging configuration (please see the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#legacy-structured-logging-configuration-removal) if you are using `structured: true` in the Synapse configuration). ([\#12008](https://github.com/matrix-org/synapse/issues/12008))
- Drop support for [MSC3283](https://github.com/matrix-org/matrix-doc/pull/3283) unstable flags now that the stable flags are supported. ([\#12018](https://github.com/matrix-org/synapse/issues/12018))
- Remove the unstable `/spaces` endpoint from [MSC2946](https://github.com/matrix-org/matrix-doc/pull/2946). ([\#12073](https://github.com/matrix-org/synapse/issues/12073))
Internal Changes
----------------
- Make the `get_room_version` method use `get_room_version_id` to benefit from caching. ([\#11808](https://github.com/matrix-org/synapse/issues/11808))
- Remove unnecessary condition on knock -> leave auth rule check. ([\#11900](https://github.com/matrix-org/synapse/issues/11900))
- Add tests for device list changes between local users. ([\#11972](https://github.com/matrix-org/synapse/issues/11972))
- Optimise calculating `device_list` changes in `/sync`. ([\#11974](https://github.com/matrix-org/synapse/issues/11974))
- Add missing type hints to storage classes. ([\#11984](https://github.com/matrix-org/synapse/issues/11984))
- Refactor the search code for improved readability. ([\#11991](https://github.com/matrix-org/synapse/issues/11991))
- Move common deduplication code down into `_auth_and_persist_outliers`. ([\#11994](https://github.com/matrix-org/synapse/issues/11994))
- Limit concurrent joins from application services. ([\#11996](https://github.com/matrix-org/synapse/issues/11996))
- Preparation for faster-room-join work: when parsing the `send_join` response, get the `m.room.create` event from `state`, not `auth_chain`. ([\#12005](https://github.com/matrix-org/synapse/issues/12005), [\#12039](https://github.com/matrix-org/synapse/issues/12039))
- Preparation for faster-room-join work: parse MSC3706 fields in send_join response. ([\#12011](https://github.com/matrix-org/synapse/issues/12011))
- Preparation for faster-room-join work: persist information on which events and rooms have partial state to the database. ([\#12012](https://github.com/matrix-org/synapse/issues/12012))
- Preparation for faster-room-join work: Support for calling `/federation/v1/state` on a remote server. ([\#12013](https://github.com/matrix-org/synapse/issues/12013))
- Configure `tox` to use `venv` rather than `virtualenv`. ([\#12015](https://github.com/matrix-org/synapse/issues/12015))
- Fix bug in `StateFilter.return_expanded()` and add some tests. ([\#12016](https://github.com/matrix-org/synapse/issues/12016))
- Use Matrix v1.1 endpoints (`/_matrix/client/v3/auth/...`) in fallback auth HTML forms. ([\#12019](https://github.com/matrix-org/synapse/issues/12019))
- Update the `olddeps` CI job to use an old version of `markupsafe`. ([\#12025](https://github.com/matrix-org/synapse/issues/12025))
- Upgrade Mypy to version 0.931. ([\#12030](https://github.com/matrix-org/synapse/issues/12030))
- Remove legacy `HomeServer.get_datastore()`. ([\#12031](https://github.com/matrix-org/synapse/issues/12031), [\#12070](https://github.com/matrix-org/synapse/issues/12070))
- Minor typing fixes. ([\#12034](https://github.com/matrix-org/synapse/issues/12034), [\#12069](https://github.com/matrix-org/synapse/issues/12069))
- After joining a room, create a dedicated logcontext to process the queued events. ([\#12041](https://github.com/matrix-org/synapse/issues/12041))
- Tidy up GitHub Actions config which builds distributions for PyPI. ([\#12051](https://github.com/matrix-org/synapse/issues/12051))
- Move configuration out of `setup.cfg`. ([\#12052](https://github.com/matrix-org/synapse/issues/12052), [\#12059](https://github.com/matrix-org/synapse/issues/12059))
- Fix error message when a worker process fails to talk to another worker process. ([\#12060](https://github.com/matrix-org/synapse/issues/12060))
- Fix using the `complement.sh` script without specifying a directory or a branch. Contributed by Nico on behalf of Famedly. ([\#12063](https://github.com/matrix-org/synapse/issues/12063))
- Add type hints to `tests/rest/client`. ([\#12066](https://github.com/matrix-org/synapse/issues/12066), [\#12072](https://github.com/matrix-org/synapse/issues/12072), [\#12084](https://github.com/matrix-org/synapse/issues/12084), [\#12094](https://github.com/matrix-org/synapse/issues/12094))
- Add some logging to `/sync` to try and track down #11916. ([\#12068](https://github.com/matrix-org/synapse/issues/12068))
- Inspect application dependencies using `importlib.metadata` or its backport. ([\#12088](https://github.com/matrix-org/synapse/issues/12088))
- Use `assertEqual` instead of the deprecated `assertEquals` in test code. ([\#12092](https://github.com/matrix-org/synapse/issues/12092))
- Move experimental support for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440) to `/versions`. ([\#12099](https://github.com/matrix-org/synapse/issues/12099))
- Add `stop_cancellation` utility function to stop `Deferred`s from being cancelled. ([\#12106](https://github.com/matrix-org/synapse/issues/12106))
- Improve exception handling for concurrent execution. ([\#12109](https://github.com/matrix-org/synapse/issues/12109))
- Advertise support for Python 3.10 in packaging files. ([\#12111](https://github.com/matrix-org/synapse/issues/12111))
- Move CI checks out of tox, to facilitate a move to using poetry. ([\#12119](https://github.com/matrix-org/synapse/issues/12119))
Synapse 1.53.0 (2022-02-22)
===========================
No significant changes.
No significant changes since 1.53.0rc1.
Synapse 1.53.0rc1 (2022-02-15)
@ -11,7 +110,7 @@ Features
--------
- Add experimental support for sending to-device messages to application services, as specified by [MSC2409](https://github.com/matrix-org/matrix-doc/pull/2409). ([\#11215](https://github.com/matrix-org/synapse/issues/11215), [\#11966](https://github.com/matrix-org/synapse/issues/11966))
- Remove account data (including client config, push rules and ignored users) upon user deactivation. ([\#11655](https://github.com/matrix-org/synapse/issues/11655))
- Add a background database update to purge account data for deactivated users. ([\#11655](https://github.com/matrix-org/synapse/issues/11655))
- Experimental support for [MSC3666](https://github.com/matrix-org/matrix-doc/pull/3666): including bundled aggregations in server side search results. ([\#11837](https://github.com/matrix-org/synapse/issues/11837))
- Enable cache time-based expiry by default. The `expiry_time` config flag has been superseded by `expire_caches` and `cache_entry_ttl`. ([\#11849](https://github.com/matrix-org/synapse/issues/11849))
- Add a callback to allow modules to allow or forbid a 3PID (email address, phone number) from being associated to a local account. ([\#11854](https://github.com/matrix-org/synapse/issues/11854))
@ -92,7 +191,7 @@ Note that [Twisted 22.1.0](https://github.com/twisted/twisted/releases/tag/twist
has recently been released, which fixes a [security issue](https://github.com/twisted/twisted/security/advisories/GHSA-92x2-jw7w-xvvx)
within the Twisted library. We do not believe Synapse is affected by this vulnerability,
though we advise server administrators who installed Synapse via pip to upgrade Twisted
with `pip install --upgrade Twisted` as a matter of good practice. The Docker image
with `pip install --upgrade Twisted treq` as a matter of good practice. The Docker image
`matrixdotorg/synapse` and the Debian packages from `packages.matrix.org` are using the
updated library.
@ -273,7 +372,7 @@ Bugfixes
Synapse 1.50.0 (2022-01-18)
===========================
**This release contains a critical bug that may prevent clients from being able to connect.
As such, it is not recommended to upgrade to 1.50.0. Instead, please upgrade straight to
1.50.1. Further details are available in [this issue](https://github.com/matrix-org/synapse/issues/11763).**


@ -45,6 +45,7 @@ include book.toml
include pyproject.toml
recursive-include changelog.d *
include .flake8
prune .circleci
prune .github
prune .ci

debian/changelog

@ -1,3 +1,9 @@
matrix-synapse-py3 (1.54.0~rc1) stable; urgency=medium
* New synapse release 1.54.0~rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 02 Mar 2022 10:43:22 +0000
matrix-synapse-py3 (1.53.0) stable; urgency=medium
* New synapse release 1.53.0.


@ -11,10 +11,10 @@
# There is an optional PYTHON_VERSION build argument which sets the
# version of python to build against: for example:
#
# DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.9 .
# DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.10 .
#
ARG PYTHON_VERSION=3.8
ARG PYTHON_VERSION=3.9
###
### Stage 0: builder
@ -98,8 +98,6 @@ COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py
COPY ./docker/conf /conf
VOLUME ["/data"]
EXPOSE 8008/tcp 8009/tcp 8448/tcp
ENTRYPOINT ["/start.py"]


@ -126,7 +126,8 @@ Body parameters:
[Sample Configuration File](../usage/configuration/homeserver_sample_config.html)
section `sso` and `oidc_providers`.
- `auth_provider` - string. ID of the external identity provider. Value of `idp_id`
in homeserver configuration.
in the homeserver configuration. Note that no error is raised if the provided
value is not in the homeserver configuration.
- `external_id` - string, user ID in the external identity provider.
- `avatar_url` - string, optional, must be a
[MXC URI](https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris).
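As an editorial illustration (not part of this changeset), the sketch below shows how these body parameters might be submitted to the `PUT /_synapse/admin/v2/users/<user_id>` admin endpoint; the server URL, the admin token, and the `oidc` provider ID are placeholder assumptions.

```python
import json
import urllib.request

SERVER = "https://homeserver.example.com"  # placeholder homeserver
ADMIN_TOKEN = "syt_admin_access_token"     # placeholder admin access token

body = {
    "external_ids": [
        {
            # Must match an `idp_id` in the homeserver configuration; note
            # that, as described above, no error is raised if it does not.
            "auth_provider": "oidc",
            "external_id": "alice@idp.example.com",
        }
    ],
}

req = urllib.request.Request(
    f"{SERVER}/_synapse/admin/v2/users/@alice:example.com",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {ADMIN_TOKEN}",
        "Content-Type": "application/json",
    },
    method="PUT",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```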


@ -94,6 +94,6 @@ As a simple example, retrieving an event from the database:
```pycon
>>> from twisted.internet import defer
>>> defer.ensureDeferred(hs.get_datastore().get_event('$1416420717069yeQaw:matrix.org'))
>>> defer.ensureDeferred(hs.get_datastores().main.get_event('$1416420717069yeQaw:matrix.org'))
<Deferred at 0x7ff253fc6998 current result: <FrozenEvent event_id='$1416420717069yeQaw:matrix.org', type='m.room.create', state_key=''>>
```


@ -85,7 +85,7 @@ If the authentication is unsuccessful, the module must return `None`.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `None`, Synapse falls through to the next one. The value of the first
callback that does not return `None` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback. If every callback return `None`,
any of the subsequent implementations of this callback. If every callback returns `None`,
the authentication is denied.
### `on_logged_out`
@ -162,10 +162,38 @@ return `None`.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `None`, Synapse falls through to the next one. The value of the first
callback that does not return `None` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback. If every callback return `None`,
any of the subsequent implementations of this callback. If every callback returns `None`,
the username provided by the user is used, if any (otherwise one is automatically
generated).
### `get_displayname_for_registration`
_First introduced in Synapse v1.54.0_
```python
async def get_displayname_for_registration(
uia_results: Dict[str, Any],
params: Dict[str, Any],
) -> Optional[str]
```
Called when registering a new user. The module can return a display name to set for the
user being registered by returning it as a string, or `None` if it doesn't wish to force a
display name for this user.
This callback is called once [User-Interactive Authentication](https://spec.matrix.org/latest/client-server-api/#user-interactive-authentication-api)
has been completed by the user. It is not called when registering a user via SSO. It is
passed two dictionaries, which include the information that the user has provided during
the registration process. These dictionaries are identical to the ones passed to
[`get_username_for_registration`](#get_username_for_registration), so refer to the
documentation of this callback for more information about them.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `None`, Synapse falls through to the next one. The value of the first
callback that does not return `None` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback. If every callback returns `None`,
the username will be used (e.g. `alice` if the user being registered is `@alice:example.com`).
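As a minimal sketch of a module implementing this callback (assuming the module API's `register_password_auth_provider_callbacks` registration hook; the `displayname` request parameter is purely illustrative):

```python
from typing import Any, Dict, Optional


class DisplaynameFromParamsModule:
    def __init__(self, config: dict, api):
        api.register_password_auth_provider_callbacks(
            get_displayname_for_registration=self.get_displayname_for_registration,
        )

    async def get_displayname_for_registration(
        self,
        uia_results: Dict[str, Any],
        params: Dict[str, Any],
    ) -> Optional[str]:
        # "displayname" is a hypothetical key the client may have included
        # in the registration request body.
        displayname = params.get("displayname")
        if isinstance(displayname, str) and displayname.strip():
            return displayname.strip()
        # Returning None falls through to the next module (or the default).
        return None
```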
## `is_3pid_allowed`
_First introduced in Synapse v1.53.0_
@ -194,8 +222,7 @@ The example module below implements authentication checkers for two different lo
- Is checked by the method: `self.check_my_login`
- `m.login.password` (defined in [the spec](https://matrix.org/docs/spec/client_server/latest#password-based))
- Expects a `password` field to be sent to `/login`
- Is checked by the method: `self.check_pass`
```python
from typing import Awaitable, Callable, Optional, Tuple


@ -16,10 +16,12 @@ _First introduced in Synapse v1.37.0_
async def check_event_for_spam(event: "synapse.events.EventBase") -> Union[bool, str]
```
Called when receiving an event from a client or via federation. The module can return
either a `bool` to indicate whether the event must be rejected because of spam, or a `str`
to indicate the event must be rejected because of spam and to give a rejection reason to
forward to clients.
Called when receiving an event from a client or via federation. The callback must return
either:
- an error message string, to indicate the event must be rejected because of spam and
give a rejection reason to forward to clients;
- the boolean `True`, to indicate that the event is spammy, but not provide further details; or
- the boolean `False`, to indicate that the event is not considered spammy.
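For instance, a minimal callback honouring this contract might look like the sketch below; the blocklist and the size limit are illustrative, not part of Synapse:

```python
from typing import Union


async def check_event_for_spam(event: "synapse.events.EventBase") -> Union[bool, str]:
    body = event.content.get("body", "")

    # Illustrative blocklist; a real module would make this configurable.
    if "buy cheap rolexes" in body.lower():
        # A string rejects the event and forwards this reason to clients.
        return "Message rejected as spam"

    # Illustrative size limit.
    if len(body) > 50_000:
        # `True` rejects the event without providing further details.
        return True

    # `False` means the event is not considered spammy, so Synapse falls
    # through to the next module (if any).
    return False
```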
If multiple modules implement this callback, they will be considered in order. If a
callback returns `False`, Synapse falls through to the next one. The value of the first
@ -35,7 +37,10 @@ async def user_may_join_room(user: str, room: str, is_invited: bool) -> bool
```
Called when a user is trying to join a room. The module must return a `bool` to indicate
whether the user can join the room. The user is represented by their Matrix user ID (e.g.
whether the user can join the room. Return `False` to prevent the user from joining the
room; otherwise return `True` to permit the joining.
The user is represented by their Matrix user ID (e.g.
`@alice:example.com`) and the room is represented by its Matrix ID (e.g.
`!room:example.com`). The module is also given a boolean to indicate whether the user
currently has a pending invite in the room.
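A short sketch of this contract, assuming an illustrative set of rooms that only invited users may join:

```python
# Hypothetical set of rooms that require an invite; not a Synapse setting.
INVITE_ONLY_ROOMS = {"!staff:example.com"}


async def user_may_join_room(user: str, room: str, is_invited: bool) -> bool:
    if room in INVITE_ONLY_ROOMS and not is_invited:
        # `False` prevents the user from joining the room.
        return False
    # `True` permits the join and lets any remaining modules run.
    return True
```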
@ -58,7 +63,8 @@ async def user_may_invite(inviter: str, invitee: str, room_id: str) -> bool
Called when processing an invitation. The module must return a `bool` indicating whether
the inviter can invite the invitee to the given room. Both inviter and invitee are
represented by their Matrix user ID (e.g. `@alice:example.com`).
represented by their Matrix user ID (e.g. `@alice:example.com`). Return `False` to prevent
the invitation; otherwise return `True` to permit it.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `True`, Synapse falls through to the next one. The value of the first
@ -80,7 +86,8 @@ async def user_may_send_3pid_invite(
Called when processing an invitation using a third-party identifier (also called a 3PID,
e.g. an email address or a phone number). The module must return a `bool` indicating
whether the inviter can invite the invitee to the given room.
whether the inviter can invite the invitee to the given room. Return `False` to prevent
the invitation; otherwise return `True` to permit it.
The inviter is represented by their Matrix user ID (e.g. `@alice:example.com`), and the
invitee is represented by its medium (e.g. "email") and its address
@ -117,6 +124,7 @@ async def user_may_create_room(user: str) -> bool
Called when processing a room creation request. The module must return a `bool` indicating
whether the given user (represented by their Matrix user ID) is allowed to create a room.
Return `False` to prevent room creation; otherwise return `True` to permit it.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `True`, Synapse falls through to the next one. The value of the first
@ -133,7 +141,8 @@ async def user_may_create_room_alias(user: str, room_alias: "synapse.types.RoomA
Called when trying to associate an alias with an existing room. The module must return a
`bool` indicating whether the given user (represented by their Matrix user ID) is allowed
to set the given alias.
to set the given alias. Return `False` to prevent the alias creation; otherwise return
`True` to permit it.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `True`, Synapse falls through to the next one. The value of the first
@ -150,7 +159,8 @@ async def user_may_publish_room(user: str, room_id: str) -> bool
Called when trying to publish a room to the homeserver's public rooms directory. The
module must return a `bool` indicating whether the given user (represented by their
Matrix user ID) is allowed to publish the given room.
Matrix user ID) is allowed to publish the given room. Return `False` to prevent the
room from being published; otherwise return `True` to permit its publication.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `True`, Synapse falls through to the next one. The value of the first
@ -166,8 +176,11 @@ async def check_username_for_spam(user_profile: Dict[str, str]) -> bool
```
Called when computing search results in the user directory. The module must return a
`bool` indicating whether the given user profile can appear in search results. The profile
is represented as a dictionary with the following keys:
`bool` indicating whether the given user should be excluded from user directory
searches. Return `True` to indicate that the user is spammy and exclude them from
search results; otherwise return `False`.
The profile is represented as a dictionary with the following keys:
* `user_id`: The Matrix ID for this user.
* `display_name`: The user's display name.
@ -225,8 +238,9 @@ async def check_media_file_for_spam(
) -> bool
```
Called when storing a local or remote file. The module must return a boolean indicating
whether the given file can be stored in the homeserver's media store.
Called when storing a local or remote file. The module must return a `bool` indicating
whether the given file should be excluded from the homeserver's media store. Return
`True` to prevent this file from being stored; otherwise return `False`.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `False`, Synapse falls through to the next one. The value of the first


@ -148,6 +148,62 @@ deny an incoming event, see [`check_event_for_spam`](spam_checker_callbacks.md#c
If multiple modules implement this callback, Synapse runs them all in order.
### `on_profile_update`
_First introduced in Synapse v1.54.0_
```python
async def on_profile_update(
user_id: str,
new_profile: "synapse.module_api.ProfileInfo",
by_admin: bool,
deactivation: bool,
) -> None:
```
Called after updating a local user's profile. The update can be triggered either by the
user themselves or a server admin. The update can also be triggered by a user being
deactivated (in which case their display name is set to an empty string (`""`) and the
avatar URL is set to `None`). The module is passed the Matrix ID of the user whose profile
has been updated, their new profile, as well as a `by_admin` boolean that is `True` if the
update was triggered by a server admin (and `False` otherwise), and a `deactivation`
boolean that is `True` if the update is a result of the user being deactivated.
Note that the `by_admin` boolean is also `True` if the profile change happens as a result
of the user logging in through Single Sign-On, or if a server admin updates their own
profile.
Per-room profile changes do not trigger this callback. Synapse administrators who wish
this callback to be called on every profile change are encouraged to disable
per-room profiles globally using the `allow_per_room_profiles` configuration setting in
Synapse's configuration file.
This callback is not called when registering a user, even when setting it through the
[`get_displayname_for_registration`](https://matrix-org.github.io/synapse/latest/modules/password_auth_provider_callbacks.html#get_displayname_for_registration)
module callback.
If multiple modules implement this callback, Synapse runs them all in order.
### `on_user_deactivation_status_changed`
_First introduced in Synapse v1.54.0_
```python
async def on_user_deactivation_status_changed(
user_id: str, deactivated: bool, by_admin: bool
) -> None:
```
Called after deactivating a local user, or reactivating them through the admin API. The
deactivation can be triggered either by the user themselves or a server admin. The module
is passed the Matrix ID of the user whose status is changed, as well as a `deactivated`
boolean that is `True` if the user is being deactivated and `False` if they're being
reactivated, and a `by_admin` boolean that is `True` if the deactivation was triggered by
a server admin (and `False` otherwise). This latter `by_admin` boolean is always `True`
if the user is being reactivated, as this operation can only be performed through the
admin API.
If multiple modules implement this callback, Synapse runs them all in order.
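As a combined sketch of the two callbacks above (the logging is purely illustrative):

```python
import logging

logger = logging.getLogger(__name__)


async def on_profile_update(
    user_id: str,
    new_profile: "synapse.module_api.ProfileInfo",
    by_admin: bool,
    deactivation: bool,
) -> None:
    # Illustrative: record every profile change for auditing.
    logger.info(
        "Profile of %s updated (by_admin=%s, deactivation=%s)",
        user_id,
        by_admin,
        deactivation,
    )


async def on_user_deactivation_status_changed(
    user_id: str, deactivated: bool, by_admin: bool
) -> None:
    action = "deactivated" if deactivated else "reactivated"
    # Remember: by_admin is always True for reactivations (admin API only).
    logger.info("%s was %s (by_admin=%s)", user_id, action, by_admin)
```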
## Example
The example below is a module that implements the third-party rules callback


@ -163,7 +163,7 @@ presence:
# For example, for room version 1, default_room_version should be set
# to "1".
#
#default_room_version: "6"
#default_room_version: "9"
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
#


@ -81,14 +81,12 @@ remote endpoint at 10.1.2.3:9999.
## Upgrading from legacy structured logging configuration
Versions of Synapse prior to v1.23.0 included a custom structured logging
configuration which is deprecated. It used a `structured: true` flag and
configured `drains` instead of `handlers` and `formatters`.
Versions of Synapse prior to v1.54.0 automatically converted the legacy
structured logging configuration, which was deprecated in v1.23.0, to the standard
library logging configuration.
Synapse currently automatically converts the old configuration to the new
configuration, but this will be removed in a future version of Synapse. The
following reference can be used to update your configuration. Based on the drain
`type`, we can pick a new handler:
The following reference can be used to update your configuration. Based on the
drain `type`, we can pick a new handler:
1. For a type of `console`, `console_json`, or `console_json_terse`: a handler
with a class of `logging.StreamHandler` and a `stream` of `ext://sys.stdout`


@ -85,6 +85,15 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.54.0
## Legacy structured logging configuration removal
This release removes support for the `structured: true` logging configuration
which was deprecated in Synapse v1.23.0. If your logging configuration contains
`structured: true` then it should be modified based on the
[structured logging documentation](structured_logging.md).
# Upgrading to v1.53.0
## Dropping support for `webclient` listeners and non-HTTP(S) `web_client_location`
@ -157,7 +166,7 @@ Note that [Twisted 22.1.0](https://github.com/twisted/twisted/releases/tag/twist
has recently been released, which fixes a [security issue](https://github.com/twisted/twisted/security/advisories/GHSA-92x2-jw7w-xvvx)
within the Twisted library. We do not believe Synapse is affected by this vulnerability,
though we advise server administrators who installed Synapse via pip to upgrade Twisted
with `pip install --upgrade Twisted` as a matter of good practice. The Docker image
with `pip install --upgrade Twisted treq` as a matter of good practice. The Docker image
`matrixdotorg/synapse` and the Debian packages from `packages.matrix.org` are using the
updated library.


@ -178,8 +178,11 @@ recommend the use of `systemd` where available: for information on setting up
### `synapse.app.generic_worker`
This worker can handle API requests matching the following regular
expressions:
This worker can handle API requests matching the following regular expressions.
These endpoints can be routed to any worker. If a worker is set up to handle a
stream then, for maximum efficiency, additional endpoints should be routed to that
worker: refer to the [stream writers](#stream-writers) section below for further
information.
# Sync requests
^/_matrix/client/(v2_alpha|r0|v3)/sync$
@ -209,7 +212,6 @@ expressions:
^/_matrix/federation/v1/user/devices/
^/_matrix/federation/v1/get_groups_publicised$
^/_matrix/key/v2/query
^/_matrix/federation/unstable/org.matrix.msc2946/spaces/
^/_matrix/federation/(v1|unstable/org.matrix.msc2946)/hierarchy/
# Inbound federation transaction request
@ -222,22 +224,25 @@ expressions:
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state$
^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/spaces$
^/_matrix/client/(v1|unstable/org.matrix.msc2946)/rooms/.*/hierarchy$
^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
^/_matrix/client/(api/v1|r0|v3|unstable)/account/3pid$
^/_matrix/client/(api/v1|r0|v3|unstable)/devices$
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/query$
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/changes$
^/_matrix/client/(r0|v3|unstable)/account/3pid$
^/_matrix/client/(r0|v3|unstable)/devices$
^/_matrix/client/versions$
^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|v3|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups/
^/_matrix/client/(r0|v3|unstable)/joined_groups$
^/_matrix/client/(r0|v3|unstable)/publicised_groups$
^/_matrix/client/(r0|v3|unstable)/publicised_groups/
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event/
^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms$
^/_matrix/client/(api/v1|r0|v3|unstable)/search$
# Encryption requests
^/_matrix/client/(r0|v3|unstable)/keys/query$
^/_matrix/client/(r0|v3|unstable)/keys/changes$
^/_matrix/client/(r0|v3|unstable)/keys/claim$
^/_matrix/client/(r0|v3|unstable)/room_keys/
# Registration/login requests
^/_matrix/client/(api/v1|r0|v3|unstable)/login$
^/_matrix/client/(r0|v3|unstable)/register$
@ -251,6 +256,20 @@ expressions:
^/_matrix/client/(api/v1|r0|v3|unstable)/join/
^/_matrix/client/(api/v1|r0|v3|unstable)/profile/
# Device requests
^/_matrix/client/(r0|v3|unstable)/sendToDevice/
# Account data requests
^/_matrix/client/(r0|v3|unstable)/.*/tags
^/_matrix/client/(r0|v3|unstable)/.*/account_data
# Receipts requests
^/_matrix/client/(r0|v3|unstable)/rooms/.*/receipt
^/_matrix/client/(r0|v3|unstable)/rooms/.*/read_markers
# Presence requests
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
Additionally, the following REST endpoints can be handled for GET requests:
@ -330,12 +349,10 @@ Additionally, there is *experimental* support for moving writing of specific
streams (such as events) off of the main process to a particular worker. (This
is only supported with Redis-based replication.)
Currently supported streams are `events` and `typing`.
To enable this, the worker must have a HTTP replication listener configured,
have a `worker_name` and be listed in the `instance_map` config. For example to
move event persistence off to a dedicated worker, the shared configuration would
include:
have a `worker_name` and be listed in the `instance_map` config. The same worker
can handle multiple streams. For example, to move event persistence off to a
dedicated worker, the shared configuration would include:
```yaml
instance_map:
@ -347,6 +364,12 @@ stream_writers:
events: event_persister1
```
Some of the streams have associated endpoints which, for maximum efficiency, should
be routed to the workers handling that stream. See below for the currently supported
streams and the endpoints associated with them:
##### The `events` stream
The `events` stream also experimentally supports having multiple writers, where
work is sharded between them by room ID. Note that you *must* restart all worker
instances when adding or removing event persisters. An example `stream_writers`
@ -359,6 +382,43 @@ stream_writers:
- event_persister2
```
##### The `typing` stream
The following endpoints should be routed directly to the workers configured as
stream writers for the `typing` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing
##### The `to_device` stream
The following endpoints should be routed directly to the workers configured as
stream writers for the `to_device` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/sendToDevice/
##### The `account_data` stream
The following endpoints should be routed directly to the workers configured as
stream writers for the `account_data` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/.*/tags
^/_matrix/client/(api/v1|r0|v3|unstable)/.*/account_data
##### The `receipts` stream
The following endpoints should be routed directly to the workers configured as
stream writers for the `receipts` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/receipt
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/read_markers
##### The `presence` stream
The following endpoints should be routed directly to the workers configured as
stream writers for the `presence` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
#### Background tasks
There is also *experimental* support for moving background tasks to a separate


@ -31,14 +31,11 @@ exclude = (?x)
|synapse/storage/databases/main/group_server.py
|synapse/storage/databases/main/metrics.py
|synapse/storage/databases/main/monthly_active_users.py
|synapse/storage/databases/main/presence.py
|synapse/storage/databases/main/purge_events.py
|synapse/storage/databases/main/push_rule.py
|synapse/storage/databases/main/receipts.py
|synapse/storage/databases/main/roommember.py
|synapse/storage/databases/main/search.py
|synapse/storage/databases/main/state.py
|synapse/storage/databases/main/user_directory.py
|synapse/storage/schema/
|tests/api/test_auth.py
@ -78,16 +75,12 @@ exclude = (?x)
|tests/push/test_presentable_names.py
|tests/push/test_push_rule_evaluator.py
|tests/rest/client/test_account.py
|tests/rest/client/test_events.py
|tests/rest/client/test_filter.py
|tests/rest/client/test_groups.py
|tests/rest/client/test_register.py
|tests/rest/client/test_report_event.py
|tests/rest/client/test_rooms.py
|tests/rest/client/test_third_party_rules.py
|tests/rest/client/test_transactions.py
|tests/rest/client/test_typing.py
|tests/rest/client/utils.py
|tests/rest/key/v2/test_remote_key_resource.py
|tests/rest/media/v1/test_base.py
|tests/rest/media/v1/test_media_storage.py
@ -256,7 +249,7 @@ disallow_untyped_defs = True
[mypy-tests.rest.admin.*]
disallow_untyped_defs = True
[mypy-tests.rest.client.test_directory]
[mypy-tests.rest.client.*]
disallow_untyped_defs = True
[mypy-tests.federation.transport.test_client]


@ -54,3 +54,15 @@ exclude = '''
)/
)
'''
[tool.isort]
line_length = 88
sections = ["FUTURE", "STDLIB", "THIRDPARTY", "TWISTED", "FIRSTPARTY", "TESTS", "LOCALFOLDER"]
default_section = "THIRDPARTY"
known_first_party = ["synapse"]
known_tests = ["tests"]
known_twisted = ["twisted", "OpenSSL"]
multi_line_output = 3
include_trailing_comma = true
combine_as_imports = true


@ -35,7 +35,7 @@ CONTRIBUTING_GUIDE_TEXT="!! Please see the contributing guide for help writing y
https://github.com/matrix-org/synapse/blob/develop/CONTRIBUTING.md#changelog"
# If check-newsfragment returns a non-zero exit code, print the contributing guide and exit
tox -qe check-newsfragment || (echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2 && exit 1)
python -m towncrier.check --compare-with=origin/develop || (echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2 && exit 1)
echo
echo "--------------------------"


@ -5,7 +5,7 @@
# It makes a Synapse image which represents the current checkout,
# builds a synapse-complement image on top, then runs tests with it.
#
# By default the script will fetch the latest Complement master branch and
# By default the script will fetch the latest Complement main branch and
# run tests with that. This can be overridden to use a custom Complement
# checkout by setting the COMPLEMENT_DIR environment variable to the
# filepath of a local Complement checkout or by setting the COMPLEMENT_REF
@ -32,7 +32,7 @@ cd "$(dirname $0)/.."
# Check for a user-specified Complement checkout
if [[ -z "$COMPLEMENT_DIR" ]]; then
COMPLEMENT_REF=${COMPLEMENT_REF:-master}
COMPLEMENT_REF=${COMPLEMENT_REF:-main}
echo "COMPLEMENT_DIR not set. Fetching Complement checkout from ${COMPLEMENT_REF}..."
wget -Nq https://github.com/matrix-org/complement/archive/${COMPLEMENT_REF}.tar.gz
tar -xzf ${COMPLEMENT_REF}.tar.gz


@ -44,7 +44,7 @@ class MockHomeserver(HomeServer):
def run_background_updates(hs):
store = hs.get_datastore()
store = hs.get_datastores().main
async def run_background_updates():
await store.db_pool.updates.run_background_updates(sleep=False)


@ -1,6 +1,3 @@
[trial]
test_suite = tests
[check-manifest]
ignore =
.git-blame-ignore-revs
@ -10,23 +7,3 @@ ignore =
pylint.cfg
tox.ini
[flake8]
# see https://pycodestyle.readthedocs.io/en/latest/intro.html#error-codes
# for error codes. The ones we ignore are:
# W503: line break before binary operator
# W504: line break after binary operator
# E203: whitespace before ':' (which is contrary to pep8?)
# E731: do not assign a lambda expression, use a def
# E501: Line too long (black enforces this for us)
ignore=W503,W504,E203,E731,E501
[isort]
line_length = 88
sections=FUTURE,STDLIB,THIRDPARTY,TWISTED,FIRSTPARTY,TESTS,LOCALFOLDER
default_section=THIRDPARTY
known_first_party = synapse
known_tests=tests
known_twisted=twisted,OpenSSL
multi_line_output=3
include_trailing_comma=true
combine_as_imports=true


@ -103,8 +103,8 @@ CONDITIONAL_REQUIREMENTS["lint"] = [
]
CONDITIONAL_REQUIREMENTS["mypy"] = [
"mypy==0.910",
"mypy-zope==0.3.2",
"mypy==0.931",
"mypy-zope==0.3.5",
"types-bleach>=4.1.0",
"types-jsonschema>=3.2.0",
"types-opentracing>=2.4.2",
@ -165,6 +165,7 @@ setup(
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
],
scripts=["synctl"] + glob.glob("scripts/*"),
cmdclass={"test": TestCommand},


@ -66,13 +66,18 @@ class SortedDict(Dict[_KT, _VT]):
def __copy__(self: _SD) -> _SD: ...
@classmethod
@overload
def fromkeys(cls, seq: Iterable[_T_h]) -> SortedDict[_T_h, None]: ...
def fromkeys(
cls, seq: Iterable[_T_h], value: None = ...
) -> SortedDict[_T_h, None]: ...
@classmethod
@overload
def fromkeys(cls, seq: Iterable[_T_h], value: _S) -> SortedDict[_T_h, _S]: ...
def keys(self) -> SortedKeysView[_KT]: ...
def items(self) -> SortedItemsView[_KT, _VT]: ...
def values(self) -> SortedValuesView[_VT]: ...
# As of Python 3.10, `dict_{keys,items,values}` have an extra `mapping` attribute and so
# `Sorted{Keys,Items,Values}View` are no longer compatible with them.
# See https://github.com/python/typeshed/issues/6837
def keys(self) -> SortedKeysView[_KT]: ... # type: ignore[override]
def items(self) -> SortedItemsView[_KT, _VT]: ... # type: ignore[override]
def values(self) -> SortedValuesView[_VT]: ... # type: ignore[override]
@overload
def pop(self, key: _KT) -> _VT: ...
@overload


@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.53.0"
__version__ = "1.54.0rc1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@ -60,7 +60,7 @@ class Auth:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.state = hs.get_state_handler()
self._account_validity_handler = hs.get_account_validity_handler()


@ -28,7 +28,7 @@ logger = logging.getLogger(__name__)
class AuthBlocking:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self._server_notices_mxid = hs.config.servernotices.server_notices_mxid
self._hs_disabled = hs.config.server.hs_disabled


@ -22,6 +22,7 @@ from typing import (
Dict,
Iterable,
List,
Mapping,
Optional,
Set,
TypeVar,
@ -150,7 +151,7 @@ def matrix_user_id_validator(user_id_str: str) -> UserID:
class Filtering:
def __init__(self, hs: "HomeServer"):
self._hs = hs
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.DEFAULT_FILTER_COLLECTION = FilterCollection(hs, {})
@ -294,7 +295,7 @@ class FilterCollection:
class Filter:
def __init__(self, hs: "HomeServer", filter_json: JsonDict):
self._hs = hs
self._store = hs.get_datastore()
self._store = hs.get_datastores().main
self.filter_json = filter_json
self.limit = filter_json.get("limit", 10)
@ -361,10 +362,10 @@ class Filter:
return self._check_fields(field_matchers)
else:
content = event.get("content")
# Content is assumed to be a dict below, so ensure it is. This should
# Content is assumed to be a mapping below, so ensure it is. This should
# always be true for events, but account_data has been allowed to
# have non-dict content.
if not isinstance(content, dict):
if not isinstance(content, Mapping):
content = {}
sender = event.get("sender", None)


@ -15,13 +15,13 @@ import logging
import sys
from typing import Container
from synapse import python_dependencies # noqa: E402
from synapse.util import check_dependencies
logger = logging.getLogger(__name__)
try:
python_dependencies.check_requirements()
except python_dependencies.DependencyException as e:
check_dependencies.check_requirements()
except check_dependencies.DependencyException as e:
sys.stderr.writelines(
e.message # noqa: B306, DependencyException.message is a property
)


@ -448,7 +448,7 @@ async def start(hs: "HomeServer") -> None:
# It is now safe to start your Synapse.
hs.start_listening()
hs.get_datastore().db_pool.start_profiling()
hs.get_datastores().main.db_pool.start_profiling()
hs.get_pusherpool().start()
# Log when we start the shut down process.


@ -142,7 +142,7 @@ class KeyUploadServlet(RestServlet):
"""
super().__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker.worker_main_http_uri


@ -59,7 +59,6 @@ from synapse.http.server import (
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.python_dependencies import check_requirements
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
@ -70,6 +69,7 @@ from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import well_known_resource
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.util.check_dependencies import check_requirements
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.module_loader import load_module
@ -372,7 +372,7 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
await _base.start(hs)
hs.get_datastore().db_pool.updates.start_doing_background_updates()
hs.get_datastores().main.db_pool.updates.start_doing_background_updates()
register_start(start)


@ -82,7 +82,7 @@ async def phone_stats_home(
# General statistics
#
store = hs.get_datastore()
store = hs.get_datastores().main
stats["homeserver"] = hs.config.server.server_name
stats["server_context"] = hs.config.server.server_context
@ -170,18 +170,22 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
# Rather than update on per session basis, batch up the requests.
# If you increase the loop period, the accuracy of user_daily_visits
# table will decrease
clock.looping_call(hs.get_datastore().generate_user_daily_visits, 5 * 60 * 1000)
clock.looping_call(
hs.get_datastores().main.generate_user_daily_visits, 5 * 60 * 1000
)
# monthly active user limiting functionality
clock.looping_call(hs.get_datastore().reap_monthly_active_users, 1000 * 60 * 60)
hs.get_datastore().reap_monthly_active_users()
clock.looping_call(
hs.get_datastores().main.reap_monthly_active_users, 1000 * 60 * 60
)
hs.get_datastores().main.reap_monthly_active_users()
@wrap_as_background_process("generate_monthly_active_users")
async def generate_monthly_active_users() -> None:
current_mau_count = 0
current_mau_count_by_service = {}
reserved_users: Sized = ()
store = hs.get_datastore()
store = hs.get_datastores().main
if hs.config.server.limit_usage_by_mau or hs.config.server.mau_stats_only:
current_mau_count = await store.get_monthly_active_count()
current_mau_count_by_service = (


@ -31,6 +31,14 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# Type for the `device_one_time_key_counts` field in an appservice transaction
# user ID -> {device ID -> {algorithm -> count}}
TransactionOneTimeKeyCounts = Dict[str, Dict[str, Dict[str, int]]]
# Type for the `device_unused_fallback_keys` field in an appservice transaction
# user ID -> {device ID -> [algorithm]}
TransactionUnusedFallbackKeys = Dict[str, Dict[str, List[str]]]
class ApplicationServiceState(Enum):
DOWN = "down"
@ -72,6 +80,7 @@ class ApplicationService:
rate_limited: bool = True,
ip_range_whitelist: Optional[IPSet] = None,
supports_ephemeral: bool = False,
msc3202_transaction_extensions: bool = False,
):
self.token = token
self.url = (
@ -84,6 +93,7 @@ class ApplicationService:
self.id = id
self.ip_range_whitelist = ip_range_whitelist
self.supports_ephemeral = supports_ephemeral
self.msc3202_transaction_extensions = msc3202_transaction_extensions
if "|" in self.id:
raise Exception("application service ID cannot contain '|' character")
@ -339,12 +349,16 @@ class AppServiceTransaction:
events: List[EventBase],
ephemeral: List[JsonDict],
to_device_messages: List[JsonDict],
one_time_key_counts: TransactionOneTimeKeyCounts,
unused_fallback_keys: TransactionUnusedFallbackKeys,
):
self.service = service
self.id = id
self.events = events
self.ephemeral = ephemeral
self.to_device_messages = to_device_messages
self.one_time_key_counts = one_time_key_counts
self.unused_fallback_keys = unused_fallback_keys
async def send(self, as_api: "ApplicationServiceApi") -> bool:
"""Sends this transaction using the provided AS API interface.
@ -359,6 +373,8 @@ class AppServiceTransaction:
events=self.events,
ephemeral=self.ephemeral,
to_device_messages=self.to_device_messages,
one_time_key_counts=self.one_time_key_counts,
unused_fallback_keys=self.unused_fallback_keys,
txn_id=self.id,
)


@ -19,6 +19,11 @@ from prometheus_client import Counter
from synapse.api.constants import EventTypes, Membership, ThirdPartyEntityKind
from synapse.api.errors import CodeMessageException
from synapse.appservice import (
ApplicationService,
TransactionOneTimeKeyCounts,
TransactionUnusedFallbackKeys,
)
from synapse.events import EventBase
from synapse.events.utils import serialize_event
from synapse.http.client import SimpleHttpClient
@ -26,7 +31,6 @@ from synapse.types import JsonDict, ThirdPartyInstanceID
from synapse.util.caches.response_cache import ResponseCache
if TYPE_CHECKING:
from synapse.appservice import ApplicationService
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@ -219,6 +223,8 @@ class ApplicationServiceApi(SimpleHttpClient):
events: List[EventBase],
ephemeral: List[JsonDict],
to_device_messages: List[JsonDict],
one_time_key_counts: TransactionOneTimeKeyCounts,
unused_fallback_keys: TransactionUnusedFallbackKeys,
txn_id: Optional[int] = None,
) -> bool:
"""
@ -252,7 +258,7 @@ class ApplicationServiceApi(SimpleHttpClient):
uri = service.url + ("/transactions/%s" % urllib.parse.quote(str(txn_id)))
# Never send ephemeral events to appservices that do not support it
body: Dict[str, List[JsonDict]] = {"events": serialized_events}
body: JsonDict = {"events": serialized_events}
if service.supports_ephemeral:
body.update(
{
@ -262,6 +268,16 @@ class ApplicationServiceApi(SimpleHttpClient):
}
)
if service.msc3202_transaction_extensions:
if one_time_key_counts:
body[
"org.matrix.msc3202.device_one_time_key_counts"
] = one_time_key_counts
if unused_fallback_keys:
body[
"org.matrix.msc3202.device_unused_fallback_keys"
] = unused_fallback_keys
try:
await self.put_json(
uri=uri,


@ -54,12 +54,19 @@ from typing import (
Callable,
Collection,
Dict,
Iterable,
List,
Optional,
Set,
Tuple,
)
from synapse.appservice import ApplicationService, ApplicationServiceState
from synapse.appservice import (
ApplicationService,
ApplicationServiceState,
TransactionOneTimeKeyCounts,
TransactionUnusedFallbackKeys,
)
from synapse.appservice.api import ApplicationServiceApi
from synapse.events import EventBase
from synapse.logging.context import run_in_background
@ -92,11 +99,11 @@ class ApplicationServiceScheduler:
def __init__(self, hs: "HomeServer"):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.as_api = hs.get_application_service_api()
self.txn_ctrl = _TransactionController(self.clock, self.store, self.as_api)
self.queuer = _ServiceQueuer(self.txn_ctrl, self.clock)
self.queuer = _ServiceQueuer(self.txn_ctrl, self.clock, hs)
async def start(self) -> None:
logger.info("Starting appservice scheduler")
@ -153,7 +160,9 @@ class _ServiceQueuer:
appservice at a given time.
"""
def __init__(self, txn_ctrl: "_TransactionController", clock: Clock):
def __init__(
self, txn_ctrl: "_TransactionController", clock: Clock, hs: "HomeServer"
):
# dict of {service_id: [events]}
self.queued_events: Dict[str, List[EventBase]] = {}
# dict of {service_id: [events]}
@ -165,6 +174,10 @@ class _ServiceQueuer:
self.requests_in_flight: Set[str] = set()
self.txn_ctrl = txn_ctrl
self.clock = clock
self._msc3202_transaction_extensions_enabled: bool = (
hs.config.experimental.msc3202_transaction_extensions
)
self._store = hs.get_datastores().main
def start_background_request(self, service: ApplicationService) -> None:
# start a sender for this appservice if we don't already have one
@ -202,15 +215,84 @@ class _ServiceQueuer:
if not events and not ephemeral and not to_device_messages_to_send:
return
one_time_key_counts: Optional[TransactionOneTimeKeyCounts] = None
unused_fallback_keys: Optional[TransactionUnusedFallbackKeys] = None
if (
self._msc3202_transaction_extensions_enabled
and service.msc3202_transaction_extensions
):
# Compute the one-time key counts and fallback key usage states
# for the users which are mentioned in this transaction,
# as well as the appservice's sender.
(
one_time_key_counts,
unused_fallback_keys,
) = await self._compute_msc3202_otk_counts_and_fallback_keys(
service, events, ephemeral, to_device_messages_to_send
)
try:
await self.txn_ctrl.send(
service, events, ephemeral, to_device_messages_to_send
service,
events,
ephemeral,
to_device_messages_to_send,
one_time_key_counts,
unused_fallback_keys,
)
except Exception:
logger.exception("AS request failed")
finally:
self.requests_in_flight.discard(service.id)
async def _compute_msc3202_otk_counts_and_fallback_keys(
self,
service: ApplicationService,
events: Iterable[EventBase],
ephemerals: Iterable[JsonDict],
to_device_messages: Iterable[JsonDict],
) -> Tuple[TransactionOneTimeKeyCounts, TransactionUnusedFallbackKeys]:
"""
Given lists of events, ephemeral messages and to-device messages:
- first computes the set of application service users that may have
interesting updates to their one-time key counts or fallback key usage,
- then computes the one-time key counts and fallback key usages for those users.
"""
# Set of 'interesting' users who may have updates
users: Set[str] = set()
# The sender is always included
users.add(service.sender)
# All AS users that would receive the PDUs or EDUs sent to these rooms
# are classed as 'interesting'.
rooms_of_interesting_users: Set[str] = set()
# PDUs
rooms_of_interesting_users.update(event.room_id for event in events)
# EDUs
rooms_of_interesting_users.update(
ephemeral["room_id"] for ephemeral in ephemerals
)
# Look up the AS users in those rooms
for room_id in rooms_of_interesting_users:
users.update(
await self._store.get_app_service_users_in_room(room_id, service)
)
# Add recipients of to-device messages.
# device_message["user_id"] is the ID of the recipient.
users.update(device_message["user_id"] for device_message in to_device_messages)
# Compute and return the counts / fallback key usage states
otk_counts = await self._store.count_bulk_e2e_one_time_keys_for_as(users)
unused_fbks = await self._store.get_e2e_bulk_unused_fallback_key_types(users)
return otk_counts, unused_fbks
class _TransactionController:
"""Transaction manager.
@ -238,6 +320,8 @@ class _TransactionController:
events: List[EventBase],
ephemeral: Optional[List[JsonDict]] = None,
to_device_messages: Optional[List[JsonDict]] = None,
one_time_key_counts: Optional[TransactionOneTimeKeyCounts] = None,
unused_fallback_keys: Optional[TransactionUnusedFallbackKeys] = None,
) -> None:
"""
Create a transaction with the given data and send to the provided
@ -248,6 +332,10 @@ class _TransactionController:
events: The persistent events to include in the transaction.
ephemeral: The ephemeral events to include in the transaction.
to_device_messages: The to-device messages to include in the transaction.
one_time_key_counts: Counts of remaining one-time keys for relevant
appservice devices in the transaction.
unused_fallback_keys: Lists of unused fallback keys for relevant
appservice devices in the transaction.
"""
try:
txn = await self.store.create_appservice_txn(
@ -255,6 +343,8 @@ class _TransactionController:
events=events,
ephemeral=ephemeral or [],
to_device_messages=to_device_messages or [],
one_time_key_counts=one_time_key_counts or {},
unused_fallback_keys=unused_fallback_keys or {},
)
service_is_up = await self._is_service_up(service)
if service_is_up:

View file

@ -166,6 +166,16 @@ def _load_appservice(
supports_ephemeral = as_info.get("de.sorunome.msc2409.push_ephemeral", False)
# Opt-in flag for the MSC3202-specific transactional behaviour.
# When enabled, appservice transactions contain the following information:
# - device One-Time Key counts
# - device unused fallback key usage states
msc3202_transaction_extensions = as_info.get("org.matrix.msc3202", False)
if not isinstance(msc3202_transaction_extensions, bool):
raise ValueError(
"The `org.matrix.msc3202` option should be true or false if specified."
)
return ApplicationService(
token=as_info["as_token"],
hostname=hostname,
@ -174,8 +184,9 @@ def _load_appservice(
hs_token=as_info["hs_token"],
sender=user_id,
id=as_info["id"],
supports_ephemeral=supports_ephemeral,
protocols=protocols,
rate_limited=rate_limited,
ip_range_whitelist=ip_range_whitelist,
supports_ephemeral=supports_ephemeral,
msc3202_transaction_extensions=msc3202_transaction_extensions,
)
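For reference, a sketch of an appservice registration file that opts in to the flags parsed above (all identifiers, tokens and namespaces are invented; only the two opt-in keys are taken from the diff):

# registration.yaml (illustrative)
id: example-bridge
url: "http://localhost:9999"
as_token: "<as_token>"
hs_token: "<hs_token>"
sender_localpart: _example_bot
namespaces:
  users:
    - exclusive: true
      regex: "@_example_.*:example\\.org"
# opt in to receiving ephemeral events (MSC2409):
de.sorunome.msc2409.push_ephemeral: true
# opt in to the MSC3202 transaction extensions parsed above:
org.matrix.msc3202: true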

View file

@ -20,7 +20,7 @@ from typing import Callable, Dict, Optional
import attr
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.check_dependencies import DependencyException, check_requirements
from ._base import Config, ConfigError

View file

@ -41,20 +41,12 @@ class ExperimentalConfig(Config):
# MSC3244 (room version capabilities)
self.msc3244_enabled: bool = experimental.get("msc3244_enabled", True)
# MSC3283 (set displayname, avatar_url and change 3pid capabilities)
self.msc3283_enabled: bool = experimental.get("msc3283_enabled", False)
# MSC3266 (room summary api)
self.msc3266_enabled: bool = experimental.get("msc3266_enabled", False)
# MSC3030 (Jump to date API endpoint)
self.msc3030_enabled: bool = experimental.get("msc3030_enabled", False)
# The portion of MSC3202 which is related to device masquerading.
self.msc3202_device_masquerading_enabled: bool = experimental.get(
"msc3202_device_masquerading", False
)
# MSC2409 (this setting only relates to optionally sending to-device messages).
# Presence, typing and read receipt EDUs are already sent to application services that
# have opted in to receive them. If enabled, this adds to-device messages to that list.
@ -62,5 +54,23 @@ class ExperimentalConfig(Config):
"msc2409_to_device_messages_enabled", False
)
# The portion of MSC3202 which is related to device masquerading.
self.msc3202_device_masquerading_enabled: bool = experimental.get(
"msc3202_device_masquerading", False
)
# Portion of MSC3202 related to transaction extensions:
# sending one-time key counts and fallback key usage to application services.
self.msc3202_transaction_extensions: bool = experimental.get(
"msc3202_transaction_extensions", False
)
# MSC3706 (server-side support for partial state in /send_join responses)
self.msc3706_enabled: bool = experimental.get("msc3706_enabled", False)
# experimental support for faster joins over federation (msc2775, msc3706)
# requires a target server with msc3706_enabled enabled.
self.faster_joins_enabled: bool = experimental.get("faster_joins", False)
# MSC3720 (Account status endpoint)
self.msc3720_enabled: bool = experimental.get("msc3720_enabled", False)
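Taken together, an admin would enable these experiments under `experimental_features` in homeserver.yaml; a minimal sketch using exactly the keys read above (every flag shown defaults to off):

# homeserver.yaml (sketch)
experimental_features:
  msc2409_to_device_messages_enabled: true
  msc3202_device_masquerading: true
  msc3202_transaction_extensions: true
  msc3706_enabled: true   # serve partial state in /send_join responses
  faster_joins: true      # requires the target server to enable msc3706
  msc3720_enabled: true   # account status endpoint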

View file

@ -33,7 +33,6 @@ from twisted.logger import (
globalLogBeginner,
)
from synapse.logging._structured import setup_structured_logging
from synapse.logging.context import LoggingContextFilter
from synapse.logging.filter import MetadataFilter
@ -138,6 +137,12 @@ Support for the log_file configuration option and --log-file command-line option
was removed in Synapse 1.3.0. You should instead set up a separate log configuration file.
"""
STRUCTURED_ERROR = """\
Support for the structured configuration option was removed in Synapse 1.54.0.
You should instead use the standard logging configuration. See
https://matrix-org.github.io/synapse/v1.54/structured_logging.html
"""
class LoggingConfig(Config):
section = "logging"
@ -292,10 +297,9 @@ def _load_logging_config(log_config_path: str) -> None:
if not log_config:
logging.warning("Loaded a blank logging config?")
# If the old structured logging configuration is being used, convert it to
# the new style configuration.
# If the old structured logging configuration is being used, raise an error.
if "structured" in log_config and log_config.get("structured"):
log_config = setup_structured_logging(log_config)
raise ConfigError(STRUCTURED_ERROR)
logging.config.dictConfig(log_config)
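Deployments still setting `structured: true` must now switch to a standard Python dictConfig-style log config before upgrading; a minimal sketch of such a file (handler and formatter names are arbitrary choices, not mandated by Synapse):

# log_config.yaml (minimal sketch)
version: 1
formatters:
  precise:
    # %(request)s is populated by the LoggingContextFilter that Synapse
    # attaches to handlers automatically.
    format: '%(asctime)s - %(name)s - %(levelname)s - %(request)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    formatter: precise
root:
  level: INFO
  handlers: [console]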

View file

@ -15,7 +15,7 @@
import attr
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.check_dependencies import DependencyException, check_requirements
from ._base import Config, ConfigError

View file

@ -20,11 +20,11 @@ import attr
from synapse.config._util import validate_config
from synapse.config.sso import SsoAttributeRequirement
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.types import JsonDict
from synapse.util.module_loader import load_module
from synapse.util.stringutils import parse_and_validate_mxc_uri
from ..util.check_dependencies import DependencyException, check_requirements
from ._base import Config, ConfigError, read_file
DEFAULT_USER_MAPPING_PROVIDER = "synapse.handlers.oidc.JinjaOidcMappingProvider"

View file

@ -13,7 +13,7 @@
# limitations under the License.
from synapse.config._base import Config
from synapse.python_dependencies import check_requirements
from synapse.util.check_dependencies import check_requirements
class RedisConfig(Config):

View file

@ -20,8 +20,8 @@ from urllib.request import getproxies_environment # type: ignore
import attr
from synapse.config.server import DEFAULT_IP_RANGE_BLACKLIST, generate_ip_set
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.types import JsonDict
from synapse.util.check_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module
from ._base import Config, ConfigError

View file

@ -17,8 +17,8 @@ import logging
from typing import Any, List, Set
from synapse.config.sso import SsoAttributeRequirement
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.types import JsonDict
from synapse.util.check_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module, load_python_module
from ._base import Config, ConfigError

View file

@ -146,7 +146,7 @@ DEFAULT_IP_RANGE_BLACKLIST = [
"fec0::/10",
]
DEFAULT_ROOM_VERSION = "6"
DEFAULT_ROOM_VERSION = "9"
ROOM_COMPLEXITY_TOO_GREAT = (
"Your homeserver is unable to join rooms this large or complex. "

View file

@ -14,7 +14,7 @@
from typing import Set
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.check_dependencies import DependencyException, check_requirements
from ._base import Config, ConfigError

View file

@ -476,7 +476,7 @@ class StoreKeyFetcher(KeyFetcher):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
async def _fetch_keys(
self, keys_to_fetch: List[_FetchKeyRequest]
@ -498,7 +498,7 @@ class BaseV2KeyFetcher(KeyFetcher):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.config = hs.config
async def process_v2_response(

View file

@ -374,9 +374,9 @@ def _is_membership_change_allowed(
return
# Require the user to be in the room for membership changes other than join/knock.
if Membership.JOIN != membership and (
RoomVersion.msc2403_knocking and Membership.KNOCK != membership
):
# Note that the room version check for knocking is done implicitly by `caller_knocked`
# and the ability to set a membership of `knock` in the first place.
if Membership.JOIN != membership and Membership.KNOCK != membership:
# If the user has been invited or has knocked, they are allowed to change their
# membership event to leave
if (

View file

@ -189,7 +189,7 @@ class EventBuilderFactory:
self.hostname = hs.hostname
self.signing_key = hs.signing_key
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.state = hs.get_state_handler()
self._event_auth_handler = hs.get_event_auth_handler()

View file

@ -101,6 +101,9 @@ class EventContext:
As with _current_state_ids, this is a private attribute. It should be
accessed via get_prev_state_ids.
partial_state: if True, we may be storing this event with a temporary,
incomplete state.
"""
rejected: Union[bool, str] = False
@ -113,12 +116,15 @@ class EventContext:
_current_state_ids: Optional[StateMap[str]] = None
_prev_state_ids: Optional[StateMap[str]] = None
partial_state: bool = False
@staticmethod
def with_state(
state_group: Optional[int],
state_group_before_event: Optional[int],
current_state_ids: Optional[StateMap[str]],
prev_state_ids: Optional[StateMap[str]],
partial_state: bool,
prev_group: Optional[int] = None,
delta_ids: Optional[StateMap[str]] = None,
) -> "EventContext":
@ -129,6 +135,7 @@ class EventContext:
state_group_before_event=state_group_before_event,
prev_group=prev_group,
delta_ids=delta_ids,
partial_state=partial_state,
)
@staticmethod
@ -170,6 +177,7 @@ class EventContext:
"prev_group": self.prev_group,
"delta_ids": _encode_state_dict(self.delta_ids),
"app_service_id": self.app_service.id if self.app_service else None,
"partial_state": self.partial_state,
}
@staticmethod
@ -196,6 +204,7 @@ class EventContext:
prev_group=input["prev_group"],
delta_ids=_decode_state_dict(input["delta_ids"]),
rejected=input["rejected"],
partial_state=input.get("partial_state", False),
)
app_service_id = input["app_service_id"]

View file

@ -17,6 +17,7 @@ from typing import TYPE_CHECKING, Any, Awaitable, Callable, List, Optional, Tupl
from synapse.api.errors import ModuleFailedException, SynapseError
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.storage.roommember import ProfileInfo
from synapse.types import Requester, StateMap
from synapse.util.async_helpers import maybe_awaitable
@ -37,6 +38,8 @@ CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK = Callable[
[str, StateMap[EventBase], str], Awaitable[bool]
]
ON_NEW_EVENT_CALLBACK = Callable[[EventBase, StateMap[EventBase]], Awaitable]
ON_PROFILE_UPDATE_CALLBACK = Callable[[str, ProfileInfo, bool, bool], Awaitable]
ON_USER_DEACTIVATION_STATUS_CHANGED_CALLBACK = Callable[[str, bool, bool], Awaitable]
def load_legacy_third_party_event_rules(hs: "HomeServer") -> None:
@ -143,7 +146,7 @@ class ThirdPartyEventRules:
def __init__(self, hs: "HomeServer"):
self.third_party_rules = None
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self._check_event_allowed_callbacks: List[CHECK_EVENT_ALLOWED_CALLBACK] = []
self._on_create_room_callbacks: List[ON_CREATE_ROOM_CALLBACK] = []
@ -154,6 +157,10 @@ class ThirdPartyEventRules:
CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK
] = []
self._on_new_event_callbacks: List[ON_NEW_EVENT_CALLBACK] = []
self._on_profile_update_callbacks: List[ON_PROFILE_UPDATE_CALLBACK] = []
self._on_user_deactivation_status_changed_callbacks: List[
ON_USER_DEACTIVATION_STATUS_CHANGED_CALLBACK
] = []
def register_third_party_rules_callbacks(
self,
@ -166,6 +173,8 @@ class ThirdPartyEventRules:
CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK
] = None,
on_new_event: Optional[ON_NEW_EVENT_CALLBACK] = None,
on_profile_update: Optional[ON_PROFILE_UPDATE_CALLBACK] = None,
on_deactivation: Optional[ON_USER_DEACTIVATION_STATUS_CHANGED_CALLBACK] = None,
) -> None:
"""Register callbacks from modules for each hook."""
if check_event_allowed is not None:
@ -187,6 +196,12 @@ class ThirdPartyEventRules:
if on_new_event is not None:
self._on_new_event_callbacks.append(on_new_event)
if on_profile_update is not None:
self._on_profile_update_callbacks.append(on_profile_update)
if on_deactivation is not None:
self._on_user_deactivation_status_changed_callbacks.append(on_deactivation)
async def check_event_allowed(
self, event: EventBase, context: EventContext
) -> Tuple[bool, Optional[dict]]:
@ -334,9 +349,6 @@ class ThirdPartyEventRules:
Args:
event_id: The ID of the event.
Raises:
ModuleFailureError if a callback raised any exception.
"""
# Bail out early without hitting the store if we don't have any callbacks
if len(self._on_new_event_callbacks) == 0:
@ -370,3 +382,41 @@ class ThirdPartyEventRules:
state_events[key] = room_state_events[event_id]
return state_events
async def on_profile_update(
self, user_id: str, new_profile: ProfileInfo, by_admin: bool, deactivation: bool
) -> None:
"""Called after the global profile of a user has been updated. Does not include
per-room profile changes.
Args:
user_id: The user whose profile was changed.
new_profile: The updated profile for the user.
by_admin: Whether the profile update was performed by a server admin.
deactivation: Whether this change was made while deactivating the user.
"""
for callback in self._on_profile_update_callbacks:
try:
await callback(user_id, new_profile, by_admin, deactivation)
except Exception as e:
logger.exception(
"Failed to run module API callback %s: %s", callback, e
)
async def on_user_deactivation_status_changed(
self, user_id: str, deactivated: bool, by_admin: bool
) -> None:
"""Called after a user has been deactivated or reactivated.
Args:
user_id: The deactivated user.
deactivated: Whether the user is now deactivated.
by_admin: Whether the deactivation was performed by a server admin.
"""
for callback in self._on_user_deactivation_status_changed_callbacks:
try:
await callback(user_id, deactivated, by_admin)
except Exception as e:
logger.exception(
"Failed to run module API callback %s: %s", callback, e
)
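A sketch of a module wiring up the two new hooks (module name and logging are invented; this assumes the matching `ModuleApi.register_third_party_rules_callbacks` pass-through added alongside these callbacks):

# example_module.py (illustrative)
import logging

from synapse.module_api import ModuleApi
from synapse.storage.roommember import ProfileInfo

logger = logging.getLogger(__name__)


class ExampleAuditModule:
    def __init__(self, config: dict, api: ModuleApi):
        # Register the two callbacks added above.
        api.register_third_party_rules_callbacks(
            on_profile_update=self.on_profile_update,
            on_deactivation=self.on_deactivation,
        )

    async def on_profile_update(
        self, user_id: str, new_profile: ProfileInfo, by_admin: bool, deactivation: bool
    ) -> None:
        logger.info("Profile of %s updated (by admin: %s)", user_id, by_admin)

    async def on_deactivation(
        self, user_id: str, deactivated: bool, by_admin: bool
    ) -> None:
        logger.info("%s deactivated=%s (by admin: %s)", user_id, deactivated, by_admin)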

View file

@ -425,6 +425,33 @@ class EventClientSerializer:
return serialized_event
def _apply_edit(
self, orig_event: EventBase, serialized_event: JsonDict, edit: EventBase
) -> None:
"""Replace the content, preserving existing relations of the serialized event.
Args:
orig_event: The original event.
serialized_event: The original event, serialized. This is modified.
edit: The event which edits the above.
"""
# Ensure we take copies of the edit content, otherwise we risk modifying
# the original event.
edit_content = edit.content.copy()
# Unfreeze the event content if necessary, so that we may modify it below
edit_content = unfreeze(edit_content)
serialized_event["content"] = edit_content.get("m.new_content", {})
# Check for existing relations
relates_to = orig_event.content.get("m.relates_to")
if relates_to:
# Keep the relations, ensuring we use a dict copy of the original
serialized_event["content"]["m.relates_to"] = relates_to.copy()
else:
serialized_event["content"].pop("m.relates_to", None)
def _inject_bundled_aggregations(
self,
event: EventBase,
@ -450,26 +477,11 @@ class EventClientSerializer:
serialized_aggregations[RelationTypes.REFERENCE] = aggregations.references
if aggregations.replace:
# If there is an edit replace the content, preserving existing
# relations.
# If there is an edit, apply it to the event.
edit = aggregations.replace
self._apply_edit(event, serialized_event, edit)
# Ensure we take copies of the edit content, otherwise we risk modifying
# the original event.
edit_content = edit.content.copy()
# Unfreeze the event content if necessary, so that we may modify it below
edit_content = unfreeze(edit_content)
serialized_event["content"] = edit_content.get("m.new_content", {})
# Check for existing relations
relates_to = event.content.get("m.relates_to")
if relates_to:
# Keep the relations, ensuring we use a dict copy of the original
serialized_event["content"]["m.relates_to"] = relates_to.copy()
else:
serialized_event["content"].pop("m.relates_to", None)
# Include information about it in the relations dict.
serialized_aggregations[RelationTypes.REPLACE] = {
"event_id": edit.event_id,
"origin_server_ts": edit.origin_server_ts,
@ -478,13 +490,22 @@ class EventClientSerializer:
# If this event is the start of a thread, include a summary of the replies.
if aggregations.thread:
thread = aggregations.thread
# Don't bundle aggregations as this could recurse forever.
serialized_latest_event = self.serialize_event(
thread.latest_event, time_now, bundle_aggregations=None
)
# Manually apply an edit, if one exists.
if thread.latest_edit:
self._apply_edit(
thread.latest_event, serialized_latest_event, thread.latest_edit
)
serialized_aggregations[RelationTypes.THREAD] = {
# Don't bundle aggregations as this could recurse forever.
"latest_event": self.serialize_event(
aggregations.thread.latest_event, time_now, bundle_aggregations=None
),
"count": aggregations.thread.count,
"current_user_participated": aggregations.thread.current_user_participated,
"latest_event": serialized_latest_event,
"count": thread.count,
"current_user_participated": thread.current_user_participated,
}
# Include the bundled aggregations in the event.

View file

@ -39,7 +39,7 @@ class FederationBase:
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
self.spam_checker = hs.get_spam_checker()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self._clock = hs.get_clock()
async def _check_sigs_and_hash(
@ -47,6 +47,11 @@ class FederationBase:
) -> EventBase:
"""Checks that event is correctly signed by the sending server.
Also checks the content hash, and redacts the event if there is a mismatch.
Also runs the event through the spam checker; if it fails, redacts the event
and flags it as soft-failed.
Args:
room_version: The room version of the PDU
pdu: the event to be checked
@ -55,7 +60,10 @@ class FederationBase:
* the original event if the checks pass
* a redacted version of the event (if the signature
matched but the hash did not)
* throws a SynapseError if the signature check failed."""
Raises:
SynapseError if the signature check failed.
"""
try:
await _check_sigs_on_pdu(self.keyring, room_version, pdu)
except SynapseError as e:

View file

@ -1,4 +1,4 @@
# Copyright 2015-2021 The Matrix.org Foundation C.I.C.
# Copyright 2015-2022 The Matrix.org Foundation C.I.C.
# Copyright 2020 Sorunome
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -56,7 +56,7 @@ from synapse.api.room_versions import (
from synapse.events import EventBase, builder
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.federation.transport.client import SendJoinResponse
from synapse.types import JsonDict, get_domain_from_id
from synapse.types import JsonDict, UserID, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination
@ -89,6 +89,12 @@ class SendJoinResult:
state: List[EventBase]
auth_chain: List[EventBase]
# True if 'state' elides non-critical membership events
partial_state: bool
# if 'partial_state' is set, a list of the servers in the room (otherwise empty)
servers_in_room: List[str]
class FederationClient(FederationBase):
def __init__(self, hs: "HomeServer"):
@ -413,26 +419,90 @@ class FederationClient(FederationBase):
return state_event_ids, auth_event_ids
async def get_room_state(
self,
destination: str,
room_id: str,
event_id: str,
room_version: RoomVersion,
) -> Tuple[List[EventBase], List[EventBase]]:
"""Calls the /state endpoint to fetch the state at a particular point
in the room.
Any invalid events (those with incorrect or unverifiable signatures or hashes)
are filtered out from the response, and any duplicate events are removed.
(Size limits and other event-format checks are *not* performed.)
Note that the result is not ordered, so callers must be careful to process
the events in an order that handles dependencies.
Returns:
a tuple of (state events, auth events)
"""
result = await self.transport_layer.get_room_state(
room_version,
destination,
room_id,
event_id,
)
state_events = result.state
auth_events = result.auth_events
# we may as well filter out any duplicates from the response, to save
# processing them multiple times. (In particular, events may be present in
# `auth_events` as well as `state`, which is redundant).
#
# We don't rely on the sort order of the events, so we can just stick them
# in a dict.
state_event_map = {event.event_id: event for event in state_events}
auth_event_map = {
event.event_id: event
for event in auth_events
if event.event_id not in state_event_map
}
logger.info(
"Processing from /state: %d state events, %d auth events",
len(state_event_map),
len(auth_event_map),
)
valid_auth_events = await self._check_sigs_and_hash_and_fetch(
destination, auth_event_map.values(), room_version
)
valid_state_events = await self._check_sigs_and_hash_and_fetch(
destination, state_event_map.values(), room_version
)
return valid_state_events, valid_auth_events
async def _check_sigs_and_hash_and_fetch(
self,
origin: str,
pdus: Collection[EventBase],
room_version: RoomVersion,
) -> List[EventBase]:
"""Takes a list of PDUs and checks the signatures and hashes of each
one. If a PDU fails its signature check then we check if we have it in
the database and if not then request if from the originating server of
that PDU.
"""Checks the signatures and hashes of a list of events.
If a PDU fails its signature check then we check if we have it in
the database, and if not then request it from the sender's server (if that
is different from `origin`). If that still fails, the event is omitted from
the returned list.
If a PDU fails its content hash check then it is redacted.
The given list of PDUs are not modified, instead the function returns
Also runs each event through the spam checker; if it fails, redacts the event
and flags it as soft-failed.
The given list of PDUs are not modified; instead the function returns
a new list.
Args:
origin
pdu
room_version
origin: The server that sent us these events
pdus: The events to be checked
room_version: the version of the room these events are in
Returns:
A list of PDUs that have valid signatures and hashes.
@ -463,11 +533,16 @@ class FederationClient(FederationBase):
origin: str,
room_version: RoomVersion,
) -> Optional[EventBase]:
"""Takes a PDU and checks its signatures and hashes. If the PDU fails
its signature check then we check if we have it in the database and if
not then request if from the originating server of that PDU.
"""Takes a PDU and checks its signatures and hashes.
If then PDU fails its content hash check then it is redacted.
If the PDU fails its signature check then we check if we have it in the
database; if not, we then request it from sender's server (if that is not the
same as `origin`). If that still fails, we return None.
If the PDU fails its content hash check, it is redacted.
Also runs the event through the spam checker; if it fails, redacts the event
and flags it as soft-failed.
Args:
origin
@ -540,11 +615,15 @@ class FederationClient(FederationBase):
synapse_error = e.to_synapse_error()
# There is no good way to detect an "unknown" endpoint.
#
# Dendrite returns a 404 (with no body); synapse returns a 400
# Dendrite returns a 404 (with a body of "404 page not found");
# Conduit returns a 404 (with no body); and Synapse returns a 400
# with M_UNRECOGNISED.
return e.code == 404 or (
e.code == 400 and synapse_error.errcode == Codes.UNRECOGNIZED
)
#
# This needs to be rather specific as some endpoints truly do return 404
# errors.
return (
e.code == 404 and (not e.response or e.response == b"404 page not found")
) or (e.code == 400 and synapse_error.errcode == Codes.UNRECOGNIZED)
async def _try_destination_list(
self,
@ -864,23 +943,32 @@ class FederationClient(FederationBase):
for s in signed_state:
s.internal_metadata = copy.deepcopy(s.internal_metadata)
# double-check that the same create event has ended up in the auth chain
# double-check that the auth chain doesn't include a different create event
auth_chain_create_events = [
e.event_id
for e in signed_auth
if (e.type, e.state_key) == (EventTypes.Create, "")
]
if auth_chain_create_events != [create_event.event_id]:
if auth_chain_create_events and auth_chain_create_events != [
create_event.event_id
]:
raise InvalidResponseError(
"Unexpected create event(s) in auth chain: %s"
% (auth_chain_create_events,)
)
if response.partial_state and not response.servers_in_room:
raise InvalidResponseError(
"partial_state was set, but no servers were listed in the room"
)
return SendJoinResult(
event=event,
state=signed_state,
auth_chain=signed_auth,
origin=destination,
partial_state=response.partial_state,
servers_in_room=response.servers_in_room or [],
)
# MSC3083 defines additional error codes for room joins.
@ -918,7 +1006,7 @@ class FederationClient(FederationBase):
)
except HttpResponseException as e:
# If an error is received that is due to an unrecognised endpoint,
# fallback to the v1 endpoint. Otherwise consider it a legitmate error
# fallback to the v1 endpoint. Otherwise, consider it a legitimate error
# and raise.
if not self._is_unknown_endpoint(e):
raise
@ -987,7 +1075,7 @@ class FederationClient(FederationBase):
except HttpResponseException as e:
# If an error is received that is due to an unrecognised endpoint,
# fallback to the v1 endpoint if the room uses old-style event IDs.
# Otherwise consider it a legitmate error and raise.
# Otherwise, consider it a legitimate error and raise.
err = e.to_synapse_error()
if self._is_unknown_endpoint(e, err):
if room_version.event_format != EventFormatVersions.V1:
@ -1048,7 +1136,7 @@ class FederationClient(FederationBase):
)
except HttpResponseException as e:
# If an error is received that is due to an unrecognised endpoint,
# fallback to the v1 endpoint. Otherwise consider it a legitmate error
# fallback to the v1 endpoint. Otherwise, consider it a legitimate error
# and raise.
if not self._is_unknown_endpoint(e):
raise
@ -1274,61 +1362,6 @@ class FederationClient(FederationBase):
# server doesn't give it to us.
return None
async def get_space_summary(
self,
destinations: Iterable[str],
room_id: str,
suggested_only: bool,
max_rooms_per_space: Optional[int],
exclude_rooms: List[str],
) -> "FederationSpaceSummaryResult":
"""
Call other servers to get a summary of the given space
Args:
destinations: The remote servers. We will try them in turn, omitting any
that have been blacklisted.
room_id: ID of the space to be queried
suggested_only: If true, ask the remote server to only return children
with the "suggested" flag set
max_rooms_per_space: A limit on the number of children to return for each
space
exclude_rooms: A list of room IDs to tell the remote server to skip
Returns:
a parsed FederationSpaceSummaryResult
Raises:
SynapseError if we were unable to get a valid summary from any of the
remote servers
"""
async def send_request(destination: str) -> FederationSpaceSummaryResult:
res = await self.transport_layer.get_space_summary(
destination=destination,
room_id=room_id,
suggested_only=suggested_only,
max_rooms_per_space=max_rooms_per_space,
exclude_rooms=exclude_rooms,
)
try:
return FederationSpaceSummaryResult.from_json_dict(res)
except ValueError as e:
raise InvalidResponseError(str(e))
return await self._try_destination_list(
"fetch space summary",
destinations,
send_request,
failover_on_unknown_endpoint=True,
)
async def get_room_hierarchy(
self,
destinations: Iterable[str],
@ -1374,8 +1407,8 @@ class FederationClient(FederationBase):
)
except HttpResponseException as e:
# If an error is received that is due to an unrecognised endpoint,
# fallback to the unstable endpoint. Otherwise consider it a
# legitmate error and raise.
# fallback to the unstable endpoint. Otherwise, consider it a
# legitimate error and raise.
if not self._is_unknown_endpoint(e):
raise
@ -1400,10 +1433,8 @@ class FederationClient(FederationBase):
if any(not isinstance(e, dict) for e in children_state):
raise InvalidResponseError("Invalid event in 'children_state' list")
try:
[
FederationSpaceSummaryEventResult.from_json_dict(e)
for e in children_state
]
for child_state in children_state:
_validate_hierarchy_event(child_state)
except ValueError as e:
raise InvalidResponseError(str(e))
@ -1425,62 +1456,12 @@ class FederationClient(FederationBase):
return room, children_state, children, inaccessible_children
try:
result = await self._try_destination_list(
"fetch room hierarchy",
destinations,
send_request,
failover_on_unknown_endpoint=True,
)
except SynapseError as e:
# If an unexpected error occurred, re-raise it.
if e.code != 502:
raise
logger.debug(
"Couldn't fetch room hierarchy, falling back to the spaces API"
)
# Fallback to the old federation API and translate the results if
# no servers implement the new API.
#
# The algorithm below is a bit inefficient as it only attempts to
# parse information for the requested room, but the legacy API may
# return additional layers.
legacy_result = await self.get_space_summary(
destinations,
room_id,
suggested_only,
max_rooms_per_space=None,
exclude_rooms=[],
)
# Find the requested room in the response (and remove it).
for _i, room in enumerate(legacy_result.rooms):
if room.get("room_id") == room_id:
break
else:
# The requested room was not returned, nothing we can do.
raise
requested_room = legacy_result.rooms.pop(_i)
# Find any children events of the requested room.
children_events = []
children_room_ids = set()
for event in legacy_result.events:
if event.room_id == room_id:
children_events.append(event.data)
children_room_ids.add(event.state_key)
# Find the children rooms.
children = []
for room in legacy_result.rooms:
if room.get("room_id") in children_room_ids:
children.append(room)
# It isn't clear from the response whether some of the rooms are
# not accessible.
result = (requested_room, children_events, children, ())
result = await self._try_destination_list(
"fetch room hierarchy",
destinations,
send_request,
failover_on_unknown_endpoint=True,
)
# Cache the result to avoid fetching data over federation every time.
self._get_room_hierarchy_cache[(room_id, suggested_only)] = result
@ -1526,6 +1507,64 @@ class FederationClient(FederationBase):
except ValueError as e:
raise InvalidResponseError(str(e))
async def get_account_status(
self, destination: str, user_ids: List[str]
) -> Tuple[JsonDict, List[str]]:
"""Retrieves account statuses for a given list of users on a given remote
homeserver.
If the request fails for any reason, all user IDs for this destination are marked
as failed.
Args:
destination: the destination to contact
user_ids: the user ID(s) for which to request account status(es)
Returns:
The account statuses, as well as the list of user IDs for which it was not
possible to retrieve a status.
"""
try:
res = await self.transport_layer.get_account_status(destination, user_ids)
except Exception:
# If the query failed for any reason, mark all the users as failed.
return {}, user_ids
statuses = res.get("account_statuses", {})
failures = res.get("failures", [])
if not isinstance(statuses, dict) or not isinstance(failures, list):
# Make sure we're not feeding malformed data back to the caller.
logger.warning(
"Destination %s responded with malformed data to account_status query",
destination,
)
return {}, user_ids
for user_id in user_ids:
# Any account whose status is missing is a user we failed to receive the
# status of.
if user_id not in statuses and user_id not in failures:
failures.append(user_id)
# Filter out any user ID that doesn't belong to the remote server that sent its
# status (or failure).
def filter_user_id(user_id: str) -> bool:
try:
return UserID.from_string(user_id).domain == destination
except SynapseError:
# If the user ID doesn't parse, ignore it.
return False
filtered_statuses = dict(
# item is a (key, value) tuple, so item[0] is the user ID.
filter(lambda item: filter_user_id(item[0]), statuses.items())
)
filtered_failures = list(filter(filter_user_id, failures))
return filtered_statuses, filtered_failures
@attr.s(frozen=True, slots=True, auto_attribs=True)
class TimestampToEventResponse:
@ -1564,89 +1603,34 @@ class TimestampToEventResponse:
return cls(event_id, origin_server_ts, d)
@attr.s(frozen=True, slots=True, auto_attribs=True)
class FederationSpaceSummaryEventResult:
"""Represents a single event in the result of a successful get_space_summary call.
def _validate_hierarchy_event(d: JsonDict) -> None:
"""Validate an event within the result of a /hierarchy request
It's essentially just a serialised event object, but we do a bit of parsing and
validation in `from_json_dict` and store some of the validated properties in
object attributes.
Args:
d: json object to be parsed
Raises:
ValueError if d is not a valid event
"""
event_type: str
room_id: str
state_key: str
via: Sequence[str]
event_type = d.get("type")
if not isinstance(event_type, str):
raise ValueError("Invalid event: 'event_type' must be a str")
# the raw data, including the above keys
data: JsonDict
room_id = d.get("room_id")
if not isinstance(room_id, str):
raise ValueError("Invalid event: 'room_id' must be a str")
@classmethod
def from_json_dict(cls, d: JsonDict) -> "FederationSpaceSummaryEventResult":
"""Parse an event within the result of a /spaces/ request
state_key = d.get("state_key")
if not isinstance(state_key, str):
raise ValueError("Invalid event: 'state_key' must be a str")
Args:
d: json object to be parsed
content = d.get("content")
if not isinstance(content, dict):
raise ValueError("Invalid event: 'content' must be a dict")
Raises:
ValueError if d is not a valid event
"""
event_type = d.get("type")
if not isinstance(event_type, str):
raise ValueError("Invalid event: 'event_type' must be a str")
room_id = d.get("room_id")
if not isinstance(room_id, str):
raise ValueError("Invalid event: 'room_id' must be a str")
state_key = d.get("state_key")
if not isinstance(state_key, str):
raise ValueError("Invalid event: 'state_key' must be a str")
content = d.get("content")
if not isinstance(content, dict):
raise ValueError("Invalid event: 'content' must be a dict")
via = content.get("via")
if not isinstance(via, Sequence):
raise ValueError("Invalid event: 'via' must be a list")
if any(not isinstance(v, str) for v in via):
raise ValueError("Invalid event: 'via' must be a list of strings")
return cls(event_type, room_id, state_key, via, d)
@attr.s(frozen=True, slots=True, auto_attribs=True)
class FederationSpaceSummaryResult:
"""Represents the data returned by a successful get_space_summary call."""
rooms: List[JsonDict]
events: Sequence[FederationSpaceSummaryEventResult]
@classmethod
def from_json_dict(cls, d: JsonDict) -> "FederationSpaceSummaryResult":
"""Parse the result of a /spaces/ request
Args:
d: json object to be parsed
Raises:
ValueError if d is not a valid /spaces/ response
"""
rooms = d.get("rooms")
if not isinstance(rooms, List):
raise ValueError("'rooms' must be a list")
if any(not isinstance(r, dict) for r in rooms):
raise ValueError("Invalid room in 'rooms' list")
events = d.get("events")
if not isinstance(events, Sequence):
raise ValueError("'events' must be a list")
if any(not isinstance(e, dict) for e in events):
raise ValueError("Invalid event in 'events' list")
parsed_events = [
FederationSpaceSummaryEventResult.from_json_dict(e) for e in events
]
return cls(rooms, parsed_events)
via = content.get("via")
if not isinstance(via, Sequence):
raise ValueError("Invalid event: 'via' must be a list")
if any(not isinstance(v, str) for v in via):
raise ValueError("Invalid event: 'via' must be a list of strings")

View file

@ -228,7 +228,7 @@ class FederationSender(AbstractFederationSender):
self.hs = hs
self.server_name = hs.hostname
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.state = hs.get_state_handler()
self.clock = hs.get_clock()

View file

@ -76,7 +76,7 @@ class PerDestinationQueue:
):
self._server_name = hs.hostname
self._clock = hs.get_clock()
self._store = hs.get_datastore()
self._store = hs.get_datastores().main
self._transaction_manager = transaction_manager
self._instance_name = hs.get_instance_name()
self._federation_shard_config = hs.config.worker.federation_shard_config
@ -381,7 +381,8 @@ class PerDestinationQueue:
)
)
if self._last_successful_stream_ordering is None:
_tmp_last_successful_stream_ordering = self._last_successful_stream_ordering
if _tmp_last_successful_stream_ordering is None:
# if it's still None, then this means we don't have the information
# in our database - we haven't successfully sent a PDU to this server
# (at least since the introduction of the feature tracking
@ -391,11 +392,12 @@ class PerDestinationQueue:
self._catching_up = False
return
last_successful_stream_ordering: int = _tmp_last_successful_stream_ordering
# get at most 50 catchup room/PDUs
while True:
event_ids = await self._store.get_catch_up_room_event_ids(
self._destination,
self._last_successful_stream_ordering,
self._destination, last_successful_stream_ordering
)
if not event_ids:
@ -403,7 +405,7 @@ class PerDestinationQueue:
# of a race condition, so we check that no new events have been
# skipped due to us being in catch-up mode
if self._catchup_last_skipped > self._last_successful_stream_ordering:
if self._catchup_last_skipped > last_successful_stream_ordering:
# another event has been skipped because we were in catch-up mode
continue
@ -470,7 +472,7 @@ class PerDestinationQueue:
# offline
if (
p.internal_metadata.stream_ordering
< self._last_successful_stream_ordering
< last_successful_stream_ordering
):
continue
@ -513,12 +515,11 @@ class PerDestinationQueue:
# from the *original* PDU, rather than the PDU(s) we actually
# send. This is because we use it to mark our position in the
# queue of missed PDUs to process.
self._last_successful_stream_ordering = (
pdu.internal_metadata.stream_ordering
)
last_successful_stream_ordering = pdu.internal_metadata.stream_ordering
self._last_successful_stream_ordering = last_successful_stream_ordering
await self._store.set_destination_last_successful_stream_ordering(
self._destination, self._last_successful_stream_ordering
self._destination, last_successful_stream_ordering
)
def _get_rr_edus(self, force_flush: bool) -> Iterable[Edu]:

View file

@ -53,7 +53,7 @@ class TransactionManager:
def __init__(self, hs: "synapse.server.HomeServer"):
self._server_name = hs.hostname
self.clock = hs.get_clock() # nb must be called this for @measure_func
self._store = hs.get_datastore()
self._store = hs.get_datastores().main
self._transaction_actions = TransactionActions(self._store)
self._transport_layer = hs.get_federation_transport_client()

View file

@ -1,4 +1,4 @@
# Copyright 2014-2021 The Matrix.org Foundation C.I.C.
# Copyright 2014-2022 The Matrix.org Foundation C.I.C.
# Copyright 2020 Sorunome
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -60,17 +60,17 @@ class TransportLayerClient:
def __init__(self, hs):
self.server_name = hs.hostname
self.client = hs.get_federation_http_client()
self._faster_joins_enabled = hs.config.experimental.faster_joins_enabled
async def get_room_state_ids(
self, destination: str, room_id: str, event_id: str
) -> JsonDict:
"""Requests all state for a given room from the given server at the
given event. Returns the state's event_id's
"""Requests the IDs of all state for a given room at the given event.
Args:
destination: The host name of the remote homeserver we want
to get the state from.
context: The name of the context we want the state of
room_id: the room we want the state of
event_id: The event we want the context at.
Returns:
@ -86,6 +86,29 @@ class TransportLayerClient:
try_trailing_slash_on_400=True,
)
async def get_room_state(
self, room_version: RoomVersion, destination: str, room_id: str, event_id: str
) -> "StateRequestResponse":
"""Requests the full state for a given room at the given event.
Args:
room_version: the version of the room (required to build the event objects)
destination: The host name of the remote homeserver we want
to get the state from.
room_id: the room we want the state of
event_id: The event we want the context at.
Returns:
The parsed `StateRequestResponse` from the remote homeserver.
"""
path = _create_v1_path("/state/%s", room_id)
return await self.client.get_json(
destination,
path=path,
args={"event_id": event_id},
parser=_StateParser(room_version),
)
async def get_event(
self, destination: str, event_id: str, timeout: Optional[int] = None
) -> JsonDict:
@ -235,8 +258,9 @@ class TransportLayerClient:
args: dict,
retry_on_dns_fail: bool,
ignore_backoff: bool = False,
prefix: str = FEDERATION_V1_PREFIX,
) -> JsonDict:
path = _create_v1_path("/query/%s", query_type)
path = _create_path(prefix, "/query/%s", query_type)
return await self.client.get_json(
destination=destination,
@ -336,10 +360,15 @@ class TransportLayerClient:
content: JsonDict,
) -> "SendJoinResponse":
path = _create_v2_path("/send_join/%s/%s", room_id, event_id)
query_params: Dict[str, str] = {}
if self._faster_joins_enabled:
# lazy-load state on join
query_params["org.matrix.msc3706.partial_state"] = "true"
return await self.client.put_json(
destination=destination,
path=path,
args=query_params,
data=content,
parser=SendJoinParser(room_version, v1_api=False),
max_response_size=MAX_RESPONSE_SIZE_SEND_JOIN,
@ -1150,39 +1179,6 @@ class TransportLayerClient:
return await self.client.get_json(destination=destination, path=path)
async def get_space_summary(
self,
destination: str,
room_id: str,
suggested_only: bool,
max_rooms_per_space: Optional[int],
exclude_rooms: List[str],
) -> JsonDict:
"""
Args:
destination: The remote server
room_id: The room ID to ask about.
suggested_only: if True, only suggested rooms will be returned
max_rooms_per_space: an optional limit to the number of children to be
returned per space
exclude_rooms: a list of any rooms we can skip
"""
# TODO When switching to the stable endpoint, use GET instead of POST.
path = _create_path(
FEDERATION_UNSTABLE_PREFIX, "/org.matrix.msc2946/spaces/%s", room_id
)
params = {
"suggested_only": suggested_only,
"exclude_rooms": exclude_rooms,
}
if max_rooms_per_space is not None:
params["max_rooms_per_space"] = max_rooms_per_space
return await self.client.post_json(
destination=destination, path=path, data=params
)
async def get_room_hierarchy(
self, destination: str, room_id: str, suggested_only: bool
) -> JsonDict:
@ -1219,6 +1215,22 @@ class TransportLayerClient:
args={"suggested_only": "true" if suggested_only else "false"},
)
async def get_account_status(
self, destination: str, user_ids: List[str]
) -> JsonDict:
"""
Args:
destination: The remote server.
user_ids: The user ID(s) for which to request account status(es).
"""
path = _create_path(
FEDERATION_UNSTABLE_PREFIX, "/org.matrix.msc3720/account_status"
)
return await self.client.post_json(
destination=destination, path=path, data={"user_ids": user_ids}
)
def _create_path(federation_prefix: str, path: str, *args: str) -> str:
"""
@ -1271,6 +1283,20 @@ class SendJoinResponse:
# "event" is not included in the response.
event: Optional[EventBase] = None
# The room state is incomplete
partial_state: bool = False
# List of servers in the room
servers_in_room: Optional[List[str]] = None
@attr.s(slots=True, auto_attribs=True)
class StateRequestResponse:
"""The parsed response of a `/state` request."""
auth_events: List[EventBase]
state: List[EventBase]
@ijson.coroutine
def _event_parser(event_dict: JsonDict) -> Generator[None, Tuple[str, Any], None]:
@ -1297,6 +1323,32 @@ def _event_list_parser(
events.append(event)
@ijson.coroutine
def _partial_state_parser(response: SendJoinResponse) -> Generator[None, Any, None]:
"""Helper function for use with `ijson.items_coro`
Parses the partial_state field in send_join responses
"""
while True:
val = yield
if not isinstance(val, bool):
raise TypeError("partial_state must be a boolean")
response.partial_state = val
@ijson.coroutine
def _servers_in_room_parser(response: SendJoinResponse) -> Generator[None, Any, None]:
"""Helper function for use with `ijson.items_coro`
Parses the servers_in_room field in send_join responses
"""
while True:
val = yield
if not isinstance(val, list) or any(not isinstance(x, str) for x in val):
raise TypeError("servers_in_room must be a list of strings")
response.servers_in_room = val
class SendJoinParser(ByteParser[SendJoinResponse]):
"""A parser for the response to `/send_join` requests.
@ -1308,44 +1360,62 @@ class SendJoinParser(ByteParser[SendJoinResponse]):
CONTENT_TYPE = "application/json"
def __init__(self, room_version: RoomVersion, v1_api: bool):
self._response = SendJoinResponse([], [], {})
self._response = SendJoinResponse([], [], event_dict={})
self._room_version = room_version
self._coros = []
# The V1 API has the shape of `[200, {...}]`, which we handle by
# prefixing with `item.*`.
prefix = "item." if v1_api else ""
self._coro_state = ijson.items_coro(
_event_list_parser(room_version, self._response.state),
prefix + "state.item",
use_float=True,
)
self._coro_auth = ijson.items_coro(
_event_list_parser(room_version, self._response.auth_events),
prefix + "auth_chain.item",
use_float=True,
)
# TODO Remove the unstable prefix when servers have updated.
#
# By re-using the same event dictionary this will cause the parsing of
# org.matrix.msc3083.v2.event and event to stomp over each other.
# Generally this should be fine.
self._coro_unstable_event = ijson.kvitems_coro(
_event_parser(self._response.event_dict),
prefix + "org.matrix.msc3083.v2.event",
use_float=True,
)
self._coro_event = ijson.kvitems_coro(
_event_parser(self._response.event_dict),
prefix + "event",
use_float=True,
)
self._coros = [
ijson.items_coro(
_event_list_parser(room_version, self._response.state),
prefix + "state.item",
use_float=True,
),
ijson.items_coro(
_event_list_parser(room_version, self._response.auth_events),
prefix + "auth_chain.item",
use_float=True,
),
# TODO Remove the unstable prefix when servers have updated.
#
# By re-using the same event dictionary this will cause the parsing of
# org.matrix.msc3083.v2.event and event to stomp over each other.
# Generally this should be fine.
ijson.kvitems_coro(
_event_parser(self._response.event_dict),
prefix + "org.matrix.msc3083.v2.event",
use_float=True,
),
ijson.kvitems_coro(
_event_parser(self._response.event_dict),
prefix + "event",
use_float=True,
),
]
if not v1_api:
self._coros.append(
ijson.items_coro(
_partial_state_parser(self._response),
"org.matrix.msc3706.partial_state",
use_float="True",
)
)
self._coros.append(
ijson.items_coro(
_servers_in_room_parser(self._response),
"org.matrix.msc3706.servers_in_room",
use_float="True",
)
)
def write(self, data: bytes) -> int:
self._coro_state.send(data)
self._coro_auth.send(data)
self._coro_unstable_event.send(data)
self._coro_event.send(data)
for c in self._coros:
c.send(data)
return len(data)
@ -1355,3 +1425,37 @@ class SendJoinParser(ByteParser[SendJoinResponse]):
self._response.event_dict, self._room_version
)
return self._response
class _StateParser(ByteParser[StateRequestResponse]):
"""A parser for the response to `/state` requests.
Args:
room_version: The version of the room.
"""
CONTENT_TYPE = "application/json"
def __init__(self, room_version: RoomVersion):
self._response = StateRequestResponse([], [])
self._room_version = room_version
self._coros = [
ijson.items_coro(
_event_list_parser(room_version, self._response.state),
"pdus.item",
use_float=True,
),
ijson.items_coro(
_event_list_parser(room_version, self._response.auth_events),
"auth_chain.item",
use_float=True,
),
]
def write(self, data: bytes) -> int:
for c in self._coros:
c.send(data)
return len(data)
def finish(self) -> StateRequestResponse:
return self._response
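The coroutine-based parsers above decode large responses incrementally instead of buffering them in full; a self-contained sketch of the same ijson pattern (the "pdus" prefix matches `_StateParser` above, the payload data is invented):

import ijson

events = []

@ijson.coroutine
def _collect(out):
    # Receive each fully-parsed item at the target prefix and accumulate it.
    while True:
        out.append((yield))

# Parse items under the top-level "pdus" array as the bytes arrive.
coro = ijson.items_coro(_collect(events), "pdus.item", use_float=True)
for chunk in (b'{"pdus": [{"a": 1}', b', {"b": 2}]}'):
    coro.send(chunk)
coro.close()
print(events)  # [{'a': 1}, {'b': 2}]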

View file

@ -24,6 +24,7 @@ from synapse.federation.transport.server._base import (
)
from synapse.federation.transport.server.federation import (
FEDERATION_SERVLET_CLASSES,
FederationAccountStatusServlet,
FederationTimestampLookupServlet,
)
from synapse.federation.transport.server.groups_local import GROUP_LOCAL_SERVLET_CLASSES
@ -336,6 +337,13 @@ def register_servlets(
):
continue
# Only allow the `/account_status` servlet if msc3720 is enabled
if (
servletclass == FederationAccountStatusServlet
and not hs.config.experimental.msc3720_enabled
):
continue
servletclass(
hs=hs,
authenticator=authenticator,

View file

@ -55,7 +55,7 @@ class Authenticator:
self._clock = hs.get_clock()
self.keyring = hs.get_keyring()
self.server_name = hs.hostname
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.federation_domain_whitelist = (
hs.config.federation.federation_domain_whitelist
)

View file

@ -110,7 +110,7 @@ class FederationSendServlet(BaseFederationServerServlet):
if issue_8631_logger.isEnabledFor(logging.DEBUG):
DEVICE_UPDATE_EDUS = ["m.device_list_update", "m.signing_key_update"]
device_list_updates = [
edu.content
edu.get("content", {})
for edu in transaction_data.get("edus", [])
if edu.get("edu_type") in DEVICE_UPDATE_EDUS
]
@ -624,81 +624,6 @@ class FederationVersionServlet(BaseFederationServlet):
)
class FederationSpaceSummaryServlet(BaseFederationServlet):
PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc2946"
PATH = "/spaces/(?P<room_id>[^/]*)"
def __init__(
self,
hs: "HomeServer",
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self.handler = hs.get_room_summary_handler()
async def on_GET(
self,
origin: str,
content: Literal[None],
query: Mapping[bytes, Sequence[bytes]],
room_id: str,
) -> Tuple[int, JsonDict]:
suggested_only = parse_boolean_from_args(query, "suggested_only", default=False)
max_rooms_per_space = parse_integer_from_args(query, "max_rooms_per_space")
if max_rooms_per_space is not None and max_rooms_per_space < 0:
raise SynapseError(
400,
"Value for 'max_rooms_per_space' must be a non-negative integer",
Codes.BAD_JSON,
)
exclude_rooms = parse_strings_from_args(query, "exclude_rooms", default=[])
return 200, await self.handler.federation_space_summary(
origin, room_id, suggested_only, max_rooms_per_space, exclude_rooms
)
# TODO When switching to the stable endpoint, remove the POST handler.
async def on_POST(
self,
origin: str,
content: JsonDict,
query: Mapping[bytes, Sequence[bytes]],
room_id: str,
) -> Tuple[int, JsonDict]:
suggested_only = content.get("suggested_only", False)
if not isinstance(suggested_only, bool):
raise SynapseError(
400, "'suggested_only' must be a boolean", Codes.BAD_JSON
)
exclude_rooms = content.get("exclude_rooms", [])
if not isinstance(exclude_rooms, list) or any(
not isinstance(x, str) for x in exclude_rooms
):
raise SynapseError(400, "bad value for 'exclude_rooms'", Codes.BAD_JSON)
max_rooms_per_space = content.get("max_rooms_per_space")
if max_rooms_per_space is not None:
if not isinstance(max_rooms_per_space, int):
raise SynapseError(
400, "bad value for 'max_rooms_per_space'", Codes.BAD_JSON
)
if max_rooms_per_space < 0:
raise SynapseError(
400,
"Value for 'max_rooms_per_space' must be a non-negative integer",
Codes.BAD_JSON,
)
return 200, await self.handler.federation_space_summary(
origin, room_id, suggested_only, max_rooms_per_space, exclude_rooms
)
class FederationRoomHierarchyServlet(BaseFederationServlet):
PATH = "/hierarchy/(?P<room_id>[^/]*)"
@ -746,7 +671,7 @@ class RoomComplexityServlet(BaseFederationServlet):
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self._store = self.hs.get_datastore()
self._store = self.hs.get_datastores().main
async def on_GET(
self,
@ -766,6 +691,40 @@ class RoomComplexityServlet(BaseFederationServlet):
return 200, complexity
class FederationAccountStatusServlet(BaseFederationServerServlet):
PATH = "/query/account_status"
PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc3720"
def __init__(
self,
hs: "HomeServer",
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self._account_handler = hs.get_account_handler()
async def on_POST(
self,
origin: str,
content: JsonDict,
query: Mapping[bytes, Sequence[bytes]],
room_id: str,
) -> Tuple[int, JsonDict]:
if "user_ids" not in content:
raise SynapseError(
400, "Required parameter 'user_ids' is missing", Codes.MISSING_PARAM
)
statuses, failures = await self._account_handler.get_account_statuses(
content["user_ids"],
allow_remote=False,
)
return 200, {"account_statuses": statuses, "failures": failures}
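For reference, a sketch of the request and response bodies this servlet exchanges, matching the `AccountHandler` logic further down (user IDs invented):

# Illustrative MSC3720 account status exchange.
request_body = {"user_ids": ["@alice:example.org", "@unknown:example.org"]}

# Local users that don't exist come back as {"exists": False}; "failures" is
# only populated when a remote lookup fails, which cannot happen here since
# the servlet passes allow_remote=False.
response_body = {
    "account_statuses": {
        "@alice:example.org": {"exists": True, "deactivated": False},
        "@unknown:example.org": {"exists": False},
    },
    "failures": [],
}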
FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
FederationSendServlet,
FederationEventServlet,
@ -792,9 +751,9 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
On3pidBindServlet,
FederationVersionServlet,
RoomComplexityServlet,
FederationSpaceSummaryServlet,
FederationRoomHierarchyServlet,
FederationRoomHierarchyUnstableServlet,
FederationV1SendKnockServlet,
FederationMakeKnockServlet,
FederationAccountStatusServlet,
)

View file

@ -140,7 +140,7 @@ class GroupAttestionRenewer:
def __init__(self, hs: "HomeServer"):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.assestations = hs.get_groups_attestation_signing()
self.transport_client = hs.get_federation_transport_client()
self.is_mine_id = hs.is_mine_id

View file

@ -45,7 +45,7 @@ MAX_LONG_DESC_LEN = 10000
class GroupsServerWorkerHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.room_list_handler = hs.get_room_list_handler()
self.auth = hs.get_auth()
self.clock = hs.get_clock()

144
synapse/handlers/account.py Normal file
View file

@ -0,0 +1,144 @@
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING, Dict, List, Tuple
from synapse.api.errors import Codes, SynapseError
from synapse.types import JsonDict, UserID
if TYPE_CHECKING:
from synapse.server import HomeServer
class AccountHandler:
def __init__(self, hs: "HomeServer"):
self._main_store = hs.get_datastores().main
self._is_mine = hs.is_mine
self._federation_client = hs.get_federation_client()
async def get_account_statuses(
self,
user_ids: List[str],
allow_remote: bool,
) -> Tuple[JsonDict, List[str]]:
"""Get account statuses for a list of user IDs.
If one or more account(s) belong to remote homeservers, retrieve their status(es)
over federation if allowed.
Args:
user_ids: The list of accounts to retrieve the status of.
allow_remote: Whether to try to retrieve the status of remote accounts, if
any.
Returns:
The account statuses as well as the list of users whose statuses could not be
retrieved.
Raises:
SynapseError if a required parameter is missing or malformed, or if one of
the accounts isn't local to this homeserver and allow_remote is False.
"""
statuses = {}
failures = []
remote_users: List[UserID] = []
for raw_user_id in user_ids:
try:
user_id = UserID.from_string(raw_user_id)
except SynapseError:
raise SynapseError(
400,
f"Not a valid Matrix user ID: {raw_user_id}",
Codes.INVALID_PARAM,
)
if self._is_mine(user_id):
status = await self._get_local_account_status(user_id)
statuses[user_id.to_string()] = status
else:
if not allow_remote:
raise SynapseError(
400,
f"Not a local user: {raw_user_id}",
Codes.INVALID_PARAM,
)
remote_users.append(user_id)
if allow_remote and len(remote_users) > 0:
remote_statuses, remote_failures = await self._get_remote_account_statuses(
remote_users,
)
statuses.update(remote_statuses)
failures += remote_failures
return statuses, failures
async def _get_local_account_status(self, user_id: UserID) -> JsonDict:
"""Retrieve the status of a local account.
Args:
user_id: The account to retrieve the status of.
Returns:
The account's status.
"""
status = {"exists": False}
userinfo = await self._main_store.get_userinfo_by_id(user_id.to_string())
if userinfo is not None:
status = {
"exists": True,
"deactivated": userinfo.is_deactivated,
}
return status
async def _get_remote_account_statuses(
self, remote_users: List[UserID]
) -> Tuple[JsonDict, List[str]]:
"""Send out federation requests to retrieve the statuses of remote accounts.
Args:
remote_users: The accounts to retrieve the statuses of.
Returns:
The statuses of the accounts, and a list of accounts for which no status
could be retrieved.
"""
# Group remote users by destination, so we only send one request per remote
# homeserver.
by_destination: Dict[str, List[str]] = {}
for user in remote_users:
if user.domain not in by_destination:
by_destination[user.domain] = []
by_destination[user.domain].append(user.to_string())
# Retrieve the statuses and failures for remote accounts.
final_statuses: JsonDict = {}
final_failures: List[str] = []
for destination, users in by_destination.items():
statuses, failures = await self._federation_client.get_account_status(
destination,
users,
)
final_statuses.update(statuses)
final_failures += failures
return final_statuses, final_failures
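The grouping step is a plain bucket-by-domain; a self-contained sketch of the same idea (illustrative user IDs, simplified types):

from collections import defaultdict
from typing import Dict, List

def group_by_destination(user_ids: List[str]) -> Dict[str, List[str]]:
    """Bucket Matrix user IDs by homeserver so we send one federation
    request per destination."""
    by_destination: Dict[str, List[str]] = defaultdict(list)
    for user_id in user_ids:
        # A Matrix user ID is "@localpart:domain"; everything after the
        # first colon is the server name.
        domain = user_id.split(":", 1)[1]
        by_destination[domain].append(user_id)
    return dict(by_destination)

assert group_by_destination(
    ["@a:hs1.test", "@b:hs2.test", "@c:hs1.test"]
) == {"hs1.test": ["@a:hs1.test", "@c:hs1.test"], "hs2.test": ["@b:hs2.test"]}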

View file

@ -30,7 +30,7 @@ if TYPE_CHECKING:
class AccountDataHandler:
def __init__(self, hs: "HomeServer"):
self._store = hs.get_datastore()
self._store = hs.get_datastores().main
self._instance_name = hs.get_instance_name()
self._notifier = hs.get_notifier()
@ -166,7 +166,7 @@ class AccountDataHandler:
class AccountDataEventSource(EventSource[int, JsonDict]):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
def get_current_key(self, direction: str = "f") -> int:
return self.store.get_max_account_data_stream_id()

View file

@ -43,7 +43,7 @@ class AccountValidityHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.config = hs.config
self.store = self.hs.get_datastore()
self.store = self.hs.get_datastores().main
self.send_email_handler = self.hs.get_send_email_handler()
self.clock = self.hs.get_clock()

View file

@ -29,7 +29,7 @@ logger = logging.getLogger(__name__)
class AdminHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.storage = hs.get_storage()
self.state_store = self.storage.state

View file

@ -47,7 +47,7 @@ events_processed_counter = Counter("synapse_handlers_appservice_events_processed
class ApplicationServicesHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.is_mine_id = hs.is_mine_id
self.appservice_api = hs.get_application_service_api()
self.scheduler = hs.get_application_service_scheduler()

View file

@ -194,7 +194,7 @@ class AuthHandler:
SESSION_EXPIRE_MS = 48 * 60 * 60 * 1000
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.auth = hs.get_auth()
self.clock = hs.get_clock()
self.checkers: Dict[str, UserInteractiveAuthChecker] = {}
@ -1183,7 +1183,7 @@ class AuthHandler:
# No password providers were able to handle this 3pid
# Check local store
user_id = await self.hs.get_datastore().get_user_id_by_threepid(
user_id = await self.hs.get_datastores().main.get_user_id_by_threepid(
medium, address
)
if not user_id:
@ -2064,6 +2064,10 @@ GET_USERNAME_FOR_REGISTRATION_CALLBACK = Callable[
[JsonDict, JsonDict],
Awaitable[Optional[str]],
]
GET_DISPLAYNAME_FOR_REGISTRATION_CALLBACK = Callable[
[JsonDict, JsonDict],
Awaitable[Optional[str]],
]
IS_3PID_ALLOWED_CALLBACK = Callable[[str, str, bool], Awaitable[bool]]
@ -2080,6 +2084,9 @@ class PasswordAuthProvider:
self.get_username_for_registration_callbacks: List[
GET_USERNAME_FOR_REGISTRATION_CALLBACK
] = []
self.get_displayname_for_registration_callbacks: List[
GET_DISPLAYNAME_FOR_REGISTRATION_CALLBACK
] = []
self.is_3pid_allowed_callbacks: List[IS_3PID_ALLOWED_CALLBACK] = []
# Mapping from login type to login parameters
@ -2099,6 +2106,9 @@ class PasswordAuthProvider:
get_username_for_registration: Optional[
GET_USERNAME_FOR_REGISTRATION_CALLBACK
] = None,
get_displayname_for_registration: Optional[
GET_DISPLAYNAME_FOR_REGISTRATION_CALLBACK
] = None,
) -> None:
# Register check_3pid_auth callback
if check_3pid_auth is not None:
@ -2148,6 +2158,11 @@ class PasswordAuthProvider:
get_username_for_registration,
)
if get_displayname_for_registration is not None:
self.get_displayname_for_registration_callbacks.append(
get_displayname_for_registration,
)
if is_3pid_allowed is not None:
self.is_3pid_allowed_callbacks.append(is_3pid_allowed)
@ -2350,6 +2365,49 @@ class PasswordAuthProvider:
return None
async def get_displayname_for_registration(
self,
uia_results: JsonDict,
params: JsonDict,
) -> Optional[str]:
"""Defines the display name to use when registering the user, using the
credentials and parameters provided during the UIA flow.
Stops at the first callback that returns a string.
Args:
uia_results: The credentials provided during the UIA flow.
params: The parameters provided by the registration request.
Returns:
The display name to use for the new user, or None if no callback returned one.
"""
for callback in self.get_displayname_for_registration_callbacks:
try:
res = await callback(uia_results, params)
if isinstance(res, str):
return res
elif res is not None:
# mypy complains that this line is unreachable because it assumes the
# data returned by the module fits the expected type. We just want
# to make sure this is the case.
logger.warning( # type: ignore[unreachable]
"Ignoring non-string value returned by"
" get_displayname_for_registration callback %s: %s",
callback,
res,
)
except Exception as e:
logger.error(
"Module raised an exception in get_displayname_for_registration: %s",
e,
)
raise SynapseError(code=500, msg="Internal Server Error")
return None
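On the module side, this is consumed by registering a callback; a minimal sketch, assuming the standard `register_password_auth_provider_callbacks` module-API hook (the `displayname` request parameter and the length cap are illustrative):

from typing import Optional

class MyAuthModule:
    def __init__(self, config: dict, api):
        api.register_password_auth_provider_callbacks(
            get_displayname_for_registration=self.pick_displayname,
        )

    async def pick_displayname(
        self, uia_results: dict, params: dict
    ) -> Optional[str]:
        # Returning a str sets the new account's display name; returning
        # None defers to the next callback (or Synapse's default).
        requested = params.get("displayname")
        if isinstance(requested, str) and 0 < len(requested) <= 256:
            return requested
        return None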
async def is_3pid_allowed(
self,
medium: str,

View file

@ -61,7 +61,7 @@ class CasHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self._hostname = hs.hostname
self._store = hs.get_datastore()
self._store = hs.get_datastores().main
self._auth_handler = hs.get_auth_handler()
self._registration_handler = hs.get_registration_handler()

View file

@ -29,7 +29,7 @@ class DeactivateAccountHandler:
"""Handler which deals with deactivating user accounts."""
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.hs = hs
self._auth_handler = hs.get_auth_handler()
self._device_handler = hs.get_device_handler()
@ -38,6 +38,7 @@ class DeactivateAccountHandler:
self._profile_handler = hs.get_profile_handler()
self.user_directory_handler = hs.get_user_directory_handler()
self._server_name = hs.hostname
self._third_party_rules = hs.get_third_party_event_rules()
# Flag that indicates whether the process to part users from rooms is running
self._user_parter_running = False
@ -135,9 +136,13 @@ class DeactivateAccountHandler:
if erase_data:
user = UserID.from_string(user_id)
# Remove avatar URL from this user
await self._profile_handler.set_avatar_url(user, requester, "", by_admin)
await self._profile_handler.set_avatar_url(
user, requester, "", by_admin, deactivation=True
)
# Remove displayname from this user
await self._profile_handler.set_displayname(user, requester, "", by_admin)
await self._profile_handler.set_displayname(
user, requester, "", by_admin, deactivation=True
)
logger.info("Marking %s as erased", user_id)
await self.store.mark_user_erased(user_id)
@ -160,6 +165,13 @@ class DeactivateAccountHandler:
# Remove account data (including ignored users and push rules).
await self.store.purge_account_data_for_user(user_id)
# Let modules know the user has been deactivated.
await self._third_party_rules.on_user_deactivation_status_changed(
user_id,
True,
by_admin,
)
return identity_server_supports_unbinding
async def _reject_pending_invites_for_user(self, user_id: str) -> None:
@ -264,6 +276,10 @@ class DeactivateAccountHandler:
# Mark the user as active.
await self.store.set_user_deactivated_status(user_id, False)
await self._third_party_rules.on_user_deactivation_status_changed(
user_id, False, True
)
# Add the user to the directory, if necessary. Note that
# this must be done after the user is re-activated, because
# deactivated users are excluded from the user directory.
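From a module's point of view, the new hook is a single async callback; a hedged sketch, assuming registration via the third-party-rules module API (module name and logging are illustrative):

import logging

logger = logging.getLogger(__name__)

class DeactivationAuditModule:
    def __init__(self, config: dict, api):
        api.register_third_party_rules_callbacks(
            on_user_deactivation_status_changed=self.on_status_changed,
        )

    async def on_status_changed(
        self, user_id: str, deactivated: bool, by_admin: bool
    ) -> None:
        # Called after the change is committed; deactivated=False means the
        # account was reactivated (which, per the handler above, is always
        # an admin action).
        logger.info(
            "%s was %s (admin: %s)",
            user_id,
            "deactivated" if deactivated else "reactivated",
            by_admin,
        )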

View file

@ -63,7 +63,7 @@ class DeviceWorkerHandler:
def __init__(self, hs: "HomeServer"):
self.clock = hs.get_clock()
self.hs = hs
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.notifier = hs.get_notifier()
self.state = hs.get_state_handler()
self.state_store = hs.get_storage().state
@ -628,7 +628,7 @@ class DeviceListUpdater:
"Handles incoming device list updates from federation and updates the DB"
def __init__(self, hs: "HomeServer", device_handler: DeviceHandler):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.federation = hs.get_federation_client()
self.clock = hs.get_clock()
self.device_handler = device_handler

View file

@ -43,7 +43,7 @@ class DeviceMessageHandler:
Args:
hs: server
"""
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.notifier = hs.get_notifier()
self.is_mine = hs.is_mine

View file

@ -44,7 +44,7 @@ class DirectoryHandler:
self.state = hs.get_state_handler()
self.appservice_handler = hs.get_application_service_handler()
self.event_creation_handler = hs.get_event_creation_handler()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.config = hs.config
self.enable_room_list_search = hs.config.roomdirectory.enable_room_list_search
self.require_membership = hs.config.server.require_membership_for_aliases

View file

@ -47,7 +47,7 @@ logger = logging.getLogger(__name__)
class E2eKeysHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.federation = hs.get_federation_client()
self.device_handler = hs.get_device_handler()
self.is_mine = hs.is_mine
@ -1335,7 +1335,7 @@ class SigningKeyEduUpdater:
"""Handles incoming signing key updates from federation and updates the DB"""
def __init__(self, hs: "HomeServer", e2e_keys_handler: E2eKeysHandler):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.federation = hs.get_federation_client()
self.clock = hs.get_clock()
self.e2e_keys_handler = e2e_keys_handler

View file

@ -45,7 +45,7 @@ class E2eRoomKeysHandler:
"""
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
# Used to lock whenever a client is uploading key data. This prevents collisions
# between clients trying to upload the details of a new session, given all

View file

@ -43,7 +43,7 @@ class EventAuthHandler:
def __init__(self, hs: "HomeServer"):
self._clock = hs.get_clock()
self._store = hs.get_datastore()
self._store = hs.get_datastores().main
self._server_name = hs.hostname
async def check_auth_rules_from_context(

View file

@ -33,7 +33,7 @@ logger = logging.getLogger(__name__)
class EventStreamHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.clock = hs.get_clock()
self.hs = hs
@ -134,7 +134,7 @@ class EventStreamHandler:
class EventHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.storage = hs.get_storage()
async def get_event(

View file

@ -49,8 +49,8 @@ from synapse.logging.context import (
make_deferred_yieldable,
nested_logging_context,
preserve_fn,
run_in_background,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.federation import (
ReplicationCleanRoomRestServlet,
ReplicationStoreRoomOnOutlierMembershipRestServlet,
@ -107,7 +107,7 @@ class FederationHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.storage = hs.get_storage()
self.state_store = self.storage.state
self.federation_client = hs.get_federation_client()
@ -516,11 +516,20 @@ class FederationHandler:
await self.store.upsert_room_on_join(
room_id=room_id,
room_version=room_version_obj,
auth_events=auth_chain,
state_events=state,
)
if ret.partial_state:
await self.store.store_partial_state_room(room_id, ret.servers_in_room)
max_stream_id = await self._federation_event_handler.process_remote_join(
origin, room_id, auth_chain, state, event, room_version_obj
origin,
room_id,
auth_chain,
state,
event,
room_version_obj,
partial_state=ret.partial_state,
)
# We wait here until this instance has seen the events come down
@ -559,7 +568,9 @@ class FederationHandler:
# lots of requests for missing prev_events which we do actually
# have. Hence we fire off the background task, but don't wait for it.
run_in_background(self._handle_queued_pdus, room_queue)
run_as_background_process(
"handle_queued_pdus", self._handle_queued_pdus, room_queue
)
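The change from `run_in_background` to `run_as_background_process` is about ownership: the queued-PDU work can outlive the join request that spawned it, so it should run under its own named logging context and be counted in the background-process metrics, rather than borrowing the request's context. A rough plain-asyncio analogy of the fire-and-forget shape (illustrative, not Synapse's implementation):

import asyncio

async def handle_queued_pdus(queue: list) -> None:
    for pdu in queue:
        await asyncio.sleep(0)  # stand-in for processing each queued PDU

async def handle_join(room_queue: list) -> None:
    # Spawn the work as an independent task and return without awaiting it.
    # Keeping a reference and surfacing failures mirrors what a named
    # background process gives you for free.
    task = asyncio.create_task(handle_queued_pdus(room_queue))
    task.add_done_callback(
        lambda t: t.exception()
        and print("handle_queued_pdus failed:", t.exception())
    )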
async def do_knock(
self,

View file

@ -95,7 +95,7 @@ class FederationEventHandler:
"""
def __init__(self, hs: "HomeServer"):
self._store = hs.get_datastore()
self._store = hs.get_datastores().main
self._storage = hs.get_storage()
self._state_store = self._storage.state
@ -397,6 +397,7 @@ class FederationEventHandler:
state: List[EventBase],
event: EventBase,
room_version: RoomVersion,
partial_state: bool,
) -> int:
"""Persists the events returned by a send_join
@ -412,6 +413,7 @@ class FederationEventHandler:
event
room_version: The room version we expect this room to have, and
will raise if it doesn't match the version in the create event.
partial_state: True if the state omits non-critical membership events
Returns:
The stream ID after which all events have been persisted.
@ -419,10 +421,8 @@ class FederationEventHandler:
Raises:
SynapseError if the response is in some way invalid.
"""
event_map = {e.event_id: e for e in itertools.chain(auth_events, state)}
create_event = None
for e in auth_events:
for e in state:
if (e.type, e.state_key) == (EventTypes.Create, ""):
create_event = e
break
@ -439,11 +439,6 @@ class FederationEventHandler:
if room_version.identifier != room_version_id:
raise SynapseError(400, "Room version mismatch")
# filter out any events we have already seen
seen_remotes = await self._store.have_seen_events(room_id, event_map.keys())
for s in seen_remotes:
event_map.pop(s, None)
# persist the auth chain and state events.
#
# any invalid events here will be marked as rejected, and we'll carry on.
@ -455,13 +450,19 @@ class FederationEventHandler:
# signatures right now doesn't mean that we will *never* be able to, so it
# is premature to reject them.
#
await self._auth_and_persist_outliers(room_id, event_map.values())
await self._auth_and_persist_outliers(
room_id, itertools.chain(auth_events, state)
)
# and now persist the join event itself.
logger.info("Peristing join-via-remote %s", event)
logger.info(
"Peristing join-via-remote %s (partial_state: %s)", event, partial_state
)
with nested_logging_context(suffix=event.event_id):
context = await self._state_handler.compute_event_context(
event, old_state=state
event,
old_state=state,
partial_state=partial_state,
)
context = await self._check_event_auth(origin, event, context)
@ -703,6 +704,8 @@ class FederationEventHandler:
try:
state = await self._resolve_state_at_missing_prevs(origin, event)
# TODO(faster_joins): make sure that _resolve_state_at_missing_prevs does
# not return partial state
await self._process_received_pdu(
origin, event, state=state, backfilled=backfilled
)
@ -1245,6 +1248,16 @@ class FederationEventHandler:
"""
event_map = {event.event_id: event for event in events}
# filter out any events we have already seen. This might happen because
# the events were eagerly pushed to us (eg, during a room join), or because
# another thread has raced against us since we decided to request the event.
#
# This is just an optimisation, so it doesn't need to be watertight - the event
# persister does another round of deduplication.
seen_remotes = await self._store.have_seen_events(room_id, event_map.keys())
for s in seen_remotes:
event_map.pop(s, None)
# XXX: it might be possible to kick this process off in parallel with fetching
# the events.
while event_map:
@ -1717,31 +1730,22 @@ class FederationEventHandler:
event_id: the event for which we are lacking auth events
"""
try:
remote_event_map = {
e.event_id: e
for e in await self._federation_client.get_event_auth(
destination, room_id, event_id
)
}
remote_events = await self._federation_client.get_event_auth(
destination, room_id, event_id
)
except RequestSendFailed as e1:
# The other side isn't around or doesn't implement the
# endpoint, so lets just bail out.
logger.info("Failed to get event auth from remote: %s", e1)
return
logger.info("/event_auth returned %i events", len(remote_event_map))
logger.info("/event_auth returned %i events", len(remote_events))
# `event` may be returned, but we should not yet process it.
remote_event_map.pop(event_id, None)
remote_auth_events = (e for e in remote_events if e.event_id != event_id)
# nor should we reprocess any events we have already seen.
seen_remotes = await self._store.have_seen_events(
room_id, remote_event_map.keys()
)
for s in seen_remotes:
remote_event_map.pop(s, None)
await self._auth_and_persist_outliers(room_id, remote_event_map.values())
await self._auth_and_persist_outliers(room_id, remote_auth_events)
async def _update_context_for_auth_events(
self, event: EventBase, context: EventContext, auth_events: StateMap[EventBase]
@ -1795,6 +1799,7 @@ class FederationEventHandler:
prev_state_ids=prev_state_ids,
prev_group=prev_group,
delta_ids=state_updates,
partial_state=context.partial_state,
)
async def _run_push_actions_and_persist_event(

View file

@ -63,7 +63,7 @@ def _create_rerouter(func_name: str) -> Callable[..., Awaitable[JsonDict]]:
class GroupsLocalWorkerHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.room_list_handler = hs.get_room_list_handler()
self.groups_server_handler = hs.get_groups_server_handler()
self.transport_client = hs.get_federation_transport_client()

View file

@ -49,7 +49,7 @@ id_server_scheme = "https://"
class IdentityHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
# An HTTP client for contacting trusted URLs.
self.http_client = SimpleHttpClient(hs)
# An HTTP client for contacting identity servers specified by clients.

View file

@ -46,7 +46,7 @@ logger = logging.getLogger(__name__)
class InitialSyncHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.auth = hs.get_auth()
self.state_handler = hs.get_state_handler()
self.hs = hs

View file

@ -55,8 +55,8 @@ from synapse.replication.http.send_event import ReplicationSendEventRestServlet
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
from synapse.storage.state import StateFilter
from synapse.types import Requester, RoomAlias, StreamToken, UserID, create_requester
from synapse.util import json_decoder, json_encoder, log_failure
from synapse.util.async_helpers import Linearizer, gather_results, unwrapFirstError
from synapse.util import json_decoder, json_encoder, log_failure, unwrapFirstError
from synapse.util.async_helpers import Linearizer, gather_results
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.metrics import measure_func
from synapse.visibility import filter_events_for_client
@ -75,7 +75,7 @@ class MessageHandler:
self.auth = hs.get_auth()
self.clock = hs.get_clock()
self.state = hs.get_state_handler()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.storage = hs.get_storage()
self.state_store = self.storage.state
self._event_serializer = hs.get_event_client_serializer()
@ -397,7 +397,7 @@ class EventCreationHandler:
self.hs = hs
self.auth = hs.get_auth()
self._event_auth_handler = hs.get_event_auth_handler()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.storage = hs.get_storage()
self.state = hs.get_state_handler()
self.clock = hs.get_clock()
@ -552,10 +552,11 @@ class EventCreationHandler:
if event_dict["type"] == EventTypes.Create and event_dict["state_key"] == "":
room_version_id = event_dict["content"]["room_version"]
room_version_obj = KNOWN_ROOM_VERSIONS.get(room_version_id)
if not room_version_obj:
maybe_room_version_obj = KNOWN_ROOM_VERSIONS.get(room_version_id)
if not maybe_room_version_obj:
# this can happen if support is withdrawn for a room version
raise UnsupportedRoomVersionError(room_version_id)
room_version_obj = maybe_room_version_obj
else:
try:
room_version_obj = await self.store.get_room_version(
@ -993,6 +994,8 @@ class EventCreationHandler:
and full_state_ids_at_event
and builder.internal_metadata.is_historical()
):
# TODO(faster_joins): figure out how this works, and make sure that the
# old state is complete.
old_state = await self.store.get_events_as_list(full_state_ids_at_event)
context = await self.state.compute_event_context(event, old_state=old_state)
else:
@ -1147,12 +1150,13 @@ class EventCreationHandler:
room_version_id = event.content.get(
"room_version", RoomVersions.V1.identifier
)
room_version_obj = KNOWN_ROOM_VERSIONS.get(room_version_id)
if not room_version_obj:
maybe_room_version_obj = KNOWN_ROOM_VERSIONS.get(room_version_id)
if not maybe_room_version_obj:
raise UnsupportedRoomVersionError(
"Attempt to create a room with unsupported room version %s"
% (room_version_id,)
)
room_version_obj = maybe_room_version_obj
else:
room_version_obj = await self.store.get_room_version(event.room_id)
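Both hunks above apply the same pattern: look the version up into a temporary name, raise if it is None, and only then assign the non-Optional name, so mypy keeps `room_version_obj`'s type narrow on every branch. Reduced to its essentials (stand-in types, not Synapse code):

from typing import Dict, Optional

KNOWN_VERSIONS: Dict[str, int] = {"9": 9, "10": 10}  # stand-in for RoomVersion objects

def get_version(version_id: str) -> int:
    maybe_version: Optional[int] = KNOWN_VERSIONS.get(version_id)
    if maybe_version is None:
        # Can happen when support for a room version is withdrawn.
        raise ValueError(f"unsupported room version {version_id}")
    # mypy has narrowed maybe_version to int, so this assignment type-checks.
    version: int = maybe_version
    return version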

View file

@ -273,7 +273,7 @@ class OidcProvider:
token_generator: "OidcSessionTokenGenerator",
provider: OidcProviderConfig,
):
self._store = hs.get_datastore()
self._store = hs.get_datastores().main
self._token_generator = token_generator

View file

@ -127,7 +127,7 @@ class PaginationHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.storage = hs.get_storage()
self.state_store = self.storage.state
self.clock = hs.get_clock()

View file

@ -133,7 +133,7 @@ class BasePresenceHandler(abc.ABC):
def __init__(self, hs: "HomeServer"):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.presence_router = hs.get_presence_router()
self.state = hs.get_state_handler()
self.is_mine_id = hs.is_mine_id
@ -204,25 +204,27 @@ class BasePresenceHandler(abc.ABC):
Returns:
dict: `user_id` -> `UserPresenceState`
"""
states = {
user_id: self.user_to_current_state.get(user_id, None)
for user_id in user_ids
}
states = {}
missing = []
for user_id in user_ids:
state = self.user_to_current_state.get(user_id, None)
if state:
states[user_id] = state
else:
missing.append(user_id)
missing = [user_id for user_id, state in states.items() if not state]
if missing:
# There are things not in our in memory cache. Lets pull them out of
# the database.
res = await self.store.get_presence_for_users(missing)
states.update(res)
missing = [user_id for user_id, state in states.items() if not state]
if missing:
new = {
user_id: UserPresenceState.default(user_id) for user_id in missing
}
states.update(new)
self.user_to_current_state.update(new)
for user_id in missing:
# if user has no state in database, create the state
if not res.get(user_id, None):
new_state = UserPresenceState.default(user_id)
states[user_id] = new_state
self.user_to_current_state[user_id] = new_state
return states
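The rewritten lookup above is a three-tier read: in-memory cache, then one batched database query, then a synthesized default that is written back to the cache. A standalone sketch of the same flow (simplified types, synchronous store stub):

from typing import Callable, Dict, Iterable, List

def get_states(
    cache: Dict[str, dict],
    store_lookup: Callable[[List[str]], Dict[str, dict]],
    user_ids: Iterable[str],
) -> Dict[str, dict]:
    states: Dict[str, dict] = {}
    missing: List[str] = []

    # Tier 1: in-memory cache.
    for user_id in user_ids:
        state = cache.get(user_id)
        if state:
            states[user_id] = state
        else:
            missing.append(user_id)

    if missing:
        # Tier 2: the database, one batched query for everything we missed.
        res = store_lookup(missing)
        states.update(res)

        # Tier 3: users with no row at all get a default state, which is
        # also cached so the next call is a tier-1 hit.
        for user_id in missing:
            if not res.get(user_id):
                default = {"user_id": user_id, "state": "offline"}
                states[user_id] = default
                cache[user_id] = default

    return states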
@ -1539,7 +1541,7 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
self.get_presence_handler = hs.get_presence_handler
self.get_presence_router = hs.get_presence_router
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
async def get_new_events(
self,

View file

@ -54,7 +54,7 @@ class ProfileHandler:
PROFILE_UPDATE_EVERY_MS = 24 * 60 * 60 * 1000
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.clock = hs.get_clock()
self.hs = hs
@ -71,6 +71,8 @@ class ProfileHandler:
self.server_name = hs.config.server.server_name
self._third_party_rules = hs.get_third_party_event_rules()
if hs.config.worker.run_background_tasks:
self.clock.looping_call(
self._update_remote_profile_cache, self.PROFILE_UPDATE_MS
@ -171,6 +173,7 @@ class ProfileHandler:
requester: Requester,
new_displayname: str,
by_admin: bool = False,
deactivation: bool = False,
) -> None:
"""Set the displayname of a user
@ -179,6 +182,7 @@ class ProfileHandler:
requester: The user attempting to make this change.
new_displayname: The displayname to give this user.
by_admin: Whether this change was made by an administrator.
deactivation: Whether this change was made while deactivating the user.
"""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this homeserver")
@ -227,6 +231,10 @@ class ProfileHandler:
target_user.to_string(), profile
)
await self._third_party_rules.on_profile_update(
target_user.to_string(), profile, by_admin, deactivation
)
await self._update_join_states(requester, target_user)
async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
@ -261,6 +269,7 @@ class ProfileHandler:
requester: Requester,
new_avatar_url: str,
by_admin: bool = False,
deactivation: bool = False,
) -> None:
"""Set a new avatar URL for a user.
@ -269,6 +278,7 @@ class ProfileHandler:
requester: The user attempting to make this change.
new_avatar_url: The avatar URL to give this user.
by_admin: Whether this change was made by an administrator.
deactivation: Whether this change was made while deactivating the user.
"""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this homeserver")
@ -315,6 +325,10 @@ class ProfileHandler:
target_user.to_string(), profile
)
await self._third_party_rules.on_profile_update(
target_user.to_string(), profile, by_admin, deactivation
)
await self._update_join_states(requester, target_user)
@cached()

View file

@ -27,7 +27,7 @@ logger = logging.getLogger(__name__)
class ReadMarkerHandler:
def __init__(self, hs: "HomeServer"):
self.server_name = hs.config.server.server_name
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.account_data_handler = hs.get_account_data_handler()
self.read_marker_linearizer = Linearizer(name="read_marker")

View file

@ -29,7 +29,7 @@ class ReceiptsHandler:
def __init__(self, hs: "HomeServer"):
self.notifier = hs.get_notifier()
self.server_name = hs.config.server.server_name
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.event_auth_handler = hs.get_event_auth_handler()
self.hs = hs
@ -164,7 +164,7 @@ class ReceiptsHandler:
class ReceiptEventSource(EventSource[int, JsonDict]):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.config = hs.config
@staticmethod

View file

@ -86,7 +86,7 @@ class LoginDict(TypedDict):
class RegistrationHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.clock = hs.get_clock()
self.hs = hs
self.auth = hs.get_auth()
@ -327,12 +327,12 @@ class RegistrationHandler:
if fail_count > 10:
raise SynapseError(500, "Unable to find a suitable guest user ID")
localpart = await self.store.generate_user_id()
user = UserID(localpart, self.hs.hostname)
generated_localpart = await self.store.generate_user_id()
user = UserID(generated_localpart, self.hs.hostname)
user_id = user.to_string()
self.check_user_id_not_appservice_exclusive(user_id)
if generate_display_name:
default_display_name = localpart
default_display_name = generated_localpart
try:
await self.register_with_store(
user_id=user_id,

View file

@ -105,7 +105,7 @@ class EventContext:
class RoomCreationHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.auth = hs.get_auth()
self.clock = hs.get_clock()
self.hs = hs
@ -1125,7 +1125,7 @@ class RoomContextHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.storage = hs.get_storage()
self.state_store = self.storage.state
@ -1256,7 +1256,7 @@ class RoomContextHandler:
class TimestampLookupHandler:
def __init__(self, hs: "HomeServer"):
self.server_name = hs.hostname
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.state_handler = hs.get_state_handler()
self.federation_client = hs.get_federation_client()
@ -1396,7 +1396,7 @@ class TimestampLookupHandler:
class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
async def get_new_events(
self,
@ -1486,7 +1486,7 @@ class RoomShutdownHandler:
self._room_creation_handler = hs.get_room_creation_handler()
self._replication = hs.get_replication_data_handler()
self.event_creation_handler = hs.get_event_creation_handler()
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
async def shutdown_room(
self,

View file

@ -16,7 +16,7 @@ logger = logging.getLogger(__name__)
class RoomBatchHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.state_store = hs.get_storage().state
self.event_creation_handler = hs.get_event_creation_handler()
self.room_member_handler = hs.get_room_member_handler()

View file

@ -49,7 +49,7 @@ EMPTY_THIRD_PARTY_ID = ThirdPartyInstanceID(None, None)
class RoomListHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.hs = hs
self.enable_room_list_search = hs.config.roomdirectory.enable_room_list_search
self.response_cache: ResponseCache[

View file

@ -66,7 +66,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.auth = hs.get_auth()
self.state_handler = hs.get_state_handler()
self.config = hs.config
@ -82,6 +82,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self.event_auth_handler = hs.get_event_auth_handler()
self.member_linearizer: Linearizer = Linearizer(name="member")
self.member_as_limiter = Linearizer(max_count=10, name="member_as_limiter")
self.clock = hs.get_clock()
self.spam_checker = hs.get_spam_checker()
@ -500,25 +501,32 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
key = (room_id,)
with (await self.member_linearizer.queue(key)):
result = await self.update_membership_locked(
requester,
target,
room_id,
action,
txn_id=txn_id,
remote_room_hosts=remote_room_hosts,
third_party_signed=third_party_signed,
ratelimit=ratelimit,
content=content,
new_room=new_room,
require_consent=require_consent,
outlier=outlier,
historical=historical,
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
)
as_id = object()
if requester.app_service:
as_id = requester.app_service.id
# We first linearise by the application service (to try to limit concurrent joins
# by application services), and then by room ID.
with (await self.member_as_limiter.queue(as_id)):
with (await self.member_linearizer.queue(key)):
result = await self.update_membership_locked(
requester,
target,
room_id,
action,
txn_id=txn_id,
remote_room_hosts=remote_room_hosts,
third_party_signed=third_party_signed,
ratelimit=ratelimit,
content=content,
new_room=new_room,
require_consent=require_consent,
outlier=outlier,
historical=historical,
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
)
return result
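The nested `with` blocks above gate first on the application-service ID and then on the room ID, so a single AS can have at most ten membership operations in flight while per-room ordering is preserved. A plain-asyncio sketch of the same two-level gating (illustrative; Synapse's `Linearizer` additionally preserves logging contexts):

import asyncio
from typing import Dict, Hashable

class SimpleLinearizer:
    """Gate callers by key: at most `max_count` may hold a given key at once."""

    def __init__(self, max_count: int = 1):
        self._max_count = max_count
        self._semaphores: Dict[Hashable, asyncio.Semaphore] = {}

    def queue(self, key: Hashable) -> asyncio.Semaphore:
        return self._semaphores.setdefault(key, asyncio.Semaphore(self._max_count))

as_limiter = SimpleLinearizer(max_count=10)  # up to 10 in-flight operations per AS
member_linearizer = SimpleLinearizer()       # strictly serial per room

async def update_membership(as_id: str, room_id: str) -> None:
    # Outer gate: the application service; inner gate: the room.
    async with as_limiter.queue(as_id):
        async with member_linearizer.queue(room_id):
            ...  # perform the membership update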

View file

@ -15,7 +15,6 @@
import itertools
import logging
import re
from collections import deque
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Sequence, Set, Tuple
import attr
@ -90,7 +89,7 @@ class RoomSummaryHandler:
def __init__(self, hs: "HomeServer"):
self._event_auth_handler = hs.get_event_auth_handler()
self._store = hs.get_datastore()
self._store = hs.get_datastores().main
self._event_serializer = hs.get_event_client_serializer()
self._server_name = hs.hostname
self._federation_client = hs.get_federation_client()
@ -107,153 +106,6 @@ class RoomSummaryHandler:
"get_room_hierarchy",
)
async def get_space_summary(
self,
requester: str,
room_id: str,
suggested_only: bool = False,
max_rooms_per_space: Optional[int] = None,
) -> JsonDict:
"""
Implementation of the space summary C-S API
Args:
requester: user id of the user making this request
room_id: room id to start the summary at
suggested_only: whether we should only return children with the "suggested"
flag set.
max_rooms_per_space: an optional limit on the number of child rooms we will
return. This does not apply to the root room (ie, room_id), and
is overridden by MAX_ROOMS_PER_SPACE.
Returns:
summary dict to return
"""
# First of all, check that the room is accessible.
if not await self._is_local_room_accessible(room_id, requester):
raise AuthError(
403,
"User %s not in room %s, and room previews are disabled"
% (requester, room_id),
)
# the queue of rooms to process
room_queue = deque((_RoomQueueEntry(room_id, ()),))
# rooms we have already processed
processed_rooms: Set[str] = set()
# events we have already processed. We don't necessarily have their event ids,
# so instead we key on (room id, state key)
processed_events: Set[Tuple[str, str]] = set()
rooms_result: List[JsonDict] = []
events_result: List[JsonDict] = []
if max_rooms_per_space is None or max_rooms_per_space > MAX_ROOMS_PER_SPACE:
max_rooms_per_space = MAX_ROOMS_PER_SPACE
while room_queue and len(rooms_result) < MAX_ROOMS:
queue_entry = room_queue.popleft()
room_id = queue_entry.room_id
if room_id in processed_rooms:
# already done this room
continue
logger.debug("Processing room %s", room_id)
is_in_room = await self._store.is_host_joined(room_id, self._server_name)
# The client-specified max_rooms_per_space limit doesn't apply to the
# room_id specified in the request, so we ignore it if this is the
# first room we are processing.
max_children = max_rooms_per_space if processed_rooms else MAX_ROOMS
if is_in_room:
room_entry = await self._summarize_local_room(
requester, None, room_id, suggested_only, max_children
)
events: Sequence[JsonDict] = []
if room_entry:
rooms_result.append(room_entry.room)
events = room_entry.children_state_events
logger.debug(
"Query of local room %s returned events %s",
room_id,
["%s->%s" % (ev["room_id"], ev["state_key"]) for ev in events],
)
else:
fed_rooms = await self._summarize_remote_room(
queue_entry,
suggested_only,
max_children,
exclude_rooms=processed_rooms,
)
# The results over federation might include rooms that we,
# as the requesting server, are allowed to see, but the requesting
# user is not permitted to see.
#
# Filter the returned results to only what is accessible to the user.
events = []
for room_entry in fed_rooms:
room = room_entry.room
fed_room_id = room_entry.room_id
# The user can see the room, include it!
if await self._is_remote_room_accessible(
requester, fed_room_id, room
):
# Before returning to the client, remove the allowed_room_ids
# and allowed_spaces keys.
room.pop("allowed_room_ids", None)
room.pop("allowed_spaces", None) # historical
rooms_result.append(room)
events.extend(room_entry.children_state_events)
# All rooms returned don't need visiting again (even if the user
# didn't have access to them).
processed_rooms.add(fed_room_id)
logger.debug(
"Query of %s returned rooms %s, events %s",
room_id,
[room_entry.room.get("room_id") for room_entry in fed_rooms],
["%s->%s" % (ev["room_id"], ev["state_key"]) for ev in events],
)
# the room we queried may or may not have been returned, but don't process
# it again, anyway.
processed_rooms.add(room_id)
# XXX: is it ok that we blindly iterate through any events returned by
# a remote server, whether or not they actually link to any rooms in our
# tree?
for ev in events:
# remote servers might return events we have already processed
# (eg, Dendrite returns inward pointers as well as outward ones), so
# we need to filter them out, to avoid returning duplicate links to the
# client.
ev_key = (ev["room_id"], ev["state_key"])
if ev_key in processed_events:
continue
events_result.append(ev)
# add the child to the queue. we have already validated
# that the vias are a list of server names.
room_queue.append(
_RoomQueueEntry(ev["state_key"], ev["content"]["via"])
)
processed_events.add(ev_key)
return {"rooms": rooms_result, "events": events_result}
async def get_room_hierarchy(
self,
requester: Requester,
@ -398,8 +250,6 @@ class RoomSummaryHandler:
None,
room_id,
suggested_only,
# Do not limit the maximum children.
max_children=None,
)
# Otherwise, attempt to use information for federation.
@ -488,74 +338,6 @@ class RoomSummaryHandler:
return result
async def federation_space_summary(
self,
origin: str,
room_id: str,
suggested_only: bool,
max_rooms_per_space: Optional[int],
exclude_rooms: Iterable[str],
) -> JsonDict:
"""
Implementation of the space summary Federation API
Args:
origin: The server requesting the spaces summary.
room_id: room id to start the summary at
suggested_only: whether we should only return children with the "suggested"
flag set.
max_rooms_per_space: an optional limit on the number of child rooms we will
return. Unlike the C-S API, this applies to the root room (room_id).
It is clipped to MAX_ROOMS_PER_SPACE.
exclude_rooms: a list of rooms to skip over (presumably because the
calling server has already seen them).
Returns:
summary dict to return
"""
# the queue of rooms to process
room_queue = deque((room_id,))
# the set of rooms that we should not walk further. Initialise it with the
# excluded-rooms list; we will add other rooms as we process them so that
# we do not loop.
processed_rooms: Set[str] = set(exclude_rooms)
rooms_result: List[JsonDict] = []
events_result: List[JsonDict] = []
# Set a limit on the number of rooms to return.
if max_rooms_per_space is None or max_rooms_per_space > MAX_ROOMS_PER_SPACE:
max_rooms_per_space = MAX_ROOMS_PER_SPACE
while room_queue and len(rooms_result) < MAX_ROOMS:
room_id = room_queue.popleft()
if room_id in processed_rooms:
# already done this room
continue
room_entry = await self._summarize_local_room(
None, origin, room_id, suggested_only, max_rooms_per_space
)
processed_rooms.add(room_id)
if room_entry:
rooms_result.append(room_entry.room)
events_result.extend(room_entry.children_state_events)
# add any children to the queue
room_queue.extend(
edge_event["state_key"]
for edge_event in room_entry.children_state_events
)
return {"rooms": rooms_result, "events": events_result}
async def get_federation_hierarchy(
self,
origin: str,
@ -579,7 +361,7 @@ class RoomSummaryHandler:
The JSON hierarchy dictionary.
"""
root_room_entry = await self._summarize_local_room(
None, origin, requested_room_id, suggested_only, max_children=None
None, origin, requested_room_id, suggested_only
)
if root_room_entry is None:
# Room is inaccessible to the requesting server.
@ -600,7 +382,7 @@ class RoomSummaryHandler:
continue
room_entry = await self._summarize_local_room(
None, origin, room_id, suggested_only, max_children=0
None, origin, room_id, suggested_only, include_children=False
)
# If the room is accessible, include it in the results.
#
@ -626,7 +408,7 @@ class RoomSummaryHandler:
origin: Optional[str],
room_id: str,
suggested_only: bool,
max_children: Optional[int],
include_children: bool = True,
) -> Optional["_RoomEntry"]:
"""
Generate a room entry and a list of event entries for a given room.
@ -641,9 +423,8 @@ class RoomSummaryHandler:
room_id: The room ID to summarize.
suggested_only: True if only suggested children should be returned.
Otherwise, all children are returned.
max_children:
The maximum number of children rooms to include. A value of None
means no limit.
include_children:
Whether to include the events of any children.
Returns:
A room entry if the room should be returned. None, otherwise.
@ -653,9 +434,8 @@ class RoomSummaryHandler:
room_entry = await self._build_room_entry(room_id, for_federation=bool(origin))
# If the room is not a space or the children don't matter, return just
# the room information.
if room_entry.get("room_type") != RoomTypes.SPACE or max_children == 0:
# If the room is not a space return just the room information.
if room_entry.get("room_type") != RoomTypes.SPACE or not include_children:
return _RoomEntry(room_id, room_entry)
# Otherwise, look for child rooms/spaces.
@ -665,14 +445,6 @@ class RoomSummaryHandler:
# we only care about suggested children
child_events = filter(_is_suggested_child_event, child_events)
# TODO max_children is legacy code for the /spaces endpoint.
if max_children is not None:
child_iter: Iterable[EventBase] = itertools.islice(
child_events, max_children
)
else:
child_iter = child_events
stripped_events: List[JsonDict] = [
{
"type": e.type,
@ -682,80 +454,10 @@ class RoomSummaryHandler:
"sender": e.sender,
"origin_server_ts": e.origin_server_ts,
}
for e in child_iter
for e in child_events
]
return _RoomEntry(room_id, room_entry, stripped_events)
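The comprehension above reduces each child event to the stripped-state shape clients render the hierarchy from; an illustrative instance (all values made up):

stripped_child_event = {
    "type": "m.space.child",
    "state_key": "!childroom:example.org",  # the child room's ID
    "content": {"via": ["example.org"], "suggested": True},
    "room_id": "!space:example.org",        # the parent space
    "sender": "@admin:example.org",
    "origin_server_ts": 1646222000000,
}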
async def _summarize_remote_room(
self,
room: "_RoomQueueEntry",
suggested_only: bool,
max_children: Optional[int],
exclude_rooms: Iterable[str],
) -> Iterable["_RoomEntry"]:
"""
Request room entries and a list of event entries for a given room by querying a remote server.
Args:
room: The room to summarize.
suggested_only: True if only suggested children should be returned.
Otherwise, all children are returned.
max_children:
The maximum number of children rooms to include. This is capped
to a server-set limit.
exclude_rooms:
Rooms IDs which do not need to be summarized.
Returns:
An iterable of room entries.
"""
room_id = room.room_id
logger.info("Requesting summary for %s via %s", room_id, room.via)
# we need to make the exclusion list json-serialisable
exclude_rooms = list(exclude_rooms)
via = itertools.islice(room.via, MAX_SERVERS_PER_SPACE)
try:
res = await self._federation_client.get_space_summary(
via,
room_id,
suggested_only=suggested_only,
max_rooms_per_space=max_children,
exclude_rooms=exclude_rooms,
)
except Exception as e:
logger.warning(
"Unable to get summary of %s via federation: %s",
room_id,
e,
exc_info=logger.isEnabledFor(logging.DEBUG),
)
return ()
# Group the events by their room.
children_by_room: Dict[str, List[JsonDict]] = {}
for ev in res.events:
if ev.event_type == EventTypes.SpaceChild:
children_by_room.setdefault(ev.room_id, []).append(ev.data)
# Generate the final results.
results = []
for fed_room in res.rooms:
fed_room_id = fed_room.get("room_id")
if not fed_room_id or not isinstance(fed_room_id, str):
continue
results.append(
_RoomEntry(
fed_room_id,
fed_room,
children_by_room.get(fed_room_id, []),
)
)
return results
async def _summarize_remote_room_hierarchy(
self, room: "_RoomQueueEntry", suggested_only: bool
) -> Tuple[Optional["_RoomEntry"], Dict[str, JsonDict], Set[str]]:
@ -958,9 +660,8 @@ class RoomSummaryHandler:
):
return True
# Check if the user is a member of any of the allowed spaces
# from the response.
allowed_rooms = room.get("allowed_room_ids") or room.get("allowed_spaces")
# Check if the user is a member of any of the allowed rooms from the response.
allowed_rooms = room.get("allowed_room_ids")
if allowed_rooms and isinstance(allowed_rooms, list):
if await self._event_auth_handler.is_user_in_rooms(
allowed_rooms, requester
@ -1028,8 +729,6 @@ class RoomSummaryHandler:
)
if allowed_rooms:
entry["allowed_room_ids"] = allowed_rooms
# TODO Remove this key once the API is stable.
entry["allowed_spaces"] = allowed_rooms
# Filter out Nones rather than omit the field altogether
room_entry = {k: v for k, v in entry.items() if v is not None}
@ -1094,7 +793,7 @@ class RoomSummaryHandler:
room_id,
# Suggested-only doesn't matter since no children are requested.
suggested_only=False,
max_children=0,
include_children=False,
)
if not room_entry:

View file

@ -52,7 +52,7 @@ class Saml2SessionData:
class SamlHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.clock = hs.get_clock()
self.server_name = hs.hostname
self._saml_client = Saml2Client(hs.config.saml2.saml2_sp_config)

View file

@ -14,8 +14,9 @@
import itertools
import logging
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional
from typing import TYPE_CHECKING, Collection, Dict, Iterable, List, Optional, Set, Tuple
import attr
from unpaddedbase64 import decode_base64, encode_base64
from synapse.api.constants import EventTypes, Membership
@ -32,9 +33,23 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class _SearchResult:
# The count of results.
count: int
# A mapping of event ID to the rank of that event.
rank_map: Dict[str, int]
# A list of the resulting events.
allowed_events: List[EventBase]
# A map of room ID to results.
room_groups: Dict[str, JsonDict]
# A set of event IDs to highlight.
highlights: Set[str]
class SearchHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.store = hs.get_datastores().main
self.state_handler = hs.get_state_handler()
self.clock = hs.get_clock()
self.hs = hs
@ -100,7 +115,7 @@ class SearchHandler:
"""Performs a full text search for a user.
Args:
user
user: The user performing the search.
content: Search parameters
batch: The next_batch parameter. Used for pagination.
@ -156,6 +171,8 @@ class SearchHandler:
# Include context around each event?
event_context = room_cat.get("event_context", None)
before_limit = after_limit = None
include_profile = False
# Group results together? May allow clients to paginate within a
# group
@ -182,6 +199,73 @@ class SearchHandler:
% (set(group_keys) - {"room_id", "sender"},),
)
return await self._search(
user,
batch_group,
batch_group_key,
batch_token,
search_term,
keys,
filter_dict,
order_by,
include_state,
group_keys,
event_context,
before_limit,
after_limit,
include_profile,
)
async def _search(
self,
user: UserID,
batch_group: Optional[str],
batch_group_key: Optional[str],
batch_token: Optional[str],
search_term: str,
keys: List[str],
filter_dict: JsonDict,
order_by: str,
include_state: bool,
group_keys: List[str],
event_context: Optional[bool],
before_limit: Optional[int],
after_limit: Optional[int],
include_profile: bool,
) -> JsonDict:
"""Performs a full text search for a user.
Args:
user: The user performing the search.
batch_group: Pagination information.
batch_group_key: Pagination information.
batch_token: Pagination information.
search_term: Search term to search for
keys: List of keys to search in, currently supports
"content.body", "content.name", "content.topic"
filter_dict: The JSON to build a filter out of.
order_by: How to order the results. Valid values are "rank" and "recent".
include_state: True if the state of the room at each result should
be included.
group_keys: A list of ways to group the results. Valid values are
"room_id" and "sender".
event_context: True to include contextual events around results.
before_limit:
The number of events before a result to include as context.
Only used if event_context is True.
after_limit:
The number of events after a result to include as context.
Only used if event_context is True.
include_profile: True if historical profile information should be
included in the event context.
Only used if event_context is True.
Returns:
dict to be returned to the client with results of search
"""
search_filter = Filter(self.hs, filter_dict)
# TODO: Search through left rooms too
@ -216,209 +300,57 @@ class SearchHandler:
}
}
rank_map = {} # event_id -> rank of event
allowed_events = []
# Holds result of grouping by room, if applicable
room_groups: Dict[str, JsonDict] = {}
# Holds result of grouping by sender, if applicable
sender_group: Dict[str, JsonDict] = {}
# Holds the next_batch for the entire result set if one of those exists
global_next_batch = None
highlights = set()
count = None
sender_group: Optional[Dict[str, JsonDict]]
if order_by == "rank":
search_result = await self.store.search_msgs(room_ids, search_term, keys)
count = search_result["count"]
if search_result["highlights"]:
highlights.update(search_result["highlights"])
results = search_result["results"]
rank_map.update({r["event"].event_id: r["rank"] for r in results})
filtered_events = await search_filter.filter([r["event"] for r in results])
events = await filter_events_for_client(
self.storage, user.to_string(), filtered_events
search_result, sender_group = await self._search_by_rank(
user, room_ids, search_term, keys, search_filter
)
events.sort(key=lambda e: -rank_map[e.event_id])
allowed_events = events[: search_filter.limit]
for e in allowed_events:
rm = room_groups.setdefault(
e.room_id, {"results": [], "order": rank_map[e.event_id]}
)
rm["results"].append(e.event_id)
s = sender_group.setdefault(
e.sender, {"results": [], "order": rank_map[e.event_id]}
)
s["results"].append(e.event_id)
# Unused return values for rank search.
global_next_batch = None
elif order_by == "recent":
room_events: List[EventBase] = []
i = 0
pagination_token = batch_token
# We keep looping and we keep filtering until we reach the limit
# or we run out of things.
# But only go around 5 times since otherwise synapse will be sad.
while len(room_events) < search_filter.limit and i < 5:
i += 1
search_result = await self.store.search_rooms(
room_ids,
search_term,
keys,
search_filter.limit * 2,
pagination_token=pagination_token,
)
if search_result["highlights"]:
highlights.update(search_result["highlights"])
count = search_result["count"]
results = search_result["results"]
results_map = {r["event"].event_id: r for r in results}
rank_map.update({r["event"].event_id: r["rank"] for r in results})
filtered_events = await search_filter.filter(
[r["event"] for r in results]
)
events = await filter_events_for_client(
self.storage, user.to_string(), filtered_events
)
room_events.extend(events)
room_events = room_events[: search_filter.limit]
if len(results) < search_filter.limit * 2:
pagination_token = None
break
else:
pagination_token = results[-1]["pagination_token"]
for event in room_events:
group = room_groups.setdefault(event.room_id, {"results": []})
group["results"].append(event.event_id)
if room_events and len(room_events) >= search_filter.limit:
last_event_id = room_events[-1].event_id
pagination_token = results_map[last_event_id]["pagination_token"]
# We want to respect the given batch group and group keys so
# that if people blindly use the top level `next_batch` token
# it returns more from the same group (if applicable) rather
# than reverting to searching all results again.
if batch_group and batch_group_key:
global_next_batch = encode_base64(
(
"%s\n%s\n%s"
% (batch_group, batch_group_key, pagination_token)
).encode("ascii")
)
else:
global_next_batch = encode_base64(
("%s\n%s\n%s" % ("all", "", pagination_token)).encode("ascii")
)
for room_id, group in room_groups.items():
group["next_batch"] = encode_base64(
("%s\n%s\n%s" % ("room_id", room_id, pagination_token)).encode(
"ascii"
)
)
allowed_events.extend(room_events)
search_result, global_next_batch = await self._search_by_recent(
user,
room_ids,
search_term,
keys,
search_filter,
batch_group,
batch_group_key,
batch_token,
)
# Unused return values for recent search.
sender_group = None
else:
# We should never get here due to the guard earlier.
raise NotImplementedError()
logger.info("Found %d events to return", len(allowed_events))
logger.info("Found %d events to return", len(search_result.allowed_events))
# If client has asked for "context" for each event (i.e. some surrounding
# events and state), fetch that
if event_context is not None:
now_token = self.hs.get_event_sources().get_current_token()
# Note that before and after limit must be set in this case.
assert before_limit is not None
assert after_limit is not None
contexts = {}
for event in allowed_events:
res = await self.store.get_events_around(
event.room_id, event.event_id, before_limit, after_limit
)
logger.info(
"Context for search returned %d and %d events",
len(res.events_before),
len(res.events_after),
)
events_before = await filter_events_for_client(
self.storage, user.to_string(), res.events_before
)
events_after = await filter_events_for_client(
self.storage, user.to_string(), res.events_after
)
context = {
"events_before": events_before,
"events_after": events_after,
"start": await now_token.copy_and_replace(
"room_key", res.start
).to_string(self.store),
"end": await now_token.copy_and_replace(
"room_key", res.end
).to_string(self.store),
}
if include_profile:
senders = {
ev.sender
for ev in itertools.chain(events_before, [event], events_after)
}
if events_after:
last_event_id = events_after[-1].event_id
else:
last_event_id = event.event_id
state_filter = StateFilter.from_types(
[(EventTypes.Member, sender) for sender in senders]
)
state = await self.state_store.get_state_for_event(
last_event_id, state_filter
)
context["profile_info"] = {
s.state_key: {
"displayname": s.content.get("displayname", None),
"avatar_url": s.content.get("avatar_url", None),
}
for s in state.values()
if s.type == EventTypes.Member and s.state_key in senders
}
contexts[event.event_id] = context
contexts = await self._calculate_event_contexts(
user,
search_result.allowed_events,
before_limit,
after_limit,
include_profile,
)
else:
contexts = {}
# TODO: Add a limit
time_now = self.clock.time_msec()
state_results = {}
if include_state:
for room_id in {e.room_id for e in search_result.allowed_events}:
state = await self.state_handler.get_current_state(room_id)
state_results[room_id] = list(state.values())
aggregations = None
if self._msc3666_enabled:
@ -432,11 +364,16 @@ class SearchHandler:
for context in contexts.values()
),
# The returned events.
                    search_result.allowed_events,
),
user.to_string(),
)
# We're now about to serialize the events. We should not make any
# blocking calls after this. Otherwise, the 'age' will be wrong.
time_now = self.clock.time_msec()
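            # ("age" is roughly how long ago the event was sent; it is derived
            # from this timestamp when each event is serialized, so any delay
            # between here and serialization would inflate it.)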
for context in contexts.values():
context["events_before"] = self._event_serializer.serialize_events(
context["events_before"], time_now, bundle_aggregations=aggregations # type: ignore[arg-type]
@@ -445,44 +382,33 @@ class SearchHandler:
context["events_after"], time_now, bundle_aggregations=aggregations # type: ignore[arg-type]
)
        results = [
            {
                "rank": search_result.rank_map[e.event_id],
                "result": self._event_serializer.serialize_event(
                    e, time_now, bundle_aggregations=aggregations
                ),
                "context": contexts.get(e.event_id, {}),
            }
            for e in search_result.allowed_events
        ]
        rooms_cat_res: JsonDict = {
            "results": results,
            "count": search_result.count,
            "highlights": list(search_result.highlights),
        }
        if state_results:
            rooms_cat_res["state"] = {
                room_id: self._event_serializer.serialize_events(state_events, time_now)
                for room_id, state_events in state_results.items()
            }
        if search_result.room_groups and "room_id" in group_keys:
            rooms_cat_res.setdefault("groups", {})[
                "room_id"
            ] = search_result.room_groups
if sender_group and "sender" in group_keys:
rooms_cat_res.setdefault("groups", {})["sender"] = sender_group
@@ -491,3 +417,282 @@ class SearchHandler:
rooms_cat_res["next_batch"] = global_next_batch
return {"search_categories": {"room_events": rooms_cat_res}}
async def _search_by_rank(
self,
user: UserID,
room_ids: Collection[str],
search_term: str,
keys: Iterable[str],
search_filter: Filter,
) -> Tuple[_SearchResult, Dict[str, JsonDict]]:
"""
Performs a full text search for a user ordering by rank.
Args:
user: The user performing the search.
room_ids: List of room ids to search in
search_term: Search term to search for
keys: List of keys to search in, currently supports
"content.body", "content.name", "content.topic"
search_filter: The event filter to use.
Returns:
A tuple of:
The search results.
A map of sender ID to results.
"""
# Holds result of grouping by room, if applicable
room_groups: Dict[str, JsonDict] = {}
# Holds result of grouping by sender, if applicable
sender_group: Dict[str, JsonDict] = {}
search_result = await self.store.search_msgs(room_ids, search_term, keys)
if search_result["highlights"]:
highlights = search_result["highlights"]
else:
highlights = set()
results = search_result["results"]
# event_id -> rank of event
rank_map = {r["event"].event_id: r["rank"] for r in results}
filtered_events = await search_filter.filter([r["event"] for r in results])
events = await filter_events_for_client(
self.storage, user.to_string(), filtered_events
)
events.sort(key=lambda e: -rank_map[e.event_id])
allowed_events = events[: search_filter.limit]
for e in allowed_events:
rm = room_groups.setdefault(
e.room_id, {"results": [], "order": rank_map[e.event_id]}
)
rm["results"].append(e.event_id)
s = sender_group.setdefault(
e.sender, {"results": [], "order": rank_map[e.event_id]}
)
s["results"].append(e.event_id)
return (
_SearchResult(
search_result["count"],
rank_map,
allowed_events,
room_groups,
highlights,
),
sender_group,
)
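`_SearchResult` itself is defined earlier in the file and does not appear in this hunk. Judging from the positional construction above, its shape is roughly as follows (a sketch only; field names are inferred rather than copied from the source, and `EventBase`/`JsonDict` come from Synapse's existing modules):

    from typing import Dict, List, Set

    import attr

    from synapse.events import EventBase
    from synapse.types import JsonDict

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class _SearchResult:
        count: int  # Total number of matching events reported by the store.
        rank_map: Dict[str, int]  # A map of event ID to the rank of that event.
        allowed_events: List[EventBase]  # The events the requester may see.
        room_groups: Dict[str, JsonDict]  # A map of room ID to grouped results.
        highlights: Set[str]  # Terms to highlight in the client.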
async def _search_by_recent(
self,
user: UserID,
room_ids: Collection[str],
search_term: str,
keys: Iterable[str],
search_filter: Filter,
batch_group: Optional[str],
batch_group_key: Optional[str],
batch_token: Optional[str],
) -> Tuple[_SearchResult, Optional[str]]:
"""
Performs a full text search for a user ordering by recent.
Args:
user: The user performing the search.
room_ids: List of room ids to search in
search_term: Search term to search for
keys: List of keys to search in, currently supports
"content.body", "content.name", "content.topic"
search_filter: The event filter to use.
batch_group: Pagination information.
batch_group_key: Pagination information.
batch_token: Pagination information.
Returns:
A tuple of:
The search results.
Optionally, a pagination token.
"""
rank_map = {} # event_id -> rank of event
# Holds result of grouping by room, if applicable
room_groups: Dict[str, JsonDict] = {}
# Holds the next_batch for the entire result set if one of those exists
global_next_batch = None
highlights = set()
room_events: List[EventBase] = []
i = 0
pagination_token = batch_token
# We keep looping and we keep filtering until we reach the limit
# or we run out of things.
# But only go around 5 times since otherwise synapse will be sad.
while len(room_events) < search_filter.limit and i < 5:
i += 1
search_result = await self.store.search_rooms(
room_ids,
search_term,
keys,
search_filter.limit * 2,
pagination_token=pagination_token,
)
if search_result["highlights"]:
highlights.update(search_result["highlights"])
count = search_result["count"]
results = search_result["results"]
results_map = {r["event"].event_id: r for r in results}
rank_map.update({r["event"].event_id: r["rank"] for r in results})
filtered_events = await search_filter.filter([r["event"] for r in results])
events = await filter_events_for_client(
self.storage, user.to_string(), filtered_events
)
room_events.extend(events)
room_events = room_events[: search_filter.limit]
if len(results) < search_filter.limit * 2:
break
else:
pagination_token = results[-1]["pagination_token"]
for event in room_events:
group = room_groups.setdefault(event.room_id, {"results": []})
group["results"].append(event.event_id)
if room_events and len(room_events) >= search_filter.limit:
last_event_id = room_events[-1].event_id
pagination_token = results_map[last_event_id]["pagination_token"]
# We want to respect the given batch group and group keys so
# that if people blindly use the top level `next_batch` token
# it returns more from the same group (if applicable) rather
# than reverting to searching all results again.
if batch_group and batch_group_key:
global_next_batch = encode_base64(
(
"%s\n%s\n%s" % (batch_group, batch_group_key, pagination_token)
).encode("ascii")
)
else:
global_next_batch = encode_base64(
("%s\n%s\n%s" % ("all", "", pagination_token)).encode("ascii")
)
for room_id, group in room_groups.items():
group["next_batch"] = encode_base64(
("%s\n%s\n%s" % ("room_id", room_id, pagination_token)).encode(
"ascii"
)
)
return (
_SearchResult(count, rank_map, room_events, room_groups, highlights),
global_next_batch,
)
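The three `encode_base64` call sites above all pack the same `group\nkey\ntoken` triple; a possible shared helper, purely a sketch and not part of this change, could read:

    from unpaddedbase64 import encode_base64

    def make_next_batch_token(group: str, group_key: str, pagination_token: str) -> str:
        # Pack the group name, group key and store pagination token into a single
        # opaque, ASCII-safe token that clients hand back verbatim on a later call.
        return encode_base64(
            ("%s\n%s\n%s" % (group, group_key, pagination_token)).encode("ascii")
        )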
async def _calculate_event_contexts(
self,
user: UserID,
allowed_events: List[EventBase],
before_limit: int,
after_limit: int,
include_profile: bool,
) -> Dict[str, JsonDict]:
"""
Calculates the contextual events for any search results.
Args:
user: The user performing the search.
allowed_events: The search results.
before_limit:
The number of events before a result to include as context.
after_limit:
The number of events after a result to include as context.
include_profile: True if historical profile information should be
included in the event context.
Returns:
A map of event ID to contextual information.
"""
now_token = self.hs.get_event_sources().get_current_token()
contexts = {}
for event in allowed_events:
res = await self.store.get_events_around(
event.room_id, event.event_id, before_limit, after_limit
)
logger.info(
"Context for search returned %d and %d events",
len(res.events_before),
len(res.events_after),
)
events_before = await filter_events_for_client(
self.storage, user.to_string(), res.events_before
)
events_after = await filter_events_for_client(
self.storage, user.to_string(), res.events_after
)
context: JsonDict = {
"events_before": events_before,
"events_after": events_after,
"start": await now_token.copy_and_replace(
"room_key", res.start
).to_string(self.store),
"end": await now_token.copy_and_replace("room_key", res.end).to_string(
self.store
),
}
if include_profile:
senders = {
ev.sender
for ev in itertools.chain(events_before, [event], events_after)
}
if events_after:
last_event_id = events_after[-1].event_id
else:
last_event_id = event.event_id
state_filter = StateFilter.from_types(
[(EventTypes.Member, sender) for sender in senders]
)
state = await self.state_store.get_state_for_event(
last_event_id, state_filter
)
context["profile_info"] = {
s.state_key: {
"displayname": s.content.get("displayname", None),
"avatar_url": s.content.get("avatar_url", None),
}
for s in state.values()
if s.type == EventTypes.Member and s.state_key in senders
}
contexts[event.event_id] = context
return contexts
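To make the return value concrete, a single entry of the map built above might look like the sketch below; all identifiers and tokens are invented, and the real `start`/`end` values are opaque stream tokens produced by `copy_and_replace` on the current token:

    example_context = {
        # Events surrounding the result, as EventBase instances.
        "events_before": [],
        "events_after": [],
        # Invented stream tokens bracketing the context window.
        "start": "t120-5_0_0_0",
        "end": "t125-9_0_0_0",
        # Only present when include_profile was requested.
        "profile_info": {
            "@alice:example.org": {
                "displayname": "Alice",
                "avatar_url": "mxc://example.org/abc123",
            },
        },
    }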
