Compare commits


18 commits

Author SHA1 Message Date
Neeeflix 6ce19b94e8
Fix error in thumbnail generation (#11288)
Signed-off-by: Jonas Zeunert <jonas@zeunert.org>
2021-11-10 20:49:43 +00:00
Patrick Cloke 5cace20bf1
Add missing type hints to synapse.app. (#11287) 2021-11-10 15:06:54 -05:00
Patrick Cloke 66c4b774fd
Add type hints to synapse._scripts (#11297) 2021-11-10 17:55:32 +00:00
Andrew Morgan 5f277ffe89
Add documentation page stubs for Single Sign-On, SAML and CAS pages (#11298) 2021-11-10 17:54:56 +00:00
Richard van der Hoff 73cbb284b9
Remove redundant parameters on _check_event_auth (#11292)
as of #11012, these parameters are unused.
2021-11-10 14:16:06 +00:00
Olivier Wilkinson (reivilibre) 68c258a604
Merge tag 'v1.47.0rc2' into develop

Synapse 1.47.0rc2 (2021-11-10)
==============================

This fixes an issue with publishing the Debian packages for 1.47.0rc1.
It is otherwise identical to 1.47.0rc1.
2021-11-10 13:01:08 +00:00
Olivier Wilkinson (reivilibre) 595f28529c Changelog tweak from feedback 2021-11-10 09:54:34 +00:00
Olivier Wilkinson (reivilibre) ef7f9286d1 Move Debian changelog entries to rc2 since rc1 was not published 2021-11-10 09:48:50 +00:00
Olivier Wilkinson (reivilibre) 82e62b488a 1.47.0rc2 2021-11-10 09:44:38 +00:00
Olivier Wilkinson (reivilibre) af6374905a Correct the Debian changelog 2021-11-10 09:37:48 +00:00
Stanislav Motylkov b09d90cac9
Fix typos in the username_available admin API documentation. (#11286) 2021-11-09 21:11:05 +00:00
Eric Eastwood f1d5c2f269
Split out federated PDU retrieval into a non-cached version (#11242)
Context: https://github.com/matrix-org/synapse/pull/11114/files#r741643968
2021-11-09 15:07:57 -06:00
Patrick Cloke 0ef69ddbdc
Ignore missing imports for parameterized. (#11285)
This was due to a conflict between #11282, which changed
mypy configuration, and #11228, a normal change.
2021-11-09 19:04:53 +00:00
Dan Callahan 3b951445a7
Require mypy for synapse/ & tests/ unless excluded (#11282)
Signed-off-by: Dan Callahan <danc@element.io>
2021-11-09 16:22:47 +00:00
Andrew Morgan a026695083
Clarifications and small fixes to to-device related code (#11247)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2021-11-09 14:31:15 +00:00
David Robertson b6f4d122ef
Allow admins to proactively block rooms (#11228)
Co-authored-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2021-11-09 13:11:47 +00:00
Patrick Cloke a19d01c3d9
Support filtering by relations per MSC3440 (#11236)
Adds experimental support for `relation_types` and `relation_senders`
fields for filters.
2021-11-09 08:10:58 -05:00
Andrew Morgan 4b3e30c276
Fix typo in RelationAggregationPaginationServlet error response (#11278) 2021-11-09 12:11:50 +00:00
67 changed files with 1376 additions and 462 deletions


@ -1,3 +1,10 @@
Synapse 1.47.0rc2 (2021-11-10)
==============================
This fixes an issue with publishing the Debian packages for 1.47.0rc1.
It is otherwise identical to 1.47.0rc1.
Synapse 1.47.0rc1 (2021-11-09)
==============================


@ -0,0 +1 @@
Allow the admin [Delete Room API](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#delete-room-api) to block a room without the need to join it.


@ -0,0 +1 @@
Support filtering by relation senders & types per [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440).

changelog.d/11242.misc

@ -0,0 +1 @@
Split out federated PDU retrieval function into a non-cached version.

changelog.d/11247.misc

@ -0,0 +1 @@
Clean up code relating to to-device messages and sending ephemeral events to application services.

changelog.d/11278.misc

@ -0,0 +1 @@
Fix a small typo in the error response when a relation type other than 'm.annotation' is passed to `GET /rooms/{room_id}/aggregations/{event_id}`.

changelog.d/11282.misc

@ -0,0 +1 @@
Require all files in synapse/ and tests/ to pass mypy unless specifically excluded.

changelog.d/11285.misc

@ -0,0 +1 @@
Require all files in synapse/ and tests/ to pass mypy unless specifically excluded.

changelog.d/11286.doc

@ -0,0 +1 @@
Fix typo in the word `available` and fix HTTP method (should be `GET`) for the `username_available` admin API. Contributed by Stanislav Motylkov.

changelog.d/11287.misc

@ -0,0 +1 @@
Add missing type hints to `synapse.app`.

changelog.d/11288.bugfix

@ -0,0 +1 @@
Fix a long-standing bug where uploading extremely thin images (e.g. 1000x1) would fail. Contributed by @Neeeflix.

changelog.d/11292.misc

@ -0,0 +1 @@
Remove unused parameters on `FederationEventHandler._check_event_auth`.

changelog.d/11297.misc

@ -0,0 +1 @@
Add type hints to `synapse._scripts`.

changelog.d/11298.doc

@ -0,0 +1 @@
Add Single Sign-On, SAML and CAS pages to the documentation.

debian/changelog

@ -1,4 +1,4 @@
matrix-synapse-py3 (1.47.0+nmu1) stable; urgency=medium
matrix-synapse-py3 (1.47.0~rc2) stable; urgency=medium
[ Dan Callahan ]
* Update scripts to pass Shellcheck lints.
@ -7,7 +7,10 @@ matrix-synapse-py3 (1.47.0+nmu1) stable; urgency=medium
* Preinstall the "wheel" package when building virtualenvs.
* Do not error if /etc/default/matrix-synapse is missing.
-- Synapse Packaging team <packages@matrix.org> Tue, 09 Nov 2021 12:16:43 +0000
[ Synapse Packaging team ]
* New synapse release 1.47.0~rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 10 Nov 2021 09:41:01 +0000
matrix-synapse-py3 (1.46.0) stable; urgency=medium


@ -23,10 +23,10 @@
- [Structured Logging](structured_logging.md)
- [Templates](templates.md)
- [User Authentication](usage/configuration/user_authentication/README.md)
- [Single-Sign On]()
- [Single-Sign On](usage/configuration/user_authentication/single_sign_on/README.md)
- [OpenID Connect](openid.md)
- [SAML]()
- [CAS]()
- [SAML](usage/configuration/user_authentication/single_sign_on/saml.md)
- [CAS](usage/configuration/user_authentication/single_sign_on/cas.md)
- [SSO Mapping Providers](sso_mapping_providers.md)
- [Password Auth Providers](password_auth_providers.md)
- [JSON Web Tokens](jwt.md)


@ -396,13 +396,17 @@ The new room will be created with the user specified by the `new_room_user_id` p
as room administrator and will contain a message explaining what happened. Users invited
to the new room will have power level `-10` by default, and thus be unable to speak.
If `block` is `True` it prevents new joins to the old room.
If `block` is `true`, users will be prevented from joining the old room.
This option can also be used to pre-emptively block a room, even if it's unknown
to this homeserver. In this case, the room will be blocked, and no further action
will be taken. If `block` is `false`, attempting to delete an unknown room is
invalid and will be rejected as a bad request.
This API will remove all trace of the old room from your database after removing
all local users. If `purge` is `true` (the default), all traces of the old room will
be removed from your database after removing all local users. If you do not want
this to happen, set `purge` to `false`.
Depending on the amount of history being purged a call to the API may take
Depending on the amount of history being purged, a call to the API may take
several minutes or longer.
The local server will only have the power to move local user and room aliases to
@ -464,8 +468,9 @@ The following JSON body parameters are available:
`new_room_user_id` in the new room. Ideally this will clearly convey why the
original room was shut down. Defaults to `Sharing illegal content on this server
is not permitted and rooms in violation will be blocked.`
* `block` - Optional. If set to `true`, this room will be added to a blocking list, preventing
future attempts to join the room. Defaults to `false`.
* `block` - Optional. If set to `true`, this room will be added to a blocking list,
preventing future attempts to join the room. Rooms can be blocked
even if they're not yet known to the homeserver. Defaults to `false`.
* `purge` - Optional. If set to `true`, it will remove all traces of the room from your database.
Defaults to `true`.
* `force_purge` - Optional, and ignored unless `purge` is `true`. If set to `true`, it
@ -483,7 +488,8 @@ The following fields are returned in the JSON response body:
* `failed_to_kick_users` - An array of users (`user_id`) that were not kicked.
* `local_aliases` - An array of strings representing the local aliases that were migrated from
the old room to the new.
* `new_room_id` - A string representing the room ID of the new room.
* `new_room_id` - A string representing the room ID of the new room, or `null` if
no such room was created.
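
As a concrete illustration, here is a minimal sketch of pre-emptively blocking an unknown room through this API. It assumes the `DELETE /_synapse/admin/v1/rooms/<room_id>` endpoint path, a homeserver at `localhost:8008`, and a server admin's access token; none of these values come from the diff above.

```
import requests

# Pre-emptively block a room this homeserver has never joined: with
# block=true the room ID is added to the blocking list and, per the docs
# above, no further action is taken, so new_room_id comes back as null.
resp = requests.delete(
    "http://localhost:8008/_synapse/admin/v1/rooms/!unknown:example.com",
    headers={"Authorization": "Bearer <admin_access_token>"},
    json={"block": True, "purge": False},
)
resp.raise_for_status()
print(resp.json()["new_room_id"])  # expected: None
```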
## Undoing room deletions


@ -1107,7 +1107,7 @@ This endpoint will work even if registration is disabled on the server, unlike
The API is:
```
POST /_synapse/admin/v1/username_availabile?username=$localpart
GET /_synapse/admin/v1/username_available?username=$localpart
```
The request and response format is the same as the [/_matrix/client/r0/register/available](https://matrix.org/docs/spec/client_server/r0.6.0#get-matrix-client-r0-register-available) API.
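
A quick sketch of calling the corrected endpoint; the base URL and token are placeholders, not values from this page.

```
import requests

# Ask the homeserver whether the localpart "bob" is still unclaimed.
resp = requests.get(
    "http://localhost:8008/_synapse/admin/v1/username_available",
    params={"username": "bob"},
    headers={"Authorization": "Bearer <admin_access_token>"},
)
print(resp.json())  # {"available": true} when the username is free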


@ -0,0 +1,5 @@
# Single Sign-On
Synapse supports single sign-on through the SAML, Open ID Connect or CAS protocols.
LDAP and other login methods are supported through first and third-party password
auth provider modules.


@ -0,0 +1,8 @@
# CAS
Synapse supports authenticating users via the [Central Authentication
Service protocol](https://en.wikipedia.org/wiki/Central_Authentication_Service)
(CAS) natively.
Please see the `cas_config` and `sso` sections of the [Synapse configuration
file](../../../configuration/homeserver_sample_config.md) for more details.


@ -0,0 +1,8 @@
# SAML
Synapse supports authenticating users via the [Security Assertion
Markup Language](https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language)
(SAML) protocol natively.
Please see the `saml2_config` and `sso` sections of the [Synapse configuration
file](../../../configuration/homeserver_sample_config.md) for more details.

mypy.ini

@ -10,86 +10,162 @@ warn_unreachable = True
local_partial_types = True
no_implicit_optional = True
# To find all folders that pass mypy you run:
#
# find synapse/* -type d -not -name __pycache__ -exec bash -c "mypy '{}' > /dev/null" \; -print
files =
scripts-dev/sign_json,
synapse/__init__.py,
synapse/api,
synapse/appservice,
synapse/config,
synapse/crypto,
synapse/event_auth.py,
synapse/events,
synapse/federation,
synapse/groups,
synapse/handlers,
synapse/http,
synapse/logging,
synapse/metrics,
synapse/module_api,
synapse/notifier.py,
synapse/push,
synapse/replication,
synapse/rest,
synapse/server.py,
synapse/server_notices,
synapse/spam_checker_api,
synapse/state,
synapse/storage/__init__.py,
synapse/storage/_base.py,
synapse/storage/background_updates.py,
synapse/storage/databases/main/appservice.py,
synapse/storage/databases/main/client_ips.py,
synapse/storage/databases/main/events.py,
synapse/storage/databases/main/keys.py,
synapse/storage/databases/main/pusher.py,
synapse/storage/databases/main/registration.py,
synapse/storage/databases/main/relations.py,
synapse/storage/databases/main/session.py,
synapse/storage/databases/main/stream.py,
synapse/storage/databases/main/ui_auth.py,
synapse/storage/databases/state,
synapse/storage/database.py,
synapse/storage/engines,
synapse/storage/keys.py,
synapse/storage/persist_events.py,
synapse/storage/prepare_database.py,
synapse/storage/purge_events.py,
synapse/storage/push_rule.py,
synapse/storage/relations.py,
synapse/storage/roommember.py,
synapse/storage/state.py,
synapse/storage/types.py,
synapse/storage/util,
synapse/streams,
synapse/types.py,
synapse/util,
synapse/visibility.py,
tests/replication,
tests/test_event_auth.py,
tests/test_utils,
tests/handlers/test_password_providers.py,
tests/handlers/test_room.py,
tests/handlers/test_room_summary.py,
tests/handlers/test_send_email.py,
tests/handlers/test_sync.py,
tests/handlers/test_user_directory.py,
tests/rest/client/test_login.py,
tests/rest/client/test_auth.py,
tests/rest/client/test_relations.py,
tests/rest/media/v1/test_filepath.py,
tests/rest/media/v1/test_oembed.py,
tests/storage/test_state.py,
tests/storage/test_user_directory.py,
tests/util/test_itertools.py,
tests/util/test_stream_change_cache.py
setup.py,
synapse/,
tests/
# Note: Better exclusion syntax coming in mypy > 0.910
# https://github.com/python/mypy/pull/11329
#
# For now, set the (?x) flag enable "verbose" regexes
# https://docs.python.org/3/library/re.html#re.X
exclude = (?x)
^(
|synapse/storage/databases/__init__.py
|synapse/storage/databases/main/__init__.py
|synapse/storage/databases/main/account_data.py
|synapse/storage/databases/main/cache.py
|synapse/storage/databases/main/censor_events.py
|synapse/storage/databases/main/deviceinbox.py
|synapse/storage/databases/main/devices.py
|synapse/storage/databases/main/directory.py
|synapse/storage/databases/main/e2e_room_keys.py
|synapse/storage/databases/main/end_to_end_keys.py
|synapse/storage/databases/main/event_federation.py
|synapse/storage/databases/main/event_push_actions.py
|synapse/storage/databases/main/events_bg_updates.py
|synapse/storage/databases/main/events_forward_extremities.py
|synapse/storage/databases/main/events_worker.py
|synapse/storage/databases/main/filtering.py
|synapse/storage/databases/main/group_server.py
|synapse/storage/databases/main/lock.py
|synapse/storage/databases/main/media_repository.py
|synapse/storage/databases/main/metrics.py
|synapse/storage/databases/main/monthly_active_users.py
|synapse/storage/databases/main/openid.py
|synapse/storage/databases/main/presence.py
|synapse/storage/databases/main/profile.py
|synapse/storage/databases/main/purge_events.py
|synapse/storage/databases/main/push_rule.py
|synapse/storage/databases/main/receipts.py
|synapse/storage/databases/main/rejections.py
|synapse/storage/databases/main/room.py
|synapse/storage/databases/main/room_batch.py
|synapse/storage/databases/main/roommember.py
|synapse/storage/databases/main/search.py
|synapse/storage/databases/main/signatures.py
|synapse/storage/databases/main/state.py
|synapse/storage/databases/main/state_deltas.py
|synapse/storage/databases/main/stats.py
|synapse/storage/databases/main/tags.py
|synapse/storage/databases/main/transactions.py
|synapse/storage/databases/main/user_directory.py
|synapse/storage/databases/main/user_erasure_store.py
|synapse/storage/schema/
|tests/api/test_auth.py
|tests/api/test_ratelimiting.py
|tests/app/test_openid_listener.py
|tests/appservice/test_scheduler.py
|tests/config/test_cache.py
|tests/config/test_tls.py
|tests/crypto/test_keyring.py
|tests/events/test_presence_router.py
|tests/events/test_utils.py
|tests/federation/test_federation_catch_up.py
|tests/federation/test_federation_sender.py
|tests/federation/test_federation_server.py
|tests/federation/transport/test_knocking.py
|tests/federation/transport/test_server.py
|tests/handlers/test_cas.py
|tests/handlers/test_directory.py
|tests/handlers/test_e2e_keys.py
|tests/handlers/test_federation.py
|tests/handlers/test_oidc.py
|tests/handlers/test_presence.py
|tests/handlers/test_profile.py
|tests/handlers/test_saml.py
|tests/handlers/test_typing.py
|tests/http/federation/test_matrix_federation_agent.py
|tests/http/federation/test_srv_resolver.py
|tests/http/test_fedclient.py
|tests/http/test_proxyagent.py
|tests/http/test_servlet.py
|tests/http/test_site.py
|tests/logging/__init__.py
|tests/logging/test_terse_json.py
|tests/module_api/test_api.py
|tests/push/test_email.py
|tests/push/test_http.py
|tests/push/test_presentable_names.py
|tests/push/test_push_rule_evaluator.py
|tests/rest/admin/test_admin.py
|tests/rest/admin/test_device.py
|tests/rest/admin/test_media.py
|tests/rest/admin/test_server_notice.py
|tests/rest/admin/test_user.py
|tests/rest/admin/test_username_available.py
|tests/rest/client/test_account.py
|tests/rest/client/test_events.py
|tests/rest/client/test_filter.py
|tests/rest/client/test_groups.py
|tests/rest/client/test_register.py
|tests/rest/client/test_report_event.py
|tests/rest/client/test_rooms.py
|tests/rest/client/test_third_party_rules.py
|tests/rest/client/test_transactions.py
|tests/rest/client/test_typing.py
|tests/rest/client/utils.py
|tests/rest/key/v2/test_remote_key_resource.py
|tests/rest/media/v1/test_base.py
|tests/rest/media/v1/test_media_storage.py
|tests/rest/media/v1/test_url_preview.py
|tests/scripts/test_new_matrix_user.py
|tests/server.py
|tests/server_notices/test_resource_limits_server_notices.py
|tests/state/test_v2.py
|tests/storage/test_account_data.py
|tests/storage/test_appservice.py
|tests/storage/test_background_update.py
|tests/storage/test_base.py
|tests/storage/test_client_ips.py
|tests/storage/test_database.py
|tests/storage/test_event_federation.py
|tests/storage/test_id_generators.py
|tests/storage/test_roommember.py
|tests/test_metrics.py
|tests/test_phone_home.py
|tests/test_server.py
|tests/test_state.py
|tests/test_terms_auth.py
|tests/test_visibility.py
|tests/unittest.py
|tests/util/caches/test_cached_call.py
|tests/util/caches/test_deferred_cache.py
|tests/util/caches/test_descriptors.py
|tests/util/caches/test_response_cache.py
|tests/util/caches/test_ttlcache.py
|tests/util/test_async_helpers.py
|tests/util/test_batching_queue.py
|tests/util/test_dict_cache.py
|tests/util/test_expiring_cache.py
|tests/util/test_file_consumer.py
|tests/util/test_linearizer.py
|tests/util/test_logcontext.py
|tests/util/test_lrucache.py
|tests/util/test_rwlock.py
|tests/util/test_wheel_timer.py
|tests/utils.py
)$
[mypy-synapse.api.*]
disallow_untyped_defs = True
[mypy-synapse.app.*]
disallow_untyped_defs = True
[mypy-synapse.crypto.*]
disallow_untyped_defs = True
@ -272,6 +348,9 @@ ignore_missing_imports = True
[mypy-opentracing]
ignore_missing_imports = True
[mypy-parameterized.*]
ignore_missing_imports = True
[mypy-phonenumbers.*]
ignore_missing_imports = True


@ -17,6 +17,7 @@
# limitations under the License.
import glob
import os
from typing import Any, Dict
from setuptools import Command, find_packages, setup
@ -49,8 +50,6 @@ here = os.path.abspath(os.path.dirname(__file__))
# [1]: http://tox.readthedocs.io/en/2.5.0/example/basic.html#integration-with-setup-py-test-command
# [2]: https://pypi.python.org/pypi/setuptools_trial
class TestCommand(Command):
user_options = []
def initialize_options(self):
pass
@ -75,7 +74,7 @@ def read_file(path_segments):
def exec_file(path_segments):
"""Execute a single python file to get the variables defined in it"""
result = {}
result: Dict[str, Any] = {}
code = read_file(path_segments)
exec(code, result)
return result
@ -111,6 +110,7 @@ CONDITIONAL_REQUIREMENTS["mypy"] = [
"types-Pillow>=8.3.4",
"types-pyOpenSSL>=20.0.7",
"types-PyYAML>=5.4.10",
"types-requests>=2.26.0",
"types-setuptools>=57.4.0",
]


@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.47.0rc1"
__version__ = "1.47.0rc2"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@ -1,5 +1,6 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -19,22 +20,23 @@ import hashlib
import hmac
import logging
import sys
from typing import Callable, Optional
import requests as _requests
import yaml
def request_registration(
user,
password,
server_location,
shared_secret,
admin=False,
user_type=None,
user: str,
password: str,
server_location: str,
shared_secret: str,
admin: bool = False,
user_type: Optional[str] = None,
requests=_requests,
_print=print,
exit=sys.exit,
):
_print: Callable[[str], None] = print,
exit: Callable[[int], None] = sys.exit,
) -> None:
url = "%s/_synapse/admin/v1/register" % (server_location.rstrip("/"),)
@ -65,13 +67,13 @@ def request_registration(
mac.update(b"\x00")
mac.update(user_type.encode("utf8"))
mac = mac.hexdigest()
hex_mac = mac.hexdigest()
data = {
"nonce": nonce,
"username": user,
"password": password,
"mac": mac,
"mac": hex_mac,
"admin": admin,
"user_type": user_type,
}
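
For context, the `mac` being hex-digested above is Synapse's shared-secret registration MAC. A self-contained sketch of the full construction follows; only the `user_type` tail is visible in the hunk above, so the earlier fields are reproduced from the standard recipe (HMAC-SHA1 over NUL-separated nonce, user, password and admin flag) and should be treated as an assumption here.

```
import hashlib
import hmac
from typing import Optional

def registration_mac(
    shared_secret: str, nonce: str, user: str, password: str,
    admin: bool = False, user_type: Optional[str] = None,
) -> str:
    # Assumed recipe: HMAC-SHA1 over the NUL-separated fields, with the
    # optional user_type appended exactly as in the diff above
    # (mac.update(b"\x00"); mac.update(user_type.encode("utf8"))).
    mac = hmac.new(shared_secret.encode("utf8"), digestmod=hashlib.sha1)
    mac.update(nonce.encode("utf8"))
    mac.update(b"\x00")
    mac.update(user.encode("utf8"))
    mac.update(b"\x00")
    mac.update(password.encode("utf8"))
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    if user_type:
        mac.update(b"\x00")
        mac.update(user_type.encode("utf8"))
    return mac.hexdigest()
```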
@ -91,10 +93,17 @@ def request_registration(
_print("Success!")
def register_new_user(user, password, server_location, shared_secret, admin, user_type):
def register_new_user(
user: str,
password: str,
server_location: str,
shared_secret: str,
admin: Optional[bool],
user_type: Optional[str],
) -> None:
if not user:
try:
default_user = getpass.getuser()
default_user: Optional[str] = getpass.getuser()
except Exception:
default_user = None
@ -123,8 +132,8 @@ def register_new_user(user, password, server_location, shared_secret, admin, use
sys.exit(1)
if admin is None:
admin = input("Make admin [no]: ")
if admin in ("y", "yes", "true"):
admin_inp = input("Make admin [no]: ")
if admin_inp in ("y", "yes", "true"):
admin = True
else:
admin = False
@ -134,7 +143,7 @@ def register_new_user(user, password, server_location, shared_secret, admin, use
)
def main():
def main() -> None:
logging.captureWarnings(True)


@ -92,7 +92,7 @@ def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]:
return user_infos
def main():
def main() -> None:
parser = argparse.ArgumentParser()
parser.add_argument(
"-c",
@ -142,7 +142,8 @@ def main():
engine = create_engine(database_config.config)
with make_conn(database_config, engine, "review_recent_signups") as db_conn:
user_infos = get_recent_users(db_conn.cursor(), since_ms)
# This generates a type of Cursor, not LoggingTransaction.
user_infos = get_recent_users(db_conn.cursor(), since_ms) # type: ignore[arg-type]
for user_info in user_infos:
if exclude_users_with_email and user_info.emails:


@ -1,7 +1,7 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018-2019 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
# Copyright 2019-2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -86,6 +86,9 @@ ROOM_EVENT_FILTER_SCHEMA = {
# cf https://github.com/matrix-org/matrix-doc/pull/2326
"org.matrix.labels": {"type": "array", "items": {"type": "string"}},
"org.matrix.not_labels": {"type": "array", "items": {"type": "string"}},
# MSC3440, filtering by event relations.
"io.element.relation_senders": {"type": "array", "items": {"type": "string"}},
"io.element.relation_types": {"type": "array", "items": {"type": "string"}},
},
}
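
To make the new schema entries concrete, a hypothetical room event filter using them might look like the dict below; per the `Filter` constructor later in this diff, these keys only take effect when `experimental.msc3440_enabled` is set on the homeserver.

```
# Hypothetical filter body using the experimental MSC3440 fields added
# to ROOM_EVENT_FILTER_SCHEMA above. The unstable "io.element." prefix
# and the "io.element.thread" relation type follow MSC3440.
room_event_filter = {
    "limit": 10,
    # keep only events that have a relation from one of these senders...
    "io.element.relation_senders": ["@alice:example.com"],
    # ...and with one of these relation types
    "io.element.relation_types": ["io.element.thread"],
}
```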
@ -146,14 +149,16 @@ def matrix_user_id_validator(user_id_str: str) -> UserID:
class Filtering:
def __init__(self, hs: "HomeServer"):
super().__init__()
self._hs = hs
self.store = hs.get_datastore()
self.DEFAULT_FILTER_COLLECTION = FilterCollection(hs, {})
async def get_user_filter(
self, user_localpart: str, filter_id: Union[int, str]
) -> "FilterCollection":
result = await self.store.get_user_filter(user_localpart, filter_id)
return FilterCollection(result)
return FilterCollection(self._hs, result)
def add_user_filter(
self, user_localpart: str, user_filter: JsonDict
@ -191,21 +196,22 @@ FilterEvent = TypeVar("FilterEvent", EventBase, UserPresenceState, JsonDict)
class FilterCollection:
def __init__(self, filter_json: JsonDict):
def __init__(self, hs: "HomeServer", filter_json: JsonDict):
self._filter_json = filter_json
room_filter_json = self._filter_json.get("room", {})
self._room_filter = Filter(
{k: v for k, v in room_filter_json.items() if k in ("rooms", "not_rooms")}
hs,
{k: v for k, v in room_filter_json.items() if k in ("rooms", "not_rooms")},
)
self._room_timeline_filter = Filter(room_filter_json.get("timeline", {}))
self._room_state_filter = Filter(room_filter_json.get("state", {}))
self._room_ephemeral_filter = Filter(room_filter_json.get("ephemeral", {}))
self._room_account_data = Filter(room_filter_json.get("account_data", {}))
self._presence_filter = Filter(filter_json.get("presence", {}))
self._account_data = Filter(filter_json.get("account_data", {}))
self._room_timeline_filter = Filter(hs, room_filter_json.get("timeline", {}))
self._room_state_filter = Filter(hs, room_filter_json.get("state", {}))
self._room_ephemeral_filter = Filter(hs, room_filter_json.get("ephemeral", {}))
self._room_account_data = Filter(hs, room_filter_json.get("account_data", {}))
self._presence_filter = Filter(hs, filter_json.get("presence", {}))
self._account_data = Filter(hs, filter_json.get("account_data", {}))
self.include_leave = filter_json.get("room", {}).get("include_leave", False)
self.event_fields = filter_json.get("event_fields", [])
@ -232,25 +238,37 @@ class FilterCollection:
def include_redundant_members(self) -> bool:
return self._room_state_filter.include_redundant_members
def filter_presence(
async def filter_presence(
self, events: Iterable[UserPresenceState]
) -> List[UserPresenceState]:
return self._presence_filter.filter(events)
return await self._presence_filter.filter(events)
def filter_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
return self._account_data.filter(events)
async def filter_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
return await self._account_data.filter(events)
def filter_room_state(self, events: Iterable[EventBase]) -> List[EventBase]:
return self._room_state_filter.filter(self._room_filter.filter(events))
async def filter_room_state(self, events: Iterable[EventBase]) -> List[EventBase]:
return await self._room_state_filter.filter(
await self._room_filter.filter(events)
)
def filter_room_timeline(self, events: Iterable[EventBase]) -> List[EventBase]:
return self._room_timeline_filter.filter(self._room_filter.filter(events))
async def filter_room_timeline(
self, events: Iterable[EventBase]
) -> List[EventBase]:
return await self._room_timeline_filter.filter(
await self._room_filter.filter(events)
)
def filter_room_ephemeral(self, events: Iterable[JsonDict]) -> List[JsonDict]:
return self._room_ephemeral_filter.filter(self._room_filter.filter(events))
async def filter_room_ephemeral(self, events: Iterable[JsonDict]) -> List[JsonDict]:
return await self._room_ephemeral_filter.filter(
await self._room_filter.filter(events)
)
def filter_room_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
return self._room_account_data.filter(self._room_filter.filter(events))
async def filter_room_account_data(
self, events: Iterable[JsonDict]
) -> List[JsonDict]:
return await self._room_account_data.filter(
await self._room_filter.filter(events)
)
def blocks_all_presence(self) -> bool:
return (
@ -274,7 +292,9 @@ class FilterCollection:
class Filter:
def __init__(self, filter_json: JsonDict):
def __init__(self, hs: "HomeServer", filter_json: JsonDict):
self._hs = hs
self._store = hs.get_datastore()
self.filter_json = filter_json
self.limit = filter_json.get("limit", 10)
@ -297,6 +317,20 @@ class Filter:
self.labels = filter_json.get("org.matrix.labels", None)
self.not_labels = filter_json.get("org.matrix.not_labels", [])
# Ideally these would be rejected at the endpoint if they were provided
# and not supported, but that would involve modifying the JSON schema
# based on the homeserver configuration.
if hs.config.experimental.msc3440_enabled:
self.relation_senders = self.filter_json.get(
"io.element.relation_senders", None
)
self.relation_types = self.filter_json.get(
"io.element.relation_types", None
)
else:
self.relation_senders = None
self.relation_types = None
def filters_all_types(self) -> bool:
return "*" in self.not_types
@ -306,7 +340,7 @@ class Filter:
def filters_all_rooms(self) -> bool:
return "*" in self.not_rooms
def check(self, event: FilterEvent) -> bool:
def _check(self, event: FilterEvent) -> bool:
"""Checks whether the filter matches the given event.
Args:
@ -420,8 +454,30 @@ class Filter:
return room_ids
def filter(self, events: Iterable[FilterEvent]) -> List[FilterEvent]:
return list(filter(self.check, events))
async def _check_event_relations(
self, events: Iterable[FilterEvent]
) -> List[FilterEvent]:
# The event IDs to check, mypy doesn't understand the isinstance check.
event_ids = [event.event_id for event in events if isinstance(event, EventBase)] # type: ignore[attr-defined]
event_ids_to_keep = set(
await self._store.events_have_relations(
event_ids, self.relation_senders, self.relation_types
)
)
return [
event
for event in events
if not isinstance(event, EventBase) or event.event_id in event_ids_to_keep
]
async def filter(self, events: Iterable[FilterEvent]) -> List[FilterEvent]:
result = [event for event in events if self._check(event)]
if self.relation_senders or self.relation_types:
return await self._check_event_relations(result)
return result
def with_room_ids(self, room_ids: Iterable[str]) -> "Filter":
"""Returns a new filter with the given room IDs appended.
@ -433,7 +489,7 @@ class Filter:
filter: A new filter including the given rooms and the old
filter's rooms.
"""
newFilter = Filter(self.filter_json)
newFilter = Filter(self._hs, self.filter_json)
newFilter.rooms += room_ids
return newFilter
@ -444,6 +500,3 @@ def _matches_wildcard(actual_value: Optional[str], filter_value: str) -> bool:
return actual_value.startswith(type_prefix)
else:
return actual_value == filter_value
DEFAULT_FILTER_COLLECTION = FilterCollection({})
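
Since `filter` is now a coroutine and both `Filter` and `FilterCollection` require the homeserver, call sites change shape accordingly. A hedged sketch follows; the `hs`, `filter_json` and `events` values are assumptions, not taken from this diff.

```
from typing import TYPE_CHECKING, Iterable, List

if TYPE_CHECKING:
    from synapse.events import EventBase
    from synapse.server import HomeServer

async def filter_timeline(
    hs: "HomeServer", filter_json: dict, events: Iterable["EventBase"]
) -> List["EventBase"]:
    from synapse.api.filtering import FilterCollection

    # The collection is now built with hs so that Filter can reach the
    # datastore (events_have_relations), and each filter_* call must be
    # awaited because the relation check may hit the database.
    filter_collection = FilterCollection(hs, filter_json)
    return await filter_collection.filter_room_timeline(events)
```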


@ -13,6 +13,7 @@
# limitations under the License.
import logging
import sys
from typing import Container
from synapse import python_dependencies # noqa: E402
@ -27,7 +28,9 @@ except python_dependencies.DependencyException as e:
sys.exit(1)
def check_bind_error(e, address, bind_addresses):
def check_bind_error(
e: Exception, address: str, bind_addresses: Container[str]
) -> None:
"""
This method checks an exception occurred while binding on 0.0.0.0.
If :: is specified in the bind addresses a warning is shown.
@ -38,9 +41,9 @@ def check_bind_error(e, address, bind_addresses):
When binding on 0.0.0.0 after :: this can safely be ignored.
Args:
e (Exception): Exception that was caught.
address (str): Address on which binding was attempted.
bind_addresses (list): Addresses on which the service listens.
e: Exception that was caught.
address: Address on which binding was attempted.
bind_addresses: Addresses on which the service listens.
"""
if address == "0.0.0.0" and "::" in bind_addresses:
logger.warning(


@ -22,13 +22,27 @@ import socket
import sys
import traceback
import warnings
from typing import TYPE_CHECKING, Awaitable, Callable, Iterable
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Callable,
Collection,
Dict,
Iterable,
List,
NoReturn,
Tuple,
cast,
)
from cryptography.utils import CryptographyDeprecationWarning
from typing_extensions import NoReturn
import twisted
from twisted.internet import defer, error, reactor
from twisted.internet import defer, error, reactor as _reactor
from twisted.internet.interfaces import IOpenSSLContextFactory, IReactorSSL, IReactorTCP
from twisted.internet.protocol import ServerFactory
from twisted.internet.tcp import Port
from twisted.logger import LoggingFile, LogLevel
from twisted.protocols.tls import TLSMemoryBIOFactory
from twisted.python.threadpool import ThreadPool
@ -48,6 +62,7 @@ from synapse.logging.context import PreserveLoggingContext
from synapse.metrics import register_threadpool
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.types import ISynapseReactor
from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
from synapse.util.daemonize import daemonize_process
from synapse.util.gai_resolver import GAIResolver
@ -57,33 +72,44 @@ from synapse.util.versionstring import get_version_string
if TYPE_CHECKING:
from synapse.server import HomeServer
# Twisted injects the global reactor to make it easier to import, this confuses
# mypy which thinks it is a module. Tell it that it is a more proper type.
reactor = cast(ISynapseReactor, _reactor)
logger = logging.getLogger(__name__)
# list of tuples of function, args list, kwargs dict
_sighup_callbacks = []
_sighup_callbacks: List[
Tuple[Callable[..., None], Tuple[Any, ...], Dict[str, Any]]
] = []
def register_sighup(func, *args, **kwargs):
def register_sighup(func: Callable[..., None], *args: Any, **kwargs: Any) -> None:
"""
Register a function to be called when a SIGHUP occurs.
Args:
func (function): Function to be called when sent a SIGHUP signal.
func: Function to be called when sent a SIGHUP signal.
*args, **kwargs: args and kwargs to be passed to the target function.
"""
_sighup_callbacks.append((func, args, kwargs))
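
A brief usage sketch; the callback and its argument here are hypothetical (in this file Synapse itself registers handlers such as `refresh_certificate` this way).

```
def _reopen_logs(path: str) -> None:  # hypothetical SIGHUP callback
    print("SIGHUP received, reopening", path)

# On SIGHUP, the handler installed in start() walks _sighup_callbacks
# and invokes each func(*args, **kwargs).
register_sighup(_reopen_logs, "/var/log/synapse.log")
```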
def start_worker_reactor(appname, config, run_command=reactor.run):
def start_worker_reactor(
appname: str,
config: HomeServerConfig,
run_command: Callable[[], None] = reactor.run,
) -> None:
"""Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor. Pulls configuration from the 'worker' settings in 'config'.
Args:
appname (str): application name which will be sent to syslog
config (synapse.config.Config): config object
run_command (Callable[]): callable that actually runs the reactor
appname: application name which will be sent to syslog
config: config object
run_command: callable that actually runs the reactor
"""
logger = logging.getLogger(config.worker.worker_app)
@ -101,32 +127,32 @@ def start_worker_reactor(appname, config, run_command=reactor.run):
def start_reactor(
appname,
soft_file_limit,
gc_thresholds,
pid_file,
daemonize,
print_pidfile,
logger,
run_command=reactor.run,
):
appname: str,
soft_file_limit: int,
gc_thresholds: Tuple[int, int, int],
pid_file: str,
daemonize: bool,
print_pidfile: bool,
logger: logging.Logger,
run_command: Callable[[], None] = reactor.run,
) -> None:
"""Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor
Args:
appname (str): application name which will be sent to syslog
soft_file_limit (int):
appname: application name which will be sent to syslog
soft_file_limit:
gc_thresholds:
pid_file (str): name of pid file to write to if daemonize is True
daemonize (bool): true to run the reactor in a background process
print_pidfile (bool): whether to print the pid file, if daemonize is True
logger (logging.Logger): logger instance to pass to Daemonize
run_command (Callable[]): callable that actually runs the reactor
pid_file: name of pid file to write to if daemonize is True
daemonize: true to run the reactor in a background process
print_pidfile: whether to print the pid file, if daemonize is True
logger: logger instance to pass to Daemonize
run_command: callable that actually runs the reactor
"""
def run():
def run() -> None:
logger.info("Running")
setup_jemalloc_stats()
change_resource_limit(soft_file_limit)
@ -185,7 +211,7 @@ def redirect_stdio_to_logs() -> None:
print("Redirected stdout/stderr to logs")
def register_start(cb: Callable[..., Awaitable], *args, **kwargs) -> None:
def register_start(cb: Callable[..., Awaitable], *args: Any, **kwargs: Any) -> None:
"""Register a callback with the reactor, to be called once it is running
This can be used to initialise parts of the system which require an asynchronous
@ -195,7 +221,7 @@ def register_start(cb: Callable[..., Awaitable], *args, **kwargs) -> None:
will exit.
"""
async def wrapper():
async def wrapper() -> None:
try:
await cb(*args, **kwargs)
except Exception:
@ -224,7 +250,7 @@ def register_start(cb: Callable[..., Awaitable], *args, **kwargs) -> None:
reactor.callWhenRunning(lambda: defer.ensureDeferred(wrapper()))
def listen_metrics(bind_addresses, port):
def listen_metrics(bind_addresses: Iterable[str], port: int) -> None:
"""
Start Prometheus metrics server.
"""
@ -236,11 +262,11 @@ def listen_metrics(bind_addresses, port):
def listen_manhole(
bind_addresses: Iterable[str],
bind_addresses: Collection[str],
port: int,
manhole_settings: ManholeConfig,
manhole_globals: dict,
):
) -> None:
# twisted.conch.manhole 21.1.0 uses "int_from_bytes", which produces a confusing
# warning. It's fixed by https://github.com/twisted/twisted/pull/1522), so
# suppress the warning for now.
@ -259,12 +285,18 @@ def listen_manhole(
)
def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
def listen_tcp(
bind_addresses: Collection[str],
port: int,
factory: ServerFactory,
reactor: IReactorTCP = reactor,
backlog: int = 50,
) -> List[Port]:
"""
Create a TCP socket for a port and several addresses
Returns:
list[twisted.internet.tcp.Port]: listening for TCP connections
list of twisted.internet.tcp.Port listening for TCP connections
"""
r = []
for address in bind_addresses:
@ -273,12 +305,19 @@ def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
return r
# IReactorTCP returns an object implementing IListeningPort from listenTCP,
# but we know it will be a Port instance.
return r # type: ignore[return-value]
def listen_ssl(
bind_addresses, port, factory, context_factory, reactor=reactor, backlog=50
):
bind_addresses: Collection[str],
port: int,
factory: ServerFactory,
context_factory: IOpenSSLContextFactory,
reactor: IReactorSSL = reactor,
backlog: int = 50,
) -> List[Port]:
"""
Create an TLS-over-TCP socket for a port and several addresses
@ -294,10 +333,13 @@ def listen_ssl(
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
return r
# IReactorSSL incorrectly declares that an int is returned from listenSSL,
# it actually returns an object implementing IListeningPort, but we know it
# will be a Port instance.
return r # type: ignore[return-value]
def refresh_certificate(hs: "HomeServer"):
def refresh_certificate(hs: "HomeServer") -> None:
"""
Refresh the TLS certificates that Synapse is using by re-reading them from
disk and updating the TLS context factories to use them.
@ -329,7 +371,7 @@ def refresh_certificate(hs: "HomeServer"):
logger.info("Context factories updated.")
async def start(hs: "HomeServer"):
async def start(hs: "HomeServer") -> None:
"""
Start a Synapse server or worker.
@ -360,7 +402,7 @@ async def start(hs: "HomeServer"):
if hasattr(signal, "SIGHUP"):
@wrap_as_background_process("sighup")
def handle_sighup(*args, **kwargs):
def handle_sighup(*args: Any, **kwargs: Any) -> None:
# Tell systemd our state, if we're using it. This will silently fail if
# we're not using systemd.
sdnotify(b"RELOADING=1")
@ -373,7 +415,7 @@ async def start(hs: "HomeServer"):
# We defer running the sighup handlers until next reactor tick. This
# is so that we're in a sane state, e.g. flushing the logs may fail
# if the sighup happens in the middle of writing a log entry.
def run_sighup(*args, **kwargs):
def run_sighup(*args: Any, **kwargs: Any) -> None:
# `callFromThread` should be "signal safe" as well as thread
# safe.
reactor.callFromThread(handle_sighup, *args, **kwargs)
@ -436,12 +478,8 @@ async def start(hs: "HomeServer"):
atexit.register(gc.freeze)
def setup_sentry(hs: "HomeServer"):
"""Enable sentry integration, if enabled in configuration
Args:
hs
"""
def setup_sentry(hs: "HomeServer") -> None:
"""Enable sentry integration, if enabled in configuration"""
if not hs.config.metrics.sentry_enabled:
return
@ -466,7 +504,7 @@ def setup_sentry(hs: "HomeServer"):
scope.set_tag("worker_name", name)
def setup_sdnotify(hs: "HomeServer"):
def setup_sdnotify(hs: "HomeServer") -> None:
"""Adds process state hooks to tell systemd what we are up to."""
# Tell systemd our state, if we're using it. This will silently fail if
@ -481,7 +519,7 @@ def setup_sdnotify(hs: "HomeServer"):
sdnotify_sockaddr = os.getenv("NOTIFY_SOCKET")
def sdnotify(state):
def sdnotify(state: bytes) -> None:
"""
Send a notification to systemd, if the NOTIFY_SOCKET env var is set.
@ -490,7 +528,7 @@ def sdnotify(state):
package which many OSes don't include as a matter of principle.
Args:
state (bytes): notification to send
state: notification to send
"""
if not isinstance(state, bytes):
raise TypeError("sdnotify should be called with a bytes")


@ -17,6 +17,7 @@ import logging
import os
import sys
import tempfile
from typing import List, Optional
from twisted.internet import defer, task
@ -25,6 +26,7 @@ from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.events import EventBase
from synapse.handlers.admin import ExfiltrationWriter
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@ -40,6 +42,7 @@ from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.server import HomeServer
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.types import StateMap
from synapse.util.logcontext import LoggingContext
from synapse.util.versionstring import get_version_string
@ -65,16 +68,11 @@ class AdminCmdSlavedStore(
class AdminCmdServer(HomeServer):
DATASTORE_CLASS = AdminCmdSlavedStore
DATASTORE_CLASS = AdminCmdSlavedStore # type: ignore
async def export_data_command(hs: HomeServer, args):
"""Export data for a user.
Args:
hs
args (argparse.Namespace)
"""
async def export_data_command(hs: HomeServer, args: argparse.Namespace) -> None:
"""Export data for a user."""
user_id = args.user_id
directory = args.output_directory
@ -92,12 +90,12 @@ class FileExfiltrationWriter(ExfiltrationWriter):
Note: This writes to disk on the main reactor thread.
Args:
user_id (str): The user whose data is being exfiltrated.
directory (str|None): The directory to write the data to, if None then
will write to a temporary directory.
user_id: The user whose data is being exfiltrated.
directory: The directory to write the data to, if None then will write
to a temporary directory.
"""
def __init__(self, user_id, directory=None):
def __init__(self, user_id: str, directory: Optional[str] = None):
self.user_id = user_id
if directory:
@ -111,7 +109,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
if list(os.listdir(self.base_directory)):
raise Exception("Directory must be empty")
def write_events(self, room_id, events):
def write_events(self, room_id: str, events: List[EventBase]) -> None:
room_directory = os.path.join(self.base_directory, "rooms", room_id)
os.makedirs(room_directory, exist_ok=True)
events_file = os.path.join(room_directory, "events")
@ -120,7 +118,9 @@ class FileExfiltrationWriter(ExfiltrationWriter):
for event in events:
print(json.dumps(event.get_pdu_json()), file=f)
def write_state(self, room_id, event_id, state):
def write_state(
self, room_id: str, event_id: str, state: StateMap[EventBase]
) -> None:
room_directory = os.path.join(self.base_directory, "rooms", room_id)
state_directory = os.path.join(room_directory, "state")
os.makedirs(state_directory, exist_ok=True)
@ -131,7 +131,9 @@ class FileExfiltrationWriter(ExfiltrationWriter):
for event in state.values():
print(json.dumps(event.get_pdu_json()), file=f)
def write_invite(self, room_id, event, state):
def write_invite(
self, room_id: str, event: EventBase, state: StateMap[EventBase]
) -> None:
self.write_events(room_id, [event])
# We write the invite state somewhere else as they aren't full events
@ -145,7 +147,9 @@ class FileExfiltrationWriter(ExfiltrationWriter):
for event in state.values():
print(json.dumps(event), file=f)
def write_knock(self, room_id, event, state):
def write_knock(
self, room_id: str, event: EventBase, state: StateMap[EventBase]
) -> None:
self.write_events(room_id, [event])
# We write the knock state somewhere else as they aren't full events
@ -159,11 +163,11 @@ class FileExfiltrationWriter(ExfiltrationWriter):
for event in state.values():
print(json.dumps(event), file=f)
def finished(self):
def finished(self) -> str:
return self.base_directory
def start(config_options):
def start(config_options: List[str]) -> None:
parser = argparse.ArgumentParser(description="Synapse Admin Command")
HomeServerConfig.add_arguments_to_parser(parser)
@ -231,7 +235,7 @@ def start(config_options):
# We also make sure that `_base.start` gets run before we actually run the
# command.
async def run():
async def run() -> None:
with LoggingContext("command"):
await _base.start(ss)
await args.func(ss, args)


@ -14,11 +14,10 @@
# limitations under the License.
import logging
import sys
from typing import Dict, Optional
from typing import Dict, List, Optional, Tuple
from twisted.internet import address
from twisted.web.resource import IResource
from twisted.web.server import Request
from twisted.web.resource import Resource
import synapse
import synapse.events
@ -44,7 +43,7 @@ from synapse.config.server import ListenerConfig
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.server import JsonResource, OptionsResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.http.site import SynapseRequest, SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
@ -119,6 +118,7 @@ from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.transactions import TransactionWorkerStore
from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
from synapse.storage.databases.main.user_directory import UserDirectoryStore
from synapse.types import JsonDict
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.versionstring import get_version_string
@ -143,7 +143,9 @@ class KeyUploadServlet(RestServlet):
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker.worker_main_http_uri
async def on_POST(self, request: Request, device_id: Optional[str]):
async def on_POST(
self, request: SynapseRequest, device_id: Optional[str]
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
body = parse_json_object_from_request(request)
@ -187,9 +189,8 @@ class KeyUploadServlet(RestServlet):
# If the header exists, add to the comma-separated list of the first
# instance of the header. Otherwise, generate a new header.
if x_forwarded_for:
x_forwarded_for = [
x_forwarded_for[0] + b", " + previous_host
] + x_forwarded_for[1:]
x_forwarded_for = [x_forwarded_for[0] + b", " + previous_host]
x_forwarded_for.extend(x_forwarded_for[1:])
else:
x_forwarded_for = [previous_host]
headers[b"X-Forwarded-For"] = x_forwarded_for
@ -253,13 +254,16 @@ class GenericWorkerSlavedStore(
SessionStore,
BaseSlavedStore,
):
pass
# Properties that multiple storage classes define. Tell mypy what the
# expected type is.
server_name: str
config: HomeServerConfig
class GenericWorkerServer(HomeServer):
DATASTORE_CLASS = GenericWorkerSlavedStore
DATASTORE_CLASS = GenericWorkerSlavedStore # type: ignore
def _listen_http(self, listener_config: ListenerConfig):
def _listen_http(self, listener_config: ListenerConfig) -> None:
port = listener_config.port
bind_addresses = listener_config.bind_addresses
@ -267,10 +271,10 @@ class GenericWorkerServer(HomeServer):
site_tag = listener_config.http_options.tag
if site_tag is None:
site_tag = port
site_tag = str(port)
# We always include a health resource.
resources: Dict[str, IResource] = {"/health": HealthResource()}
resources: Dict[str, Resource] = {"/health": HealthResource()}
for res in listener_config.http_options.resources:
for name in res.names:
@ -386,7 +390,7 @@ class GenericWorkerServer(HomeServer):
logger.info("Synapse worker now listening on port %d", port)
def start_listening(self):
def start_listening(self) -> None:
for listener in self.config.worker.worker_listeners:
if listener.type == "http":
self._listen_http(listener)
@ -411,7 +415,7 @@ class GenericWorkerServer(HomeServer):
self.get_tcp_replication().start_replication(self)
def start(config_options):
def start(config_options: List[str]) -> None:
try:
config = HomeServerConfig.load_config("Synapse worker", config_options)
except ConfigError as e:


@ -16,10 +16,10 @@
import logging
import os
import sys
from typing import Iterator
from typing import Dict, Iterable, Iterator, List
from twisted.internet import reactor
from twisted.web.resource import EncodingResourceWrapper, IResource
from twisted.internet.tcp import Port
from twisted.web.resource import EncodingResourceWrapper, Resource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
@ -76,23 +76,27 @@ from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.homeserver")
def gz_wrap(r):
def gz_wrap(r: Resource) -> Resource:
return EncodingResourceWrapper(r, [GzipEncoderFactory()])
class SynapseHomeServer(HomeServer):
DATASTORE_CLASS = DataStore
DATASTORE_CLASS = DataStore # type: ignore
def _listener_http(self, config: HomeServerConfig, listener_config: ListenerConfig):
def _listener_http(
self, config: HomeServerConfig, listener_config: ListenerConfig
) -> Iterable[Port]:
port = listener_config.port
bind_addresses = listener_config.bind_addresses
tls = listener_config.tls
# Must exist since this is an HTTP listener.
assert listener_config.http_options is not None
site_tag = listener_config.http_options.tag
if site_tag is None:
site_tag = str(port)
# We always include a health resource.
resources = {"/health": HealthResource()}
resources: Dict[str, Resource] = {"/health": HealthResource()}
for res in listener_config.http_options.resources:
for name in res.names:
@ -111,7 +115,7 @@ class SynapseHomeServer(HomeServer):
("listeners", site_tag, "additional_resources", "<%s>" % (path,)),
)
handler = handler_cls(config, module_api)
if IResource.providedBy(handler):
if isinstance(handler, Resource):
resource = handler
elif hasattr(handler, "handle_request"):
resource = AdditionalResource(self, handler.handle_request)
@ -128,7 +132,7 @@ class SynapseHomeServer(HomeServer):
# try to find something useful to redirect '/' to
if WEB_CLIENT_PREFIX in resources:
root_resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
root_resource: Resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
elif STATIC_PREFIX in resources:
root_resource = RootOptionsRedirectResource(STATIC_PREFIX)
else:
@ -145,6 +149,8 @@ class SynapseHomeServer(HomeServer):
)
if tls:
# refresh_certificate should have been called before this.
assert self.tls_server_context_factory is not None
ports = listen_ssl(
bind_addresses,
port,
@ -165,20 +171,21 @@ class SynapseHomeServer(HomeServer):
return ports
def _configure_named_resource(self, name, compress=False):
def _configure_named_resource(
self, name: str, compress: bool = False
) -> Dict[str, Resource]:
"""Build a resource map for a named resource
Args:
name (str): named resource: one of "client", "federation", etc
compress (bool): whether to enable gzip compression for this
resource
name: named resource: one of "client", "federation", etc
compress: whether to enable gzip compression for this resource
Returns:
dict[str, Resource]: map from path to HTTP resource
map from path to HTTP resource
"""
resources = {}
resources: Dict[str, Resource] = {}
if name == "client":
client_resource = ClientRestResource(self)
client_resource: Resource = ClientRestResource(self)
if compress:
client_resource = gz_wrap(client_resource)
@ -207,7 +214,7 @@ class SynapseHomeServer(HomeServer):
if name == "consent":
from synapse.rest.consent.consent_resource import ConsentResource
consent_resource = ConsentResource(self)
consent_resource: Resource = ConsentResource(self)
if compress:
consent_resource = gz_wrap(consent_resource)
resources.update({"/_matrix/consent": consent_resource})
@ -277,7 +284,7 @@ class SynapseHomeServer(HomeServer):
return resources
def start_listening(self):
def start_listening(self) -> None:
if self.config.redis.redis_enabled:
# If redis is enabled we connect via the replication command handler
# in the same way as the workers (since we're effectively a client
@ -303,7 +310,9 @@ class SynapseHomeServer(HomeServer):
ReplicationStreamProtocolFactory(self),
)
for s in services:
reactor.addSystemEventTrigger("before", "shutdown", s.stopListening)
self.get_reactor().addSystemEventTrigger(
"before", "shutdown", s.stopListening
)
elif listener.type == "metrics":
if not self.config.metrics.enable_metrics:
logger.warning(
@ -318,14 +327,13 @@ class SynapseHomeServer(HomeServer):
logger.warning("Unrecognized listener type: %s", listener.type)
def setup(config_options):
def setup(config_options: List[str]) -> SynapseHomeServer:
"""
Args:
config_options_options: The options passed to Synapse. Usually
`sys.argv[1:]`.
config_options_options: The options passed to Synapse. Usually `sys.argv[1:]`.
Returns:
HomeServer
A homeserver instance.
"""
try:
config = HomeServerConfig.load_or_generate_config(
@ -364,7 +372,7 @@ def setup(config_options):
except Exception as e:
handle_startup_exception(e)
async def start():
async def start() -> None:
# Load the OIDC provider metadatas, if OIDC is enabled.
if hs.config.oidc.oidc_enabled:
oidc = hs.get_oidc_handler()
@ -404,39 +412,15 @@ def format_config_error(e: ConfigError) -> Iterator[str]:
yield ":\n %s" % (e.msg,)
e = e.__cause__
parent_e = e.__cause__
indent = 1
while e:
while parent_e:
indent += 1
yield ":\n%s%s" % (" " * indent, str(e))
e = e.__cause__
yield ":\n%s%s" % (" " * indent, str(parent_e))
parent_e = parent_e.__cause__
def run(hs: HomeServer):
PROFILE_SYNAPSE = False
if PROFILE_SYNAPSE:
def profile(func):
from cProfile import Profile
from threading import current_thread
def profiled(*args, **kargs):
profile = Profile()
profile.enable()
func(*args, **kargs)
profile.disable()
ident = current_thread().ident
profile.dump_stats(
"/tmp/%s.%s.%i.pstat" % (hs.hostname, func.__name__, ident)
)
return profiled
from twisted.python.threadpool import ThreadPool
ThreadPool._worker = profile(ThreadPool._worker)
reactor.run = profile(reactor.run)
def run(hs: HomeServer) -> None:
_base.start_reactor(
"synapse-homeserver",
soft_file_limit=hs.config.server.soft_file_limit,
@ -448,7 +432,7 @@ def run(hs: HomeServer):
)
def main():
def main() -> None:
with LoggingContext("main"):
# check base requirements
check_requirements()


@ -15,11 +15,12 @@ import logging
import math
import resource
import sys
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, List, Sized, Tuple
from prometheus_client import Gauge
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.types import JsonDict
if TYPE_CHECKING:
from synapse.server import HomeServer
@ -28,7 +29,7 @@ logger = logging.getLogger("synapse.app.homeserver")
# Contains the list of processes we will be monitoring
# currently either 0 or 1
_stats_process = []
_stats_process: List[Tuple[int, "resource.struct_rusage"]] = []
# Gauges to expose monthly active user control metrics
current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
@ -45,9 +46,15 @@ registered_reserved_users_mau_gauge = Gauge(
@wrap_as_background_process("phone_stats_home")
async def phone_stats_home(hs: "HomeServer", stats, stats_process=_stats_process):
async def phone_stats_home(
hs: "HomeServer",
stats: JsonDict,
stats_process: List[Tuple[int, "resource.struct_rusage"]] = _stats_process,
) -> None:
logger.info("Gathering stats for reporting")
now = int(hs.get_clock().time())
# Ensure the homeserver has started.
assert hs.start_time is not None
uptime = int(now - hs.start_time)
if uptime < 0:
uptime = 0
@ -146,15 +153,15 @@ async def phone_stats_home(hs: "HomeServer", stats, stats_process=_stats_process
logger.warning("Error reporting stats: %s", e)
def start_phone_stats_home(hs: "HomeServer"):
def start_phone_stats_home(hs: "HomeServer") -> None:
"""
Start the background tasks which report phone home stats.
"""
clock = hs.get_clock()
stats = {}
stats: JsonDict = {}
def performance_stats_init():
def performance_stats_init() -> None:
_stats_process.clear()
_stats_process.append(
(int(hs.get_clock().time()), resource.getrusage(resource.RUSAGE_SELF))
@ -170,10 +177,10 @@ def start_phone_stats_home(hs: "HomeServer"):
hs.get_datastore().reap_monthly_active_users()
@wrap_as_background_process("generate_monthly_active_users")
async def generate_monthly_active_users():
async def generate_monthly_active_users() -> None:
current_mau_count = 0
current_mau_count_by_service = {}
reserved_users = ()
reserved_users: Sized = ()
store = hs.get_datastore()
if hs.config.server.limit_usage_by_mau or hs.config.server.mau_stats_only:
current_mau_count = await store.get_monthly_active_count()
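For context, a minimal sketch (not Synapse code) of how a `prometheus_client` Gauge like the ones above is fed once a count has been computed:

    from prometheus_client import Gauge

    # Hypothetical gauge; the real ones are declared at module level above.
    example_mau_gauge = Gauge("example_current_mau", "Current MAU (example)")
    example_mau_gauge.set(42)  # e.g. the value returned by the store query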

View file

@ -277,6 +277,58 @@ class FederationClient(FederationBase):
return pdus
async def get_pdu_from_destination_raw(
self,
destination: str,
event_id: str,
room_version: RoomVersion,
outlier: bool = False,
timeout: Optional[int] = None,
) -> Optional[EventBase]:
"""Requests the PDU with given origin and ID from the remote home
server. Does not have any caching or rate limiting!
Args:
destination: Which homeserver to query
event_id: event to fetch
room_version: version of the room
outlier: Indicates whether the PDU is an `outlier`, i.e. if
it's from an arbitrary point in the context as opposed to part
of the current block of PDUs. Defaults to `False`
timeout: How long to try (in ms) each destination for before
moving to the next destination. None indicates no timeout.
Returns:
The requested PDU, or None if we were unable to find it.
Raises:
SynapseError, NotRetryingDestination, FederationDeniedError
"""
transaction_data = await self.transport_layer.get_event(
destination, event_id, timeout=timeout
)
logger.debug(
"retrieved event id %s from %s: %r",
event_id,
destination,
transaction_data,
)
pdu_list: List[EventBase] = [
event_from_pdu_json(p, room_version, outlier=outlier)
for p in transaction_data["pdus"]
]
if pdu_list and pdu_list[0]:
pdu = pdu_list[0]
# Check signatures are correct.
signed_pdu = await self._check_sigs_and_hash(room_version, pdu)
return signed_pdu
return None
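A hypothetical usage sketch of the new method, which deliberately bypasses the caching and retry bookkeeping that `get_pdu` layers on top (server name and event ID below are invented):

    async def fetch_one(federation_client, room_version):
        # Fetch a single event from one homeserver, with no caching or
        # rate limiting; returns None if the event could not be found.
        return await federation_client.get_pdu_from_destination_raw(
            destination="matrix.example.org",   # assumed server name
            event_id="$someevent:example.org",  # assumed event ID
            room_version=room_version,
            timeout=10_000,                     # ms per destination
        )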
async def get_pdu(
self,
destinations: Iterable[str],
@ -321,30 +373,14 @@ class FederationClient(FederationBase):
continue
try:
transaction_data = await self.transport_layer.get_event(
destination, event_id, timeout=timeout
signed_pdu = await self.get_pdu_from_destination_raw(
destination=destination,
event_id=event_id,
room_version=room_version,
outlier=outlier,
timeout=timeout,
)
logger.debug(
"retrieved event id %s from %s: %r",
event_id,
destination,
transaction_data,
)
pdu_list: List[EventBase] = [
event_from_pdu_json(p, room_version, outlier=outlier)
for p in transaction_data["pdus"]
]
if pdu_list and pdu_list[0]:
pdu = pdu_list[0]
# Check signatures are correct.
signed_pdu = await self._check_sigs_and_hash(room_version, pdu)
break
pdu_attempts[destination] = now
except SynapseError as e:

View file

@ -234,7 +234,7 @@ class ExfiltrationWriter(metaclass=abc.ABCMeta):
@abc.abstractmethod
def write_invite(
self, room_id: str, event: EventBase, state: StateMap[dict]
self, room_id: str, event: EventBase, state: StateMap[EventBase]
) -> None:
"""Write an invite for the room, with associated invite state.
@ -248,7 +248,7 @@ class ExfiltrationWriter(metaclass=abc.ABCMeta):
@abc.abstractmethod
def write_knock(
self, room_id: str, event: EventBase, state: StateMap[dict]
self, room_id: str, event: EventBase, state: StateMap[EventBase]
) -> None:
"""Write a knock for the room, with associated knock state.

View file

@ -188,7 +188,7 @@ class ApplicationServicesHandler:
self,
stream_key: str,
new_token: Union[int, RoomStreamToken],
users: Optional[Collection[Union[str, UserID]]] = None,
users: Collection[Union[str, UserID]],
) -> None:
"""
This is called by the notifier in the background when an ephemeral event is handled
@ -203,7 +203,9 @@ class ApplicationServicesHandler:
value for `stream_key` will cause this function to return early.
Ephemeral events will only be pushed to appservices that have opted into
them.
receiving them by setting `push_ephemeral` to true in their registration
file. Note that while MSC2409 is experimental, this option is called
`de.sorunome.msc2409.push_ephemeral`.
Appservices will only receive ephemeral events that fall within their
registered user and room namespaces.
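As a sketch, an appservice registration that opts into ephemeral events might parse to something like the following dict while MSC2409 remains experimental (all values invented; only the flag name comes from the docstring above):

    registration = {
        "id": "example-bridge",
        "url": "http://localhost:9000",
        "de.sorunome.msc2409.push_ephemeral": True,  # opt in to ephemeral events
        "namespaces": {
            "users": [{"regex": "@bridge_.*:example.org", "exclusive": True}],
            "rooms": [],
            "aliases": [],
        },
    }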
@ -214,6 +216,7 @@ class ApplicationServicesHandler:
if not self.notify_appservices:
return
# Ignore any unsupported streams
if stream_key not in ("typing_key", "receipt_key", "presence_key"):
return
@ -230,18 +233,25 @@ class ApplicationServicesHandler:
# Additional context: https://github.com/matrix-org/synapse/pull/11137
assert isinstance(new_token, int)
# Check whether there are any appservices which have registered to receive
# ephemeral events.
#
# Note that whether these events are actually relevant to these appservices
# is decided later on.
services = [
service
for service in self.store.get_app_services()
if service.supports_ephemeral
]
if not services:
# Bail out early if none of the target appservices have explicitly registered
# to receive these ephemeral events.
return
# We only start a new background process if necessary rather than
# optimistically (to cut down on overhead).
self._notify_interested_services_ephemeral(
services, stream_key, new_token, users or []
services, stream_key, new_token, users
)
@wrap_as_background_process("notify_interested_services_ephemeral")
@ -252,7 +262,7 @@ class ApplicationServicesHandler:
new_token: int,
users: Collection[Union[str, UserID]],
) -> None:
logger.debug("Checking interested services for %s" % (stream_key))
logger.debug("Checking interested services for %s", stream_key)
with Measure(self.clock, "notify_interested_services_ephemeral"):
for service in services:
if stream_key == "typing_key":
@ -345,6 +355,9 @@ class ApplicationServicesHandler:
Args:
service: The application service to check for which events it should receive.
new_token: A receipts event stream token. Purely used to double-check that the
from_token we pull from the database isn't greater than or equal to this
token. Prevents accidentally duplicating work.
Returns:
A list of JSON dictionaries containing data derived from the read receipts that
@ -382,6 +395,9 @@ class ApplicationServicesHandler:
Args:
service: The application service that ephemeral events are being sent to.
users: The users that should receive the presence update.
new_token: A presence update stream token. Purely used to double-check that the
from_token we pull from the database isn't greater than or equal to this
token. Prevents accidentally duplicating work.
Returns:
A list of JSON dictionaries containing data derived from the presence events

View file

@ -89,6 +89,13 @@ class DeviceMessageHandler:
)
async def on_direct_to_device_edu(self, origin: str, content: JsonDict) -> None:
"""
Handle receiving to-device messages from remote homeservers.
Args:
origin: The remote homeserver.
content: The JSON dictionary containing the to-device messages.
"""
local_messages = {}
sender_user_id = content["sender"]
if origin != get_domain_from_id(sender_user_id):
@ -135,12 +142,16 @@ class DeviceMessageHandler:
message_type, sender_user_id, by_device
)
stream_id = await self.store.add_messages_from_remote_to_device_inbox(
# Add messages to the database.
# Retrieve the stream id of the last-processed to-device message.
last_stream_id = await self.store.add_messages_from_remote_to_device_inbox(
origin, message_id, local_messages
)
# Notify listeners that there are new to-device messages to process,
# handing them the latest stream id.
self.notifier.on_new_event(
"to_device_key", stream_id, users=local_messages.keys()
"to_device_key", last_stream_id, users=local_messages.keys()
)
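To illustrate why the last-processed stream id is handed to listeners, a sketch with invented names: a woken consumer reads everything between its last-seen token and the token it was notified with, then resumes from there:

    async def handle_wakeup(inbox, user_id, last_seen_token, notified_token):
        # Drain the to-device inbox up to the token from the notification.
        new_messages = await inbox.fetch(user_id, last_seen_token, notified_token)
        return new_messages, notified_token  # the next sync resumes from here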
async def _check_for_unknown_devices(
@ -195,6 +206,14 @@ class DeviceMessageHandler:
message_type: str,
messages: Dict[str, Dict[str, JsonDict]],
) -> None:
"""
Handle a request from a user to send to-device message(s).
Args:
requester: The user that is sending the to-device messages.
message_type: The type of to-device messages that are being sent.
messages: A dictionary containing recipients mapped to messages intended for them.
"""
sender_user_id = requester.user.to_string()
message_id = random_string(16)
@ -257,12 +276,16 @@ class DeviceMessageHandler:
"org.matrix.opentracing_context": json_encoder.encode(context),
}
stream_id = await self.store.add_messages_to_device_inbox(
# Add messages to the database.
# Retrieve the stream id of the last-processed to-device message.
last_stream_id = await self.store.add_messages_to_device_inbox(
local_messages, remote_edu_contents
)
# Notify listeners that there are new to-device messages to process,
# handing them the latest stream id.
self.notifier.on_new_event(
"to_device_key", stream_id, users=local_messages.keys()
"to_device_key", last_stream_id, users=local_messages.keys()
)
if self.federation_sender:

View file

@ -981,8 +981,6 @@ class FederationEventHandler:
origin,
event,
context,
state=state,
backfilled=backfilled,
)
except AuthError as e:
# FIXME richvdh 2021/10/07 I don't think this is reachable. Let's log it
@ -1332,8 +1330,6 @@ class FederationEventHandler:
origin: str,
event: EventBase,
context: EventContext,
state: Optional[Iterable[EventBase]] = None,
backfilled: bool = False,
) -> EventContext:
"""
Checks whether an event should be rejected (for failing auth checks).
@ -1344,12 +1340,6 @@ class FederationEventHandler:
context:
The event context.
state:
The state events used to check the event for soft-fail. If this is
not provided the current state events will be used.
backfilled: True if the event was backfilled.
Returns:
The updated context object.

View file

@ -424,7 +424,7 @@ class PaginationHandler:
if events:
if event_filter:
events = event_filter.filter(events)
events = await event_filter.filter(events)
events = await filter_events_for_client(
self.storage, user_id, events, is_peeking=(member_event_id is None)

View file

@ -12,8 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
"""Contains functions for performing events on rooms."""
"""Contains functions for performing actions on rooms."""
import itertools
import logging
import math
@ -31,6 +30,8 @@ from typing import (
Tuple,
)
from typing_extensions import TypedDict
from synapse.api.constants import (
EventContentFields,
EventTypes,
@ -1158,8 +1159,10 @@ class RoomContextHandler:
)
if event_filter:
results["events_before"] = event_filter.filter(results["events_before"])
results["events_after"] = event_filter.filter(results["events_after"])
results["events_before"] = await event_filter.filter(
results["events_before"]
)
results["events_after"] = await event_filter.filter(results["events_after"])
results["events_before"] = await filter_evts(results["events_before"])
results["events_after"] = await filter_evts(results["events_after"])
@ -1195,7 +1198,7 @@ class RoomContextHandler:
state_events = list(state[last_event_id].values())
if event_filter:
state_events = event_filter.filter(state_events)
state_events = await event_filter.filter(state_events)
results["state"] = await filter_evts(state_events)
@ -1275,6 +1278,13 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
return self.store.get_room_events_max_id(room_id)
class ShutdownRoomResponse(TypedDict):
kicked_users: List[str]
failed_to_kick_users: List[str]
local_aliases: List[str]
new_room_id: Optional[str]
class RoomShutdownHandler:
DEFAULT_MESSAGE = (
@ -1300,7 +1310,7 @@ class RoomShutdownHandler:
new_room_name: Optional[str] = None,
message: Optional[str] = None,
block: bool = False,
) -> dict:
) -> ShutdownRoomResponse:
"""
Shuts down a room. Moves all local users and room aliases automatically
to a new room if `new_room_user_id` is set. Otherwise local users only
@ -1334,8 +1344,13 @@ class RoomShutdownHandler:
Defaults to `Sharing illegal content on this server is not
permitted and rooms in violation will be blocked.`
block:
If set to `true`, this room will be added to a blocking list,
preventing future attempts to join the room. Defaults to `false`.
If set to `True`, users will be prevented from joining the old
room. This option can also be used to pre-emptively block a room,
even if it's unknown to this homeserver. In this case, the room
will be blocked, and no further action will be taken. If `False`,
attempting to delete an unknown room is invalid.
Defaults to `False`.
Returns: a dict containing the following keys:
kicked_users: An array of users (`user_id`) that were kicked.
@ -1344,7 +1359,9 @@ class RoomShutdownHandler:
local_aliases:
An array of strings representing the local aliases that were
migrated from the old room to the new.
new_room_id: A string representing the room ID of the new room.
new_room_id:
A string representing the room ID of the new room, or None if
no such room was created.
"""
if not new_room_name:
@ -1355,14 +1372,28 @@ class RoomShutdownHandler:
if not RoomID.is_valid(room_id):
raise SynapseError(400, "%s is not a legal room ID" % (room_id,))
if not await self.store.get_room(room_id):
raise NotFoundError("Unknown room id %s" % (room_id,))
# This will work even if the room is already blocked, but that is
# desirable in case the first attempt at blocking the room failed below.
# Action the block first (even if the room doesn't exist yet)
if block:
# This will work even if the room is already blocked, but that is
# desirable in case the first attempt at blocking the room failed below.
await self.store.block_room(room_id, requester_user_id)
if not await self.store.get_room(room_id):
if block:
# We allow you to block an unknown room.
return {
"kicked_users": [],
"failed_to_kick_users": [],
"local_aliases": [],
"new_room_id": None,
}
else:
# But if you don't want to preventatively block another room,
# this function can't do anything useful.
raise NotFoundError(
"Cannot shut down room: unknown room id %s" % (room_id,)
)
if new_room_user_id is not None:
if not self.hs.is_mine_id(new_room_user_id):
raise SynapseError(
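A hypothetical client-side sketch of pre-emptively blocking a room the homeserver has never seen, using the admin API exercised by the tests further down (host and token invented):

    import requests

    resp = requests.delete(
        "https://synapse.example.org/_synapse/admin/v1/rooms/!unknown:test",
        headers={"Authorization": "Bearer <admin_access_token>"},
        json={"block": True},
    )
    # Expected body for an unknown room:
    # {"kicked_users": [], "failed_to_kick_users": [],
    #  "local_aliases": [], "new_room_id": null}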

View file

@ -180,7 +180,7 @@ class SearchHandler:
% (set(group_keys) - {"room_id", "sender"},),
)
search_filter = Filter(filter_dict)
search_filter = Filter(self.hs, filter_dict)
# TODO: Search through left rooms too
rooms = await self.store.get_rooms_for_local_user_where_membership_is(
@ -242,7 +242,7 @@ class SearchHandler:
rank_map.update({r["event"].event_id: r["rank"] for r in results})
filtered_events = search_filter.filter([r["event"] for r in results])
filtered_events = await search_filter.filter([r["event"] for r in results])
events = await filter_events_for_client(
self.storage, user.to_string(), filtered_events
@ -292,7 +292,9 @@ class SearchHandler:
rank_map.update({r["event"].event_id: r["rank"] for r in results})
filtered_events = search_filter.filter([r["event"] for r in results])
filtered_events = await search_filter.filter(
[r["event"] for r in results]
)
events = await filter_events_for_client(
self.storage, user.to_string(), filtered_events

View file

@ -510,7 +510,7 @@ class SyncHandler:
log_kv({"limited": limited})
if potential_recents:
recents = sync_config.filter_collection.filter_room_timeline(
recents = await sync_config.filter_collection.filter_room_timeline(
potential_recents
)
log_kv({"recents_after_sync_filtering": len(recents)})
@ -575,8 +575,8 @@ class SyncHandler:
log_kv({"loaded_recents": len(events)})
loaded_recents = sync_config.filter_collection.filter_room_timeline(
events
loaded_recents = (
await sync_config.filter_collection.filter_room_timeline(events)
)
log_kv({"loaded_recents_after_sync_filtering": len(loaded_recents)})
@ -1015,7 +1015,7 @@ class SyncHandler:
return {
(e.type, e.state_key): e
for e in sync_config.filter_collection.filter_room_state(
for e in await sync_config.filter_collection.filter_room_state(
list(state.values())
)
if e.type != EventTypes.Aliases # until MSC2261 or alternative solution
@ -1383,7 +1383,7 @@ class SyncHandler:
sync_config.user
)
account_data_for_user = sync_config.filter_collection.filter_account_data(
account_data_for_user = await sync_config.filter_collection.filter_account_data(
[
{"type": account_data_type, "content": content}
for account_data_type, content in account_data.items()
@ -1448,7 +1448,7 @@ class SyncHandler:
# Deduplicate the presence entries so that there's at most one per user
presence = list({p.user_id: p for p in presence}.values())
presence = sync_config.filter_collection.filter_presence(presence)
presence = await sync_config.filter_collection.filter_presence(presence)
sync_result_builder.presence = presence
@ -2021,12 +2021,14 @@ class SyncHandler:
)
account_data_events = (
sync_config.filter_collection.filter_room_account_data(
await sync_config.filter_collection.filter_room_account_data(
account_data_events
)
)
ephemeral = sync_config.filter_collection.filter_room_ephemeral(ephemeral)
ephemeral = await sync_config.filter_collection.filter_room_ephemeral(
ephemeral
)
if not (
always_include

View file

@ -3,7 +3,7 @@ import time
from logging import Handler, LogRecord
from logging.handlers import MemoryHandler
from threading import Thread
from typing import Optional
from typing import Optional, cast
from twisted.internet.interfaces import IReactorCore
@ -56,7 +56,7 @@ class PeriodicallyFlushingMemoryHandler(MemoryHandler):
if reactor is None:
from twisted.internet import reactor as global_reactor
reactor_to_use = global_reactor # type: ignore[assignment]
reactor_to_use = cast(IReactorCore, global_reactor)
else:
reactor_to_use = reactor

View file

@ -31,7 +31,7 @@ import attr
import jinja2
from twisted.internet import defer
from twisted.web.resource import IResource
from twisted.web.resource import Resource
from synapse.api.errors import SynapseError
from synapse.events import EventBase
@ -196,7 +196,7 @@ class ModuleApi:
"""
return self._password_auth_provider.register_password_auth_provider_callbacks
def register_web_resource(self, path: str, resource: IResource):
def register_web_resource(self, path: str, resource: Resource):
"""Registers a web resource to be served at the given path.
This function should be called during initialisation of the module.

View file

@ -20,7 +20,7 @@ from typing import TYPE_CHECKING
from prometheus_client import Counter
from twisted.internet.protocol import Factory
from twisted.internet.protocol import ServerFactory
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.tcp.commands import PositionCommand
@ -38,7 +38,7 @@ stream_updates_counter = Counter(
logger = logging.getLogger(__name__)
class ReplicationStreamProtocolFactory(Factory):
class ReplicationStreamProtocolFactory(ServerFactory):
"""Factory for new replication connections."""
def __init__(self, hs: "HomeServer"):

View file

@ -13,7 +13,7 @@
# limitations under the License.
import logging
from http import HTTPStatus
from typing import TYPE_CHECKING, List, Optional, Tuple
from typing import TYPE_CHECKING, List, Optional, Tuple, cast
from urllib import parse as urlparse
from synapse.api.constants import EventTypes, JoinRules, Membership
@ -239,9 +239,22 @@ class RoomRestServlet(RestServlet):
# Purge room
if purge:
await pagination_handler.purge_room(room_id, force=force_purge)
try:
await pagination_handler.purge_room(room_id, force=force_purge)
except NotFoundError:
if block:
# We can block unknown rooms with this endpoint, in which case
# a failed purge is expected.
pass
else:
# But otherwise, we expect this purge to have succeeded.
raise
return 200, ret
# Cast safety: cast away the knowledge that this is a TypedDict.
# See https://github.com/python/mypy/issues/4976#issuecomment-579883622
# for some discussion on why this is necessary. Either way,
# `ret` is an opaque dictionary blob as far as the rest of the app cares.
return 200, cast(JsonDict, ret)
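A minimal sketch of the mypy limitation being worked around here: a TypedDict is not assignable to `Dict[str, Any]`, so a runtime no-op `cast` is needed to satisfy the checker:

    from typing import Any, Dict, TypedDict, cast

    class Response(TypedDict):
        new_room_id: str

    def as_json(ret: Response) -> Dict[str, Any]:
        # Returning `ret` directly would be rejected by mypy;
        # cast() does nothing at runtime.
        return cast(Dict[str, Any], ret)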
class RoomMembersRestServlet(RestServlet):
@ -583,6 +596,7 @@ class RoomEventContextServlet(RestServlet):
def __init__(self, hs: "HomeServer"):
super().__init__()
self._hs = hs
self.clock = hs.get_clock()
self.room_context_handler = hs.get_room_context_handler()
self._event_serializer = hs.get_event_client_serializer()
@ -600,7 +614,9 @@ class RoomEventContextServlet(RestServlet):
filter_str = parse_string(request, "filter", encoding="utf-8")
if filter_str:
filter_json = urlparse.unquote(filter_str)
event_filter: Optional[Filter] = Filter(json_decoder.decode(filter_json))
event_filter: Optional[Filter] = Filter(
self._hs, json_decoder.decode(filter_json)
)
else:
event_filter = None

View file

@ -298,7 +298,9 @@ class RelationAggregationPaginationServlet(RestServlet):
raise SynapseError(404, "Unknown parent event.")
if relation_type not in (RelationTypes.ANNOTATION, None):
raise SynapseError(400, "Relation type must be 'annotation'")
raise SynapseError(
400, f"Relation type must be '{RelationTypes.ANNOTATION}'"
)
limit = parse_integer(request, "limit", default=5)
from_token_str = parse_string(request, "from")

View file

@ -550,6 +550,7 @@ class RoomMessageListRestServlet(RestServlet):
def __init__(self, hs: "HomeServer"):
super().__init__()
self._hs = hs
self.pagination_handler = hs.get_pagination_handler()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
@ -567,7 +568,9 @@ class RoomMessageListRestServlet(RestServlet):
filter_str = parse_string(request, "filter", encoding="utf-8")
if filter_str:
filter_json = urlparse.unquote(filter_str)
event_filter: Optional[Filter] = Filter(json_decoder.decode(filter_json))
event_filter: Optional[Filter] = Filter(
self._hs, json_decoder.decode(filter_json)
)
if (
event_filter
and event_filter.filter_json.get("event_format", "client")
@ -672,6 +675,7 @@ class RoomEventContextServlet(RestServlet):
def __init__(self, hs: "HomeServer"):
super().__init__()
self._hs = hs
self.clock = hs.get_clock()
self.room_context_handler = hs.get_room_context_handler()
self._event_serializer = hs.get_event_client_serializer()
@ -688,7 +692,9 @@ class RoomEventContextServlet(RestServlet):
filter_str = parse_string(request, "filter", encoding="utf-8")
if filter_str:
filter_json = urlparse.unquote(filter_str)
event_filter: Optional[Filter] = Filter(json_decoder.decode(filter_json))
event_filter: Optional[Filter] = Filter(
self._hs, json_decoder.decode(filter_json)
)
else:
event_filter = None

View file

@ -29,7 +29,7 @@ from typing import (
from synapse.api.constants import Membership, PresenceState
from synapse.api.errors import Codes, StoreError, SynapseError
from synapse.api.filtering import DEFAULT_FILTER_COLLECTION, FilterCollection
from synapse.api.filtering import FilterCollection
from synapse.api.presence import UserPresenceState
from synapse.events import EventBase
from synapse.events.utils import (
@ -150,7 +150,7 @@ class SyncRestServlet(RestServlet):
request_key = (user, timeout, since, filter_id, full_state, device_id)
if filter_id is None:
filter_collection = DEFAULT_FILTER_COLLECTION
filter_collection = self.filtering.DEFAULT_FILTER_COLLECTION
elif filter_id.startswith("{"):
try:
filter_object = json_decoder.decode(filter_id)
@ -160,7 +160,7 @@ class SyncRestServlet(RestServlet):
except Exception:
raise SynapseError(400, "Invalid filter JSON")
self.filtering.check_valid_filter(filter_object)
filter_collection = FilterCollection(filter_object)
filter_collection = FilterCollection(self.hs, filter_object)
else:
try:
filter_collection = await self.filtering.get_user_filter(

View file

@ -101,8 +101,8 @@ class Thumbnailer:
fits within the given rectangle::
(w_in / h_in) = (w_out / h_out)
w_out = min(w_max, h_max * (w_in / h_in))
h_out = min(h_max, w_max * (h_in / w_in))
w_out = max(min(w_max, h_max * (w_in / h_in)), 1)
h_out = max(min(h_max, w_max * (h_in / w_in)), 1)
Args:
max_width: The largest possible width.
@ -110,9 +110,9 @@ class Thumbnailer:
"""
if max_width * self.height < max_height * self.width:
return max_width, (max_width * self.height) // self.width
return max_width, max((max_width * self.height) // self.width, 1)
else:
return (max_height * self.width) // self.height, max_height
return max((max_height * self.width) // self.height, 1), max_height
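A worked example of the clamped fit: scaling a 1000x1 source into a 32x32 box used to produce a zero-pixel height, which the `max(..., 1)` now prevents. A standalone restatement of the method above:

    def aspect(max_width, max_height, width, height):
        # Fit (width, height) into (max_width, max_height), never
        # returning a dimension smaller than one pixel.
        if max_width * height < max_height * width:
            return max_width, max((max_width * height) // width, 1)
        else:
            return max((max_height * width) // height, 1), max_height

    print(aspect(32, 32, 1000, 1))  # (32, 1), previously (32, 0)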
def _resize(self, width: int, height: int) -> Image.Image:
# 1-bit or 8-bit color palette images need converting to RGB

View file

@ -33,9 +33,10 @@ from typing import (
cast,
)
import twisted.internet.tcp
from twisted.internet.interfaces import IOpenSSLContextFactory
from twisted.internet.tcp import Port
from twisted.web.iweb import IPolicyForHTTPS
from twisted.web.resource import IResource
from twisted.web.resource import Resource
from synapse.api.auth import Auth
from synapse.api.filtering import Filtering
@ -206,7 +207,7 @@ class HomeServer(metaclass=abc.ABCMeta):
Attributes:
config (synapse.config.homeserver.HomeserverConfig):
_listening_services (list[twisted.internet.tcp.Port]): TCP ports that
_listening_services (list[Port]): TCP ports that
we are listening on to provide HTTP services.
"""
@ -225,6 +226,8 @@ class HomeServer(metaclass=abc.ABCMeta):
# instantiated during setup() for future return by get_datastore()
DATASTORE_CLASS = abc.abstractproperty()
tls_server_context_factory: Optional[IOpenSSLContextFactory]
def __init__(
self,
hostname: str,
@ -247,7 +250,7 @@ class HomeServer(metaclass=abc.ABCMeta):
# the key we use to sign events and requests
self.signing_key = config.key.signing_key[0]
self.config = config
self._listening_services: List[twisted.internet.tcp.Port] = []
self._listening_services: List[Port] = []
self.start_time: Optional[int] = None
self._instance_id = random_string(5)
@ -257,10 +260,10 @@ class HomeServer(metaclass=abc.ABCMeta):
self.datastores: Optional[Databases] = None
self._module_web_resources: Dict[str, IResource] = {}
self._module_web_resources: Dict[str, Resource] = {}
self._module_web_resources_consumed = False
def register_module_web_resource(self, path: str, resource: IResource):
def register_module_web_resource(self, path: str, resource: Resource):
"""Allows a module to register a web resource to be served at the given path.
If multiple modules register a resource for the same path, the module that

View file

@ -412,16 +412,16 @@ class ApplicationServiceTransactionWorkerStore(
)
async def set_type_stream_id_for_appservice(
self, service: ApplicationService, type: str, pos: Optional[int]
self, service: ApplicationService, stream_type: str, pos: Optional[int]
) -> None:
if type not in ("read_receipt", "presence"):
if stream_type not in ("read_receipt", "presence"):
raise ValueError(
"Expected type to be a valid application stream id type, got %s"
% (type,)
% (stream_type,)
)
def set_type_stream_id_for_appservice_txn(txn):
stream_id_type = "%s_stream_id" % type
stream_id_type = "%s_stream_id" % stream_type
txn.execute(
"UPDATE application_services_state SET %s = ? WHERE as_id=?"
% stream_id_type,

View file

@ -134,7 +134,10 @@ class DeviceInboxWorkerStore(SQLBaseStore):
limit: The maximum number of messages to retrieve.
Returns:
A list of messages for the device and where in the stream the messages got to.
A tuple containing:
* A list of messages for the device.
* The max stream token of these messages. There may be more to retrieve
if the given limit was reached.
"""
has_changed = self._device_inbox_stream_cache.has_entity_changed(
user_id, last_stream_id
@ -153,12 +156,19 @@ class DeviceInboxWorkerStore(SQLBaseStore):
txn.execute(
sql, (user_id, device_id, last_stream_id, current_stream_id, limit)
)
messages = []
stream_pos = current_stream_id
for row in txn:
stream_pos = row[0]
messages.append(db_to_json(row[1]))
# If the limit was not reached we know that there's no more data for this
# user/device pair up to current_stream_id.
if len(messages) < limit:
stream_pos = current_stream_id
return messages, stream_pos
return await self.db_pool.runInteraction(
@ -260,13 +270,20 @@ class DeviceInboxWorkerStore(SQLBaseStore):
" LIMIT ?"
)
txn.execute(sql, (destination, last_stream_id, current_stream_id, limit))
messages = []
stream_pos = current_stream_id
for row in txn:
stream_pos = row[0]
messages.append(db_to_json(row[1]))
# If the limit was not reached we know that there's no more data for this
# destination up to current_stream_id.
if len(messages) < limit:
log_kv({"message": "Set stream position to current position"})
stream_pos = current_stream_id
return messages, stream_pos
return await self.db_pool.runInteraction(
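A worked example of the stream-position logic (values invented): with `limit=3` but only two rows returned, the caller can safely advance all the way to `current_stream_id`:

    limit = 3
    rows = [(105, '{"m": 1}'), (107, '{"m": 2}')]  # (stream_id, json) pairs
    current_stream_id = 110

    stream_pos = current_stream_id
    messages = []
    for stream_id, message_json in rows:
        stream_pos = stream_id  # 107 after the loop
        messages.append(message_json)
    if len(messages) < limit:
        stream_pos = current_stream_id  # fewer than limit: nothing left; 110
    print(stream_pos)  # 110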
@ -372,8 +389,8 @@ class DeviceInboxWorkerStore(SQLBaseStore):
"""Used to send messages from this server.
Args:
local_messages_by_user_and_device:
Dictionary of user_id to device_id to message.
local_messages_by_user_then_device:
Dictionary of recipient user_id to recipient device_id to message.
remote_messages_by_destination:
Dictionary of destination server_name to the EDU JSON to send.

View file

@ -20,7 +20,7 @@ import attr
from synapse.api.constants import RelationTypes
from synapse.events import EventBase
from synapse.storage._base import SQLBaseStore
from synapse.storage.database import LoggingTransaction
from synapse.storage.database import LoggingTransaction, make_in_list_sql_clause
from synapse.storage.databases.main.stream import generate_pagination_where_clause
from synapse.storage.relations import (
AggregationPaginationToken,
@ -334,6 +334,62 @@ class RelationsWorkerStore(SQLBaseStore):
return count, latest_event
async def events_have_relations(
self,
parent_ids: List[str],
relation_senders: Optional[List[str]],
relation_types: Optional[List[str]],
) -> List[str]:
"""Check which events have a relationship from the given senders of the
given types.
Args:
parent_ids: The events being annotated
relation_senders: The relation senders to check.
relation_types: The relation types to check.
Returns:
The given parent event IDs that have at least one relationship from one of the given senders of the given types.
"""
# If no restrictions are given then the event has the required relations.
if not relation_senders and not relation_types:
return parent_ids
sql = """
SELECT relates_to_id FROM event_relations
INNER JOIN events USING (event_id)
WHERE
%s;
"""
def _get_if_event_has_relations(txn) -> List[str]:
clauses: List[str] = []
clause, args = make_in_list_sql_clause(
txn.database_engine, "relates_to_id", parent_ids
)
clauses.append(clause)
if relation_senders:
clause, temp_args = make_in_list_sql_clause(
txn.database_engine, "sender", relation_senders
)
clauses.append(clause)
args.extend(temp_args)
if relation_types:
clause, temp_args = make_in_list_sql_clause(
txn.database_engine, "relation_type", relation_types
)
clauses.append(clause)
args.extend(temp_args)
txn.execute(sql % " AND ".join(clauses), args)
return [row[0] for row in txn]
return await self.db_pool.runInteraction(
"get_if_event_has_relations", _get_if_event_has_relations
)
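For illustration, a simplified stand-in for `make_in_list_sql_clause` on an engine without native array support, showing the shape of the clauses and arguments that get AND-ed together above:

    def in_clause(column, values):
        # Simplified sketch only; the real helper is engine-aware.
        return "%s IN (%s)" % (column, ",".join("?" for _ in values)), list(values)

    clause, args = in_clause("relates_to_id", ["$a:x", "$b:x"])
    print(clause, args)  # relates_to_id IN (?,?) ['$a:x', '$b:x']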
async def has_user_annotated_event(
self, parent_id: str, event_type: str, aggregation_key: str, sender: str
) -> bool:

View file

@ -1751,7 +1751,12 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore, SearchStore):
)
async def block_room(self, room_id: str, user_id: str) -> None:
"""Marks the room as blocked. Can be called multiple times.
"""Marks the room as blocked.
Can be called multiple times (though we'll only track the last user to
block this room).
Can be called on a room unknown to this homeserver.
Args:
room_id: Room to block

View file

@ -272,31 +272,37 @@ def filter_to_clause(event_filter: Optional[Filter]) -> Tuple[str, List[str]]:
args = []
if event_filter.types:
clauses.append("(%s)" % " OR ".join("type = ?" for _ in event_filter.types))
clauses.append(
"(%s)" % " OR ".join("event.type = ?" for _ in event_filter.types)
)
args.extend(event_filter.types)
for typ in event_filter.not_types:
clauses.append("type != ?")
clauses.append("event.type != ?")
args.append(typ)
if event_filter.senders:
clauses.append("(%s)" % " OR ".join("sender = ?" for _ in event_filter.senders))
clauses.append(
"(%s)" % " OR ".join("event.sender = ?" for _ in event_filter.senders)
)
args.extend(event_filter.senders)
for sender in event_filter.not_senders:
clauses.append("sender != ?")
clauses.append("event.sender != ?")
args.append(sender)
if event_filter.rooms:
clauses.append("(%s)" % " OR ".join("room_id = ?" for _ in event_filter.rooms))
clauses.append(
"(%s)" % " OR ".join("event.room_id = ?" for _ in event_filter.rooms)
)
args.extend(event_filter.rooms)
for room_id in event_filter.not_rooms:
clauses.append("room_id != ?")
clauses.append("event.room_id != ?")
args.append(room_id)
if event_filter.contains_url:
clauses.append("contains_url = ?")
clauses.append("event.contains_url = ?")
args.append(event_filter.contains_url)
# We're only applying the "labels" filter on the database query, because applying the
@ -307,6 +313,23 @@ def filter_to_clause(event_filter: Optional[Filter]) -> Tuple[str, List[str]]:
clauses.append("(%s)" % " OR ".join("label = ?" for _ in event_filter.labels))
args.extend(event_filter.labels)
# Filter on relation_senders / relation types from the joined tables.
if event_filter.relation_senders:
clauses.append(
"(%s)"
% " OR ".join(
"related_event.sender = ?" for _ in event_filter.relation_senders
)
)
args.extend(event_filter.relation_senders)
if event_filter.relation_types:
clauses.append(
"(%s)"
% " OR ".join("relation_type = ?" for _ in event_filter.relation_types)
)
args.extend(event_filter.relation_types)
return " AND ".join(clauses), args
@ -1116,7 +1139,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore, metaclass=abc.ABCMeta):
bounds = generate_pagination_where_clause(
direction=direction,
column_names=("topological_ordering", "stream_ordering"),
column_names=("event.topological_ordering", "event.stream_ordering"),
from_token=from_bound,
to_token=to_bound,
engine=self.database_engine,
@ -1133,32 +1156,51 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore, metaclass=abc.ABCMeta):
select_keywords = "SELECT"
join_clause = ""
# Using DISTINCT in this SELECT query is quite expensive, because it
# requires the engine to sort on the entire (not limited) result set,
# i.e. the entire events table. Only use it in scenarios that could result
# in the same event ID occurring multiple times in the results.
needs_distinct = False
if event_filter and event_filter.labels:
# If we're not filtering on a label, then joining on event_labels will
# return as many rows for a single event as the number of labels it has. To
# avoid this, only join if we're filtering on at least one label.
join_clause = """
join_clause += """
LEFT JOIN event_labels
USING (event_id, room_id, topological_ordering)
"""
if len(event_filter.labels) > 1:
# Using DISTINCT in this SELECT query is quite expensive, because it
# requires the engine to sort on the entire (not limited) result set,
# i.e. the entire events table. We only need to use it when we're
# filtering on more than two labels, because that's the only scenario
# in which we can possibly to get multiple times the same event ID in
# the results.
select_keywords += "DISTINCT"
# Multiple labels could cause the same event to appear multiple times.
needs_distinct = True
# If there is a filter on relation_senders and relation_types join to the
# relations table.
if event_filter and (
event_filter.relation_senders or event_filter.relation_types
):
# Filtering by relations could cause the same event to appear multiple
# times (since there's no limit on the number of relations to an event).
needs_distinct = True
join_clause += """
LEFT JOIN event_relations AS relation ON (event.event_id = relation.relates_to_id)
"""
if event_filter.relation_senders:
join_clause += """
LEFT JOIN events AS related_event ON (relation.event_id = related_event.event_id)
"""
if needs_distinct:
select_keywords += " DISTINCT"
sql = """
%(select_keywords)s
event_id, instance_name,
topological_ordering, stream_ordering
FROM events
event.event_id, event.instance_name,
event.topological_ordering, event.stream_ordering
FROM events AS event
%(join_clause)s
WHERE outlier = ? AND room_id = ? AND %(bounds)s
ORDER BY topological_ordering %(order)s,
stream_ordering %(order)s LIMIT ?
WHERE event.outlier = ? AND event.room_id = ? AND %(bounds)s
ORDER BY event.topological_ordering %(order)s,
event.stream_ordering %(order)s LIMIT ?
""" % {
"select_keywords": select_keywords,
"join_clause": join_clause,

View file

@ -38,6 +38,7 @@ from zope.interface import Interface
from twisted.internet.interfaces import (
IReactorCore,
IReactorPluggableNameResolver,
IReactorSSL,
IReactorTCP,
IReactorThreads,
IReactorTime,
@ -66,6 +67,7 @@ JsonDict = Dict[str, Any]
# for mypy-zope to realize it is an interface.
class ISynapseReactor(
IReactorTCP,
IReactorSSL,
IReactorPluggableNameResolver,
IReactorTime,
IReactorCore,

View file

@ -31,13 +31,13 @@ from typing import (
Set,
TypeVar,
Union,
cast,
)
import attr
from typing_extensions import ContextManager
from twisted.internet import defer
from twisted.internet.base import ReactorBase
from twisted.internet.defer import CancelledError
from twisted.internet.interfaces import IReactorTime
from twisted.python import failure
@ -271,8 +271,7 @@ class Linearizer:
if not clock:
from twisted.internet import reactor
assert isinstance(reactor, ReactorBase)
clock = Clock(reactor)
clock = Clock(cast(IReactorTime, reactor))
self._clock = clock
self.max_count = max_count

View file

@ -92,9 +92,9 @@ def _resource_id(resource: Resource, path_seg: bytes) -> str:
the mapping should look like _resource_id(A,C) = B.
Args:
resource (Resource): The *parent* Resourceb
path_seg (str): The name of the child Resource to be attached.
resource: The *parent* Resource.
path_seg: The name of the child Resource to be attached.
Returns:
str: A unique string which can be a key to the child Resource.
A unique string which can be a key to the child Resource.
"""
return "%s-%r" % (resource, path_seg)

View file

@ -23,7 +23,7 @@ from twisted.conch.manhole import ColoredManhole, ManholeInterpreter
from twisted.conch.ssh.keys import Key
from twisted.cred import checkers, portal
from twisted.internet import defer
from twisted.internet.protocol import Factory
from twisted.internet.protocol import ServerFactory
from synapse.config.server import ManholeConfig
@ -65,7 +65,7 @@ EddTrx3TNpr1D5m/f+6mnXWrc8u9y1+GNx9yz889xMjIBTBI9KqaaOs=
-----END RSA PRIVATE KEY-----"""
def manhole(settings: ManholeConfig, globals: Dict[str, Any]) -> Factory:
def manhole(settings: ManholeConfig, globals: Dict[str, Any]) -> ServerFactory:
"""Starts a ssh listener with password authentication using
the given username and password. Clients connecting to the ssh
listener will find themselves in a colored python shell with
@ -105,7 +105,8 @@ def manhole(settings: ManholeConfig, globals: Dict[str, Any]) -> Factory:
factory.privateKeys[b"ssh-rsa"] = priv_key # type: ignore[assignment]
factory.publicKeys[b"ssh-rsa"] = pub_key # type: ignore[assignment]
return factory
# ConchFactory is a Factory, not a ServerFactory, but they are identical.
return factory # type: ignore[return-value]
class SynapseManhole(ColoredManhole):

View file

@ -15,6 +15,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest.mock import patch
import jsonschema
from synapse.api.constants import EventContentFields
@ -51,9 +53,8 @@ class FilteringTestCase(unittest.HomeserverTestCase):
{"presence": {"senders": ["@bar;pik.test.com"]}},
]
for filter in invalid_filters:
with self.assertRaises(SynapseError) as check_filter_error:
with self.assertRaises(SynapseError):
self.filtering.check_valid_filter(filter)
self.assertIsInstance(check_filter_error.exception, SynapseError)
def test_valid_filters(self):
valid_filters = [
@ -119,12 +120,12 @@ class FilteringTestCase(unittest.HomeserverTestCase):
definition = {"types": ["m.room.message", "org.matrix.foo.bar"]}
event = MockEvent(sender="@foo:bar", type="m.room.message", room_id="!foo:bar")
self.assertTrue(Filter(definition).check(event))
self.assertTrue(Filter(self.hs, definition)._check(event))
def test_definition_types_works_with_wildcards(self):
definition = {"types": ["m.*", "org.matrix.foo.bar"]}
event = MockEvent(sender="@foo:bar", type="m.room.message", room_id="!foo:bar")
self.assertTrue(Filter(definition).check(event))
self.assertTrue(Filter(self.hs, definition)._check(event))
def test_definition_types_works_with_unknowns(self):
definition = {"types": ["m.room.message", "org.matrix.foo.bar"]}
@ -133,24 +134,24 @@ class FilteringTestCase(unittest.HomeserverTestCase):
type="now.for.something.completely.different",
room_id="!foo:bar",
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_not_types_works_with_literals(self):
definition = {"not_types": ["m.room.message", "org.matrix.foo.bar"]}
event = MockEvent(sender="@foo:bar", type="m.room.message", room_id="!foo:bar")
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_not_types_works_with_wildcards(self):
definition = {"not_types": ["m.room.message", "org.matrix.*"]}
event = MockEvent(
sender="@foo:bar", type="org.matrix.custom.event", room_id="!foo:bar"
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_not_types_works_with_unknowns(self):
definition = {"not_types": ["m.*", "org.*"]}
event = MockEvent(sender="@foo:bar", type="com.nom.nom.nom", room_id="!foo:bar")
self.assertTrue(Filter(definition).check(event))
self.assertTrue(Filter(self.hs, definition)._check(event))
def test_definition_not_types_takes_priority_over_types(self):
definition = {
@ -158,35 +159,35 @@ class FilteringTestCase(unittest.HomeserverTestCase):
"types": ["m.room.message", "m.room.topic"],
}
event = MockEvent(sender="@foo:bar", type="m.room.topic", room_id="!foo:bar")
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_senders_works_with_literals(self):
definition = {"senders": ["@flibble:wibble"]}
event = MockEvent(
sender="@flibble:wibble", type="com.nom.nom.nom", room_id="!foo:bar"
)
self.assertTrue(Filter(definition).check(event))
self.assertTrue(Filter(self.hs, definition)._check(event))
def test_definition_senders_works_with_unknowns(self):
definition = {"senders": ["@flibble:wibble"]}
event = MockEvent(
sender="@challenger:appears", type="com.nom.nom.nom", room_id="!foo:bar"
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_not_senders_works_with_literals(self):
definition = {"not_senders": ["@flibble:wibble"]}
event = MockEvent(
sender="@flibble:wibble", type="com.nom.nom.nom", room_id="!foo:bar"
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_not_senders_works_with_unknowns(self):
definition = {"not_senders": ["@flibble:wibble"]}
event = MockEvent(
sender="@challenger:appears", type="com.nom.nom.nom", room_id="!foo:bar"
)
self.assertTrue(Filter(definition).check(event))
self.assertTrue(Filter(self.hs, definition)._check(event))
def test_definition_not_senders_takes_priority_over_senders(self):
definition = {
@ -196,14 +197,14 @@ class FilteringTestCase(unittest.HomeserverTestCase):
event = MockEvent(
sender="@misspiggy:muppets", type="m.room.topic", room_id="!foo:bar"
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_rooms_works_with_literals(self):
definition = {"rooms": ["!secretbase:unknown"]}
event = MockEvent(
sender="@foo:bar", type="m.room.message", room_id="!secretbase:unknown"
)
self.assertTrue(Filter(definition).check(event))
self.assertTrue(Filter(self.hs, definition)._check(event))
def test_definition_rooms_works_with_unknowns(self):
definition = {"rooms": ["!secretbase:unknown"]}
@ -212,7 +213,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
type="m.room.message",
room_id="!anothersecretbase:unknown",
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_not_rooms_works_with_literals(self):
definition = {"not_rooms": ["!anothersecretbase:unknown"]}
@ -221,7 +222,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
type="m.room.message",
room_id="!anothersecretbase:unknown",
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_not_rooms_works_with_unknowns(self):
definition = {"not_rooms": ["!secretbase:unknown"]}
@ -230,7 +231,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
type="m.room.message",
room_id="!anothersecretbase:unknown",
)
self.assertTrue(Filter(definition).check(event))
self.assertTrue(Filter(self.hs, definition)._check(event))
def test_definition_not_rooms_takes_priority_over_rooms(self):
definition = {
@ -240,7 +241,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
event = MockEvent(
sender="@foo:bar", type="m.room.message", room_id="!secretbase:unknown"
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_combined_event(self):
definition = {
@ -256,7 +257,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
type="m.room.message", # yup
room_id="!stage:unknown", # yup
)
self.assertTrue(Filter(definition).check(event))
self.assertTrue(Filter(self.hs, definition)._check(event))
def test_definition_combined_event_bad_sender(self):
definition = {
@ -272,7 +273,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
type="m.room.message", # yup
room_id="!stage:unknown", # yup
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_combined_event_bad_room(self):
definition = {
@ -288,7 +289,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
type="m.room.message", # yup
room_id="!piggyshouse:muppets", # nope
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_definition_combined_event_bad_type(self):
definition = {
@ -304,7 +305,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
type="muppets.misspiggy.kisses", # nope
room_id="!stage:unknown", # yup
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_filter_labels(self):
definition = {"org.matrix.labels": ["#fun"]}
@ -315,7 +316,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
content={EventContentFields.LABELS: ["#fun"]},
)
self.assertTrue(Filter(definition).check(event))
self.assertTrue(Filter(self.hs, definition)._check(event))
event = MockEvent(
sender="@foo:bar",
@ -324,7 +325,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
content={EventContentFields.LABELS: ["#notfun"]},
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
def test_filter_not_labels(self):
definition = {"org.matrix.not_labels": ["#fun"]}
@ -335,7 +336,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
content={EventContentFields.LABELS: ["#fun"]},
)
self.assertFalse(Filter(definition).check(event))
self.assertFalse(Filter(self.hs, definition)._check(event))
event = MockEvent(
sender="@foo:bar",
@ -344,7 +345,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
content={EventContentFields.LABELS: ["#notfun"]},
)
self.assertTrue(Filter(definition).check(event))
self.assertTrue(Filter(self.hs, definition)._check(event))
def test_filter_presence_match(self):
user_filter_json = {"presence": {"types": ["m.*"]}}
@ -362,7 +363,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
)
)
results = user_filter.filter_presence(events=events)
results = self.get_success(user_filter.filter_presence(events=events))
self.assertEquals(events, results)
def test_filter_presence_no_match(self):
@ -386,7 +387,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
)
)
results = user_filter.filter_presence(events=events)
results = self.get_success(user_filter.filter_presence(events=events))
self.assertEquals([], results)
def test_filter_room_state_match(self):
@ -405,7 +406,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
)
)
results = user_filter.filter_room_state(events=events)
results = self.get_success(user_filter.filter_room_state(events=events))
self.assertEquals(events, results)
def test_filter_room_state_no_match(self):
@ -426,7 +427,7 @@ class FilteringTestCase(unittest.HomeserverTestCase):
)
)
results = user_filter.filter_room_state(events)
results = self.get_success(user_filter.filter_room_state(events))
self.assertEquals([], results)
def test_filter_rooms(self):
@ -441,10 +442,52 @@ class FilteringTestCase(unittest.HomeserverTestCase):
"!not_included:example.com", # Disallowed because not in rooms.
]
filtered_room_ids = list(Filter(definition).filter_rooms(room_ids))
filtered_room_ids = list(Filter(self.hs, definition).filter_rooms(room_ids))
self.assertEquals(filtered_room_ids, ["!allowed:example.com"])
@unittest.override_config({"experimental_features": {"msc3440_enabled": True}})
def test_filter_relations(self):
events = [
# An event without a relation.
MockEvent(
event_id="$no_relation",
sender="@foo:bar",
type="org.matrix.custom.event",
room_id="!foo:bar",
),
# An event with a relation.
MockEvent(
event_id="$with_relation",
sender="@foo:bar",
type="org.matrix.custom.event",
room_id="!foo:bar",
),
# Non-EventBase objects get passed through.
{},
]
# For the following tests we patch the datastore method (instead of injecting
# events). This is a bit cheeky, but tests the logic of _check_event_relations.
# Filter for a particular sender.
definition = {
"io.element.relation_senders": ["@foo:bar"],
}
async def events_have_relations(*args, **kwargs):
return ["$with_relation"]
with patch.object(
self.datastore, "events_have_relations", new=events_have_relations
):
filtered_events = list(
self.get_success(
Filter(self.hs, definition)._check_event_relations(events)
)
)
self.assertEquals(filtered_events, events[1:])
def test_add_filter(self):
user_filter_json = {"room": {"state": {"types": ["m.*"]}}}

View file

@ -272,7 +272,9 @@ class AppServiceHandlerTestCase(unittest.TestCase):
make_awaitable(([event], None))
)
self.handler.notify_interested_services_ephemeral("receipt_key", 580)
self.handler.notify_interested_services_ephemeral(
"receipt_key", 580, ["@fakerecipient:example.com"]
)
self.mock_scheduler.submit_ephemeral_events_for_as.assert_called_once_with(
interested_service, [event]
)
@ -300,7 +302,9 @@ class AppServiceHandlerTestCase(unittest.TestCase):
make_awaitable(([event], None))
)
self.handler.notify_interested_services_ephemeral("receipt_key", 579)
self.handler.notify_interested_services_ephemeral(
"receipt_key", 580, ["@fakerecipient:example.com"]
)
self.mock_scheduler.submit_ephemeral_events_for_as.assert_not_called()
def _mkservice(self, is_interested, protocols=None):

View file

@ -13,10 +13,11 @@
# limitations under the License.
from typing import Optional
from unittest.mock import Mock
from synapse.api.constants import EventTypes, JoinRules
from synapse.api.errors import Codes, ResourceLimitError
from synapse.api.filtering import DEFAULT_FILTER_COLLECTION
from synapse.api.filtering import Filtering
from synapse.api.room_versions import RoomVersions
from synapse.handlers.sync import SyncConfig
from synapse.rest import admin
@ -197,7 +198,7 @@ def generate_sync_config(
_request_key += 1
return SyncConfig(
user=UserID.from_string(user_id),
filter_collection=DEFAULT_FILTER_COLLECTION,
filter_collection=Filtering(Mock()).DEFAULT_FILTER_COLLECTION,
is_guest=False,
request_key=("request_key", _request_key),
device_id=device_id,

View file

@ -14,9 +14,12 @@
import json
import urllib.parse
from http import HTTPStatus
from typing import List, Optional
from unittest.mock import Mock
from parameterized import parameterized
import synapse.rest.admin
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import Codes
@ -281,6 +284,31 @@ class DeleteRoomTestCase(unittest.HomeserverTestCase):
self._is_blocked(self.room_id, expect=True)
self._has_no_members(self.room_id)
@parameterized.expand([(True,), (False,)])
def test_block_unknown_room(self, purge: bool) -> None:
"""
We can block an unknown room. In this case, the `purge` argument
should be ignored.
"""
room_id = "!unknown:test"
# The room isn't already in the blocked rooms table
self._is_blocked(room_id, expect=False)
# Request the room be blocked.
channel = self.make_request(
"DELETE",
f"/_synapse/admin/v1/rooms/{room_id}",
{"block": True, "purge": purge},
access_token=self.admin_user_tok,
)
# The room is now blocked.
self.assertEqual(
HTTPStatus.OK, int(channel.result["code"]), msg=channel.result["body"]
)
self._is_blocked(room_id)
def test_shutdown_room_consent(self):
"""Test that we can shutdown rooms with local users who have not
yet accepted the privacy policy. This used to fail when we tried to

View file

@ -25,7 +25,12 @@ from urllib import parse as urlparse
from twisted.internet import defer
import synapse.rest.admin
from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.api.constants import (
EventContentFields,
EventTypes,
Membership,
RelationTypes,
)
from synapse.api.errors import Codes, HttpResponseException
from synapse.handlers.pagination import PurgeStatus
from synapse.rest import admin
@ -2157,6 +2162,153 @@ class LabelsTestCase(unittest.HomeserverTestCase):
return event_id
class RelationsTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
]
def default_config(self):
config = super().default_config()
config["experimental_features"] = {"msc3440_enabled": True}
return config
def prepare(self, reactor, clock, homeserver):
self.user_id = self.register_user("test", "test")
self.tok = self.login("test", "test")
self.room_id = self.helper.create_room_as(self.user_id, tok=self.tok)
self.second_user_id = self.register_user("second", "test")
self.second_tok = self.login("second", "test")
self.helper.join(
room=self.room_id, user=self.second_user_id, tok=self.second_tok
)
self.third_user_id = self.register_user("third", "test")
self.third_tok = self.login("third", "test")
self.helper.join(room=self.room_id, user=self.third_user_id, tok=self.third_tok)
# An initial event with a relation from second user.
res = self.helper.send_event(
room_id=self.room_id,
type=EventTypes.Message,
content={"msgtype": "m.text", "body": "Message 1"},
tok=self.tok,
)
self.event_id_1 = res["event_id"]
self.helper.send_event(
room_id=self.room_id,
type="m.reaction",
content={
"m.relates_to": {
"rel_type": RelationTypes.ANNOTATION,
"event_id": self.event_id_1,
"key": "👍",
}
},
tok=self.second_tok,
)
# Another event with a relation from third user.
res = self.helper.send_event(
room_id=self.room_id,
type=EventTypes.Message,
content={"msgtype": "m.text", "body": "Message 2"},
tok=self.tok,
)
self.event_id_2 = res["event_id"]
self.helper.send_event(
room_id=self.room_id,
type="m.reaction",
content={
"m.relates_to": {
"rel_type": RelationTypes.REFERENCE,
"event_id": self.event_id_2,
}
},
tok=self.third_tok,
)
# An event with no relations.
self.helper.send_event(
room_id=self.room_id,
type=EventTypes.Message,
content={"msgtype": "m.text", "body": "No relations"},
tok=self.tok,
)
def _filter_messages(self, filter: JsonDict) -> List[JsonDict]:
"""Make a request to /messages with a filter, returns the chunk of events."""
channel = self.make_request(
"GET",
"/rooms/%s/messages?filter=%s&dir=b" % (self.room_id, json.dumps(filter)),
access_token=self.tok,
)
self.assertEqual(channel.code, 200, channel.result)
return channel.json_body["chunk"]
def test_filter_relation_senders(self):
# Messages which second user reacted to.
filter = {"io.element.relation_senders": [self.second_user_id]}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 1, chunk)
self.assertEqual(chunk[0]["event_id"], self.event_id_1)
# Messages which third user reacted to.
filter = {"io.element.relation_senders": [self.third_user_id]}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 1, chunk)
self.assertEqual(chunk[0]["event_id"], self.event_id_2)
# Messages which either user reacted to.
filter = {
"io.element.relation_senders": [self.second_user_id, self.third_user_id]
}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 2, chunk)
self.assertCountEqual(
[c["event_id"] for c in chunk], [self.event_id_1, self.event_id_2]
)
def test_filter_relation_type(self):
# Messages which have annotations.
filter = {"io.element.relation_types": [RelationTypes.ANNOTATION]}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 1, chunk)
self.assertEqual(chunk[0]["event_id"], self.event_id_1)
# Messages which have references.
filter = {"io.element.relation_types": [RelationTypes.REFERENCE]}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 1, chunk)
self.assertEqual(chunk[0]["event_id"], self.event_id_2)
# Messages which have either annotations or references.
filter = {
"io.element.relation_types": [
RelationTypes.ANNOTATION,
RelationTypes.REFERENCE,
]
}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 2, chunk)
self.assertCountEqual(
[c["event_id"] for c in chunk], [self.event_id_1, self.event_id_2]
)
def test_filter_relation_senders_and_type(self):
# Messages which second user reacted to.
filter = {
"io.element.relation_senders": [self.second_user_id],
"io.element.relation_types": [RelationTypes.ANNOTATION],
}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 1, chunk)
self.assertEqual(chunk[0]["event_id"], self.event_id_1)
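When both fields are given they apply together, as this last test exercises: an event matches only via a relation whose sender is in relation_senders and whose type is in relation_types. A tiny illustrative filter (the values are assumptions):

    event_filter = {
        "io.element.relation_senders": ["@second:test"],
        "io.element.relation_types": ["m.annotation"],
    }
    # Matches messages @second:test has annotated (reacted to); a reference
    # from the same user, or an annotation from someone else, does not count.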
class ContextTestCase(unittest.HomeserverTestCase):
servlets = [


@@ -0,0 +1,207 @@
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List
from synapse.api.constants import EventTypes, RelationTypes
from synapse.api.filtering import Filter
from synapse.events import EventBase
from synapse.rest import admin
from synapse.rest.client import login, room
from synapse.types import JsonDict
from tests.unittest import HomeserverTestCase
class PaginationTestCase(HomeserverTestCase):
"""
Test the pre-filtering done in the pagination code.
This is similar to some of the tests in tests.rest.client.test_rooms but here
we ensure that the filtering done in the database is applied successfully.
"""
servlets = [
admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
]
def default_config(self):
config = super().default_config()
config["experimental_features"] = {"msc3440_enabled": True}
return config
def prepare(self, reactor, clock, homeserver):
self.user_id = self.register_user("test", "test")
self.tok = self.login("test", "test")
self.room_id = self.helper.create_room_as(self.user_id, tok=self.tok)
self.second_user_id = self.register_user("second", "test")
self.second_tok = self.login("second", "test")
self.helper.join(
room=self.room_id, user=self.second_user_id, tok=self.second_tok
)
self.third_user_id = self.register_user("third", "test")
self.third_tok = self.login("third", "test")
self.helper.join(room=self.room_id, user=self.third_user_id, tok=self.third_tok)
# An initial event with a relation from second user.
res = self.helper.send_event(
room_id=self.room_id,
type=EventTypes.Message,
content={"msgtype": "m.text", "body": "Message 1"},
tok=self.tok,
)
self.event_id_1 = res["event_id"]
self.helper.send_event(
room_id=self.room_id,
type="m.reaction",
content={
"m.relates_to": {
"rel_type": RelationTypes.ANNOTATION,
"event_id": self.event_id_1,
"key": "👍",
}
},
tok=self.second_tok,
)
# Another event with a relation from third user.
res = self.helper.send_event(
room_id=self.room_id,
type=EventTypes.Message,
content={"msgtype": "m.text", "body": "Message 2"},
tok=self.tok,
)
self.event_id_2 = res["event_id"]
self.helper.send_event(
room_id=self.room_id,
type="m.reaction",
content={
"m.relates_to": {
"rel_type": RelationTypes.REFERENCE,
"event_id": self.event_id_2,
}
},
tok=self.third_tok,
)
# An event with no relations.
self.helper.send_event(
room_id=self.room_id,
type=EventTypes.Message,
content={"msgtype": "m.text", "body": "No relations"},
tok=self.tok,
)
def _filter_messages(self, filter: JsonDict) -> List[EventBase]:
"""Make a request to /messages with a filter, returns the chunk of events."""
from_token = self.get_success(
self.hs.get_event_sources().get_current_token_for_pagination()
)
events, next_key = self.get_success(
self.hs.get_datastore().paginate_room_events(
room_id=self.room_id,
from_key=from_token.room_key,
to_key=None,
direction="b",
limit=10,
event_filter=Filter(self.hs, filter),
)
)
return events
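This helper fetches a single backwards page and discards next_key. A hedged sketch of resuming from the returned token, assuming the same test-case context (the limit and filter values are illustrative):

    from_token = self.get_success(
        self.hs.get_event_sources().get_current_token_for_pagination()
    )
    events, next_key = self.get_success(
        self.hs.get_datastore().paginate_room_events(
            room_id=self.room_id,
            from_key=from_token.room_key,
            to_key=None,
            direction="b",
            limit=2,  # deliberately small page
            event_filter=Filter(self.hs, {"io.element.relation_types": ["m.annotation"]}),
        )
    )
    # Passing from_key=next_key in a follow-up call would return the next
    # (older) page of matching events.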
def test_filter_relation_senders(self):
# Messages which second user reacted to.
filter = {"io.element.relation_senders": [self.second_user_id]}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 1, chunk)
self.assertEqual(chunk[0].event_id, self.event_id_1)
# Messages which third user reacted to.
filter = {"io.element.relation_senders": [self.third_user_id]}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 1, chunk)
self.assertEqual(chunk[0].event_id, self.event_id_2)
# Messages which either user reacted to.
filter = {
"io.element.relation_senders": [self.second_user_id, self.third_user_id]
}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 2, chunk)
self.assertCountEqual(
[c.event_id for c in chunk], [self.event_id_1, self.event_id_2]
)
def test_filter_relation_type(self):
# Messages which have annotations.
filter = {"io.element.relation_types": [RelationTypes.ANNOTATION]}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 1, chunk)
self.assertEqual(chunk[0].event_id, self.event_id_1)
# Messages which have references.
filter = {"io.element.relation_types": [RelationTypes.REFERENCE]}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 1, chunk)
self.assertEqual(chunk[0].event_id, self.event_id_2)
# Messages which have either annotations or references.
filter = {
"io.element.relation_types": [
RelationTypes.ANNOTATION,
RelationTypes.REFERENCE,
]
}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 2, chunk)
self.assertCountEqual(
[c.event_id for c in chunk], [self.event_id_1, self.event_id_2]
)
def test_filter_relation_senders_and_type(self):
# Messages which second user reacted to.
filter = {
"io.element.relation_senders": [self.second_user_id],
"io.element.relation_types": [RelationTypes.ANNOTATION],
}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 1, chunk)
self.assertEqual(chunk[0].event_id, self.event_id_1)
def test_duplicate_relation(self):
"""An event should only be returned once if there are multiple relations to it."""
self.helper.send_event(
room_id=self.room_id,
type="m.reaction",
content={
"m.relates_to": {
"rel_type": RelationTypes.ANNOTATION,
"event_id": self.event_id_1,
"key": "A",
}
},
tok=self.second_tok,
)
filter = {"io.element.relation_senders": [self.second_user_id]}
chunk = self._filter_messages(filter)
self.assertEqual(len(chunk), 1, chunk)
self.assertEqual(chunk[0].event_id, self.event_id_1)
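Why the deduplication matters, in miniature: matching parents against their relations is effectively a join that yields one row per relation, so a parent with two matching reactions would naively appear twice. A self-contained illustration (not Synapse code):

    # Stand-ins for (parent event, relation event) join rows:
    rows = [("$message_1", "$reaction_a"), ("$message_1", "$reaction_b")]
    unique_parents = {parent for parent, _ in rows}
    assert unique_parents == {"$message_1"}  # one result, not two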


@@ -81,8 +81,6 @@ class MessageAcceptTests(unittest.HomeserverTestCase):
origin,
event,
context,
-state=None,
-backfilled=False,
):
return context