
Merge remote-tracking branch 'upstream/release-v1.66'

Tulir Asokan 2022-08-23 12:34:07 +03:00
commit 1167ac5836
145 changed files with 6624 additions and 3866 deletions


@@ -53,10 +53,22 @@ jobs:
env:
PULL_REQUEST_NUMBER: ${{ github.event.number }}
lint-pydantic:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- uses: matrix-org/setup-python-poetry@v1
with:
extras: "all"
- run: poetry run scripts-dev/check_pydantic_models.py
# Dummy step to gate other tests on without repeating the whole list
linting-done:
if: ${{ !cancelled() }} # Run this even if prior jobs were skipped
needs: [lint, lint-crlf, lint-newsfile, check-sampleconfig, check-schema-delta]
needs: [lint, lint-crlf, lint-newsfile, lint-pydantic, check-sampleconfig, check-schema-delta]
runs-on: ubuntu-latest
steps:
- run: "true"


@@ -1,3 +1,72 @@
Synapse 1.66.0rc1 (2022-08-23)
==============================
Features
--------
- Improve validation of request bodies for the following client-server API endpoints: [`/account/password`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3accountpassword), [`/account/password/email/requestToken`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3accountpasswordemailrequesttoken), [`/account/deactivate`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3accountdeactivate) and [`/account/3pid/email/requestToken`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3account3pidemailrequesttoken). ([\#13188](https://github.com/matrix-org/synapse/issues/13188), [\#13563](https://github.com/matrix-org/synapse/issues/13563))
- Add forgotten status to [Room Details Admin API](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#room-details-api). ([\#13503](https://github.com/matrix-org/synapse/issues/13503))
- Add an experimental implementation for [MSC3852 (Expose user agents on `Device`)](https://github.com/matrix-org/matrix-spec-proposals/pull/3852). ([\#13549](https://github.com/matrix-org/synapse/issues/13549))
- Add `org.matrix.msc2716v4` experimental room version with updated content fields. Part of [MSC2716 (Importing history)](https://github.com/matrix-org/matrix-spec-proposals/pull/2716). ([\#13551](https://github.com/matrix-org/synapse/issues/13551))
- Add support for compression to federation responses. ([\#13537](https://github.com/matrix-org/synapse/issues/13537))
- Improve performance of sending messages in rooms with thousands of local users. ([\#13522](https://github.com/matrix-org/synapse/issues/13522), [\#13547](https://github.com/matrix-org/synapse/issues/13547))
Bugfixes
--------
- Faster room joins: make `/joined_members` block whilst the room is partially stated. ([\#13514](https://github.com/matrix-org/synapse/issues/13514))
- Fix a bug introduced in Synapse 1.21.0 where the [`/event_reports` Admin API](https://matrix-org.github.io/synapse/develop/admin_api/event_reports.html) could return a total count which was larger than the number of results you can actually query for. ([\#13525](https://github.com/matrix-org/synapse/issues/13525))
- Fix a bug introduced in Synapse 1.52.0 where sending server notices fails if `max_avatar_size` or `allowed_avatar_mimetypes` is set and not `system_mxid_avatar_url`. ([\#13566](https://github.com/matrix-org/synapse/issues/13566))
- Fix a bug where the `opentracing.force_tracing_for_users` config option would not apply to [`/sendToDevice`](https://spec.matrix.org/v1.3/client-server-api/#put_matrixclientv3sendtodeviceeventtypetxnid) and [`/keys/upload`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3keysupload) requests. ([\#13574](https://github.com/matrix-org/synapse/issues/13574))
Improved Documentation
----------------------
- Add `openssl` example for generating registration HMAC digest. ([\#13472](https://github.com/matrix-org/synapse/issues/13472))
- Tidy up Synapse's README. ([\#13491](https://github.com/matrix-org/synapse/issues/13491))
- Document that event purging related to the `redaction_retention_period` config option is executed only every 5 minutes. ([\#13492](https://github.com/matrix-org/synapse/issues/13492))
- Add a warning to retention documentation regarding the possibility of database corruption. ([\#13497](https://github.com/matrix-org/synapse/issues/13497))
- Document that the `DOCKER_BUILDKIT=1` flag is needed to build the docker image. ([\#13515](https://github.com/matrix-org/synapse/issues/13515))
- Add missing links in `user_consent` section of configuration manual. ([\#13536](https://github.com/matrix-org/synapse/issues/13536))
- Fix the doc and some warnings that were referring to the nonexistent `custom_templates_directory` setting (instead of `custom_template_directory`). ([\#13538](https://github.com/matrix-org/synapse/issues/13538))
Internal Changes
----------------
### Faster room joins
- Update the rejected state of events during de-partial-stating. ([\#13459](https://github.com/matrix-org/synapse/issues/13459))
- Avoid blocking lazy-loading `/sync`s during partial joins due to remote memberships. Pull remote memberships from auth events instead of the room state. ([\#13477](https://github.com/matrix-org/synapse/issues/13477))
- Refuse to start when faster joins is enabled on a deployment with workers, since worker configurations are not currently supported. ([\#13531](https://github.com/matrix-org/synapse/issues/13531))
### Metrics and tracing
- Allow use of both `@trace` and `@tag_args` stacked on the same function. ([\#13453](https://github.com/matrix-org/synapse/issues/13453))
- Instrument the federation/backfill part of `/messages` for understandable traces in Jaeger. ([\#13489](https://github.com/matrix-org/synapse/issues/13489))
- Instrument `FederationStateIdsServlet` (`/state_ids`) for understandable traces in Jaeger. ([\#13499](https://github.com/matrix-org/synapse/issues/13499), [\#13554](https://github.com/matrix-org/synapse/issues/13554))
- Track HTTP response times over 10 seconds from `/messages` (`synapse_room_message_list_rest_servlet_response_time_seconds`). ([\#13533](https://github.com/matrix-org/synapse/issues/13533))
- Add metrics to track how the rate limiter is affecting requests (sleep/reject). ([\#13534](https://github.com/matrix-org/synapse/issues/13534), [\#13541](https://github.com/matrix-org/synapse/issues/13541))
- Add metrics to time how long it takes us to do backfill processing (`synapse_federation_backfill_processing_before_time_seconds`, `synapse_federation_backfill_processing_after_time_seconds`). ([\#13535](https://github.com/matrix-org/synapse/issues/13535), [\#13584](https://github.com/matrix-org/synapse/issues/13584))
- Add metrics to track rate limiter queue timing (`synapse_rate_limit_queue_wait_time_seconds`). ([\#13544](https://github.com/matrix-org/synapse/issues/13544))
- Update metrics to track `/messages` response time by room size. ([\#13545](https://github.com/matrix-org/synapse/issues/13545))
### Everything else
- Refactor methods in `synapse.api.auth.Auth` to use `Requester` objects everywhere instead of user IDs. ([\#13024](https://github.com/matrix-org/synapse/issues/13024))
- Clean-up tests for notifications. ([\#13471](https://github.com/matrix-org/synapse/issues/13471))
- Add some miscellaneous comments to document sync, especially around `compute_state_delta`. ([\#13474](https://github.com/matrix-org/synapse/issues/13474))
- Use literals in place of `HTTPStatus` constants in tests. ([\#13479](https://github.com/matrix-org/synapse/issues/13479), [\#13488](https://github.com/matrix-org/synapse/issues/13488))
- Add comments about how event push actions are rotated. ([\#13485](https://github.com/matrix-org/synapse/issues/13485))
- Modify HTML template content to better support mobile devices' screen sizes. ([\#13493](https://github.com/matrix-org/synapse/issues/13493))
- Add a linter script which will reject non-strict types in Pydantic models. ([\#13502](https://github.com/matrix-org/synapse/issues/13502))
- Reduce the number of tests using legacy TCP replication. ([\#13543](https://github.com/matrix-org/synapse/issues/13543))
- Allow specifying additional request fields when using the `HomeServerTestCase.login` helper method. ([\#13549](https://github.com/matrix-org/synapse/issues/13549))
- Make `HomeServerTestCase` load any configured homeserver modules automatically. ([\#13558](https://github.com/matrix-org/synapse/issues/13558))
Synapse 1.65.0 (2022-08-16)
===========================
@@ -31,7 +100,7 @@ Bugfixes
--------
- Update the version of the LDAP3 auth provider module included in the `matrixdotorg/synapse` DockerHub images and the Debian packages hosted on packages.matrix.org to 0.2.2. This version fixes a regression in the module. ([\#13470](https://github.com/matrix-org/synapse/issues/13470))
- Fix a bug introduced in Synapse v1.41.0 where the `/hierarchy` API returned non-standard information (a `room_id` field under each entry in `children_state`). ([\#13365](https://github.com/matrix-org/synapse/issues/13365))
- Fix a bug introduced in Synapse v1.41.0 where the `/hierarchy` API returned non-standard information (a `room_id` field under each entry in `children_state`) (this was reverted in v1.65.0rc2, see changelog notes above). ([\#13365](https://github.com/matrix-org/synapse/issues/13365))
- Fix a bug introduced in Synapse 0.24.0 that would respond with the wrong error status code to `/joined_members` requests when the requester is not a current member of the room. Contributed by @andrewdoh. ([\#13374](https://github.com/matrix-org/synapse/issues/13374))
- Fix bug in handling of typing events for appservices. Contributed by Nick @ Beeper (@fizzadar). ([\#13392](https://github.com/matrix-org/synapse/issues/13392))
- Fix a bug introduced in Synapse 1.57.0 where rooms listed in `exclude_rooms_from_sync` in the configuration file would not be properly excluded from incremental syncs. ([\#13408](https://github.com/matrix-org/synapse/issues/13408))


@@ -2,152 +2,70 @@
Synapse |support| |development| |documentation| |license| |pypi| |python|
=========================================================================
Synapse is an open-source `Matrix <https://matrix.org/>`_ homeserver written and
maintained by the Matrix.org Foundation. We began rapid development in 2014,
reaching v1.0.0 in 2019. Development on Synapse and the Matrix protocol itself continues
in earnest today.
Briefly, Matrix is an open standard for communications on the internet, supporting
federation, encryption and VoIP. Matrix.org has more to say about the `goals of the
Matrix project <https://matrix.org/docs/guides/introduction>`_, and the `formal specification
<https://spec.matrix.org/>`_ describes the technical details.
.. contents::
Introduction
============
Installing and configuration
============================
Matrix is an ambitious new ecosystem for open federated Instant Messaging and
VoIP. The basics you need to know to get up and running are:
- Everything in Matrix happens in a room. Rooms are distributed and do not
exist on any single server. Rooms can be located using convenience aliases
like ``#matrix:matrix.org`` or ``#test:localhost:8448``.
- Matrix user IDs look like ``@matthew:matrix.org`` (although in the future
you will normally refer to yourself and others using a third party identifier
(3PID): email address, phone number, etc rather than manipulating Matrix user IDs)
The overall architecture is::
client <----> homeserver <=====================> homeserver <----> client
https://somewhere.org/_matrix https://elsewhere.net/_matrix
``#matrix:matrix.org`` is the official support room for Matrix, and can be
accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html or
via IRC bridge at irc://irc.libera.chat/matrix.
Synapse is currently in rapid development, but as of version 0.5 we believe it
is sufficiently stable to be run as an internet-facing service for real usage!
About Matrix
============
Matrix specifies a set of pragmatic RESTful HTTP JSON APIs as an open standard,
which handle:
- Creating and managing fully distributed chat rooms with no
single points of control or failure
- Eventually-consistent cryptographically secure synchronisation of room
state across a global open network of federated servers and services
- Sending and receiving extensible messages in a room with (optional)
end-to-end encryption
- Inviting, joining, leaving, kicking, banning room members
- Managing user accounts (registration, login, logout)
- Using 3rd Party IDs (3PIDs) such as email addresses, phone numbers,
Facebook accounts to authenticate, identify and discover users on Matrix.
- Placing 1:1 VoIP and Video calls
These APIs are intended to be implemented on a wide range of servers, services
and clients, letting developers build messaging and VoIP functionality on top
of the entirely open Matrix ecosystem rather than using closed or proprietary
solutions. The hope is for Matrix to act as the building blocks for a new
generation of fully open and interoperable messaging and VoIP apps for the
internet.
Synapse is a Matrix "homeserver" implementation developed by the matrix.org core
team, written in Python 3/Twisted.
In Matrix, every user runs one or more Matrix clients, which connect through to
a Matrix homeserver. The homeserver stores all their personal chat history and
user account information - much as a mail client connects through to an
IMAP/SMTP server. Just like email, you can either run your own Matrix
homeserver and control and own your own communications and history or use one
hosted by someone else (e.g. matrix.org) - there is no single point of control
or mandatory service provider in Matrix, unlike WhatsApp, Facebook, Hangouts,
etc.
We'd like to invite you to join #matrix:matrix.org (via
https://matrix.org/docs/projects/try-matrix-now.html), run a homeserver, take a look
at the `Matrix spec <https://matrix.org/docs/spec>`_, and experiment with the
`APIs <https://matrix.org/docs/api>`_ and `Client SDKs
<https://matrix.org/docs/projects/try-matrix-now.html#client-sdks>`_.
Thanks for using Matrix!
Support
=======
For support installing or managing Synapse, please join |room|_ (from a matrix.org
account if necessary) and ask questions there. We do not use GitHub issues for
support requests, only for bug reports and feature requests.
Synapse's documentation is `nicely rendered on GitHub Pages <https://matrix-org.github.io/synapse>`_,
with its source available in |docs|_.
.. |room| replace:: ``#synapse:matrix.org``
.. _room: https://matrix.to/#/#synapse:matrix.org
.. |docs| replace:: ``docs``
.. _docs: docs
Synapse Installation
====================
The Synapse documentation describes `how to install Synapse <https://matrix-org.github.io/synapse/latest/setup/installation.html>`_. We recommend using
`Docker images <https://matrix-org.github.io/synapse/latest/setup/installation.html#docker-images-and-ansible-playbooks>`_ or `Debian packages from Matrix.org
<https://matrix-org.github.io/synapse/latest/setup/installation.html#matrixorg-packages>`_.
.. _federation:
* For details on how to install synapse, see
`Installation Instructions <https://matrix-org.github.io/synapse/latest/setup/installation.html>`_.
* For specific details on how to configure Synapse for federation see `docs/federate.md <docs/federate.md>`_
Synapse has a variety of `config options
<https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html>`_
which can be used to customise its behaviour after installation.
There are additional details on how to `configure Synapse for federation here
<https://matrix-org.github.io/synapse/latest/federate.html>`_.
.. _reverse-proxy:
Using a reverse proxy with Synapse
----------------------------------
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_,
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_,
`HAProxy <https://www.haproxy.org/>`_ or
`relayd <https://man.openbsd.org/relayd.8>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.
For information on configuring one, see `the reverse proxy docs
<https://matrix-org.github.io/synapse/latest/reverse_proxy.html>`_.
Upgrading an existing Synapse
-----------------------------
The instructions for upgrading Synapse are in `the upgrade notes`_.
Please check these instructions as upgrading may require extra steps for some
versions of Synapse.
.. _the upgrade notes: https://matrix-org.github.io/synapse/develop/upgrade.html
Connecting to Synapse from a client
===================================
Platform dependencies
---------------------
The easiest way to try out your new Synapse installation is by connecting to it
from a web client.
Synapse uses a number of platform dependencies such as Python and PostgreSQL,
and aims to follow supported upstream versions. See the
`deprecation policy <https://matrix-org.github.io/synapse/latest/deprecation_policy.html>`_
for more details.
Unless you are running a test instance of Synapse on your local machine, in
general, you will need to enable TLS support before you can successfully
connect from a client: see
`TLS certificates <https://matrix-org.github.io/synapse/latest/setup/installation.html#tls-certificates>`_.
An easy way to get started is to login or register via Element at
https://app.element.io/#/login or https://app.element.io/#/register respectively.
You will need to change the server you are logging into from ``matrix.org``
and instead specify a Homeserver URL of ``https://<server_name>:8448``
(or just ``https://<server_name>`` if you are using a reverse proxy).
If you prefer to use another client, refer to our
`client breakdown <https://matrix.org/docs/projects/clients-matrix>`_.
If all goes well you should at least be able to log in, create a room, and
start sending messages.
.. _`client-user-reg`:
Registering a new user from a client
------------------------------------
By default, registration of new users via Matrix clients is disabled. To enable
it, specify ``enable_registration: true`` in ``homeserver.yaml``. (It is then
recommended to also set up CAPTCHA - see `<docs/CAPTCHA_SETUP.md>`_.)
Once ``enable_registration`` is set to ``true``, it is possible to register a
user via a Matrix client.
Your new user name will be formed partly from the ``server_name``, and partly
from a localpart you specify when you create the account. Your name will take
the form of::
@localpart:my.domain.name
(pronounced "at localpart on my dot domain dot name").
As when logging in, you will need to specify a "Custom server". Specify your
desired ``localpart`` in the 'User name' box.
Security note
=============
-------------
Matrix serves raw, user-supplied data in some APIs -- specifically the `content
repository endpoints`_.
@@ -187,30 +105,76 @@ Following this advice ensures that even if an XSS is found in Synapse, the
impact to other applications will be minimal.
Upgrading an existing Synapse
=============================
Testing a new installation
==========================
The instructions for upgrading synapse are in `the upgrade notes`_.
Please check these instructions as upgrading may require extra steps for some
versions of synapse.
The easiest way to try out your new Synapse installation is by connecting to it
from a web client.
.. _the upgrade notes: https://matrix-org.github.io/synapse/develop/upgrade.html
Unless you are running a test instance of Synapse on your local machine, in
general, you will need to enable TLS support before you can successfully
connect from a client: see
`TLS certificates <https://matrix-org.github.io/synapse/latest/setup/installation.html#tls-certificates>`_.
.. _reverse-proxy:
An easy way to get started is to login or register via Element at
https://app.element.io/#/login or https://app.element.io/#/register respectively.
You will need to change the server you are logging into from ``matrix.org``
and instead specify a Homeserver URL of ``https://<server_name>:8448``
(or just ``https://<server_name>`` if you are using a reverse proxy).
If you prefer to use another client, refer to our
`client breakdown <https://matrix.org/docs/projects/clients-matrix>`_.
Using a reverse proxy with Synapse
==================================
If all goes well you should at least be able to log in, create a room, and
start sending messages.
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_,
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_,
`HAProxy <https://www.haproxy.org/>`_ or
`relayd <https://man.openbsd.org/relayd.8>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.
.. _`client-user-reg`:
For information on configuring one, see `<docs/reverse_proxy.md>`_.
Registering a new user from a client
------------------------------------
By default, registration of new users via Matrix clients is disabled. To enable
it:
1. In the
`registration config section <https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#registration>`_
set ``enable_registration: true`` in ``homeserver.yaml``.
2. Then **either**:
a. set up a `CAPTCHA <https://matrix-org.github.io/synapse/latest/CAPTCHA_SETUP.html>`_, or
b. set ``enable_registration_without_verification: true`` in ``homeserver.yaml``.
We **strongly** recommend using a CAPTCHA, particularly if your homeserver is exposed to
the public internet. Without it, anyone can freely register accounts on your homeserver.
This can be exploited by attackers to create spambots targeting the rest of the Matrix
federation.
Your new user name will be formed partly from the ``server_name``, and partly
from a localpart you specify when you create the account. Your name will take
the form of::
@localpart:my.domain.name
(pronounced "at localpart on my dot domain dot name").
As when logging in, you will need to specify a "Custom server". Specify your
desired ``localpart`` in the 'User name' box.
Troubleshooting and support
===========================
The `Admin FAQ <https://matrix-org.github.io/synapse/latest/usage/administration/admin_faq.html>`_
includes tips on dealing with some common problems. For more details, see
`Synapse's wider documentation <https://matrix-org.github.io/synapse/latest/>`_.
For additional support installing or managing Synapse, please ask in the community
support room |room|_ (from a matrix.org account if necessary). We do not use GitHub
issues for support requests, only for bug reports and feature requests.
.. |room| replace:: ``#synapse:matrix.org``
.. _room: https://matrix.to/#/#synapse:matrix.org
.. |docs| replace:: ``docs``
.. _docs: docs
Identity Servers
================
@@ -242,34 +206,15 @@ an email address with your account, or send an invite to another user via their
email address.
Password reset
==============
Users can reset their password through their client. Alternatively, a server admin
can reset a user's password using the `admin API <docs/admin_api/user_admin_api.md#reset-password>`_
or by directly editing the database as shown below.
First calculate the hash of the new password::
$ ~/synapse/env/bin/hash_password
Password:
Confirm password:
$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Then update the ``users`` table in the database::
UPDATE users SET password_hash='$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
WHERE name='@test:test.com';
Synapse Development
===================
Development
===========
We welcome contributions to Synapse from the community!
The best place to get started is our
`guide for contributors <https://matrix-org.github.io/synapse/latest/development/contributing_guide.html>`_.
This is part of our larger `documentation <https://matrix-org.github.io/synapse/latest>`_, which includes
information for synapse developers as well as synapse administrators.
information for Synapse developers as well as Synapse administrators.
Developers might be particularly interested in:
* `Synapse's database schema <https://matrix-org.github.io/synapse/latest/development/database_schema.html>`_,
@@ -280,187 +225,6 @@ Alongside all that, join our developer community on Matrix:
`#synapse-dev:matrix.org <https://matrix.to/#/#synapse-dev:matrix.org>`_, featuring real humans!
Quick start
-----------
Before setting up a development environment for synapse, make sure you have the
system dependencies (such as the python header files) installed - see
`Platform-specific prerequisites <https://matrix-org.github.io/synapse/latest/setup/installation.html#platform-specific-prerequisites>`_.
To check out a synapse for development, clone the git repo into a working
directory of your choice::
git clone https://github.com/matrix-org/synapse.git
cd synapse
Synapse has a number of external dependencies. We maintain a fixed development
environment using `Poetry <https://python-poetry.org/>`_. First, install poetry. We recommend::
pip install --user pipx
pipx install poetry
as described `here <https://python-poetry.org/docs/#installing-with-pipx>`_.
(See `poetry's installation docs <https://python-poetry.org/docs/#installation>`_
for other installation methods.) Then ask poetry to create a virtual environment
from the project and install Synapse's dependencies::
poetry install --extras "all test"
This will run a process of downloading and installing all the needed
dependencies into a virtual env.
We recommend using the demo which starts 3 federated instances running on ports `8080` - `8082`::
poetry run ./demo/start.sh
(to stop, you can use ``poetry run ./demo/stop.sh``)
See the `demo documentation <https://matrix-org.github.io/synapse/develop/development/demo.html>`_
for more information.
If you just want to start a single instance of the app and run it directly::
# Create the homeserver.yaml config once
poetry run synapse_homeserver \
--server-name my.domain.name \
--config-path homeserver.yaml \
--generate-config \
--report-stats=[yes|no]
# Start the app
poetry run synapse_homeserver --config-path homeserver.yaml
Running the unit tests
----------------------
After getting up and running, you may wish to run Synapse's unit tests to
check that everything is installed correctly::
poetry run trial tests
This should end with a 'PASSED' result (note that exact numbers will
differ)::
Ran 1337 tests in 716.064s
PASSED (skips=15, successes=1322)
For more tips on running the unit tests, like running a specific test or
to see the logging output, see the `CONTRIBUTING doc <CONTRIBUTING.md#run-the-unit-tests>`_.
Running the Integration Tests
-----------------------------
Synapse is accompanied by `SyTest <https://github.com/matrix-org/sytest>`_,
a Matrix homeserver integration testing suite, which uses HTTP requests to
access the API as a Matrix client would. It is able to run Synapse directly from
the source tree, so installation of the server is not required.
Testing with SyTest is recommended for verifying that changes related to the
Client-Server API are functioning correctly. See the `SyTest installation
instructions <https://github.com/matrix-org/sytest#installing>`_ for details.
Platform dependencies
=====================
Synapse uses a number of platform dependencies such as Python and PostgreSQL,
and aims to follow supported upstream versions. See the
`<docs/deprecation_policy.md>`_ document for more details.
Troubleshooting
===============
Need help? Join our community support room on Matrix:
`#synapse:matrix.org <https://matrix.to/#/#synapse:matrix.org>`_
Running out of File Handles
---------------------------
If synapse runs out of file handles, it typically fails badly - live-locking
at 100% CPU, and/or failing to accept new TCP connections (blocking the
connecting client). Matrix currently can legitimately use a lot of file handles,
thanks to busy rooms like #matrix:matrix.org containing hundreds of participating
servers. The first time a server talks in a room it will try to connect
simultaneously to all participating servers, which could exhaust the available
file descriptors between DNS queries & HTTPS sockets, especially if DNS is slow
to respond. (We need to improve the routing algorithm used to be better than
full mesh, but as of March 2019 this hasn't happened yet).
If you hit this failure mode, we recommend increasing the maximum number of
open file handles to be at least 4096 (assuming a default of 1024 or 256).
This is typically done by editing ``/etc/security/limits.conf``
Separately, Synapse may leak file handles if inbound HTTP requests get stuck
during processing - e.g. blocked behind a lock or talking to a remote server etc.
This is best diagnosed by matching up the 'Received request' and 'Processed request'
log lines and looking for any 'Processed request' lines which take more than
a few seconds to execute. Please let us know at #synapse:matrix.org if
you see this failure mode so we can help debug it, however.
Help!! Synapse is slow and eats all my RAM/CPU!
-----------------------------------------------
First, ensure you are running the latest version of Synapse, using Python 3
with a PostgreSQL database.
Synapse's architecture is quite RAM hungry currently - we deliberately
cache a lot of recent room data and metadata in RAM in order to speed up
common requests. We'll improve this in the future, but for now the easiest
way to reduce the RAM usage (at the risk of slowing things down)
is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory constrained environments, or increased if performance starts to
degrade.
However, degraded performance due to a low cache factor, common on
machines with slow disks, often leads to explosions in memory use due to
backlogged requests. In this case, reducing the cache factor will make
things worse. Instead, try increasing it drastically. 2.0 is a good
starting value.
Using `libjemalloc <http://jemalloc.net/>`_ can also yield a significant
improvement in overall memory use, and especially in terms of giving back
RAM to the OS. To use it, the library must simply be put in the
LD_PRELOAD environment variable when launching Synapse. On Debian, this
can be done by installing the ``libjemalloc1`` package and adding this
line to ``/etc/default/matrix-synapse``::
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
This can make a significant difference on Python 2.7 - it's unclear how
much of an improvement it provides on Python 3.x.
If you're encountering high CPU use by the Synapse process itself, you
may be affected by a bug with presence tracking that leads to a
massive excess of outgoing federation requests (see `discussion
<https://github.com/matrix-org/synapse/issues/3971>`_). If metrics
indicate that your server is also issuing far more outgoing federation
requests than can be accounted for by your users' activity, this is a
likely cause. The misbehavior can be worked around by setting
the following in the Synapse config file:
.. code-block:: yaml
presence:
enabled: false
People can't accept room invitations from me
--------------------------------------------
The typical failure mode here is that you send an invitation to someone
to join a room or direct chat, but when they go to accept it, they get an
error (typically along the lines of "Invalid signature"). They might see
something like the following in their logs::
2019-09-11 19:32:04,271 - synapse.federation.transport.server - 288 - WARNING - GET-11752 - authenticate_request failed: 401: Invalid signature for server <server> with key ed25519:a_EqML: Unable to verify signature for <server>
This is normally caused by a misconfiguration in your reverse-proxy. See
`<docs/reverse_proxy.md>`_ and double-check that your settings are correct.
.. |support| image:: https://img.shields.io/matrix/synapse:matrix.org?label=support&logo=matrix
:alt: (get support on #synapse:matrix.org)
:target: https://matrix.to/#/#synapse:matrix.org

File diff suppressed because it is too large.

debian/changelog (vendored): 6 lines changed

@@ -1,3 +1,9 @@
matrix-synapse-py3 (1.66.0~rc1) stable; urgency=medium
* New Synapse release 1.66.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 23 Aug 2022 09:48:55 +0100
matrix-synapse-py3 (1.65.0) stable; urgency=medium
* New Synapse release 1.65.0.


@@ -191,7 +191,7 @@ If you need to build the image from a Synapse checkout, use the following `docke
build` command from the repo's root:
```
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
DOCKER_BUILDKIT=1 docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```
You can choose to build a different docker image by changing the value of the `-f` flag to


@@ -46,7 +46,24 @@ As an example:
The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being
the shared secret and the content being the nonce, user, password, either the
string "admin" or "notadmin", and optionally the user_type
each separated by NULs. For an example of generation in Python:
each separated by NULs.
Here is an easy way to generate the HMAC digest if you have Bash and OpenSSL:
```bash
# Update these values and then paste this code block into a bash terminal
nonce='thisisanonce'
username='pepper_roni'
password='pizza'
admin='admin'
secret='shared_secret'
printf '%s\0%s\0%s\0%s' "$nonce" "$username" "$password" "$admin" |
openssl sha1 -hmac "$secret" |
awk '{print $2}'
```
For an example of generation in Python:
```python
import hmac, hashlib
@@ -70,4 +87,4 @@ def generate_mac(nonce, user, password, admin=False, user_type=None):
mac.update(user_type.encode('utf8'))
return mac.hexdigest()
```


@@ -302,6 +302,8 @@ The following fields are possible in the JSON response body:
* `state_events` - Total number of state_events of a room. Complexity of the room.
* `room_type` - The type of the room taken from the room's creation event; for example "m.space" if the room is a space.
If the room does not define a type, the value will be `null`.
* `forgotten` - Whether all local users have
[forgotten](https://spec.matrix.org/latest/client-server-api/#leaving-rooms) the room.
The API is:
@@ -330,10 +332,13 @@ A response body like the following is returned:
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534,
"room_type": "m.space"
"room_type": "m.space",
"forgotten": false
}
```
_Changed in Synapse 1.66:_ Added the `forgotten` key to the response body.
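As a quick illustration (not an excerpt from the official docs; the homeserver URL, admin token and room ID below are placeholders), the new `forgotten` flag can be read by calling the Room Details endpoint from Python:

```python
import requests  # assumes the `requests` library is installed
from urllib.parse import quote

HOMESERVER = "https://matrix.example.com"  # placeholder homeserver URL
ADMIN_TOKEN = "syt_placeholder_token"      # placeholder admin access token
ROOM_ID = "!abcdefg:example.com"           # placeholder room ID

resp = requests.get(
    # Room Details Admin API; the room ID must be URL-escaped.
    f"{HOMESERVER}/_synapse/admin/v1/rooms/{quote(ROOM_ID)}",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()
# `forgotten` is true once every local user has forgotten the room.
print(resp.json()["forgotten"])
```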
# Room Members API
The Room Members admin API allows server admins to get a list of all members of a room.


@@ -753,6 +753,7 @@ A response body like the following is returned:
"device_id": "QBUAZIFURK",
"display_name": "android",
"last_seen_ip": "1.2.3.4",
"last_seen_user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
"last_seen_ts": 1474491775024,
"user_id": "<user_id>"
},
@@ -760,6 +761,7 @@ A response body like the following is returned:
"device_id": "AUIECTSRND",
"display_name": "ios",
"last_seen_ip": "1.2.3.5",
"last_seen_user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
"last_seen_ts": 1474491775025,
"user_id": "<user_id>"
}
@@ -786,6 +788,8 @@ The following fields are returned in the JSON response body:
Absent if no name has been set.
- `last_seen_ip` - The IP address where this device was last seen.
(May be a few minutes out of date, for efficiency reasons).
- `last_seen_user_agent` - The user agent of the device when it was last seen.
(May be a few minutes out of date, for efficiency reasons).
- `last_seen_ts` - The timestamp (in milliseconds since the unix epoch) when this
device was last seen. (May be a few minutes out of date, for efficiency reasons).
- `user_id` - Owner of device.
@@ -837,6 +841,7 @@ A response body like the following is returned:
"device_id": "<device_id>",
"display_name": "android",
"last_seen_ip": "1.2.3.4",
"last_seen_user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
"last_seen_ts": 1474491775024,
"user_id": "<user_id>"
}
@@ -858,6 +863,8 @@ The following fields are returned in the JSON response body:
Absent if no name has been set.
- `last_seen_ip` - The IP address where this device was last seen.
(May be a few minutes out of date, for efficiency reasons).
- `last_seen_user_agent` - The user agent of the device when it was last seen.
(May be a few minutes out of date, for efficiency reasons).
- `last_seen_ts` - The timestamp (in milliseconds since the unix epoch) when this
device was last seen. (May be a few minutes out of date, for efficiency reasons).
- `user_id` - Owner of device.
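A hedged sketch of reading the new field via the devices Admin API (`/_synapse/admin/v2/users/<user_id>/devices`); the server URL, token and user ID are placeholders:

```python
import requests  # assumes the `requests` library is installed
from urllib.parse import quote

HOMESERVER = "https://matrix.example.com"  # placeholder homeserver URL
ADMIN_TOKEN = "syt_placeholder_token"      # placeholder admin access token
USER_ID = "@alice:example.com"             # placeholder user ID

resp = requests.get(
    f"{HOMESERVER}/_synapse/admin/v2/users/{quote(USER_ID)}/devices",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()
for device in resp.json()["devices"]:
    # `last_seen_user_agent` may be absent and can lag a few minutes behind.
    print(device["device_id"], device.get("last_seen_user_agent"))
```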


@@ -8,7 +8,8 @@ and allow server and room admins to configure how long messages should
be kept in a homeserver's database before being purged from it.
**Please note that, as this feature isn't part of the Matrix
specification yet, this implementation is to be considered as
experimental.**
experimental. There are known bugs which may cause database corruption.
Proceed with caution.**
A message retention policy is mainly defined by its `max_lifetime`
parameter, which defines how long a message can be kept around after


@@ -9,7 +9,7 @@ in, allowing them to specify custom templates:
```yaml
templates:
custom_templates_directory: /path/to/custom/templates/
custom_template_directory: /path/to/custom/templates/
```
If this setting is not set, or the files named below are not found within the directory,


@@ -2,9 +2,9 @@
How do I become a server admin?
---
If your server already has an admin account you should use the user admin API to promote other accounts to become admins. See [User Admin API](../../admin_api/user_admin_api.md#Change-whether-a-user-is-a-server-administrator-or-not)
If your server already has an admin account you should use the [User Admin API](../../admin_api/user_admin_api.md#Change-whether-a-user-is-a-server-administrator-or-not) to promote other accounts to become admins.
If you don't have any admin accounts yet you won't be able to use the admin API so you'll have to edit the database manually. Manually editing the database is generally not recommended so once you have an admin account, use the admin APIs to make further changes.
If you don't have any admin accounts yet you won't be able to use the admin API, so you'll have to edit the database manually. Manually editing the database is generally not recommended, so once you have an admin account, use the admin APIs to make further changes.
```sql
UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
@@ -32,9 +32,11 @@ What users are registered on my server?
SELECT NAME from users;
```
Manually resetting passwords:
Manually resetting passwords
---
See https://github.com/matrix-org/synapse/blob/master/README.rst#password-reset
Users can reset their password through their client. Alternatively, a server admin
can reset a user's password using the [admin API](../../admin_api/user_admin_api.md#reset-password).
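For illustration, here is a minimal sketch of that admin API call using Python's `requests` library; the homeserver URL, admin token, user ID and password below are placeholders:

```python
import requests  # assumes the `requests` library is installed
from urllib.parse import quote

HOMESERVER = "https://matrix.example.com"  # placeholder homeserver URL
ADMIN_TOKEN = "syt_placeholder_token"      # placeholder admin access token
USER_ID = "@alice:example.com"             # placeholder user whose password to reset

resp = requests.post(
    f"{HOMESERVER}/_synapse/admin/v1/reset_password/{quote(USER_ID)}",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    # logout_devices=True also invalidates the user's existing sessions.
    json={"new_password": "placeholder-new-password", "logout_devices": True},
)
resp.raise_for_status()
```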
I have a problem with my server. Can I just delete my database and start again?
---
@@ -101,3 +103,83 @@ LIMIT 10;
You can also use the [List Room API](../../admin_api/rooms.md#list-room-api)
and `order_by` `state_events`.
People can't accept room invitations from me
---
The typical failure mode here is that you send an invitation to someone
to join a room or direct chat, but when they go to accept it, they get an
error (typically along the lines of "Invalid signature"). They might see
something like the following in their logs:
2019-09-11 19:32:04,271 - synapse.federation.transport.server - 288 - WARNING - GET-11752 - authenticate_request failed: 401: Invalid signature for server <server> with key ed25519:a_EqML: Unable to verify signature for <server>
This is normally caused by a misconfiguration in your reverse-proxy. See [the reverse proxy docs](../../reverse_proxy.md) and double-check that your settings are correct.
Help!! Synapse is slow and eats all my RAM/CPU!
-----------------------------------------------
First, ensure you are running the latest version of Synapse, using Python 3
with a [PostgreSQL database](../../postgres.md).
Synapse's architecture is quite RAM hungry currently - we deliberately
cache a lot of recent room data and metadata in RAM in order to speed up
common requests. We'll improve this in the future, but for now the easiest
way to either reduce the RAM usage (at the risk of slowing things down)
is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory constrained environments, or increased if performance starts to
degrade.
However, degraded performance due to a low cache factor, common on
machines with slow disks, often leads to explosions in memory use due to
backlogged requests. In this case, reducing the cache factor will make
things worse. Instead, try increasing it drastically. 2.0 is a good
starting value.
Using [libjemalloc](https://jemalloc.net) can also yield a significant
improvement in overall memory use, and especially in terms of giving back
RAM to the OS. To use it, the library must simply be put in the
LD_PRELOAD environment variable when launching Synapse. On Debian, this
can be done by installing the `libjemalloc1` package and adding this
line to `/etc/default/matrix-synapse`:
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
This made a significant difference on Python 2.7 - it's unclear how
much of an improvement it provides on Python 3.x.
If you're encountering high CPU use by the Synapse process itself, you
may be affected by a bug with presence tracking that leads to a
massive excess of outgoing federation requests (see [discussion](https://github.com/matrix-org/synapse/issues/3971)). If metrics
indicate that your server is also issuing far more outgoing federation
requests than can be accounted for by your users' activity, this is a
likely cause. The misbehavior can be worked around by disabling presence
in the Synapse config file: [see here](../configuration/config_documentation.md#presence).
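For reference, the corresponding `homeserver.yaml` snippet (the same one shown in the old README text above) is:

```yaml
presence:
  enabled: false
```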
Running out of File Handles
---------------------------
If Synapse runs out of file handles, it typically fails badly - live-locking
at 100% CPU, and/or failing to accept new TCP connections (blocking the
connecting client). Matrix currently can legitimately use a lot of file handles,
thanks to busy rooms like `#matrix:matrix.org` containing hundreds of participating
servers. The first time a server talks in a room it will try to connect
simultaneously to all participating servers, which could exhaust the available
file descriptors between DNS queries & HTTPS sockets, especially if DNS is slow
to respond. (We need to improve the routing algorithm used to be better than
full mesh, but as of March 2019 this hasn't happened yet).
If you hit this failure mode, we recommend increasing the maximum number of
open file handles to be at least 4096 (assuming a default of 1024 or 256).
This is typically done by editing ``/etc/security/limits.conf``
Separately, Synapse may leak file handles if inbound HTTP requests get stuck
during processing - e.g. blocked behind a lock or talking to a remote server etc.
This is best diagnosed by matching up the 'Received request' and 'Processed request'
log lines and looking for any 'Processed request' lines which take more than
a few seconds to execute. Please let us know at [`#synapse:matrix.org`](https://matrix.to/#/#synapse:matrix.org) if
you see this failure mode so we can help debug it, however.


@@ -444,7 +444,7 @@ Sub-options for each listener include:
* `names`: a list of names of HTTP resources. See below for a list of valid resource names.
* `compress`: set to true to enable gzip compression on HTTP bodies for this resource. This is currently only supported with the
`client`, `consent` and `metrics` resources.
`client`, `consent`, `metrics` and `federation` resources.
* `additional_resources`: Only valid for an 'http' listener. A map of
additional endpoints which should be loaded via dynamic modules.
@@ -759,6 +759,10 @@ allowed_avatar_mimetypes: ["image/png", "image/jpeg", "image/gif"]
How long to keep redacted events in unredacted form in the database. After
this period redacted events get replaced with their redacted form in the DB.
Synapse will check whether the retention period has concluded for redacted
events every 5 minutes. Thus, even if this option is set to `0`, Synapse may
still take up to 5 minutes to purge redacted events from the database.
Defaults to `7d`. Set to `null` to disable.
Example configuration:
@@ -845,7 +849,11 @@ which are older than the room's maximum retention period. Synapse will also
filter events received over federation so that events that should have been
purged are ignored and not stored again.
The message retention policies feature is disabled by default.
The message retention policies feature is disabled by default. Please be advised
that enabling this feature carries some risk. There are known bugs with the implementation
which can cause database corruption. Setting retention to delete older history
is less risky than deleting newer history but in general caution is advised when enabling this
experimental feature. You can read more about this feature [here](../../message_retention_policies.md).
This setting has the following sub-options:
* `default_policy`: Default retention policy. If set, Synapse will apply it to rooms that lack the
@@ -3344,7 +3352,7 @@ user_directory:
For detailed instructions on user consent configuration, see [here](../../consent_tracking.md).
Parts of this section are required if enabling the `consent` resource under
`listeners`, in particular `template_dir` and `version`. # TODO: link `listeners`
[`listeners`](#listeners), in particular `template_dir` and `version`.
* `template_dir`: gives the location of the templates for the HTML forms.
This directory should contain one subdirectory per language (eg, `en`, `fr`),
@@ -3356,7 +3364,7 @@ Parts of this section are required if enabling the `consent` resource under
parameter.
* `server_notice_content`: if enabled, will send a user a "Server Notice"
asking them to consent to the privacy policy. The `server_notices` section ##TODO: link
asking them to consent to the privacy policy. The [`server_notices` section](#server_notices)
must also be configured for this to work. Notices will *not* be sent to
guest users unless `send_server_notice_to_guests` is set to true.


@@ -1,6 +1,6 @@
[mypy]
namespace_packages = True
plugins = mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
plugins = pydantic.mypy, mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
follow_imports = normal
check_untyped_defs = True
show_error_codes = True

poetry.lock (generated): 54 lines changed

@@ -778,6 +778,21 @@ category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydantic"
version = "1.9.1"
description = "Data validation and settings management using python type hints"
category = "main"
optional = false
python-versions = ">=3.6.1"
[package.dependencies]
typing-extensions = ">=3.7.4.3"
[package.extras]
dotenv = ["python-dotenv (>=0.10.4)"]
email = ["email-validator (>=1.0.3)"]
[[package]]
name = "pyflakes"
version = "2.4.0"
@@ -1563,7 +1578,7 @@ url_preview = ["lxml"]
[metadata]
lock-version = "1.1"
python-versions = "^3.7.1"
content-hash = "c24bbcee7e86dbbe7cdbf49f91a25b310bf21095452641e7440129f59b077f78"
content-hash = "7de518bf27967b3547eab8574342cfb67f87d6b47b4145c13de11112141dbf2d"
[metadata.files]
attrs = [
@@ -2260,6 +2275,43 @@ pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydantic = [
{file = "pydantic-1.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c8098a724c2784bf03e8070993f6d46aa2eeca031f8d8a048dff277703e6e193"},
{file = "pydantic-1.9.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c320c64dd876e45254bdd350f0179da737463eea41c43bacbee9d8c9d1021f11"},
{file = "pydantic-1.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:18f3e912f9ad1bdec27fb06b8198a2ccc32f201e24174cec1b3424dda605a310"},
{file = "pydantic-1.9.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c11951b404e08b01b151222a1cb1a9f0a860a8153ce8334149ab9199cd198131"},
{file = "pydantic-1.9.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:8bc541a405423ce0e51c19f637050acdbdf8feca34150e0d17f675e72d119580"},
{file = "pydantic-1.9.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:e565a785233c2d03724c4dc55464559639b1ba9ecf091288dd47ad9c629433bd"},
{file = "pydantic-1.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:a4a88dcd6ff8fd47c18b3a3709a89adb39a6373f4482e04c1b765045c7e282fd"},
{file = "pydantic-1.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:447d5521575f18e18240906beadc58551e97ec98142266e521c34968c76c8761"},
{file = "pydantic-1.9.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:985ceb5d0a86fcaa61e45781e567a59baa0da292d5ed2e490d612d0de5796918"},
{file = "pydantic-1.9.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:059b6c1795170809103a1538255883e1983e5b831faea6558ef873d4955b4a74"},
{file = "pydantic-1.9.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:d12f96b5b64bec3f43c8e82b4aab7599d0157f11c798c9f9c528a72b9e0b339a"},
{file = "pydantic-1.9.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:ae72f8098acb368d877b210ebe02ba12585e77bd0db78ac04a1ee9b9f5dd2166"},
{file = "pydantic-1.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:79b485767c13788ee314669008d01f9ef3bc05db9ea3298f6a50d3ef596a154b"},
{file = "pydantic-1.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:494f7c8537f0c02b740c229af4cb47c0d39840b829ecdcfc93d91dcbb0779892"},
{file = "pydantic-1.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f0f047e11febe5c3198ed346b507e1d010330d56ad615a7e0a89fae604065a0e"},
{file = "pydantic-1.9.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:969dd06110cb780da01336b281f53e2e7eb3a482831df441fb65dd30403f4608"},
{file = "pydantic-1.9.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:177071dfc0df6248fd22b43036f936cfe2508077a72af0933d0c1fa269b18537"},
{file = "pydantic-1.9.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:9bcf8b6e011be08fb729d110f3e22e654a50f8a826b0575c7196616780683380"},
{file = "pydantic-1.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a955260d47f03df08acf45689bd163ed9df82c0e0124beb4251b1290fa7ae728"},
{file = "pydantic-1.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9ce157d979f742a915b75f792dbd6aa63b8eccaf46a1005ba03aa8a986bde34a"},
{file = "pydantic-1.9.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0bf07cab5b279859c253d26a9194a8906e6f4a210063b84b433cf90a569de0c1"},
{file = "pydantic-1.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d93d4e95eacd313d2c765ebe40d49ca9dd2ed90e5b37d0d421c597af830c195"},
{file = "pydantic-1.9.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1542636a39c4892c4f4fa6270696902acb186a9aaeac6f6cf92ce6ae2e88564b"},
{file = "pydantic-1.9.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a9af62e9b5b9bc67b2a195ebc2c2662fdf498a822d62f902bf27cccb52dbbf49"},
{file = "pydantic-1.9.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:fe4670cb32ea98ffbf5a1262f14c3e102cccd92b1869df3bb09538158ba90fe6"},
{file = "pydantic-1.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:9f659a5ee95c8baa2436d392267988fd0f43eb774e5eb8739252e5a7e9cf07e0"},
{file = "pydantic-1.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b83ba3825bc91dfa989d4eed76865e71aea3a6ca1388b59fc801ee04c4d8d0d6"},
{file = "pydantic-1.9.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1dd8fecbad028cd89d04a46688d2fcc14423e8a196d5b0a5c65105664901f810"},
{file = "pydantic-1.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:02eefd7087268b711a3ff4db528e9916ac9aa18616da7bca69c1871d0b7a091f"},
{file = "pydantic-1.9.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7eb57ba90929bac0b6cc2af2373893d80ac559adda6933e562dcfb375029acee"},
{file = "pydantic-1.9.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:4ce9ae9e91f46c344bec3b03d6ee9612802682c1551aaf627ad24045ce090761"},
{file = "pydantic-1.9.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:72ccb318bf0c9ab97fc04c10c37683d9eea952ed526707fabf9ac5ae59b701fd"},
{file = "pydantic-1.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:61b6760b08b7c395975d893e0b814a11cf011ebb24f7d869e7118f5a339a82e1"},
{file = "pydantic-1.9.1-py3-none-any.whl", hash = "sha256:4988c0f13c42bfa9ddd2fe2f569c9d54646ce84adc5de84228cfe83396f3bd58"},
{file = "pydantic-1.9.1.tar.gz", hash = "sha256:1ed987c3ff29fff7fd8c3ea3a3ea877ad310aae2ef9889a119e22d3f2db0691a"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},


@@ -54,7 +54,7 @@ skip_gitignore = true
[tool.poetry]
name = "matrix-synapse"
version = "1.65.0"
version = "1.66.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@@ -158,6 +158,9 @@ packaging = ">=16.1"
# At the time of writing, we only use functions from the version of `importlib.metadata`
# which shipped in Python 3.8. This corresponds to version 1.4 of the backport.
importlib_metadata = { version = ">=1.4", python = "<3.8" }
# This is the most recent version of Pydantic available on common distros.
pydantic = ">=1.7.4"
# Optional Dependencies


@@ -497,6 +497,42 @@ pyasn1==0.4.8 \
pycparser==2.21 ; python_full_version >= "3.6.7" and platform_python_implementation == "PyPy" \
--hash=sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9 \
--hash=sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206
pydantic==1.9.1 \
--hash=sha256:c8098a724c2784bf03e8070993f6d46aa2eeca031f8d8a048dff277703e6e193 \
--hash=sha256:c320c64dd876e45254bdd350f0179da737463eea41c43bacbee9d8c9d1021f11 \
--hash=sha256:18f3e912f9ad1bdec27fb06b8198a2ccc32f201e24174cec1b3424dda605a310 \
--hash=sha256:c11951b404e08b01b151222a1cb1a9f0a860a8153ce8334149ab9199cd198131 \
--hash=sha256:8bc541a405423ce0e51c19f637050acdbdf8feca34150e0d17f675e72d119580 \
--hash=sha256:e565a785233c2d03724c4dc55464559639b1ba9ecf091288dd47ad9c629433bd \
--hash=sha256:a4a88dcd6ff8fd47c18b3a3709a89adb39a6373f4482e04c1b765045c7e282fd \
--hash=sha256:447d5521575f18e18240906beadc58551e97ec98142266e521c34968c76c8761 \
--hash=sha256:985ceb5d0a86fcaa61e45781e567a59baa0da292d5ed2e490d612d0de5796918 \
--hash=sha256:059b6c1795170809103a1538255883e1983e5b831faea6558ef873d4955b4a74 \
--hash=sha256:d12f96b5b64bec3f43c8e82b4aab7599d0157f11c798c9f9c528a72b9e0b339a \
--hash=sha256:ae72f8098acb368d877b210ebe02ba12585e77bd0db78ac04a1ee9b9f5dd2166 \
--hash=sha256:79b485767c13788ee314669008d01f9ef3bc05db9ea3298f6a50d3ef596a154b \
--hash=sha256:494f7c8537f0c02b740c229af4cb47c0d39840b829ecdcfc93d91dcbb0779892 \
--hash=sha256:f0f047e11febe5c3198ed346b507e1d010330d56ad615a7e0a89fae604065a0e \
--hash=sha256:969dd06110cb780da01336b281f53e2e7eb3a482831df441fb65dd30403f4608 \
--hash=sha256:177071dfc0df6248fd22b43036f936cfe2508077a72af0933d0c1fa269b18537 \
--hash=sha256:9bcf8b6e011be08fb729d110f3e22e654a50f8a826b0575c7196616780683380 \
--hash=sha256:a955260d47f03df08acf45689bd163ed9df82c0e0124beb4251b1290fa7ae728 \
--hash=sha256:9ce157d979f742a915b75f792dbd6aa63b8eccaf46a1005ba03aa8a986bde34a \
--hash=sha256:0bf07cab5b279859c253d26a9194a8906e6f4a210063b84b433cf90a569de0c1 \
--hash=sha256:5d93d4e95eacd313d2c765ebe40d49ca9dd2ed90e5b37d0d421c597af830c195 \
--hash=sha256:1542636a39c4892c4f4fa6270696902acb186a9aaeac6f6cf92ce6ae2e88564b \
--hash=sha256:a9af62e9b5b9bc67b2a195ebc2c2662fdf498a822d62f902bf27cccb52dbbf49 \
--hash=sha256:fe4670cb32ea98ffbf5a1262f14c3e102cccd92b1869df3bb09538158ba90fe6 \
--hash=sha256:9f659a5ee95c8baa2436d392267988fd0f43eb774e5eb8739252e5a7e9cf07e0 \
--hash=sha256:b83ba3825bc91dfa989d4eed76865e71aea3a6ca1388b59fc801ee04c4d8d0d6 \
--hash=sha256:1dd8fecbad028cd89d04a46688d2fcc14423e8a196d5b0a5c65105664901f810 \
--hash=sha256:02eefd7087268b711a3ff4db528e9916ac9aa18616da7bca69c1871d0b7a091f \
--hash=sha256:7eb57ba90929bac0b6cc2af2373893d80ac559adda6933e562dcfb375029acee \
--hash=sha256:4ce9ae9e91f46c344bec3b03d6ee9612802682c1551aaf627ad24045ce090761 \
--hash=sha256:72ccb318bf0c9ab97fc04c10c37683d9eea952ed526707fabf9ac5ae59b701fd \
--hash=sha256:61b6760b08b7c395975d893e0b814a11cf011ebb24f7d869e7118f5a339a82e1 \
--hash=sha256:4988c0f13c42bfa9ddd2fe2f569c9d54646ce84adc5de84228cfe83396f3bd58 \
--hash=sha256:1ed987c3ff29fff7fd8c3ea3a3ea877ad310aae2ef9889a119e22d3f2db0691a
pymacaroons==0.13.0 \
--hash=sha256:3e14dff6a262fdbf1a15e769ce635a8aea72e6f8f91e408f9a97166c53b91907 \
--hash=sha256:1e6bba42a5f66c245adf38a5a4006a99dcc06a0703786ea636098667d42903b8

View file

@ -0,0 +1,425 @@
#! /usr/bin/env python
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A script which enforces that Synapse always uses strict types when defining a Pydantic
model.
Pydantic does not yet offer a strict mode, but it is planned for pydantic v2. See
https://github.com/pydantic/pydantic/issues/1098
https://pydantic-docs.helpmanual.io/blog/pydantic-v2/#strict-mode
Until then, this script is a best effort to stop us from introducing type coercion bugs
(like the infamous stringy power levels fixed in room version 10).
"""
import argparse
import contextlib
import functools
import importlib
import logging
import os
import pkgutil
import sys
import textwrap
import traceback
import unittest.mock
from contextlib import contextmanager
from typing import Any, Callable, Dict, Generator, List, Set, Type, TypeVar
from parameterized import parameterized
from pydantic import BaseModel as PydanticBaseModel, conbytes, confloat, conint, constr
from pydantic.typing import get_args
from typing_extensions import ParamSpec
logger = logging.getLogger(__name__)
CONSTRAINED_TYPE_FACTORIES_WITH_STRICT_FLAG: List[Callable] = [
constr,
conbytes,
conint,
confloat,
]
TYPES_THAT_PYDANTIC_WILL_COERCE_TO = [
str,
bytes,
int,
float,
bool,
]
P = ParamSpec("P")
R = TypeVar("R")
class ModelCheckerException(Exception):
"""Dummy exception. Allows us to detect unwanted types during a module import."""
class MissingStrictInConstrainedTypeException(ModelCheckerException):
factory_name: str
def __init__(self, factory_name: str):
self.factory_name = factory_name
class FieldHasUnwantedTypeException(ModelCheckerException):
message: str
def __init__(self, message: str):
self.message = message
def make_wrapper(factory: Callable[P, R]) -> Callable[P, R]:
"""We patch `constr` and friends with wrappers that enforce strict=True."""
@functools.wraps(factory)
def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
# type-ignore: should be redundant once we can use https://github.com/python/mypy/pull/12668
if "strict" not in kwargs: # type: ignore[attr-defined]
raise MissingStrictInConstrainedTypeException(factory.__name__)
if not kwargs["strict"]: # type: ignore[index]
raise MissingStrictInConstrainedTypeException(factory.__name__)
return factory(*args, **kwargs)
return wrapper
def field_type_unwanted(type_: Any) -> bool:
"""Very rough attempt to detect if a type is unwanted as a Pydantic annotation.
At present, we exclude types which will coerce, or any generic type involving types
which will coerce."""
logger.debug("Is %s unwanted?")
if type_ in TYPES_THAT_PYDANTIC_WILL_COERCE_TO:
logger.debug("yes")
return True
logger.debug("Maybe. Subargs are %s", get_args(type_))
rv = any(field_type_unwanted(t) for t in get_args(type_))
logger.debug("Conclusion: %s %s unwanted", type_, "is" if rv else "is not")
return rv
class PatchedBaseModel(PydanticBaseModel):
"""A patched version of BaseModel that inspects fields after models are defined.
We complain loudly if we see an unwanted type.
Beware: ModelField.type_ is presumably private; this is likely to be very brittle.
"""
@classmethod
def __init_subclass__(cls: Type[PydanticBaseModel], **kwargs: object):
for field in cls.__fields__.values():
# Note that field.type_ and field.outer_type_ are computed based on the
# annotation type, see pydantic.fields.ModelField._type_analysis
if field_type_unwanted(field.outer_type_):
# TODO: this only reports the first bad field. Can we find all bad ones
# and report them all?
raise FieldHasUnwantedTypeException(
f"{cls.__module__}.{cls.__qualname__} has field '{field.name}' "
f"with unwanted type `{field.outer_type_}`"
)
@contextmanager
def monkeypatch_pydantic() -> Generator[None, None, None]:
"""Patch pydantic with our snooping versions of BaseModel and the con* functions.
If the snooping functions see something they don't like, they'll raise a
ModelCheckerException instance.
"""
with contextlib.ExitStack() as patches:
# Most Synapse code ought to import the patched objects directly from
# `pydantic`. But we also patch their containing modules `pydantic.main` and
# `pydantic.types` for completeness.
patch_basemodel1 = unittest.mock.patch(
"pydantic.BaseModel", new=PatchedBaseModel
)
patch_basemodel2 = unittest.mock.patch(
"pydantic.main.BaseModel", new=PatchedBaseModel
)
patches.enter_context(patch_basemodel1)
patches.enter_context(patch_basemodel2)
for factory in CONSTRAINED_TYPE_FACTORIES_WITH_STRICT_FLAG:
wrapper: Callable = make_wrapper(factory)
patch1 = unittest.mock.patch(f"pydantic.{factory.__name__}", new=wrapper)
patch2 = unittest.mock.patch(
f"pydantic.types.{factory.__name__}", new=wrapper
)
patches.enter_context(patch1)
patches.enter_context(patch2)
yield
def format_model_checker_exception(e: ModelCheckerException) -> str:
"""Work out which line of code caused e. Format the line in a human-friendly way."""
# TODO. FieldHasUnwantedTypeException gives better error messages. Can we ditch the
# patches of constr() etc, and instead inspect fields to look for ConstrainedStr
# with strict=False? There is some difficulty with the inheritance hierarchy
# because StrictStr < ConstrainedStr < str.
if isinstance(e, FieldHasUnwantedTypeException):
return e.message
elif isinstance(e, MissingStrictInConstrainedTypeException):
frame_summary = traceback.extract_tb(e.__traceback__)[-2]
return (
f"Missing `strict=True` from {e.factory_name}() call \n"
+ traceback.format_list([frame_summary])[0].lstrip()
)
else:
raise ValueError(f"Unknown exception {e}") from e
def lint() -> int:
"""Try to import all of Synapse and see if we spot any Pydantic type coercions.
Print any problems, then return a status code suitable for sys.exit."""
failures = do_lint()
if failures:
print(f"Found {len(failures)} problem(s)")
for failure in sorted(failures):
print(failure)
return os.EX_DATAERR if failures else os.EX_OK
def do_lint() -> Set[str]:
"""Try to import all of Synapse and see if we spot any Pydantic type coercions."""
failures = set()
with monkeypatch_pydantic():
logger.debug("Importing synapse")
try:
# TODO: make "synapse" an argument so we can target this script at
# a subpackage
module = importlib.import_module("synapse")
except ModelCheckerException as e:
logger.warning("Bad annotation found when importing synapse")
failures.add(format_model_checker_exception(e))
return failures
try:
logger.debug("Fetching subpackages")
module_infos = list(
pkgutil.walk_packages(module.__path__, f"{module.__name__}.")
)
except ModelCheckerException as e:
logger.warning("Bad annotation found when looking for modules to import")
failures.add(format_model_checker_exception(e))
return failures
for module_info in module_infos:
logger.debug("Importing %s", module_info.name)
try:
importlib.import_module(module_info.name)
except ModelCheckerException as e:
logger.warning(
f"Bad annotation found when importing {module_info.name}"
)
failures.add(format_model_checker_exception(e))
return failures
def run_test_snippet(source: str) -> None:
"""Exec a snippet of source code in an isolated environment."""
# To emulate `source` being called at the top level of the module,
# the globals and locals we provide apparently have to be the same mapping.
#
# > Remember that at the module level, globals and locals are the same dictionary.
# > If exec gets two separate objects as globals and locals, the code will be
# > executed as if it were embedded in a class definition.
globals_: Dict[str, object]
locals_: Dict[str, object]
globals_ = locals_ = {}
exec(textwrap.dedent(source), globals_, locals_)
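# Illustration of the quirk (not executed): with two distinct dicts,
#     exec("import os\ndef f(): return os\nf()", {}, {})
# raises NameError, because `os` is bound in the locals mapping while the
# body of f() looks names up through the globals mapping.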
class TestConstrainedTypesPatch(unittest.TestCase):
def test_expression_without_strict_raises(self) -> None:
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
run_test_snippet(
"""
from pydantic import constr
constr()
"""
)
def test_called_as_module_attribute_raises(self) -> None:
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
run_test_snippet(
"""
import pydantic
pydantic.constr()
"""
)
def test_wildcard_import_raises(self) -> None:
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
run_test_snippet(
"""
from pydantic import *
constr()
"""
)
def test_alternative_import_raises(self) -> None:
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
run_test_snippet(
"""
from pydantic.types import constr
constr()
"""
)
def test_alternative_import_attribute_raises(self) -> None:
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
run_test_snippet(
"""
import pydantic.types
pydantic.types.constr()
"""
)
def test_kwarg_but_no_strict_raises(self) -> None:
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
run_test_snippet(
"""
from pydantic import constr
constr(min_length=10)
"""
)
def test_kwarg_strict_False_raises(self) -> None:
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
run_test_snippet(
"""
from pydantic import constr
constr(strict=False)
"""
)
def test_kwarg_strict_True_doesnt_raise(self) -> None:
with monkeypatch_pydantic():
run_test_snippet(
"""
from pydantic import constr
constr(strict=True)
"""
)
def test_annotation_without_strict_raises(self) -> None:
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
run_test_snippet(
"""
from pydantic import constr
x: constr()
"""
)
def test_field_annotation_without_strict_raises(self) -> None:
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
run_test_snippet(
"""
from pydantic import BaseModel, conint
class C:
x: conint()
"""
)
class TestFieldTypeInspection(unittest.TestCase):
@parameterized.expand(
[
("str",),
("bytes"),
("int",),
("float",),
("bool"),
("Optional[str]",),
("Union[None, str]",),
("List[str]",),
("List[List[str]]",),
("Dict[StrictStr, str]",),
("Dict[str, StrictStr]",),
("TypedDict('D', x=int)",),
]
)
def test_field_holding_unwanted_type_raises(self, annotation: str) -> None:
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
run_test_snippet(
f"""
from typing import *
from pydantic import *
class C(BaseModel):
f: {annotation}
"""
)
@parameterized.expand(
[
("StrictStr",),
("StrictBytes"),
("StrictInt",),
("StrictFloat",),
("StrictBool"),
("constr(strict=True, min_length=10)",),
("Optional[StrictStr]",),
("Union[None, StrictStr]",),
("List[StrictStr]",),
("List[List[StrictStr]]",),
("Dict[StrictStr, StrictStr]",),
("TypedDict('D', x=StrictInt)",),
]
)
def test_field_holding_accepted_type_doesnt_raise(self, annotation: str) -> None:
with monkeypatch_pydantic():
run_test_snippet(
f"""
from typing import *
from pydantic import *
class C(BaseModel):
f: {annotation}
"""
)
def test_field_holding_str_raises_with_alternative_import(self) -> None:
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
run_test_snippet(
"""
from pydantic.main import BaseModel
class C(BaseModel):
f: str
"""
)
parser = argparse.ArgumentParser()
parser.add_argument("mode", choices=["lint", "test"], default="lint", nargs="?")
parser.add_argument("-v", "--verbose", action="store_true")
if __name__ == "__main__":
args = parser.parse_args(sys.argv[1:])
logging.basicConfig(
format="%(asctime)s %(name)s:%(lineno)d %(levelname)s %(message)s",
level=logging.DEBUG if args.verbose else logging.INFO,
)
# suppress logs we don't care about
logging.getLogger("xmlschema").setLevel(logging.WARNING)
if args.mode == "lint":
sys.exit(lint())
elif args.mode == "test":
unittest.main(argv=sys.argv[:1])
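To make the failure mode concrete, here is a minimal sketch (assuming pydantic 1.x; model and field names are invented) of the coercion this linter is meant to catch:

from pydantic import BaseModel, StrictInt

class LaxPowerLevels(BaseModel):
    users_default: int  # pydantic 1.x silently coerces "50" -> 50

class StrictPowerLevels(BaseModel):
    users_default: StrictInt  # rejects "50" with a ValidationError

LaxPowerLevels(users_default="50")     # accepted: users_default == 50
StrictPowerLevels(users_default="50")  # raises pydantic.ValidationError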

View file

@ -106,4 +106,5 @@ isort "${files[@]}"
python3 -m black "${files[@]}"
./scripts-dev/config-lint.sh
flake8 "${files[@]}"
./scripts-dev/check_pydantic_models.py lint
mypy

View file

@ -37,8 +37,7 @@ from synapse.logging.opentracing import (
start_active_span,
trace,
)
from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import Requester, UserID, create_requester
from synapse.types import Requester, create_requester
if TYPE_CHECKING:
from synapse.server import HomeServer
@ -70,14 +69,14 @@ class Auth:
async def check_user_in_room(
self,
room_id: str,
user_id: str,
requester: Requester,
allow_departed_users: bool = False,
) -> Tuple[str, Optional[str]]:
"""Check if the user is in the room, or was at some point.
Args:
room_id: The room to check.
user_id: The user to check.
requester: The user making the request, according to the access token.
current_state: Optional map of the current state of the room.
If provided then that map is used to check whether they are a
@ -94,6 +93,7 @@ class Auth:
membership event ID of the user.
"""
user_id = requester.user.to_string()
(
membership,
member_event_id,
@ -182,96 +182,69 @@ class Auth:
access_token = self.get_access_token_from_request(request)
(
user_id,
device_id,
app_service,
) = await self._get_appservice_user_id_and_device_id(request)
if user_id and app_service:
if ip_addr and self._track_appservice_user_ips:
await self.store.insert_client_ip(
user_id=user_id,
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id="dummy-device"
if device_id is None
else device_id, # stubbed
)
requester = create_requester(
user_id, app_service=app_service, device_id=device_id
# First check if it could be a request from an appservice
requester = await self._get_appservice_user(request)
if not requester:
# If not, it should be from a regular user
requester = await self.get_user_by_access_token(
access_token, allow_expired=allow_expired
)
request.requester = user_id
return requester
# Deny the request if the user account has expired.
# This check is only done for regular users, not appservice ones.
if not allow_expired:
if await self._account_validity_handler.is_user_expired(
requester.user.to_string()
):
# Raise the error if either an account validity module has determined
# the account has expired, or the legacy account validity
# implementation is enabled and determined the account has expired
raise AuthError(
403,
"User account has expired",
errcode=Codes.EXPIRED_ACCOUNT,
)
user_info = await self.get_user_by_access_token(
access_token, allow_expired=allow_expired
)
token_id = user_info.token_id
is_guest = user_info.is_guest
shadow_banned = user_info.shadow_banned
# Deny the request if the user account has expired.
if not allow_expired:
if await self._account_validity_handler.is_user_expired(
user_info.user_id
):
# Raise the error if either an account validity module has determined
# the account has expired, or the legacy account validity
# implementation is enabled and determined the account has expired
raise AuthError(
403,
"User account has expired",
errcode=Codes.EXPIRED_ACCOUNT,
)
device_id = user_info.device_id
if access_token and ip_addr:
if ip_addr and (
not requester.app_service or self._track_appservice_user_ips
):
# XXX(quenting): I'm 95% confident that we could skip setting the
# device_id to "dummy-device" for appservices, and that the only impact
# would be some rows which would not deduplicate in the 'user_ips'
# table during the transition
recorded_device_id = (
"dummy-device"
if requester.device_id is None and requester.app_service is not None
else requester.device_id
)
await self.store.insert_client_ip(
user_id=user_info.token_owner,
user_id=requester.authenticated_entity,
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=device_id,
device_id=recorded_device_id,
)
# Track also the puppeted user client IP if enabled and the user is puppeting
if (
user_info.user_id != user_info.token_owner
requester.user.to_string() != requester.authenticated_entity
and self._track_puppeted_user_ips
):
await self.store.insert_client_ip(
user_id=user_info.user_id,
user_id=requester.user.to_string(),
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=device_id,
device_id=requester.device_id,
)
if is_guest and not allow_guest:
if requester.is_guest and not allow_guest:
raise AuthError(
403,
"Guest access not allowed",
errcode=Codes.GUEST_ACCESS_FORBIDDEN,
)
# Mark the token as used. This is used to invalidate old refresh
# tokens after some time.
if not user_info.token_used and token_id is not None:
await self.store.mark_access_token_as_used(token_id)
requester = create_requester(
user_info.user_id,
token_id,
is_guest,
shadow_banned,
device_id,
app_service=app_service,
authenticated_entity=user_info.token_owner,
)
request.requester = requester
return requester
except KeyError:
@ -309,9 +282,7 @@ class Auth:
403, "Application service has not registered this user (%s)" % user_id
)
async def _get_appservice_user_id_and_device_id(
self, request: Request
) -> Tuple[Optional[str], Optional[str], Optional[ApplicationService]]:
async def _get_appservice_user(self, request: Request) -> Optional[Requester]:
"""
Given a request, reads the request parameters to determine:
- whether it's an application service that's making this request
@ -326,15 +297,13 @@ class Auth:
Must use `org.matrix.msc3202.device_id` in place of `device_id` for now.
Returns:
3-tuple of
(user ID?, device ID?, application service?)
the application service `Requester` of that request
Postconditions:
- If an application service is returned, so is a user ID
- A user ID is never returned without an application service
- A device ID is never returned without a user ID or an application service
- The returned application service, if present, is permitted to control the
returned user ID.
- The `app_service` field in the returned `Requester` is set
- The `user_id` field in the returned `Requester` is either the application
service sender or the controlled user set by the `user_id` URI parameter
- The returned application service is permitted to control the returned user ID.
- The returned device ID, if present, has been checked to be a valid device ID
for the returned user ID.
"""
@ -344,12 +313,12 @@ class Auth:
self.get_access_token_from_request(request)
)
if app_service is None:
return None, None, None
return None
if app_service.ip_range_whitelist:
ip_address = IPAddress(request.getClientAddress().host)
if ip_address not in app_service.ip_range_whitelist:
return None, None, None
return None
# This will always be set by the time Twisted calls us.
assert request.args is not None
@ -383,13 +352,15 @@ class Auth:
Codes.EXCLUSIVE,
)
return effective_user_id, effective_device_id, app_service
return create_requester(
effective_user_id, app_service=app_service, device_id=effective_device_id
)
async def get_user_by_access_token(
self,
token: str,
allow_expired: bool = False,
) -> TokenLookupResult:
) -> Requester:
"""Validate access token and get user_id from it
Args:
@ -406,9 +377,9 @@ class Auth:
# First look in the database to see if the access token is present
# as an opaque token.
r = await self.store.get_user_by_access_token(token)
if r:
valid_until_ms = r.valid_until_ms
user_info = await self.store.get_user_by_access_token(token)
if user_info:
valid_until_ms = user_info.valid_until_ms
if (
not allow_expired
and valid_until_ms is not None
@ -420,7 +391,20 @@ class Auth:
msg="Access token has expired", soft_logout=True
)
return r
# Mark the token as used. This is used to invalidate old refresh
# tokens after some time.
await self.store.mark_access_token_as_used(user_info.token_id)
requester = create_requester(
user_id=user_info.user_id,
access_token_id=user_info.token_id,
is_guest=user_info.is_guest,
shadow_banned=user_info.shadow_banned,
device_id=user_info.device_id,
authenticated_entity=user_info.token_owner,
)
return requester
# If the token isn't found in the database, then it could still be a
# macaroon for a guest, so we check that here.
@ -446,11 +430,12 @@ class Auth:
"Guest access token used for regular user"
)
return TokenLookupResult(
return create_requester(
user_id=user_id,
is_guest=True,
# all guests get the same device id
device_id=GUEST_DEVICE_ID,
authenticated_entity=user_id,
)
except (
pymacaroons.exceptions.MacaroonException,
@ -473,32 +458,33 @@ class Auth:
request.requester = create_requester(service.sender, app_service=service)
return service
async def is_server_admin(self, user: UserID) -> bool:
async def is_server_admin(self, requester: Requester) -> bool:
"""Check if the given user is a local server admin.
Args:
user: user to check
requester: The user making the request, according to the access token.
Returns:
True if the user is an admin
"""
return await self.store.is_server_admin(user)
return await self.store.is_server_admin(requester.user)
async def check_can_change_room_list(self, room_id: str, user: UserID) -> bool:
async def check_can_change_room_list(
self, room_id: str, requester: Requester
) -> bool:
"""Determine whether the user is allowed to edit the room's entry in the
published room list.
Args:
room_id
user
room_id: The room to check.
requester: The user making the request, according to the access token.
"""
is_admin = await self.is_server_admin(user)
is_admin = await self.is_server_admin(requester)
if is_admin:
return True
user_id = user.to_string()
await self.check_user_in_room(room_id, user_id)
await self.check_user_in_room(room_id, requester)
# We currently require the user is a "moderator" in the room. We do this
# by checking if they would (theoretically) be able to change the
@ -517,7 +503,9 @@ class Auth:
send_level = event_auth.get_send_level(
EventTypes.CanonicalAlias, "", power_level_event
)
user_level = event_auth.get_user_power_level(user_id, auth_events)
user_level = event_auth.get_user_power_level(
requester.user.to_string(), auth_events
)
return user_level >= send_level
@ -575,16 +563,16 @@ class Auth:
@trace
async def check_user_in_room_or_world_readable(
self, room_id: str, user_id: str, allow_departed_users: bool = False
self, room_id: str, requester: Requester, allow_departed_users: bool = False
) -> Tuple[str, Optional[str]]:
"""Checks that the user is or was in the room or the room is world
readable. If it isn't then an exception is raised.
Args:
room_id: room to check
user_id: user to check
allow_departed_users: if True, accept users that were previously
members but have now departed
room_id: The room to check.
requester: The user making the request, according to the access token.
allow_departed_users: If True, accept users that were previously
members but have now departed.
Returns:
Resolves to the current membership of the user in the room and the
@ -599,7 +587,7 @@ class Auth:
# * The user is a guest user, and has joined the room
# else it will throw.
return await self.check_user_in_room(
room_id, user_id, allow_departed_users=allow_departed_users
room_id, requester, allow_departed_users=allow_departed_users
)
except AuthError:
visibility = await self._storage_controllers.state.get_current_state_event(
@ -614,6 +602,6 @@ class Auth:
raise UnstableSpecAuthError(
403,
"User %s not in room %s, and room previews are disabled"
% (user_id, room_id),
% (requester.user, room_id),
errcode=Codes.NOT_JOINED,
)
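Taken together, callers now thread the whole `Requester` through the auth checks instead of a bare user ID. A hypothetical call site under the new signatures (shapes taken from the diff above; the surrounding handler code is invented):

requester = await self.auth.get_user_by_req(request)

# Membership and admin checks take the Requester directly:
await self.auth.check_user_in_room(room_id, requester, allow_departed_users=True)
if await self.auth.is_server_admin(requester):
    ...  # privileged path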

View file

@ -216,11 +216,11 @@ class EventContentFields:
MSC2716_HISTORICAL: Final = "org.matrix.msc2716.historical"
# For "insertion" events to indicate what the next batch ID should be in
# order to connect to it
MSC2716_NEXT_BATCH_ID: Final = "org.matrix.msc2716.next_batch_id"
MSC2716_NEXT_BATCH_ID: Final = "next_batch_id"
# Used on "batch" events to indicate which insertion event it connects to
MSC2716_BATCH_ID: Final = "org.matrix.msc2716.batch_id"
MSC2716_BATCH_ID: Final = "batch_id"
# For "marker" events
MSC2716_MARKER_INSERTION: Final = "org.matrix.msc2716.marker.insertion"
MSC2716_INSERTION_EVENT_REFERENCE: Final = "insertion_event_reference"
# The authorising user for joining a restricted room.
AUTHORISING_USER: Final = "join_authorised_via_users_server"

View file

@ -269,24 +269,6 @@ class RoomVersions:
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
MSC2716v3 = RoomVersion(
"org.matrix.msc2716v3",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=True,
msc2716_historical=True,
msc2716_redactions=True,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
MSC3787 = RoomVersion(
"org.matrix.msc3787",
RoomDisposition.UNSTABLE,
@ -323,6 +305,24 @@ class RoomVersions:
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
)
MSC2716v4 = RoomVersion(
"org.matrix.msc2716v4",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=True,
msc2716_historical=True,
msc2716_redactions=True,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
@ -338,9 +338,9 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
RoomVersions.V7,
RoomVersions.V8,
RoomVersions.V9,
RoomVersions.MSC2716v3,
RoomVersions.MSC3787,
RoomVersions.V10,
RoomVersions.MSC2716v4,
)
}
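For reference, a sketch of how one of these room versions is looked up by its unstable identifier (assuming the usual `synapse.api.room_versions` module path):

from synapse.api.room_versions import KNOWN_ROOM_VERSIONS

v4 = KNOWN_ROOM_VERSIONS["org.matrix.msc2716v4"]
assert v4.msc2716_historical and v4.msc2716_redactions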

View file

@ -441,6 +441,13 @@ def start(config_options: List[str]) -> None:
"synapse.app.user_dir",
)
if config.experimental.faster_joins_enabled:
raise ConfigError(
"You have enabled the experimental `faster_joins` config option, but it is "
"not compatible with worker deployments yet. Please disable `faster_joins` "
"or run Synapse as a single process deployment instead."
)
synapse.events.USE_FROZEN_DICTS = config.server.use_frozen_dicts
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage

View file

@ -220,7 +220,10 @@ class SynapseHomeServer(HomeServer):
resources.update({"/_matrix/consent": consent_resource})
if name == "federation":
resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
federation_resource: Resource = TransportLayerServer(self)
if compress:
federation_resource = gz_wrap(federation_resource)
resources.update({FEDERATION_PREFIX: federation_resource})
if name == "openid":
resources.update(
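`gz_wrap` applies Twisted's content-encoding machinery to the federation resource. The underlying pattern looks roughly like this (a sketch of the general technique, not the helper's exact body):

from twisted.web.resource import EncodingResourceWrapper
from twisted.web.server import GzipEncoderFactory

def gzip_wrapped(resource):
    # Compress responses with gzip when the client sends Accept-Encoding: gzip.
    return EncodingResourceWrapper(resource, [GzipEncoderFactory()])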

View file

@ -23,7 +23,7 @@ LEGACY_TEMPLATE_DIR_WARNING = """
This server's configuration file is using the deprecated 'template_dir' setting in the
'account_validity' section. Support for this setting has been deprecated and will be
removed in a future version of Synapse. Server admins should instead use the new
'custom_templates_directory' setting documented here:
'custom_template_directory' setting documented here:
https://matrix-org.github.io/synapse/latest/templates.html
---------------------------------------------------------------------------------------"""

View file

@ -53,7 +53,7 @@ LEGACY_TEMPLATE_DIR_WARNING = """
This server's configuration file is using the deprecated 'template_dir' setting in the
'email' section. Support for this setting has been deprecated and will be removed in a
future version of Synapse. Server admins should instead use the new
'custom_templates_directory' setting documented here:
'custom_template_directory' setting documented here:
https://matrix-org.github.io/synapse/latest/templates.html
---------------------------------------------------------------------------------------"""

View file

@ -90,3 +90,6 @@ class ExperimentalConfig(Config):
# MSC3848: Introduce errcodes for specific event sending failures
self.msc3848_enabled: bool = experimental.get("msc3848_enabled", False)
# MSC3852: Expose last seen user agent field on /_matrix/client/v3/devices.
self.msc3852_enabled: bool = experimental.get("msc3852_enabled", False)

View file

@ -26,7 +26,7 @@ LEGACY_TEMPLATE_DIR_WARNING = """
This server's configuration file is using the deprecated 'template_dir' setting in the
'sso' section. Support for this setting has been deprecated and will be removed in a
future version of Synapse. Server admins should instead use the new
'custom_templates_directory' setting documented here:
'custom_template_directory' setting documented here:
https://matrix-org.github.io/synapse/latest/templates.html
---------------------------------------------------------------------------------------"""

View file

@ -161,7 +161,7 @@ def prune_event_dict(room_version: RoomVersion, event_dict: JsonDict) -> JsonDic
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_BATCH:
add_fields(EventContentFields.MSC2716_BATCH_ID)
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_MARKER:
add_fields(EventContentFields.MSC2716_MARKER_INSERTION)
add_fields(EventContentFields.MSC2716_INSERTION_EVENT_REFERENCE)
allowed_fields = {k: v for k, v in event_dict.items() if k in allowed_keys}

View file

@ -61,7 +61,7 @@ from synapse.federation.federation_base import (
)
from synapse.federation.transport.client import SendJoinResponse
from synapse.http.types import QueryParams
from synapse.logging.opentracing import trace
from synapse.logging.opentracing import SynapseTags, set_tag, tag_args, trace
from synapse.types import JsonDict, UserID, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
@ -235,6 +235,7 @@ class FederationClient(FederationBase):
)
@trace
@tag_args
async def backfill(
self, dest: str, room_id: str, limit: int, extremities: Collection[str]
) -> Optional[List[EventBase]]:
@ -337,6 +338,8 @@ class FederationClient(FederationBase):
return None
@trace
@tag_args
async def get_pdu(
self,
destinations: Iterable[str],
@ -448,6 +451,8 @@ class FederationClient(FederationBase):
return event_copy
@trace
@tag_args
async def get_room_state_ids(
self, destination: str, room_id: str, event_id: str
) -> Tuple[List[str], List[str]]:
@ -467,6 +472,23 @@ class FederationClient(FederationBase):
state_event_ids = result["pdu_ids"]
auth_event_ids = result.get("auth_chain_ids", [])
set_tag(
SynapseTags.RESULT_PREFIX + "state_event_ids",
str(state_event_ids),
)
set_tag(
SynapseTags.RESULT_PREFIX + "state_event_ids.length",
str(len(state_event_ids)),
)
set_tag(
SynapseTags.RESULT_PREFIX + "auth_event_ids",
str(auth_event_ids),
)
set_tag(
SynapseTags.RESULT_PREFIX + "auth_event_ids.length",
str(len(auth_event_ids)),
)
if not isinstance(state_event_ids, list) or not isinstance(
auth_event_ids, list
):
@ -474,6 +496,8 @@ class FederationClient(FederationBase):
return state_event_ids, auth_event_ids
@trace
@tag_args
async def get_room_state(
self,
destination: str,
@ -533,6 +557,7 @@ class FederationClient(FederationBase):
return valid_state_events, valid_auth_events
@trace
async def _check_sigs_and_hash_and_fetch(
self,
origin: str,

View file

@ -61,7 +61,12 @@ from synapse.logging.context import (
nested_logging_context,
run_in_background,
)
from synapse.logging.opentracing import log_kv, start_active_span_from_edu, trace
from synapse.logging.opentracing import (
log_kv,
start_active_span_from_edu,
tag_args,
trace,
)
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.replication.http.federation import (
ReplicationFederationSendEduRestServlet,
@ -547,6 +552,8 @@ class FederationServer(FederationBase):
return 200, resp
@trace
@tag_args
async def on_state_ids_request(
self, origin: str, room_id: str, event_id: str
) -> Tuple[int, JsonDict]:
@ -569,6 +576,8 @@ class FederationServer(FederationBase):
return 200, resp
@trace
@tag_args
async def _on_state_ids_request_compute(
self, room_id: str, event_id: str
) -> JsonDict:

View file

@ -280,7 +280,7 @@ class AuthHandler:
that it isn't stolen by re-authenticating them.
Args:
requester: The user, as given by the access token
requester: The user making the request, according to the access token.
request: The request sent by the client.
@ -1435,20 +1435,25 @@ class AuthHandler:
access_token: access token to be deleted
"""
user_info = await self.auth.get_user_by_access_token(access_token)
token = await self.store.get_user_by_access_token(access_token)
if not token:
# At this point, the token should already have been fetched once by
# the caller, so this should not happen, unless there is a race condition
# between two delete requests
raise SynapseError(HTTPStatus.UNAUTHORIZED, "Unrecognised access token")
await self.store.delete_access_token(access_token)
# see if any modules want to know about this
await self.password_auth_provider.on_logged_out(
user_id=user_info.user_id,
device_id=user_info.device_id,
user_id=token.user_id,
device_id=token.device_id,
access_token=access_token,
)
# delete pushers associated with this access token
if user_info.token_id is not None:
if token.token_id is not None:
await self.hs.get_pusherpool().remove_pushers_by_access_token(
user_info.user_id, (user_info.token_id,)
token.user_id, (token.token_id,)
)
async def delete_access_tokens_for_user(

View file

@ -74,6 +74,7 @@ class DeviceWorkerHandler:
self._state_storage = hs.get_storage_controllers().state
self._auth_handler = hs.get_auth_handler()
self.server_name = hs.hostname
self._msc3852_enabled = hs.config.experimental.msc3852_enabled
@trace
async def get_devices_by_user(self, user_id: str) -> List[JsonDict]:
@ -747,7 +748,13 @@ def _update_device_from_client_ips(
device: JsonDict, client_ips: Mapping[Tuple[str, str], Mapping[str, Any]]
) -> None:
ip = client_ips.get((device["user_id"], device["device_id"]), {})
device.update({"last_seen_ts": ip.get("last_seen"), "last_seen_ip": ip.get("ip")})
device.update(
{
"last_seen_user_agent": ip.get("user_agent"),
"last_seen_ts": ip.get("last_seen"),
"last_seen_ip": ip.get("ip"),
}
)
class DeviceListUpdater:

View file

@ -30,7 +30,7 @@ from synapse.api.errors import (
from synapse.appservice import ApplicationService
from synapse.module_api import NOT_SPAM
from synapse.storage.databases.main.directory import RoomAliasMapping
from synapse.types import JsonDict, Requester, RoomAlias, UserID, get_domain_from_id
from synapse.types import JsonDict, Requester, RoomAlias, get_domain_from_id
if TYPE_CHECKING:
from synapse.server import HomeServer
@ -135,7 +135,7 @@ class DirectoryHandler:
else:
# Server admins are not subject to the same constraints as normal
# users when creating an alias (e.g. being in the room).
is_admin = await self.auth.is_server_admin(requester.user)
is_admin = await self.auth.is_server_admin(requester)
if (self.require_membership and check_membership) and not is_admin:
rooms_for_user = await self.store.get_rooms_for_user(user_id)
@ -199,7 +199,7 @@ class DirectoryHandler:
user_id = requester.user.to_string()
try:
can_delete = await self._user_can_delete_alias(room_alias, user_id)
can_delete = await self._user_can_delete_alias(room_alias, requester)
except StoreError as e:
if e.code == 404:
raise NotFoundError("Unknown room alias")
@ -402,7 +402,9 @@ class DirectoryHandler:
# either no interested services, or no service with an exclusive lock
return True
async def _user_can_delete_alias(self, alias: RoomAlias, user_id: str) -> bool:
async def _user_can_delete_alias(
self, alias: RoomAlias, requester: Requester
) -> bool:
"""Determine whether a user can delete an alias.
One of the following must be true:
@ -415,7 +417,7 @@ class DirectoryHandler:
"""
creator = await self.store.get_room_alias_creator(alias.to_string())
if creator == user_id:
if creator == requester.user.to_string():
return True
# Resolve the alias to the corresponding room.
@ -424,9 +426,7 @@ class DirectoryHandler:
if not room_id:
return False
return await self.auth.check_can_change_room_list(
room_id, UserID.from_string(user_id)
)
return await self.auth.check_can_change_room_list(room_id, requester)
async def edit_published_room_list(
self, requester: Requester, room_id: str, visibility: str
@ -465,7 +465,7 @@ class DirectoryHandler:
raise SynapseError(400, "Unknown room")
can_change_room_list = await self.auth.check_can_change_room_list(
room_id, requester.user
room_id, requester
)
if not can_change_room_list:
raise AuthError(
@ -530,10 +530,8 @@ class DirectoryHandler:
Get a list of the aliases that currently point to this room on this server
"""
# allow access to server admins and current members of the room
is_admin = await self.auth.is_server_admin(requester.user)
is_admin = await self.auth.is_server_admin(requester)
if not is_admin:
await self.auth.check_user_in_room_or_world_readable(
room_id, requester.user.to_string()
)
await self.auth.check_user_in_room_or_world_readable(room_id, requester)
return await self.store.get_aliases_for_room(room_id)

View file

@ -32,6 +32,7 @@ from typing import (
)
import attr
from prometheus_client import Histogram
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
from unpaddedbase64 import decode_base64
@ -59,7 +60,7 @@ from synapse.events.validator import EventValidator
from synapse.federation.federation_client import InvalidResponseError
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.context import nested_logging_context
from synapse.logging.opentracing import trace
from synapse.logging.opentracing import SynapseTags, set_tag, tag_args, trace
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.module_api import NOT_SPAM
from synapse.replication.http.federation import (
@ -79,6 +80,29 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# Added to debug performance and track progress on optimizations
backfill_processing_before_timer = Histogram(
"synapse_federation_backfill_processing_before_time_seconds",
"sec",
[],
buckets=(
0.1,
0.5,
1.0,
2.5,
5.0,
7.5,
10.0,
15.0,
20.0,
30.0,
40.0,
60.0,
80.0,
"+Inf",
),
)
def get_domains_from_state(state: StateMap[EventBase]) -> List[Tuple[str, int]]:
"""Get joined domains from state
@ -138,6 +162,7 @@ class FederationHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.clock = hs.get_clock()
self.store = hs.get_datastores().main
self._storage_controllers = hs.get_storage_controllers()
self._state_storage_controller = self._storage_controllers.state
@ -197,12 +222,39 @@ class FederationHandler:
return. This is used as part of the heuristic to decide if we
should back paginate.
"""
# Starting the processing time here so we can include the room backfill
# linearizer lock queue in the timing
processing_start_time = self.clock.time_msec()
async with self._room_backfill.queue(room_id):
return await self._maybe_backfill_inner(room_id, current_depth, limit)
return await self._maybe_backfill_inner(
room_id,
current_depth,
limit,
processing_start_time=processing_start_time,
)
async def _maybe_backfill_inner(
self, room_id: str, current_depth: int, limit: int
self,
room_id: str,
current_depth: int,
limit: int,
*,
processing_start_time: int,
) -> bool:
"""
Checks whether the `current_depth` is at or approaching any backfill
points in the room and if so, will backfill. We only care about
checking backfill points that happened before the `current_depth`
(meaning less than or equal to the `current_depth`).
Args:
room_id: The room to backfill in.
current_depth: The depth to check at for any upcoming backfill points.
limit: The max number of events to request from the remote federated server.
processing_start_time: The time when `maybe_backfill` started
processing. Only used for timing.
"""
backwards_extremities = [
_BackfillPoint(event_id, depth, _BackfillPointType.BACKWARDS_EXTREMITY)
for event_id, depth in await self.store.get_oldest_event_ids_with_depth_in_room(
@ -370,6 +422,14 @@ class FederationHandler:
logger.debug(
"_maybe_backfill_inner: extremities_to_request %s", extremities_to_request
)
set_tag(
SynapseTags.RESULT_PREFIX + "extremities_to_request",
str(extremities_to_request),
)
set_tag(
SynapseTags.RESULT_PREFIX + "extremities_to_request.length",
str(len(extremities_to_request)),
)
# Now we need to decide which hosts to hit first.
@ -425,6 +485,11 @@ class FederationHandler:
return False
processing_end_time = self.clock.time_msec()
backfill_processing_before_timer.observe(
(processing_end_time - processing_start_time) / 1000
)
success = await try_backfill(likely_domains)
if success:
return True
@ -1081,6 +1146,8 @@ class FederationHandler:
return event
@trace
@tag_args
async def get_state_ids_for_pdu(self, room_id: str, event_id: str) -> List[str]:
"""Returns the state at the event. i.e. not including said event."""
event = await self.store.get_event(event_id, check_room_id=room_id)
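The histogram is driven in two ways in this commit: by calling observe() on a duration computed from the clock, and (in the next file) via the time() context manager. A standalone sketch of both idioms, with an invented metric name:

import time
from prometheus_client import Histogram

demo_timer = Histogram(
    "demo_backfill_processing_seconds", "sec", buckets=(0.1, 1.0, 10.0, float("inf"))
)

start = time.monotonic()
...  # the work being measured
demo_timer.observe(time.monotonic() - start)

with demo_timer.time():  # equivalent shorthand
    ...  # the work being measured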

View file

@ -29,7 +29,7 @@ from typing import (
Tuple,
)
from prometheus_client import Counter
from prometheus_client import Counter, Histogram
from synapse import event_auth
from synapse.api.constants import (
@ -59,7 +59,13 @@ from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.federation.federation_client import InvalidResponseError
from synapse.logging.context import nested_logging_context
from synapse.logging.opentracing import trace
from synapse.logging.opentracing import (
SynapseTags,
set_tag,
start_active_span,
tag_args,
trace,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
from synapse.replication.http.federation import (
@ -92,6 +98,36 @@ soft_failed_event_counter = Counter(
"Events received over federation that we marked as soft_failed",
)
# Added to debug performance and track progress on optimizations
backfill_processing_after_timer = Histogram(
"synapse_federation_backfill_processing_after_time_seconds",
"sec",
[],
buckets=(
0.1,
0.25,
0.5,
1.0,
2.5,
5.0,
7.5,
10.0,
15.0,
20.0,
25.0,
30.0,
40.0,
50.0,
60.0,
80.0,
100.0,
120.0,
150.0,
180.0,
"+Inf",
),
)
class FederationEventHandler:
"""Handles events that originated from federation.
@ -410,6 +446,7 @@ class FederationEventHandler:
prev_member_event,
)
@trace
async def process_remote_join(
self,
origin: str,
@ -597,20 +634,21 @@ class FederationEventHandler:
if not events:
return
# if there are any events in the wrong room, the remote server is buggy and
# should not be trusted.
for ev in events:
if ev.room_id != room_id:
raise InvalidResponseError(
f"Remote server {dest} returned event {ev.event_id} which is in "
f"room {ev.room_id}, when we were backfilling in {room_id}"
)
with backfill_processing_after_timer.time():
# if there are any events in the wrong room, the remote server is buggy and
# should not be trusted.
for ev in events:
if ev.room_id != room_id:
raise InvalidResponseError(
f"Remote server {dest} returned event {ev.event_id} which is in "
f"room {ev.room_id}, when we were backfilling in {room_id}"
)
await self._process_pulled_events(
dest,
events,
backfilled=True,
)
await self._process_pulled_events(
dest,
events,
backfilled=True,
)
@trace
async def _get_missing_events_for_pdu(
@ -715,7 +753,7 @@ class FederationEventHandler:
@trace
async def _process_pulled_events(
self, origin: str, events: Iterable[EventBase], backfilled: bool
self, origin: str, events: Collection[EventBase], backfilled: bool
) -> None:
"""Process a batch of events we have pulled from a remote server
@ -730,6 +768,15 @@ class FederationEventHandler:
backfilled: True if this is part of a historical batch of events (inhibits
notification to clients, and validation of device keys.)
"""
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids",
str([event.event_id for event in events]),
)
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
str(len(events)),
)
set_tag(SynapseTags.FUNC_ARG_PREFIX + "backfilled", str(backfilled))
logger.debug(
"processing pulled backfilled=%s events=%s",
backfilled,
@ -753,6 +800,7 @@ class FederationEventHandler:
await self._process_pulled_event(origin, ev, backfilled=backfilled)
@trace
@tag_args
async def _process_pulled_event(
self, origin: str, event: EventBase, backfilled: bool
) -> None:
@ -854,6 +902,7 @@ class FederationEventHandler:
else:
raise
@trace
async def _compute_event_context_with_maybe_missing_prevs(
self, dest: str, event: EventBase
) -> EventContext:
@ -970,6 +1019,8 @@ class FederationEventHandler:
event, state_ids_before_event=state_map, partial_state=partial_state
)
@trace
@tag_args
async def _get_state_ids_after_missing_prev_event(
self,
destination: str,
@ -1009,10 +1060,10 @@ class FederationEventHandler:
logger.debug("Fetching %i events from cache/store", len(desired_events))
have_events = await self._store.have_seen_events(room_id, desired_events)
missing_desired_events = desired_events - have_events
missing_desired_event_ids = desired_events - have_events
logger.debug(
"We are missing %i events (got %i)",
len(missing_desired_events),
len(missing_desired_event_ids),
len(have_events),
)
@ -1024,13 +1075,30 @@ class FederationEventHandler:
# already have a bunch of the state events. It would be nice if the
# federation api gave us a way of finding out which we actually need.
missing_auth_events = set(auth_event_ids) - have_events
missing_auth_events.difference_update(
await self._store.have_seen_events(room_id, missing_auth_events)
missing_auth_event_ids = set(auth_event_ids) - have_events
missing_auth_event_ids.difference_update(
await self._store.have_seen_events(room_id, missing_auth_event_ids)
)
logger.debug("We are also missing %i auth events", len(missing_auth_events))
logger.debug("We are also missing %i auth events", len(missing_auth_event_ids))
missing_events = missing_desired_events | missing_auth_events
missing_event_ids = missing_desired_event_ids | missing_auth_event_ids
set_tag(
SynapseTags.RESULT_PREFIX + "missing_auth_event_ids",
str(missing_auth_event_ids),
)
set_tag(
SynapseTags.RESULT_PREFIX + "missing_auth_event_ids.length",
str(len(missing_auth_event_ids)),
)
set_tag(
SynapseTags.RESULT_PREFIX + "missing_desired_event_ids",
str(missing_desired_event_ids),
)
set_tag(
SynapseTags.RESULT_PREFIX + "missing_desired_event_ids.length",
str(len(missing_desired_event_ids)),
)
# Making an individual request for each of 1000s of events has a lot of
# overhead. On the other hand, we don't really want to fetch all of the events
@ -1041,13 +1109,13 @@ class FederationEventHandler:
#
# TODO: might it be better to have an API which lets us do an aggregate event
# request
if (len(missing_events) * 10) >= len(auth_event_ids) + len(state_event_ids):
if (len(missing_event_ids) * 10) >= len(auth_event_ids) + len(state_event_ids):
logger.debug("Requesting complete state from remote")
await self._get_state_and_persist(destination, room_id, event_id)
else:
logger.debug("Fetching %i events from remote", len(missing_events))
logger.debug("Fetching %i events from remote", len(missing_event_ids))
await self._get_events_and_persist(
destination=destination, room_id=room_id, event_ids=missing_events
destination=destination, room_id=room_id, event_ids=missing_event_ids
)
# We now need to fill out the state map, which involves fetching the
@ -1104,6 +1172,14 @@ class FederationEventHandler:
event_id,
failed_to_fetch,
)
set_tag(
SynapseTags.RESULT_PREFIX + "failed_to_fetch",
str(failed_to_fetch),
)
set_tag(
SynapseTags.RESULT_PREFIX + "failed_to_fetch.length",
str(len(failed_to_fetch)),
)
if remote_event.is_state() and remote_event.rejected_reason is None:
state_map[
@ -1112,6 +1188,8 @@ class FederationEventHandler:
return state_map
@trace
@tag_args
async def _get_state_and_persist(
self, destination: str, room_id: str, event_id: str
) -> None:
@ -1133,6 +1211,7 @@ class FederationEventHandler:
destination=destination, room_id=room_id, event_ids=(event_id,)
)
@trace
async def _process_received_pdu(
self,
origin: str,
@ -1283,6 +1362,7 @@ class FederationEventHandler:
except Exception:
logger.exception("Failed to resync device for %s", sender)
@trace
async def _handle_marker_event(self, origin: str, marker_event: EventBase) -> None:
"""Handles backfilling the insertion event when we receive a marker
event that points to one.
@ -1314,7 +1394,7 @@ class FederationEventHandler:
logger.debug("_handle_marker_event: received %s", marker_event)
insertion_event_id = marker_event.content.get(
EventContentFields.MSC2716_MARKER_INSERTION
EventContentFields.MSC2716_INSERTION_EVENT_REFERENCE
)
if insertion_event_id is None:
@ -1414,6 +1494,8 @@ class FederationEventHandler:
return event_from_response
@trace
@tag_args
async def _get_events_and_persist(
self, destination: str, room_id: str, event_ids: Collection[str]
) -> None:
@ -1459,6 +1541,7 @@ class FederationEventHandler:
logger.info("Fetched %i events of %i requested", len(events), len(event_ids))
await self._auth_and_persist_outliers(room_id, events)
@trace
async def _auth_and_persist_outliers(
self, room_id: str, events: Iterable[EventBase]
) -> None:
@ -1477,6 +1560,16 @@ class FederationEventHandler:
"""
event_map = {event.event_id: event for event in events}
event_ids = event_map.keys()
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids",
str(event_ids),
)
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
str(len(event_ids)),
)
# filter out any events we have already seen. This might happen because
# the events were eagerly pushed to us (eg, during a room join), or because
# another thread has raced against us since we decided to request the event.
@ -1593,6 +1686,7 @@ class FederationEventHandler:
backfilled=True,
)
@trace
async def _check_event_auth(
self, origin: Optional[str], event: EventBase, context: EventContext
) -> None:
@ -1631,6 +1725,14 @@ class FederationEventHandler:
claimed_auth_events = await self._load_or_fetch_auth_events_for_event(
origin, event
)
set_tag(
SynapseTags.RESULT_PREFIX + "claimed_auth_events",
str([ev.event_id for ev in claimed_auth_events]),
)
set_tag(
SynapseTags.RESULT_PREFIX + "claimed_auth_events.length",
str(len(claimed_auth_events)),
)
# ... and check that the event passes auth at those auth events.
# https://spec.matrix.org/v1.3/server-server-api/#checks-performed-on-receipt-of-a-pdu:
@ -1728,6 +1830,7 @@ class FederationEventHandler:
)
context.rejected = RejectedReason.AUTH_ERROR
@trace
async def _maybe_kick_guest_users(self, event: EventBase) -> None:
if event.type != EventTypes.GuestAccess:
return
@ -1935,6 +2038,8 @@ class FederationEventHandler:
# instead we raise an AuthError, which will make the caller ignore it.
raise AuthError(code=HTTPStatus.FORBIDDEN, msg="Auth events could not be found")
@trace
@tag_args
async def _get_remote_auth_chain_for_event(
self, destination: str, room_id: str, event_id: str
) -> None:
@ -1963,6 +2068,7 @@ class FederationEventHandler:
await self._auth_and_persist_outliers(room_id, remote_auth_events)
@trace
async def _run_push_actions_and_persist_event(
self, event: EventBase, context: EventContext, backfilled: bool = False
) -> None:
@ -2071,8 +2177,17 @@ class FederationEventHandler:
self._message_handler.maybe_schedule_expiry(event)
if not backfilled: # Never notify for backfilled events
for event in events:
await self._notify_persisted_event(event, max_stream_token)
with start_active_span("notify_persisted_events"):
set_tag(
SynapseTags.RESULT_PREFIX + "event_ids",
str([ev.event_id for ev in events]),
)
set_tag(
SynapseTags.RESULT_PREFIX + "event_ids.length",
str(len(events)),
)
for event in events:
await self._notify_persisted_event(event, max_stream_token)
return max_stream_token.stream
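The start_active_span/set_tag additions follow the standard opentracing pattern that Synapse's helpers wrap. Roughly (a sketch using the raw opentracing API, with invented tag names and event IDs):

import opentracing

event_ids = ["$event1:example.com", "$event2:example.com"]  # hypothetical

with opentracing.tracer.start_active_span("notify_persisted_events") as scope:
    scope.span.set_tag("result.event_ids", str(event_ids))
    scope.span.set_tag("result.event_ids.length", str(len(event_ids)))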

View file

@ -309,18 +309,18 @@ class InitialSyncHandler:
if blocked:
raise SynapseError(403, "This room has been blocked on this server")
user_id = requester.user.to_string()
(
membership,
member_event_id,
) = await self.auth.check_user_in_room_or_world_readable(
room_id,
user_id,
requester,
allow_departed_users=True,
)
is_peeking = member_event_id is None
user_id = requester.user.to_string()
if membership == Membership.JOIN:
result = await self._room_initial_sync_joined(
user_id, room_id, pagin_config, membership, is_peeking

View file

@ -104,7 +104,7 @@ class MessageHandler:
async def get_room_data(
self,
user_id: str,
requester: Requester,
room_id: str,
event_type: str,
state_key: str,
@ -112,7 +112,7 @@ class MessageHandler:
"""Get data from a room.
Args:
user_id
requester: The user who did the request.
room_id
event_type
state_key
@ -125,7 +125,7 @@ class MessageHandler:
membership,
membership_event_id,
) = await self.auth.check_user_in_room_or_world_readable(
room_id, user_id, allow_departed_users=True
room_id, requester, allow_departed_users=True
)
if membership == Membership.JOIN:
@ -161,11 +161,10 @@ class MessageHandler:
async def get_state_events(
self,
user_id: str,
requester: Requester,
room_id: str,
state_filter: Optional[StateFilter] = None,
at_token: Optional[StreamToken] = None,
is_guest: bool = False,
) -> List[dict]:
"""Retrieve all state events for a given room. If the user is
joined to the room then return the current state. If the user has
@ -174,14 +173,13 @@ class MessageHandler:
visible.
Args:
user_id: The user requesting state events.
requester: The user requesting state events.
room_id: The room ID to get all state events from.
state_filter: The state filter used to fetch state from the database.
at_token: the stream token at which we are requesting
the state. If the user is not allowed to view the state as of that
stream token, we raise a 403 SynapseError. If None, returns the current
state based on the current_state_events table.
is_guest: whether this user is a guest
Returns:
A list of dicts representing state events. [{}, {}, {}]
Raises:
@ -191,6 +189,7 @@ class MessageHandler:
members of this room.
"""
state_filter = state_filter or StateFilter.all()
user_id = requester.user.to_string()
if at_token:
last_event_id = (
@ -223,7 +222,7 @@ class MessageHandler:
membership,
membership_event_id,
) = await self.auth.check_user_in_room_or_world_readable(
room_id, user_id, allow_departed_users=True
room_id, requester, allow_departed_users=True
)
if membership == Membership.JOIN:
@ -317,12 +316,11 @@ class MessageHandler:
Returns:
A dict of user_id to profile info
"""
user_id = requester.user.to_string()
if not requester.app_service:
# We check AS auth after fetching the room membership, as it
# requires us to pull out all joined members anyway.
membership, _ = await self.auth.check_user_in_room_or_world_readable(
room_id, user_id, allow_departed_users=True
room_id, requester, allow_departed_users=True
)
if membership != Membership.JOIN:
raise SynapseError(
@ -331,12 +329,19 @@ class MessageHandler:
msg="Getting joined members while not being a current member of the room is forbidden.",
)
users_with_profile = await self.store.get_users_in_room_with_profiles(room_id)
users_with_profile = (
await self._state_storage_controller.get_users_in_room_with_profiles(
room_id
)
)
# If this is an AS, double check that they are allowed to see the members.
# This can either be because the AS user is in the room or because there
# is a user in the room that the AS is "interested in"
if requester.app_service and user_id not in users_with_profile:
if (
requester.app_service
and requester.user.to_string() not in users_with_profile
):
for uid in users_with_profile:
if requester.app_service.is_interested_in_user(uid):
break

View file

@ -464,7 +464,7 @@ class PaginationHandler:
membership,
member_event_id,
) = await self.auth.check_user_in_room_or_world_readable(
room_id, user_id, allow_departed_users=True
room_id, requester, allow_departed_users=True
)
if pagin_config.direction == "b":

View file

@ -29,7 +29,13 @@ from synapse.api.constants import (
JoinRules,
LoginType,
)
from synapse.api.errors import AuthError, Codes, ConsentNotGivenError, SynapseError
from synapse.api.errors import (
AuthError,
Codes,
ConsentNotGivenError,
InvalidClientTokenError,
SynapseError,
)
from synapse.appservice import ApplicationService
from synapse.config.server import is_threepid_reserved
from synapse.http.servlet import assert_params_in_dict
@ -185,10 +191,7 @@ class RegistrationHandler:
)
if guest_access_token:
user_data = await self.auth.get_user_by_access_token(guest_access_token)
if (
not user_data.is_guest
or UserID.from_string(user_data.user_id).localpart != localpart
):
if not user_data.is_guest or user_data.user.localpart != localpart:
raise AuthError(
403,
"Cannot register taken user ID without valid guest "
@ -625,7 +628,7 @@ class RegistrationHandler:
user_id = user.to_string()
service = self.store.get_app_service_by_token(as_token)
if not service:
raise AuthError(403, "Invalid application service token.")
raise InvalidClientTokenError()
if not service.is_interested_in_user(user_id):
raise SynapseError(
400,

View file

@ -103,7 +103,7 @@ class RelationsHandler:
# TODO Properly handle a user leaving a room.
(_, member_event_id) = await self._auth.check_user_in_room_or_world_readable(
room_id, user_id, allow_departed_users=True
room_id, requester, allow_departed_users=True
)
# This gets the original event and checks that a) the event exists and

View file

@ -721,7 +721,7 @@ class RoomCreationHandler:
# allow the server notices mxid to create rooms
is_requester_admin = True
else:
is_requester_admin = await self.auth.is_server_admin(requester.user)
is_requester_admin = await self.auth.is_server_admin(requester)
# Let the third party rules modify the room creation config if needed, or abort
# the room creation entirely with an exception.
@ -1291,7 +1291,7 @@ class RoomContextHandler:
"""
user = requester.user
if use_admin_priviledge:
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
before_limit = math.floor(limit / 2.0)
after_limit = limit - before_limit

View file

@ -179,7 +179,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
"""Try and join a room that this server is not in
Args:
requester
requester: The user making the request, according to the access token.
remote_room_hosts: List of servers that can be used to join via.
room_id: Room that we are trying to join
user: User who is trying to join
@ -689,7 +689,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
errcode=Codes.BAD_JSON,
)
if "avatar_url" in content:
if "avatar_url" in content and content.get("avatar_url") is not None:
if not await self.profile_handler.check_avatar_size_and_mime_type(
content["avatar_url"],
):
@ -744,7 +744,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
is_requester_admin = True
else:
is_requester_admin = await self.auth.is_server_admin(requester.user)
is_requester_admin = await self.auth.is_server_admin(requester)
if not is_requester_admin:
if self.config.server.block_non_admin_invites:
@ -868,7 +868,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
bypass_spam_checker = True
else:
bypass_spam_checker = await self.auth.is_server_admin(requester.user)
bypass_spam_checker = await self.auth.is_server_admin(requester)
inviter = await self._get_inviter(target.to_string(), room_id)
if (
@ -1410,7 +1410,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
ShadowBanError if the requester has been shadow-banned.
"""
if self.config.server.block_non_admin_invites:
is_requester_admin = await self.auth.is_server_admin(requester.user)
is_requester_admin = await self.auth.is_server_admin(requester)
if not is_requester_admin:
raise SynapseError(
403, "Invites have been disabled on this server", Codes.FORBIDDEN
@ -1693,7 +1693,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
check_complexity
and self.hs.config.server.limit_remote_rooms.admins_can_join
):
check_complexity = not await self.auth.is_server_admin(user)
check_complexity = not await self.store.is_server_admin(user)
if check_complexity:
# Fetch the room complexity

View file

@ -13,7 +13,19 @@
# limitations under the License.
import itertools
import logging
from typing import TYPE_CHECKING, Any, Dict, FrozenSet, List, Optional, Set, Tuple
from typing import (
TYPE_CHECKING,
Any,
Collection,
Dict,
FrozenSet,
List,
Mapping,
Optional,
Sequence,
Set,
Tuple,
)
import attr
from prometheus_client import Counter
@ -89,7 +101,7 @@ class SyncConfig:
@attr.s(slots=True, frozen=True, auto_attribs=True)
class TimelineBatch:
prev_batch: StreamToken
events: List[EventBase]
events: Sequence[EventBase]
limited: bool
# A mapping of event ID to the bundled aggregations for the above events.
# This is only calculated if limited is true.
@ -507,10 +519,17 @@ class SyncHandler:
# ensure that we always include current state in the timeline
current_state_ids: FrozenSet[str] = frozenset()
if any(e.is_state() for e in recents):
# FIXME(faster_joins): We use the partial state here as
# we don't want to block `/sync` on finishing a lazy join.
# Which should be fine once
# https://github.com/matrix-org/synapse/issues/12989 is resolved,
# since we shouldn't reach here anymore?
# Note that we use the current state as a whitelist for filtering
# `recents`, so partial state is only a problem when a membership
# event turns up in `recents` but has not made it into the current
# state.
current_state_ids_map = (
await self._state_storage_controller.get_current_state_ids(
room_id
)
await self.store.get_partial_current_state_ids(room_id)
)
current_state_ids = frozenset(current_state_ids_map.values())
@ -579,7 +598,13 @@ class SyncHandler:
if any(e.is_state() for e in loaded_recents):
# FIXME(faster_joins): We use the partial state here as
# we don't want to block `/sync` on finishing a lazy join.
# Is this the correct way of doing it?
# Which should be fine once
# https://github.com/matrix-org/synapse/issues/12989 is resolved,
# since we shouldn't reach here anymore?
# Note that we use the current state as a whitelist for filtering
# `loaded_recents`, so partial state is only a problem when a
# membership event turns up in `loaded_recents` but has not made it
# into the current state.
current_state_ids_map = (
await self.store.get_partial_current_state_ids(room_id)
)
@ -627,7 +652,10 @@ class SyncHandler:
)
async def get_state_after_event(
self, event_id: str, state_filter: Optional[StateFilter] = None
self,
event_id: str,
state_filter: Optional[StateFilter] = None,
await_full_state: bool = True,
) -> StateMap[str]:
"""
Get the room state after the given event
@ -635,9 +663,14 @@ class SyncHandler:
Args:
event_id: event of interest
state_filter: The state filter used to fetch state from the database.
await_full_state: if `True`, will block if we do not yet have complete state
at the event and `state_filter` is not satisfied by partial state.
Defaults to `True`.
"""
state_ids = await self._state_storage_controller.get_state_ids_for_event(
event_id, state_filter=state_filter or StateFilter.all()
event_id,
state_filter=state_filter or StateFilter.all(),
await_full_state=await_full_state,
)
# using get_metadata_for_events here (instead of get_event) sidesteps an issue
@ -660,6 +693,7 @@ class SyncHandler:
room_id: str,
stream_position: StreamToken,
state_filter: Optional[StateFilter] = None,
await_full_state: bool = True,
) -> StateMap[str]:
"""Get the room state at a particular stream position
@ -667,6 +701,9 @@ class SyncHandler:
room_id: room for which to get state
stream_position: point at which to get state
state_filter: The state filter used to fetch state from the database.
await_full_state: if `True`, will block if we do not yet have complete state
at the last event in the room before `stream_position` and
`state_filter` is not satisfied by partial state. Defaults to `True`.
"""
# FIXME: This gets the state at the latest event before the stream ordering,
# which might not be the same as the "current state" of the room at the time
@ -678,7 +715,9 @@ class SyncHandler:
if last_event_id:
state = await self.get_state_after_event(
last_event_id, state_filter=state_filter or StateFilter.all()
last_event_id,
state_filter=state_filter or StateFilter.all(),
await_full_state=await_full_state,
)
else:
@ -852,16 +891,26 @@ class SyncHandler:
now_token: StreamToken,
full_state: bool,
) -> MutableStateMap[EventBase]:
"""Works out the difference in state between the start of the timeline
and the previous sync.
"""Works out the difference in state between the end of the previous sync and
the start of the timeline.
Args:
room_id:
batch: The timeline batch for the room that will be sent to the user.
sync_config:
since_token: Token of the end of the previous batch. May be None.
since_token: Token of the end of the previous batch. May be `None`.
now_token: Token of the end of the current batch.
full_state: Whether to force returning the full state.
`lazy_load_members` still applies when `full_state` is `True`.
Returns:
The state to return in the sync response for the room.
Clients will overlay this onto the state at the end of the previous sync to
arrive at the state at the start of the timeline.
Clients will then overlay state events in the timeline to arrive at the
state at the end of the timeline, in preparation for the next sync.
"""
# TODO(mjark) Check if the state events were received by the server
# after the previous sync, since we need to include those state
@ -869,8 +918,17 @@ class SyncHandler:
# TODO(mjark) Check for new redactions in the state events.
with Measure(self.clock, "compute_state_delta"):
# The memberships needed for events in the timeline.
# Only calculated when `lazy_load_members` is on.
members_to_fetch: Optional[Set[str]] = None
members_to_fetch = None
# A dictionary mapping user IDs to the first event in the timeline sent by
# them. Only calculated when `lazy_load_members` is on.
first_event_by_sender_map: Optional[Dict[str, EventBase]] = None
# The contribution to the room state from state events in the timeline.
# Only contains the last event for any given state key.
timeline_state: StateMap[str]
lazy_load_members = sync_config.filter_collection.lazy_load_members()
include_redundant_members = (
@ -881,10 +939,23 @@ class SyncHandler:
# We only request state for the members needed to display the
# timeline:
members_to_fetch = {
event.sender # FIXME: we also care about invite targets etc.
for event in batch.events
}
timeline_state = {}
members_to_fetch = set()
first_event_by_sender_map = {}
for event in batch.events:
# Build the map from user IDs to the first timeline event they sent.
if event.sender not in first_event_by_sender_map:
first_event_by_sender_map[event.sender] = event
# We need the event's sender, unless their membership was in a
# previous timeline event.
if (EventTypes.Member, event.sender) not in timeline_state:
members_to_fetch.add(event.sender)
# FIXME: we also care about invite targets etc.
if event.is_state():
timeline_state[(event.type, event.state_key)] = event.event_id
if full_state:
# always make sure we LL ourselves so we know we're in the room
@ -894,55 +965,80 @@ class SyncHandler:
members_to_fetch.add(sync_config.user.to_string())
state_filter = StateFilter.from_lazy_load_member_list(members_to_fetch)
else:
state_filter = StateFilter.all()
timeline_state = {
(event.type, event.state_key): event.event_id
for event in batch.events
if event.is_state()
}
# We are happy to use partial state to compute the `/sync` response.
# Since partial state may not include the lazy-loaded memberships we
# require, we fix up the state response afterwards with memberships from
# auth events.
await_full_state = False
else:
timeline_state = {
(event.type, event.state_key): event.event_id
for event in batch.events
if event.is_state()
}
state_filter = StateFilter.all()
await_full_state = True
# Now calculate the state to return in the sync response for the room.
# This is more or less the change in state between the end of the previous
# sync's timeline and the start of the current sync's timeline.
# See the docstring above for details.
state_ids: StateMap[str]
if full_state:
if batch:
current_state_ids = (
state_at_timeline_end = (
await self._state_storage_controller.get_state_ids_for_event(
batch.events[-1].event_id, state_filter=state_filter
batch.events[-1].event_id,
state_filter=state_filter,
await_full_state=await_full_state,
)
)
state_ids = (
state_at_timeline_start = (
await self._state_storage_controller.get_state_ids_for_event(
batch.events[0].event_id, state_filter=state_filter
batch.events[0].event_id,
state_filter=state_filter,
await_full_state=await_full_state,
)
)
else:
current_state_ids = await self.get_state_at(
room_id, stream_position=now_token, state_filter=state_filter
state_at_timeline_end = await self.get_state_at(
room_id,
stream_position=now_token,
state_filter=state_filter,
await_full_state=await_full_state,
)
state_ids = current_state_ids
state_at_timeline_start = state_at_timeline_end
state_ids = _calculate_state(
timeline_contains=timeline_state,
timeline_start=state_ids,
previous={},
current=current_state_ids,
timeline_start=state_at_timeline_start,
timeline_end=state_at_timeline_end,
previous_timeline_end={},
lazy_load_members=lazy_load_members,
)
elif batch.limited:
if batch:
state_at_timeline_start = (
await self._state_storage_controller.get_state_ids_for_event(
batch.events[0].event_id, state_filter=state_filter
batch.events[0].event_id,
state_filter=state_filter,
await_full_state=await_full_state,
)
)
else:
# We can get here if the user has ignored the senders of all
# the recent events.
state_at_timeline_start = await self.get_state_at(
room_id, stream_position=now_token, state_filter=state_filter
room_id,
stream_position=now_token,
state_filter=state_filter,
await_full_state=await_full_state,
)
# for now, we disable LL for gappy syncs - see
@ -964,28 +1060,35 @@ class SyncHandler:
# is indeed the case.
assert since_token is not None
state_at_previous_sync = await self.get_state_at(
room_id, stream_position=since_token, state_filter=state_filter
room_id,
stream_position=since_token,
state_filter=state_filter,
await_full_state=await_full_state,
)
if batch:
current_state_ids = (
state_at_timeline_end = (
await self._state_storage_controller.get_state_ids_for_event(
batch.events[-1].event_id, state_filter=state_filter
batch.events[-1].event_id,
state_filter=state_filter,
await_full_state=await_full_state,
)
)
else:
# Its not clear how we get here, but empirically we do
# (#5407). Logging has been added elsewhere to try and
# figure out where this state comes from.
current_state_ids = await self.get_state_at(
room_id, stream_position=now_token, state_filter=state_filter
# We can get here if the user has ignored the senders of all
# the recent events.
state_at_timeline_end = await self.get_state_at(
room_id,
stream_position=now_token,
state_filter=state_filter,
await_full_state=await_full_state,
)
state_ids = _calculate_state(
timeline_contains=timeline_state,
timeline_start=state_at_timeline_start,
previous=state_at_previous_sync,
current=current_state_ids,
timeline_end=state_at_timeline_end,
previous_timeline_end=state_at_previous_sync,
# we have to include LL members in case LL initial sync missed them
lazy_load_members=lazy_load_members,
)
@ -1008,8 +1111,30 @@ class SyncHandler:
(EventTypes.Member, member)
for member in members_to_fetch
),
await_full_state=False,
)
# If we only have partial state for the room, `state_ids` may be missing the
# memberships we wanted. We attempt to find some by digging through the auth
# events of timeline events.
if lazy_load_members and await self.store.is_partial_state_room(room_id):
assert members_to_fetch is not None
assert first_event_by_sender_map is not None
additional_state_ids = (
await self._find_missing_partial_state_memberships(
room_id, members_to_fetch, first_event_by_sender_map, state_ids
)
)
state_ids = {**state_ids, **additional_state_ids}
# At this point, if `lazy_load_members` is enabled, `state_ids` includes
# the memberships of all event senders in the timeline. This is because we
# may not have sent the memberships in a previous sync.
# When `include_redundant_members` is on, we send all the lazy-loaded
# memberships of event senders. Otherwise we make an effort to limit the set
# of memberships we send to those that we have not already sent to this client.
if lazy_load_members and not include_redundant_members:
cache_key = (sync_config.user.to_string(), sync_config.device_id)
cache = self.get_lazy_loaded_members_cache(cache_key)
@ -1050,6 +1175,99 @@ class SyncHandler:
)
}
async def _find_missing_partial_state_memberships(
self,
room_id: str,
members_to_fetch: Collection[str],
events_with_membership_auth: Mapping[str, EventBase],
found_state_ids: StateMap[str],
) -> StateMap[str]:
"""Finds missing memberships from a set of auth events and returns them as a
state map.
Args:
room_id: The partial state room to find the remaining memberships for.
members_to_fetch: The memberships to find.
events_with_membership_auth: A mapping from user IDs to events whose auth
events are known to contain their membership.
found_state_ids: A dict from (type, state_key) -> state_event_id, containing
memberships that have been previously found. Entries in
`members_to_fetch` that have a membership in `found_state_ids` are
ignored.
Returns:
A dict from ("m.room.member", state_key) -> state_event_id, containing the
memberships missing from `found_state_ids`.
Raises:
KeyError: if `events_with_membership_auth` does not have an entry for a
missing membership. Memberships in `found_state_ids` do not need an
entry in `events_with_membership_auth`.
"""
additional_state_ids: MutableStateMap[str] = {}
# Tracks the missing members for logging purposes.
missing_members = set()
# Identify memberships missing from `found_state_ids` and pick out the auth
# events in which to look for them.
auth_event_ids: Set[str] = set()
for member in members_to_fetch:
if (EventTypes.Member, member) in found_state_ids:
continue
missing_members.add(member)
event_with_membership_auth = events_with_membership_auth[member]
auth_event_ids.update(event_with_membership_auth.auth_event_ids())
auth_events = await self.store.get_events(auth_event_ids)
# Run through the missing memberships once more, picking out the memberships
# from the pile of auth events we have just fetched.
for member in members_to_fetch:
if (EventTypes.Member, member) in found_state_ids:
continue
event_with_membership_auth = events_with_membership_auth[member]
# Dig through the auth events to find the desired membership.
for auth_event_id in event_with_membership_auth.auth_event_ids():
# We only store events once we have all their auth events,
# so the auth event must be in the pile we have just
# fetched.
auth_event = auth_events[auth_event_id]
if (
auth_event.type == EventTypes.Member
and auth_event.state_key == member
):
missing_members.remove(member)
additional_state_ids[
(EventTypes.Member, member)
] = auth_event.event_id
break
if missing_members:
# There really shouldn't be any missing memberships now. Either:
# * we couldn't find an auth event, which shouldn't happen because we do
# not persist events without persisting their auth events first, or
# * the set of auth events did not contain a membership we wanted, which
# means our caller didn't compute the events in `members_to_fetch`
# correctly, or we somehow accepted an event whose auth events were
# dodgy.
logger.error(
"Failed to find memberships for %s in partial state room "
"%s in the auth events of %s.",
missing_members,
room_id,
[
events_with_membership_auth[member].event_id
for member in missing_members
],
)
return additional_state_ids
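The recovery pass above can be modelled without any Synapse machinery: for each wanted member missing from the found state, scan the auth event IDs of a timeline event they sent and pick out their membership event. A toy sketch with plain tuples standing in for events (all names and event IDs here are invented):

```py
from typing import Dict, List, Set, Tuple

StateKey = Tuple[str, str]  # (event type, state key)

# Toy event store: event_id -> (type, state_key, auth_event_ids)
EVENTS: Dict[str, Tuple[str, str, List[str]]] = {
    "$join_alice": ("m.room.member", "@alice:x", []),
    "$msg_alice": ("m.room.message", "", ["$join_alice"]),
}

def find_missing_memberships(
    members_to_fetch: Set[str],
    events_with_membership_auth: Dict[str, str],  # user ID -> event ID
    found_state_ids: Dict[StateKey, str],
) -> Dict[StateKey, str]:
    additional: Dict[StateKey, str] = {}
    for member in members_to_fetch:
        if ("m.room.member", member) in found_state_ids:
            continue  # we already have this membership
        # Dig through the auth events of a timeline event they sent.
        for auth_id in EVENTS[events_with_membership_auth[member]][2]:
            etype, state_key, _ = EVENTS[auth_id]
            if etype == "m.room.member" and state_key == member:
                additional[("m.room.member", member)] = auth_id
                break
    return additional

print(find_missing_memberships({"@alice:x"}, {"@alice:x": "$msg_alice"}, {}))
# {('m.room.member', '@alice:x'): '$join_alice'}
```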
async def unread_notifs_for_room_id(
self, room_id: str, sync_config: SyncConfig
) -> NotifCounts:
@ -1694,7 +1912,11 @@ class SyncHandler:
continue
if room_id in sync_result_builder.joined_room_ids or has_join:
old_state_ids = await self.get_state_at(room_id, since_token)
old_state_ids = await self.get_state_at(
room_id,
since_token,
state_filter=StateFilter.from_types([(EventTypes.Member, user_id)]),
)
old_mem_ev_id = old_state_ids.get((EventTypes.Member, user_id), None)
old_mem_ev = None
if old_mem_ev_id:
@ -1720,7 +1942,13 @@ class SyncHandler:
newly_left_rooms.append(room_id)
else:
if not old_state_ids:
old_state_ids = await self.get_state_at(room_id, since_token)
old_state_ids = await self.get_state_at(
room_id,
since_token,
state_filter=StateFilter.from_types(
[(EventTypes.Member, user_id)]
),
)
old_mem_ev_id = old_state_ids.get(
(EventTypes.Member, user_id), None
)
@ -2215,8 +2443,8 @@ def _action_has_highlight(actions: List[JsonDict]) -> bool:
def _calculate_state(
timeline_contains: StateMap[str],
timeline_start: StateMap[str],
previous: StateMap[str],
current: StateMap[str],
timeline_end: StateMap[str],
previous_timeline_end: StateMap[str],
lazy_load_members: bool,
) -> StateMap[str]:
"""Works out what state to include in a sync response.
@ -2224,45 +2452,50 @@ def _calculate_state(
Args:
timeline_contains: state in the timeline
timeline_start: state at the start of the timeline
previous: state at the end of the previous sync (or empty dict
timeline_end: state at the end of the timeline
previous_timeline_end: state at the end of the previous sync (or empty dict
if this is an initial sync)
current: state at the end of the timeline
lazy_load_members: whether to return members from timeline_start
or not. Assumes that timeline_start has already been filtered to
include only the members the client needs to know about.
"""
event_id_to_key = {
e: key
for key, e in itertools.chain(
event_id_to_state_key = {
event_id: state_key
for state_key, event_id in itertools.chain(
timeline_contains.items(),
previous.items(),
timeline_start.items(),
current.items(),
timeline_end.items(),
previous_timeline_end.items(),
)
}
c_ids = set(current.values())
ts_ids = set(timeline_start.values())
p_ids = set(previous.values())
tc_ids = set(timeline_contains.values())
timeline_end_ids = set(timeline_end.values())
timeline_start_ids = set(timeline_start.values())
previous_timeline_end_ids = set(previous_timeline_end.values())
timeline_contains_ids = set(timeline_contains.values())
# If we are lazyloading room members, we explicitly add the membership events
# for the senders in the timeline into the state block returned by /sync,
# as we may not have sent them to the client before. We find these membership
# events by filtering them out of timeline_start, which has already been filtered
# to only include membership events for the senders in the timeline.
# In practice, we can do this by removing them from the p_ids list,
# which is the list of relevant state we know we have already sent to the client.
# In practice, we can do this by removing them from the previous_timeline_end_ids
# list, which is the list of relevant state we know we have already sent to the
# client.
# see https://github.com/matrix-org/synapse/pull/2970/files/efcdacad7d1b7f52f879179701c7e0d9b763511f#r204732809
if lazy_load_members:
p_ids.difference_update(
previous_timeline_end_ids.difference_update(
e for t, e in timeline_start.items() if t[0] == EventTypes.Member
)
state_ids = ((c_ids | ts_ids) - p_ids) - tc_ids
state_ids = (
(timeline_end_ids | timeline_start_ids)
- previous_timeline_end_ids
- timeline_contains_ids
)
return {event_id_to_key[e]: e for e in state_ids}
return {event_id_to_state_key[e]: e for e in state_ids}
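The renamed parameters make the set arithmetic easier to check by hand: the state block is (timeline_end union timeline_start) minus what the client already has (previous_timeline_end) minus what the timeline itself delivers (timeline_contains). A small worked example under those names, ignoring the lazy-loading branch:

```py
# State maps: (event type, state key) -> event ID
timeline_contains = {}  # no state events in the timeline itself
timeline_start = {
    ("m.room.name", ""): "$name2",
    ("m.room.member", "@a:x"): "$join_a",
}
timeline_end = dict(timeline_start)  # nothing changed during the timeline
previous_timeline_end = {
    ("m.room.name", ""): "$name1",  # the room was since renamed
    ("m.room.member", "@a:x"): "$join_a",
}

event_id_to_state_key = {
    e: k
    for k, e in (
        list(timeline_contains.items())
        + list(timeline_start.items())
        + list(timeline_end.items())
        + list(previous_timeline_end.items())
    )
}

state_ids = (
    (set(timeline_end.values()) | set(timeline_start.values()))
    - set(previous_timeline_end.values())
    - set(timeline_contains.values())
)
print({event_id_to_state_key[e]: e for e in state_ids})
# {('m.room.name', ''): '$name2'}: the rename is sent, the unchanged join is not.
```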
@attr.s(slots=True, auto_attribs=True)

View file

@ -253,12 +253,11 @@ class TypingWriterHandler(FollowerTypingHandler):
self, target_user: UserID, requester: Requester, room_id: str, timeout: int
) -> None:
target_user_id = target_user.to_string()
auth_user_id = requester.user.to_string()
if not self.is_mine_id(target_user_id):
raise SynapseError(400, "User is not hosted on this homeserver")
if target_user_id != auth_user_id:
if target_user != requester.user:
raise AuthError(400, "Cannot set another user's typing state")
if requester.shadow_banned:
@ -266,7 +265,7 @@ class TypingWriterHandler(FollowerTypingHandler):
await self.clock.sleep(random.randint(1, 10))
raise ShadowBanError()
await self.auth.check_user_in_room(room_id, target_user_id)
await self.auth.check_user_in_room(room_id, requester)
logger.debug("%s has started typing in %s", target_user_id, room_id)
@ -289,12 +288,11 @@ class TypingWriterHandler(FollowerTypingHandler):
self, target_user: UserID, requester: Requester, room_id: str
) -> None:
target_user_id = target_user.to_string()
auth_user_id = requester.user.to_string()
if not self.is_mine_id(target_user_id):
raise SynapseError(400, "User is not hosted on this homeserver")
if target_user_id != auth_user_id:
if target_user != requester.user:
raise AuthError(400, "Cannot set another user's typing state")
if requester.shadow_banned:
@ -302,7 +300,7 @@ class TypingWriterHandler(FollowerTypingHandler):
await self.clock.sleep(random.randint(1, 10))
raise ShadowBanError()
await self.auth.check_user_in_room(room_id, target_user_id)
await self.auth.check_user_in_room(room_id, requester)
logger.debug("%s has stopped typing in %s", target_user_id, room_id)

View file

@ -23,9 +23,12 @@ from typing import (
Optional,
Sequence,
Tuple,
Type,
TypeVar,
overload,
)
from pydantic import BaseModel, ValidationError
from typing_extensions import Literal
from twisted.web.server import Request
@ -694,6 +697,28 @@ def parse_json_object_from_request(
return content
Model = TypeVar("Model", bound=BaseModel)
def parse_and_validate_json_object_from_request(
request: Request, model_type: Type[Model]
) -> Model:
"""Parse a JSON object from the body of a twisted HTTP request, then deserialise and
validate using the given pydantic model.
Raises:
SynapseError if the request body couldn't be decoded as JSON or
if it wasn't a JSON object.
"""
content = parse_json_object_from_request(request, allow_empty_body=False)
try:
instance = model_type.parse_obj(content)
except ValidationError as e:
raise SynapseError(HTTPStatus.BAD_REQUEST, str(e), errcode=Codes.BAD_JSON)
return instance
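A usage sketch for the new helper: the endpoint defines a pydantic model for the expected body, and `model_type.parse_obj` (which the helper calls once the JSON is decoded) raises `ValidationError` for anything malformed, which then becomes a 400 `M_BAD_JSON`. The model below is invented for illustration and is not one of the models added in this merge:

```py
from typing import Optional

from pydantic import BaseModel, StrictBool, StrictStr, ValidationError

# Hypothetical body model, in the style used by the validated endpoints.
class DeactivateBody(BaseModel):
    erase: StrictBool = False
    id_server: Optional[StrictStr] = None

# Valid body: fields are parsed and defaults applied.
body = DeactivateBody.parse_obj({"erase": True})
print(body.erase, body.id_server)  # True None

# Invalid body: the strict types reject coercion, e.g. "yes" for a bool.
try:
    DeactivateBody.parse_obj({"erase": "yes"})
except ValidationError as e:
    print("would be rejected as M_BAD_JSON:", e.errors()[0]["msg"])
```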
def assert_params_in_dict(body: JsonDict, required: Iterable[str]) -> None:
absent = []
for k in required:

View file

@ -226,7 +226,7 @@ class SynapseRequest(Request):
# If this is a request where the target user doesn't match the user who
# authenticated (e.g. and admin is puppetting a user) then we return both.
if self._requester.user.to_string() != authenticated_entity:
if requester != authenticated_entity:
return requester, authenticated_entity
return requester, None

View file

@ -173,6 +173,7 @@ from typing import (
Any,
Callable,
Collection,
ContextManager,
Dict,
Generator,
Iterable,
@ -309,6 +310,19 @@ class SynapseTags:
# The name of the external cache
CACHE_NAME = "cache.name"
# Used to tag function arguments
#
# Tag a named arg. The name of the argument should be appended to this prefix.
FUNC_ARG_PREFIX = "ARG."
# Tag extra variadic number of positional arguments (`def foo(first, second, *extras)`)
FUNC_ARGS = "args"
# Tag keyword args
FUNC_KWARGS = "kwargs"
# Some intermediate result that's interesting to the function. The label for
# the result should be appended to this prefix.
RESULT_PREFIX = "RESULT."
class SynapseBaggage:
FORCE_TRACING = "synapse-force-tracing"
@ -823,75 +837,117 @@ def extract_text_map(carrier: Dict[str, str]) -> Optional["opentracing.SpanConte
# Tracing decorators
def trace_with_opname(opname: str) -> Callable[[Callable[P, R]], Callable[P, R]]:
def _custom_sync_async_decorator(
func: Callable[P, R],
wrapping_logic: Callable[[Callable[P, R], Any, Any], ContextManager[None]],
) -> Callable[P, R]:
"""
Decorates a function that is sync or async (coroutines), or that returns a Twisted
`Deferred`. The custom business logic of the decorator goes in `wrapping_logic`.
Example usage:
```py
# Decorator to time the function and log it out
def duration(func: Callable[P, R]) -> Callable[P, R]:
@contextlib.contextmanager
def _wrapping_logic(func: Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> Generator[None, None, None]:
start_ts = time.time()
try:
yield
finally:
end_ts = time.time()
duration = end_ts - start_ts
logger.info("%s took %s seconds", func.__name__, duration)
return _custom_sync_async_decorator(func, _wrapping_logic)
```
Args:
func: The function to be decorated
wrapping_logic: The business logic of your custom decorator.
This should be a ContextManager so you are able to run your logic
before/after the function as desired.
"""
if inspect.iscoroutinefunction(func):
@wraps(func)
async def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
with wrapping_logic(func, *args, **kwargs):
return await func(*args, **kwargs) # type: ignore[misc]
else:
# The other case here handles both sync functions and those
# decorated with inlineDeferred.
@wraps(func)
def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
scope = wrapping_logic(func, *args, **kwargs)
scope.__enter__()
try:
result = func(*args, **kwargs)
if isinstance(result, defer.Deferred):
def call_back(result: R) -> R:
scope.__exit__(None, None, None)
return result
def err_back(result: R) -> R:
scope.__exit__(None, None, None)
return result
result.addCallbacks(call_back, err_back)
else:
if inspect.isawaitable(result):
logger.error(
"@trace may not have wrapped %s correctly! "
"The function is not async but returned a %s.",
func.__qualname__,
type(result).__name__,
)
scope.__exit__(None, None, None)
return result
except Exception as e:
scope.__exit__(type(e), None, e.__traceback__)
raise
return _wrapper # type: ignore[return-value]
def trace_with_opname(
opname: str,
*,
tracer: Optional["opentracing.Tracer"] = None,
) -> Callable[[Callable[P, R]], Callable[P, R]]:
"""
Decorator to trace a function with a custom opname.
See the module's doc string for usage examples.
"""
def decorator(func: Callable[P, R]) -> Callable[P, R]:
if opentracing is None:
return func # type: ignore[unreachable]
# type-ignore: mypy bug, see https://github.com/python/mypy/issues/12909
@contextlib.contextmanager # type: ignore[arg-type]
def _wrapping_logic(
func: Callable[P, R], *args: P.args, **kwargs: P.kwargs
) -> Generator[None, None, None]:
with start_active_span(opname, tracer=tracer):
yield
if inspect.iscoroutinefunction(func):
def _decorator(func: Callable[P, R]) -> Callable[P, R]:
if not opentracing:
return func
@wraps(func)
async def _trace_inner(*args: P.args, **kwargs: P.kwargs) -> R:
with start_active_span(opname):
return await func(*args, **kwargs) # type: ignore[misc]
return _custom_sync_async_decorator(func, _wrapping_logic)
else:
# The other case here handles both sync functions and those
# decorated with inlineDeferred.
@wraps(func)
def _trace_inner(*args: P.args, **kwargs: P.kwargs) -> R:
scope = start_active_span(opname)
scope.__enter__()
try:
result = func(*args, **kwargs)
if isinstance(result, defer.Deferred):
def call_back(result: R) -> R:
scope.__exit__(None, None, None)
return result
def err_back(result: R) -> R:
scope.__exit__(None, None, None)
return result
result.addCallbacks(call_back, err_back)
else:
if inspect.isawaitable(result):
logger.error(
"@trace may not have wrapped %s correctly! "
"The function is not async but returned a %s.",
func.__qualname__,
type(result).__name__,
)
scope.__exit__(None, None, None)
return result
except Exception as e:
scope.__exit__(type(e), None, e.__traceback__)
raise
return _trace_inner # type: ignore[return-value]
return decorator
return _decorator
def trace(func: Callable[P, R]) -> Callable[P, R]:
"""
Decorator to trace a function.
Sets the operation name to that of the function's name.
See the module's doc string for usage examples.
"""
@ -900,7 +956,7 @@ def trace(func: Callable[P, R]) -> Callable[P, R]:
def tag_args(func: Callable[P, R]) -> Callable[P, R]:
"""
Tags all of the args to the active span.
Decorator to tag all of the args to the active span.
Args:
func: `func` is assumed to be a method taking a `self` parameter, or a
@ -911,22 +967,25 @@ def tag_args(func: Callable[P, R]) -> Callable[P, R]:
if not opentracing:
return func
@wraps(func)
def _tag_args_inner(*args: P.args, **kwargs: P.kwargs) -> R:
# type-ignore: mypy bug, see https://github.com/python/mypy/issues/12909
@contextlib.contextmanager # type: ignore[arg-type]
def _wrapping_logic(
func: Callable[P, R], *args: P.args, **kwargs: P.kwargs
) -> Generator[None, None, None]:
argspec = inspect.getfullargspec(func)
# We use `[1:]` to skip the `self` object reference and `start=1` to
# make the index line up with `argspec.args`.
#
# FIXME: We could update this handle any type of function by ignoring the
# FIXME: We could update this to handle any type of function by ignoring the
# first argument only if it's named `self` or `cls`. This isn't fool-proof
# but handles the idiomatic cases.
for i, arg in enumerate(args[1:], start=1): # type: ignore[index]
set_tag("ARG_" + argspec.args[i], str(arg))
set_tag("args", str(args[len(argspec.args) :])) # type: ignore[index]
set_tag("kwargs", str(kwargs))
return func(*args, **kwargs)
set_tag(SynapseTags.FUNC_ARG_PREFIX + argspec.args[i], str(arg))
set_tag(SynapseTags.FUNC_ARGS, str(args[len(argspec.args) :])) # type: ignore[index]
set_tag(SynapseTags.FUNC_KWARGS, str(kwargs))
yield
return _tag_args_inner
return _custom_sync_async_decorator(func, _wrapping_logic)
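To see the shared dispatch in action outside Synapse, here is a standalone re-creation of the docstring's `duration` example, keeping the same sync/async shape but omitting the Twisted `Deferred` branch for brevity:

```py
import asyncio
import contextlib
import time
from functools import wraps

# Simplified dispatch: run the wrapping context manager around either a
# plain call or an awaited coroutine.
def _custom_sync_async_decorator(func, wrapping_logic):
    if asyncio.iscoroutinefunction(func):
        @wraps(func)
        async def _wrapper(*args, **kwargs):
            with wrapping_logic(func, *args, **kwargs):
                return await func(*args, **kwargs)
    else:
        @wraps(func)
        def _wrapper(*args, **kwargs):
            with wrapping_logic(func, *args, **kwargs):
                return func(*args, **kwargs)
    return _wrapper

def duration(func):
    @contextlib.contextmanager
    def _wrapping_logic(func, *args, **kwargs):
        start = time.time()
        try:
            yield
        finally:
            print(f"{func.__name__} took {time.time() - start:.3f}s")
    return _custom_sync_async_decorator(func, _wrapping_logic)

@duration
def sync_work():
    return sum(range(1000))

@duration
async def async_work():
    await asyncio.sleep(0.01)
    return 42

print(sync_work())                # times the sync call
print(asyncio.run(async_work()))  # times the async call
```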
@contextlib.contextmanager

View file

@ -14,128 +14,235 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from typing import Any, Dict, List
"""
Push rules is the system used to determine which events trigger a push (and a
bump in notification counts).
from synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP
This consists of a list of "push rules" for each user, where a push rule is a
pair of "conditions" and "actions". When a user receives an event Synapse
iterates over the list of push rules until it finds one where all the conditions
match the event, at which point "actions" describe the outcome (e.g. notify,
highlight, etc).
Push rules are split up into 5 different "kinds" (aka "priority classes"), which
are run in order:
1. Override: highest priority rules, e.g. always ignore notices
2. Content: content specific rules, e.g. @ notifications
3. Room: per room rules, e.g. enable/disable notifications for all messages
in a room
4. Sender: per sender rules, e.g. never notify for messages from a given
user
5. Underride: the lowest priority "default" rules, e.g. notify for every
message.
The set of "base rules" are the list of rules that every user has by default. A
user can modify their copy of the push rules in one of three ways:
1. Adding a new push rule of a certain kind
2. Changing the actions of a base rule
3. Enabling/disabling a base rule.
The base rules are split into whether they come before or after a particular
kind, so the order of push rule evaluation would be: base rules before the
"override" kind, user defined "override" rules, base rules after the "override"
kind, and so on.
"""
import itertools
import logging
from typing import Dict, Iterator, List, Mapping, Sequence, Tuple, Union
import attr
from synapse.config.experimental import ExperimentalConfig
from synapse.push.rulekinds import PRIORITY_CLASS_MAP
logger = logging.getLogger(__name__)
def list_with_base_rules(rawrules: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Combine the list of rules set by the user with the default push rules
@attr.s(auto_attribs=True, slots=True, frozen=True)
class PushRule:
"""A push rule
Args:
rawrules: The rules the user has modified or set.
Returns:
A new list with the rules set by the user combined with the defaults.
Attributes:
rule_id: a unique ID for this rule
priority_class: what "kind" of push rule this is (see
`PRIORITY_CLASS_MAP` for mapping between int and kind)
conditions: the sequence of conditions that all need to match
actions: the actions to apply if all conditions are met
default: is this a base rule?
default_enabled: is this enabled by default?
"""
ruleslist = []
# Grab the base rules that the user has modified.
# The modified base rules have a priority_class of -1.
modified_base_rules = {r["rule_id"]: r for r in rawrules if r["priority_class"] < 0}
rule_id: str
priority_class: int
conditions: Sequence[Mapping[str, str]]
actions: Sequence[Union[str, Mapping]]
default: bool = False
default_enabled: bool = True
# Remove the modified base rules from the list, They'll be added back
# in the default positions in the list.
rawrules = [r for r in rawrules if r["priority_class"] >= 0]
# shove the server default rules for each kind onto the end of each
current_prio_class = list(PRIORITY_CLASS_INVERSE_MAP)[-1]
@attr.s(auto_attribs=True, slots=True, frozen=True, weakref_slot=False)
class PushRules:
"""A collection of push rules for an account.
ruleslist.extend(
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
Can be iterated over, producing push rules in priority order.
"""
# A mapping from rule ID to push rule that overrides a base rule. These will
# be returned instead of the base rule.
overriden_base_rules: Dict[str, PushRule] = attr.Factory(dict)
# The following stores the custom push rules at each priority class.
#
# We keep these separate (rather than combining into one big list) to avoid
# copying the base rules around all the time.
override: List[PushRule] = attr.Factory(list)
content: List[PushRule] = attr.Factory(list)
room: List[PushRule] = attr.Factory(list)
sender: List[PushRule] = attr.Factory(list)
underride: List[PushRule] = attr.Factory(list)
def __iter__(self) -> Iterator[PushRule]:
# When iterating over the push rules we need to return the base rules
# interspersed at the correct spots.
for rule in itertools.chain(
BASE_PREPEND_OVERRIDE_RULES,
self.override,
BASE_APPEND_OVERRIDE_RULES,
self.content,
BASE_APPEND_CONTENT_RULES,
self.room,
self.sender,
self.underride,
BASE_APPEND_UNDERRIDE_RULES,
):
# Check if a base rule has been overridden by a custom rule. If so
# return that instead.
override_rule = self.overriden_base_rules.get(rule.rule_id)
if override_rule:
yield override_rule
else:
yield rule
def __len__(self) -> int:
# The length is mostly used by caches to get a sense of "size" / amount
# of memory this object is using, so we only count the number of custom
# rules.
return (
len(self.overriden_base_rules)
+ len(self.override)
+ len(self.content)
+ len(self.room)
+ len(self.sender)
+ len(self.underride)
)
)
for r in rawrules:
if r["priority_class"] < current_prio_class:
while r["priority_class"] < current_prio_class:
ruleslist.extend(
make_base_append_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
)
)
current_prio_class -= 1
if current_prio_class > 0:
ruleslist.extend(
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
)
)
ruleslist.append(r)
@attr.s(auto_attribs=True, slots=True, frozen=True, weakref_slot=False)
class FilteredPushRules:
"""A wrapper around `PushRules` that filters out disabled experimental push
rules, and includes the "enabled" state for each rule when iterated over.
"""
while current_prio_class > 0:
ruleslist.extend(
make_base_append_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
push_rules: PushRules
enabled_map: Dict[str, bool]
experimental_config: ExperimentalConfig
def __iter__(self) -> Iterator[Tuple[PushRule, bool]]:
for rule in self.push_rules:
if not _is_experimental_rule_enabled(
rule.rule_id, self.experimental_config
):
continue
enabled = self.enabled_map.get(rule.rule_id, rule.default_enabled)
yield rule, enabled
def __len__(self) -> int:
return len(self.push_rules)
DEFAULT_EMPTY_PUSH_RULES = PushRules()
def compile_push_rules(rawrules: List[PushRule]) -> PushRules:
"""Given a set of custom push rules return a `PushRules` instance (which
includes the base rules).
"""
if not rawrules:
# Fast path to avoid allocating empty lists when there are no custom
# rules for the user.
return DEFAULT_EMPTY_PUSH_RULES
rules = PushRules()
for rule in rawrules:
# We need to decide which bucket each custom push rule goes into.
# If it has the same ID as a base rule then it overrides that...
overriden_base_rule = BASE_RULES_BY_ID.get(rule.rule_id)
if overriden_base_rule:
rules.overriden_base_rules[rule.rule_id] = attr.evolve(
overriden_base_rule, actions=rule.actions
)
)
current_prio_class -= 1
if current_prio_class > 0:
ruleslist.extend(
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
)
continue
# ... otherwise it gets added to the appropriate priority class bucket
collection: List[PushRule]
if rule.priority_class == 5:
collection = rules.override
elif rule.priority_class == 4:
collection = rules.content
elif rule.priority_class == 3:
collection = rules.room
elif rule.priority_class == 2:
collection = rules.sender
elif rule.priority_class == 1:
collection = rules.underride
elif rule.priority_class <= 0:
logger.info(
"Got rule with priority class less than zero, but doesn't override a base rule: %s",
rule,
)
continue
else:
# We log and continue here so as not to break event sending
logger.error("Unknown priority class: %", rule.priority_class)
continue
return ruleslist
def make_base_append_rules(
kind: str, modified_base_rules: Dict[str, Dict[str, Any]]
) -> List[Dict[str, Any]]:
rules = []
if kind == "override":
rules = BASE_APPEND_OVERRIDE_RULES
elif kind == "underride":
rules = BASE_APPEND_UNDERRIDE_RULES
elif kind == "content":
rules = BASE_APPEND_CONTENT_RULES
# Copy the rules before modifying them
rules = copy.deepcopy(rules)
for r in rules:
# Only modify the actions, keep the conditions the same.
assert isinstance(r["rule_id"], str)
modified = modified_base_rules.get(r["rule_id"])
if modified:
r["actions"] = modified["actions"]
collection.append(rule)
return rules
def make_base_prepend_rules(
kind: str,
modified_base_rules: Dict[str, Dict[str, Any]],
) -> List[Dict[str, Any]]:
rules = []
if kind == "override":
rules = BASE_PREPEND_OVERRIDE_RULES
# Copy the rules before modifying them
rules = copy.deepcopy(rules)
for r in rules:
# Only modify the actions, keep the conditions the same.
assert isinstance(r["rule_id"], str)
modified = modified_base_rules.get(r["rule_id"])
if modified:
r["actions"] = modified["actions"]
return rules
def _is_experimental_rule_enabled(
rule_id: str, experimental_config: ExperimentalConfig
) -> bool:
"""Used by `FilteredPushRules` to filter out experimental rules when they
have not been enabled.
"""
if (
rule_id == "global/override/.org.matrix.msc3786.rule.room.server_acl"
and not experimental_config.msc3786_enabled
):
return False
if (
rule_id == "global/underride/.org.matrix.msc3772.thread_reply"
and not experimental_config.msc3772_enabled
):
return False
return True
# We have to annotate these types, otherwise mypy infers them as
# `List[Dict[str, Sequence[Collection[str]]]]`.
BASE_APPEND_CONTENT_RULES: List[Dict[str, Any]] = [
{
"rule_id": "global/content/.m.rule.contains_user_name",
"conditions": [
BASE_APPEND_CONTENT_RULES = [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["content"],
rule_id="global/content/.m.rule.contains_user_name",
conditions=[
{
"kind": "event_match",
"key": "content.body",
@ -143,29 +250,33 @@ BASE_APPEND_CONTENT_RULES: List[Dict[str, Any]] = [
"pattern_type": "user_localpart",
}
],
"actions": [
actions=[
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight"},
],
}
)
]
BASE_PREPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
{
"rule_id": "global/override/.m.rule.master",
"enabled": False,
"conditions": [],
"actions": ["dont_notify"],
}
BASE_PREPEND_OVERRIDE_RULES = [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.master",
default_enabled=False,
conditions=[],
actions=["dont_notify"],
)
]
BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
{
"rule_id": "global/override/.m.rule.suppress_notices",
"conditions": [
BASE_APPEND_OVERRIDE_RULES = [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.suppress_notices",
conditions=[
{
"kind": "event_match",
"key": "content.msgtype",
@ -173,13 +284,15 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_suppress_notices",
}
],
"actions": ["dont_notify"],
},
actions=["dont_notify"],
),
# NB. .m.rule.invite_for_me must be higher prio than .m.rule.member_event
# otherwise invites will be matched by .m.rule.member_event
{
"rule_id": "global/override/.m.rule.invite_for_me",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.invite_for_me",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -195,21 +308,23 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
# Match the requester's MXID.
{"kind": "event_match", "key": "state_key", "pattern_type": "user_id"},
],
"actions": [
actions=[
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight", "value": False},
],
},
),
# Will we sometimes want to know about people joining and leaving?
# Perhaps: if so, this could be expanded upon. Seems the most usual case
# is that we don't though. We add this override rule so that even if
# the room rule is set to notify, we don't get notifications about
# join/leave/avatar/displayname events.
# See also: https://matrix.org/jira/browse/SYN-607
{
"rule_id": "global/override/.m.rule.member_event",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.member_event",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -217,24 +332,28 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_member",
}
],
"actions": ["dont_notify"],
},
actions=["dont_notify"],
),
# This was changed from underride to override so it's closer in priority
# to the content rules where the user name highlight rule lives. This
# way a room rule is lower priority than both but a custom override rule
# is higher priority than both.
{
"rule_id": "global/override/.m.rule.contains_display_name",
"conditions": [{"kind": "contains_display_name"}],
"actions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.contains_display_name",
conditions=[{"kind": "contains_display_name"}],
actions=[
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight"},
],
},
{
"rule_id": "global/override/.m.rule.roomnotif",
"conditions": [
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.roomnotif",
conditions=[
{
"kind": "event_match",
"key": "content.body",
@ -247,11 +366,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_roomnotif_pl",
},
],
"actions": ["notify", {"set_tweak": "highlight", "value": True}],
},
{
"rule_id": "global/override/.m.rule.tombstone",
"conditions": [
actions=["notify", {"set_tweak": "highlight", "value": True}],
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.tombstone",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -265,11 +386,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_tombstone_statekey",
},
],
"actions": ["notify", {"set_tweak": "highlight", "value": True}],
},
{
"rule_id": "global/override/.m.rule.reaction",
"conditions": [
actions=["notify", {"set_tweak": "highlight", "value": True}],
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.m.rule.reaction",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -277,14 +400,16 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_reaction",
}
],
"actions": ["dont_notify"],
},
actions=["dont_notify"],
),
# XXX: This is an experimental rule that is only enabled if msc3786_enabled
# is enabled, if it is not the rule gets filtered out in _load_rules() in
# PushRulesWorkerStore
{
"rule_id": "global/override/.org.matrix.msc3786.rule.room.server_acl",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["override"],
rule_id="global/override/.org.matrix.msc3786.rule.room.server_acl",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -298,15 +423,17 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_room_server_acl_state_key",
},
],
"actions": [],
},
actions=[],
),
]
BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
{
"rule_id": "global/underride/.m.rule.call",
"conditions": [
BASE_APPEND_UNDERRIDE_RULES = [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.m.rule.call",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -314,17 +441,19 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_call",
}
],
"actions": [
actions=[
"notify",
{"set_tweak": "sound", "value": "ring"},
{"set_tweak": "highlight", "value": False},
],
},
),
# XXX: once m.direct is standardised everywhere, we should use it to detect
# a DM from the user's perspective rather than this heuristic.
{
"rule_id": "global/underride/.m.rule.room_one_to_one",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.m.rule.room_one_to_one",
conditions=[
{"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
{
"kind": "event_match",
@ -333,17 +462,19 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_message",
},
],
"actions": [
actions=[
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight", "value": False},
],
},
),
# XXX: this is going to fire for events which aren't m.room.messages
# but are encrypted (e.g. m.call.*)...
{
"rule_id": "global/underride/.m.rule.encrypted_room_one_to_one",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.m.rule.encrypted_room_one_to_one",
conditions=[
{"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
{
"kind": "event_match",
@ -352,15 +483,17 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_encrypted",
},
],
"actions": [
actions=[
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight", "value": False},
],
},
{
"rule_id": "global/underride/.org.matrix.msc3772.thread_reply",
"conditions": [
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.org.matrix.msc3772.thread_reply",
conditions=[
{
"kind": "org.matrix.msc3772.relation_match",
"rel_type": "m.thread",
@ -368,11 +501,13 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"sender_type": "user_id",
}
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
{
"rule_id": "global/underride/.m.rule.message",
"conditions": [
actions=["notify", {"set_tweak": "highlight", "value": False}],
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.m.rule.message",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -380,13 +515,15 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_message",
}
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
actions=["notify", {"set_tweak": "highlight", "value": False}],
),
# XXX: this is going to fire for events which aren't m.room.messages
# but are encrypted (e.g. m.call.*)...
{
"rule_id": "global/underride/.m.rule.encrypted",
"conditions": [
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.m.rule.encrypted",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -394,11 +531,13 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_encrypted",
}
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
{
"rule_id": "global/underride/.im.vector.jitsi",
"conditions": [
actions=["notify", {"set_tweak": "highlight", "value": False}],
),
PushRule(
default=True,
priority_class=PRIORITY_CLASS_MAP["underride"],
rule_id="global/underride/.im.vector.jitsi",
conditions=[
{
"kind": "event_match",
"key": "type",
@ -418,29 +557,27 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"_cache_key": "_is_state_event",
},
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
actions=["notify", {"set_tweak": "highlight", "value": False}],
),
]
BASE_RULE_IDS = set()
BASE_RULES_BY_ID: Dict[str, PushRule] = {}
for r in BASE_APPEND_CONTENT_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["content"]
r["default"] = True
BASE_RULE_IDS.add(r["rule_id"])
BASE_RULE_IDS.add(r.rule_id)
BASE_RULES_BY_ID[r.rule_id] = r
for r in BASE_PREPEND_OVERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
r["default"] = True
BASE_RULE_IDS.add(r["rule_id"])
BASE_RULE_IDS.add(r.rule_id)
BASE_RULES_BY_ID[r.rule_id] = r
for r in BASE_APPEND_OVERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
r["default"] = True
BASE_RULE_IDS.add(r["rule_id"])
BASE_RULE_IDS.add(r.rule_id)
BASE_RULES_BY_ID[r.rule_id] = r
for r in BASE_APPEND_UNDERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
r["default"] = True
BASE_RULE_IDS.add(r["rule_id"])
BASE_RULE_IDS.add(r.rule_id)
BASE_RULES_BY_ID[r.rule_id] = r
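The whole data model is easiest to follow end to end with a toy version: a frozen rule class, two base rules, and a compile step that either overrides a base rule (matching rule ID) or buckets a custom rule by priority class. The structure mirrors `compile_push_rules` above, but the rules and buckets are deliberately cut down:

```py
import itertools
from typing import Dict, Iterator, List, Sequence, Union

import attr

@attr.s(auto_attribs=True, slots=True, frozen=True)
class PushRule:
    rule_id: str
    priority_class: int
    actions: Sequence[Union[str, dict]]
    default: bool = False
    default_enabled: bool = True

BASE_OVERRIDE = [
    PushRule("global/override/.m.rule.master", 5, ["dont_notify"],
             default=True, default_enabled=False)
]
BASE_UNDERRIDE = [
    PushRule("global/underride/.m.rule.message", 1, ["notify"], default=True)
]
BASE_RULES_BY_ID = {r.rule_id: r for r in BASE_OVERRIDE + BASE_UNDERRIDE}

@attr.s(auto_attribs=True, slots=True)
class PushRules:
    overriden_base_rules: Dict[str, PushRule] = attr.Factory(dict)
    override: List[PushRule] = attr.Factory(list)
    underride: List[PushRule] = attr.Factory(list)

    def __iter__(self) -> Iterator[PushRule]:
        # Base rules interspersed with custom ones, overrides swapped in.
        for rule in itertools.chain(
            BASE_OVERRIDE, self.override, self.underride, BASE_UNDERRIDE
        ):
            yield self.overriden_base_rules.get(rule.rule_id, rule)

def compile_push_rules(rawrules: List[PushRule]) -> PushRules:
    rules = PushRules()
    for rule in rawrules:
        base = BASE_RULES_BY_ID.get(rule.rule_id)
        if base:
            # Same ID as a base rule: keep its conditions, take the actions.
            rules.overriden_base_rules[rule.rule_id] = attr.evolve(
                base, actions=rule.actions
            )
        elif rule.priority_class == 5:
            rules.override.append(rule)
        else:
            rules.underride.append(rule)
    return rules

# A user re-enabling notifications by overriding the master base rule:
custom = [PushRule("global/override/.m.rule.master", -1, ["notify"])]
for r in compile_push_rules(custom):
    print(r.rule_id, r.actions)
```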

View file

@ -15,7 +15,18 @@
import itertools
import logging
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Set, Tuple, Union
from typing import (
TYPE_CHECKING,
Collection,
Dict,
Iterable,
List,
Mapping,
Optional,
Set,
Tuple,
Union,
)
from prometheus_client import Counter
@ -30,6 +41,7 @@ from synapse.util.caches import register_cache
from synapse.util.metrics import measure_func
from synapse.visibility import filter_event_for_clients_with_state
from .baserules import FilteredPushRules, PushRule
from .push_rule_evaluator import PushRuleEvaluatorForEvent
if TYPE_CHECKING:
@ -112,7 +124,7 @@ class BulkPushRuleEvaluator:
async def _get_rules_for_event(
self,
event: EventBase,
) -> Dict[str, List[Dict[str, Any]]]:
) -> Dict[str, FilteredPushRules]:
"""Get the push rules for all users who may need to be notified about
the event.
@ -186,7 +198,7 @@ class BulkPushRuleEvaluator:
return pl_event.content if pl_event else {}, sender_level
async def _get_mutual_relations(
self, event: EventBase, rules: Iterable[Dict[str, Any]]
self, event: EventBase, rules: Iterable[Tuple[PushRule, bool]]
) -> Dict[str, Set[Tuple[str, str]]]:
"""
Fetch event metadata for events which related to the same event as the given event.
@ -216,12 +228,11 @@ class BulkPushRuleEvaluator:
# Pre-filter to figure out which relation types are interesting.
rel_types = set()
for rule in rules:
# Skip disabled rules.
if "enabled" in rule and not rule["enabled"]:
for rule, enabled in rules:
if not enabled:
continue
for condition in rule["conditions"]:
for condition in rule.conditions:
if condition["kind"] != "org.matrix.msc3772.relation_match":
continue
@ -254,7 +265,7 @@ class BulkPushRuleEvaluator:
count_as_unread = _should_count_as_unread(event, context)
rules_by_user = await self._get_rules_for_event(event)
actions_by_user: Dict[str, List[Union[dict, str]]] = {}
actions_by_user: Dict[str, Collection[Union[Mapping, str]]] = {}
room_member_count = await self.store.get_number_joined_users_in_room(
event.room_id
@ -317,15 +328,13 @@ class BulkPushRuleEvaluator:
# current user, it'll be added to the dict later.
actions_by_user[uid] = []
for rule in rules:
if "enabled" in rule and not rule["enabled"]:
for rule, enabled in rules:
if not enabled:
continue
matches = evaluator.check_conditions(
rule["conditions"], uid, display_name
)
matches = evaluator.check_conditions(rule.conditions, uid, display_name)
if matches:
actions = [x for x in rule["actions"] if x != "dont_notify"]
actions = [x for x in rule.actions if x != "dont_notify"]
if actions and "notify" in actions:
# Push rules say we should notify the user of this event
actions_by_user[uid] = actions
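The evaluator loop now consumes `(rule, enabled)` pairs and attrs fields rather than dict lookups. A stripped-down sketch of the action selection, with a stub condition check standing in for `PushRuleEvaluatorForEvent` (the stub's `never_match` kind is invented for the example):

```py
from collections import namedtuple
from typing import Iterable, List, Tuple

PushRule = namedtuple("PushRule", ["conditions", "actions"])

def check_conditions(conditions, user_id: str) -> bool:
    # Stub: the real evaluator matches event content against each condition.
    return all(c.get("kind") != "never_match" for c in conditions)

def actions_for_user(
    rules: Iterable[Tuple[PushRule, bool]], user_id: str
) -> List:
    for rule, enabled in rules:
        if not enabled:
            continue  # disabled rules are now skipped via the enabled flag
        if check_conditions(rule.conditions, user_id):
            # "dont_notify" is filtered out; a remaining "notify" selects
            # this rule's actions, mirroring the loop above.
            actions = [x for x in rule.actions if x != "dont_notify"]
            if actions and "notify" in actions:
                return actions
    return []

rules = [
    (PushRule([{"kind": "never_match"}], ["notify"]), True),  # never fires
    (PushRule([], ["notify", {"set_tweak": "highlight"}]), True),
]
print(actions_for_user(rules, "@a:x"))
# ['notify', {'set_tweak': 'highlight'}]
```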

View file

@ -18,16 +18,15 @@ from typing import Any, Dict, List, Optional
from synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP
from synapse.types import UserID
from .baserules import FilteredPushRules, PushRule
def format_push_rules_for_user(
user: UserID, ruleslist: List
user: UserID, ruleslist: FilteredPushRules
) -> Dict[str, Dict[str, list]]:
"""Converts a list of rawrules and a enabled map into nested dictionaries
to match the Matrix client-server format for push rules"""
# We're going to be mutating this a lot, so do a deep copy
ruleslist = copy.deepcopy(ruleslist)
rules: Dict[str, Dict[str, List[Dict[str, Any]]]] = {
"global": {},
"device": {},
@ -35,11 +34,30 @@ def format_push_rules_for_user(
rules["global"] = _add_empty_priority_class_arrays(rules["global"])
for r in ruleslist:
template_name = _priority_class_to_template_name(r["priority_class"])
for r, enabled in ruleslist:
template_name = _priority_class_to_template_name(r.priority_class)
rulearray = rules["global"][template_name]
template_rule = _rule_to_template(r)
if not template_rule:
continue
rulearray.append(template_rule)
template_rule["enabled"] = enabled
if "conditions" not in template_rule:
# Not all formatted rules have explicit conditions, e.g. "room"
# rules omit them as they can be derived from the kind and rule ID.
#
# If the formatted rule has no conditions then we can skip the
# formatting of conditions.
continue
# Remove internal stuff.
for c in r["conditions"]:
template_rule["conditions"] = copy.deepcopy(template_rule["conditions"])
for c in template_rule["conditions"]:
c.pop("_cache_key", None)
pattern_type = c.pop("pattern_type", None)
@ -52,16 +70,6 @@ def format_push_rules_for_user(
if sender_type == "user_id":
c["sender"] = user.to_string()
rulearray = rules["global"][template_name]
template_rule = _rule_to_template(r)
if template_rule:
if "enabled" in r:
template_rule["enabled"] = r["enabled"]
else:
template_rule["enabled"] = True
rulearray.append(template_rule)
return rules
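To make the target format concrete, here is a hedged sketch of the nested mapping `format_push_rules_for_user` now returns (template names match `_rule_to_template` below; the rule shown is illustrative, not real data):

# Illustrative shape only; values are made up.
formatted_rules = {
    "global": {
        "override": [],
        "content": [
            {
                "rule_id": "my-keyword",
                "default": False,
                "enabled": True,
                "pattern": "synapse",
                "actions": ["notify", {"set_tweak": "highlight"}],
            }
        ],
        "room": [],
        "sender": [],
        "underride": [],
    },
    "device": {},
}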
@ -71,24 +79,24 @@ def _add_empty_priority_class_arrays(d: Dict[str, list]) -> Dict[str, list]:
return d
def _rule_to_template(rule: Dict[str, Any]) -> Optional[Dict[str, Any]]:
unscoped_rule_id = None
if "rule_id" in rule:
unscoped_rule_id = _rule_id_from_namespaced(rule["rule_id"])
def _rule_to_template(rule: PushRule) -> Optional[Dict[str, Any]]:
templaterule: Dict[str, Any]
template_name = _priority_class_to_template_name(rule["priority_class"])
unscoped_rule_id = _rule_id_from_namespaced(rule.rule_id)
template_name = _priority_class_to_template_name(rule.priority_class)
if template_name in ["override", "underride"]:
templaterule = {k: rule[k] for k in ["conditions", "actions"]}
templaterule = {"conditions": rule.conditions, "actions": rule.actions}
elif template_name in ["sender", "room"]:
templaterule = {"actions": rule["actions"]}
unscoped_rule_id = rule["conditions"][0]["pattern"]
templaterule = {"actions": rule.actions}
unscoped_rule_id = rule.conditions[0]["pattern"]
elif template_name == "content":
if len(rule["conditions"]) != 1:
if len(rule.conditions) != 1:
return None
thecond = rule["conditions"][0]
thecond = rule.conditions[0]
if "pattern" not in thecond:
return None
templaterule = {"actions": rule["actions"]}
templaterule = {"actions": rule.actions}
templaterule["pattern"] = thecond["pattern"]
else:
# This should not be reached unless this function is not kept in sync
@ -97,8 +105,8 @@ def _rule_to_template(rule: Dict[str, Any]) -> Optional[Dict[str, Any]]:
if unscoped_rule_id:
templaterule["rule_id"] = unscoped_rule_id
if "default" in rule:
templaterule["default"] = rule["default"]
if rule.default:
templaterule["default"] = True
return templaterule

View file

@ -15,7 +15,18 @@
import logging
import re
from typing import Any, Dict, List, Mapping, Optional, Pattern, Set, Tuple, Union
from typing import (
Any,
Dict,
List,
Mapping,
Optional,
Pattern,
Sequence,
Set,
Tuple,
Union,
)
from matrix_common.regex import glob_to_regex, to_word_pattern
@ -32,14 +43,14 @@ INEQUALITY_EXPR = re.compile("^([=<>]*)([0-9]*)$")
def _room_member_count(
ev: EventBase, condition: Dict[str, Any], room_member_count: int
ev: EventBase, condition: Mapping[str, Any], room_member_count: int
) -> bool:
return _test_ineq_condition(condition, room_member_count)
def _sender_notification_permission(
ev: EventBase,
condition: Dict[str, Any],
condition: Mapping[str, Any],
sender_power_level: int,
power_levels: Dict[str, Union[int, Dict[str, int]]],
) -> bool:
@ -54,7 +65,7 @@ def _sender_notification_permission(
return sender_power_level >= room_notif_level
def _test_ineq_condition(condition: Dict[str, Any], number: int) -> bool:
def _test_ineq_condition(condition: Mapping[str, Any], number: int) -> bool:
if "is" not in condition:
return False
m = INEQUALITY_EXPR.match(condition["is"])
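`INEQUALITY_EXPR` above splits the condition's `is` field into a comparator prefix and a number. A self-contained sketch of the comparison this helper performs; treating a bare number, `=` or `==` as equality is an assumption consistent with the regex, since the hunk truncates before the branch logic:

import re

INEQUALITY_EXPR = re.compile("^([=<>]*)([0-9]*)$")

def test_ineq(is_value: str, number: int) -> bool:
    # e.g. {"is": ">=10"} tested against a room member count of `number`
    m = INEQUALITY_EXPR.match(is_value)
    if not m:
        return False
    ineq, rhs = m.group(1), m.group(2)
    if not rhs:
        return False
    rhs_int = int(rhs)
    if ineq == "<":
        return number < rhs_int
    if ineq == ">":
        return number > rhs_int
    if ineq == "<=":
        return number <= rhs_int
    if ineq == ">=":
        return number >= rhs_int
    return number == rhs_int  # "", "=" and "==" all mean equality here

assert test_ineq(">=10", 25) and not test_ineq("<3", 5)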
@ -137,7 +148,7 @@ class PushRuleEvaluatorForEvent:
self._condition_cache: Dict[str, bool] = {}
def check_conditions(
self, conditions: List[dict], uid: str, display_name: Optional[str]
self, conditions: Sequence[Mapping], uid: str, display_name: Optional[str]
) -> bool:
"""
Returns true if a user's conditions/user ID/display name match the event.
@ -169,7 +180,7 @@ class PushRuleEvaluatorForEvent:
return True
def matches(
self, condition: Dict[str, Any], user_id: str, display_name: Optional[str]
self, condition: Mapping[str, Any], user_id: str, display_name: Optional[str]
) -> bool:
"""
Returns true if a user's condition/user ID/display name match the event.
@ -204,7 +215,7 @@ class PushRuleEvaluatorForEvent:
# endpoint with an unknown kind, see _rule_tuple_from_request_object.
return True
def _event_match(self, condition: dict, user_id: str) -> bool:
def _event_match(self, condition: Mapping, user_id: str) -> bool:
"""
Check an "event_match" push rule condition.
@ -269,7 +280,7 @@ class PushRuleEvaluatorForEvent:
return bool(r.search(body))
def _relation_match(self, condition: dict, user_id: str) -> bool:
def _relation_match(self, condition: Mapping, user_id: str) -> bool:
"""
Check an "relation_match" push rule condition.

View file

@ -1 +1,12 @@
<html><body>Your account is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</body><html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Your account is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</title>
</head>
<body>
Your account is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.
</body>
</html>

View file

@ -1 +1,12 @@
<html><body>Your account has been successfully renewed and is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</body><html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Your account has been successfully renewed and is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</title>
</head>
<body>
Your account has been successfully renewed and is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.
</body>
</html>

View file

@ -1,9 +1,14 @@
<html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Request to add an email address to your Matrix account</title>
</head>
<body>
<p>A request to add an email address to your Matrix account has been received. If this was you, please click the link below to confirm adding this email:</p>
<a href="{{ link }}">{{ link }}</a>
<p>If this was not you, you can safely ignore this email. Thank you.</p>
</body>
</html>

View file

@ -1,8 +1,13 @@
<html>
<head></head>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Request failed</title>
</head>
<body>
<p>The request failed for the following reason: {{ failure_reason }}.</p>
<p>No changes have been made to your account.</p>
<p>The request failed for the following reason: {{ failure_reason }}.</p>
<p>No changes have been made to your account.</p>
</body>
</html>

View file

@ -1,6 +1,12 @@
<html>
<head></head>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Your email has now been validated</title>
</head>
<body>
<p>Your email has now been validated, please return to your client. You may now close this window.</p>
<p>Your email has now been validated, please return to your client. You may now close this window.</p>
</body>
</html>
</html>

View file

@ -1,8 +1,8 @@
<html>
<head>
<title>Success!</title>
<meta name='viewport' content='width=device-width, initial-scale=1,
user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="/_matrix/static/client/register/style.css">
<script>
if (window.onAuthDone) {

View file

@ -1 +1,12 @@
<html><body>Invalid renewal token.</body><html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Invalid renewal token.</title>
</head>
<body>
Invalid renewal token.
</body>
</html>

View file

@ -1,6 +1,8 @@
<!doctype html>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include 'mail.css' without context %}
{% include "mail-%s.css" % app_name ignore missing without context %}

View file

@ -1,6 +1,8 @@
<!doctype html>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{%- include 'mail.css' without context %}
{%- include "mail-%s.css" % app_name ignore missing without context %}

View file

@ -1,4 +1,9 @@
<html>
<html lang="en">
<head>
<title>Password reset</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>A password reset request has been received for your Matrix account. If this was you, please click the link below to confirm resetting your password:</p>

View file

@ -1,5 +1,9 @@
<html>
<head></head>
<html lang="en">
<head>
<title>Password reset confirmation</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<!--Use a hidden form to resubmit the information necessary to reset the password-->
<form method="post">

View file

@ -1,5 +1,9 @@
<html>
<head></head>
<html lang="en">
<head>
<title>Password reset failure</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>The request failed for the following reason: {{ failure_reason }}.</p>

View file

@ -1,5 +1,8 @@
<html>
<head></head>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>Your email has now been validated, please return to your client to reset your password. You may now close this window.</p>
</body>

View file

@ -1,8 +1,8 @@
<html>
<head>
<title>Authentication</title>
<meta name='viewport' content='width=device-width, initial-scale=1,
user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<script src="https://www.recaptcha.net/recaptcha/api.js"
async defer></script>
<script src="//code.jquery.com/jquery-1.11.2.min.js"></script>

View file

@ -1,4 +1,9 @@
<html>
<html lang="en">
<head>
<title>Registration</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>You have asked us to register this email with a new Matrix account. If this was you, please click the link below to confirm your email address:</p>

View file

@ -1,5 +1,8 @@
<html>
<head></head>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>Validation failed for the following reason: {{ failure_reason }}.</p>
</body>

View file

@ -1,5 +1,9 @@
<html>
<head></head>
<html lang="en">
<head>
<title>Your email has now been validated</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<p>Your email has now been validated, please return to your client. You may now close this window.</p>
</body>

View file

@ -1,8 +1,8 @@
<html>
<html lang="en">
<head>
<title>Authentication</title>
<meta name='viewport' content='width=device-width, initial-scale=1,
user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="/_matrix/static/client/register/style.css">
</head>
<body>

View file

@ -3,8 +3,8 @@
<head>
<meta charset="UTF-8">
<title>SSO account deactivated</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<style type="text/css">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0"> <style type="text/css">
{% include "sso.css" without context %}
</style>
</head>

View file

@ -3,7 +3,8 @@
<head>
<title>Create your account</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<script type="text/javascript">
let wasKeyboard = false;
document.addEventListener("mousedown", function() { wasKeyboard = false; });

View file

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Authentication failed</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}
</style>

View file

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Confirm it's you</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}
</style>

View file

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Authentication successful</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}
</style>

View file

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Authentication failed</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}

View file

@ -1,6 +1,8 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta charset="UTF-8">
<title>Choose identity provider</title>
<style type="text/css">

View file

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Agree to terms and conditions</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}

View file

@ -3,7 +3,8 @@
<head>
<meta charset="UTF-8">
<title>Continue to your account</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style type="text/css">
{% include "sso.css" without context %}

View file

@ -1,8 +1,8 @@
<html>
<head>
<title>Authentication</title>
<meta name='viewport' content='width=device-width, initial-scale=1,
user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="/_matrix/static/client/register/style.css">
</head>
<body>

View file

@ -19,7 +19,7 @@ from typing import Iterable, Pattern
from synapse.api.auth import Auth
from synapse.api.errors import AuthError
from synapse.http.site import SynapseRequest
from synapse.types import UserID
from synapse.types import Requester
def admin_patterns(path_regex: str, version: str = "v1") -> Iterable[Pattern]:
@ -48,19 +48,19 @@ async def assert_requester_is_admin(auth: Auth, request: SynapseRequest) -> None
AuthError if the requester is not a server admin
"""
requester = await auth.get_user_by_req(request)
await assert_user_is_admin(auth, requester.user)
await assert_user_is_admin(auth, requester)
async def assert_user_is_admin(auth: Auth, user_id: UserID) -> None:
async def assert_user_is_admin(auth: Auth, requester: Requester) -> None:
"""Verify that the given user is an admin user
Args:
auth: Auth singleton
user_id: user to check
requester: The user making the request, according to the access token.
Raises:
AuthError if the user is not a server admin
"""
is_admin = await auth.is_server_admin(user_id)
is_admin = await auth.is_server_admin(requester)
if not is_admin:
raise AuthError(HTTPStatus.FORBIDDEN, "You are not a server admin")

View file

@ -54,7 +54,7 @@ class QuarantineMediaInRoom(RestServlet):
self, request: SynapseRequest, room_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
logging.info("Quarantining room: %s", room_id)
@ -81,7 +81,7 @@ class QuarantineMediaByUser(RestServlet):
self, request: SynapseRequest, user_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
logging.info("Quarantining media by user: %s", user_id)
@ -110,7 +110,7 @@ class QuarantineMediaByID(RestServlet):
self, request: SynapseRequest, server_name: str, media_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
logging.info("Quarantining media by ID: %s/%s", server_name, media_id)

View file

@ -75,7 +75,7 @@ class RoomRestV2Servlet(RestServlet):
) -> Tuple[int, JsonDict]:
requester = await self._auth.get_user_by_req(request)
await assert_user_is_admin(self._auth, requester.user)
await assert_user_is_admin(self._auth, requester)
content = parse_json_object_from_request(request)
@ -303,6 +303,7 @@ class RoomRestServlet(RestServlet):
members = await self.store.get_users_in_room(room_id)
ret["joined_local_devices"] = await self.store.count_devices_by_users(members)
ret["forgotten"] = await self.store.is_locally_forgotten_room(room_id)
return HTTPStatus.OK, ret
@ -326,7 +327,7 @@ class RoomRestServlet(RestServlet):
pagination_handler: "PaginationHandler",
) -> Tuple[int, JsonDict]:
requester = await auth.get_user_by_req(request)
await assert_user_is_admin(auth, requester.user)
await assert_user_is_admin(auth, requester)
content = parse_json_object_from_request(request)
@ -460,7 +461,7 @@ class JoinRoomAliasServlet(ResolveRoomIdMixin, RestServlet):
assert request.args is not None
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
content = parse_json_object_from_request(request)
@ -550,7 +551,7 @@ class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet):
self, request: SynapseRequest, room_identifier: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
content = parse_json_object_from_request(request, allow_empty_body=True)
room_id, _ = await self.resolve_room_id(room_identifier)
@ -741,7 +742,7 @@ class RoomEventContextServlet(RestServlet):
self, request: SynapseRequest, room_id: str, event_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request, allow_guest=False)
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
limit = parse_integer(request, "limit", default=10)
@ -833,7 +834,7 @@ class BlockRoomRestServlet(RestServlet):
self, request: SynapseRequest, room_id: str
) -> Tuple[int, JsonDict]:
requester = await self._auth.get_user_by_req(request)
await assert_user_is_admin(self._auth, requester.user)
await assert_user_is_admin(self._auth, requester)
content = parse_json_object_from_request(request)

View file

@ -183,7 +183,7 @@ class UserRestServletV2(RestServlet):
self, request: SynapseRequest, user_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
target_user = UserID.from_string(user_id)
body = parse_json_object_from_request(request)
@ -575,10 +575,9 @@ class WhoisRestServlet(RestServlet):
) -> Tuple[int, JsonDict]:
target_user = UserID.from_string(user_id)
requester = await self.auth.get_user_by_req(request)
auth_user = requester.user
if target_user != auth_user:
await assert_user_is_admin(self.auth, auth_user)
if target_user != requester.user:
await assert_user_is_admin(self.auth, requester)
if not self.is_mine(target_user):
raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only whois a local user")
@ -601,7 +600,7 @@ class DeactivateAccountRestServlet(RestServlet):
self, request: SynapseRequest, target_user_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
if not self.is_mine(UserID.from_string(target_user_id)):
raise SynapseError(
@ -693,7 +692,7 @@ class ResetPasswordRestServlet(RestServlet):
This needs user to have administrator access in Synapse.
"""
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
UserID.from_string(target_user_id)
@ -807,7 +806,7 @@ class UserAdminServlet(RestServlet):
self, request: SynapseRequest, user_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
auth_user = requester.user
target_user = UserID.from_string(user_id)
@ -921,7 +920,7 @@ class UserTokenRestServlet(RestServlet):
self, request: SynapseRequest, user_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
await assert_user_is_admin(self.auth, requester)
auth_user = requester.user
if not self.is_mine_id(user_id):

View file

@ -15,10 +15,11 @@
# limitations under the License.
import logging
import random
from http import HTTPStatus
from typing import TYPE_CHECKING, Optional, Tuple
from urllib.parse import urlparse
from pydantic import StrictBool, StrictStr, constr
from twisted.web.server import Request
from synapse.api.constants import LoginType
@ -34,12 +35,15 @@ from synapse.http.server import HttpServer, finish_request, respond_with_html
from synapse.http.servlet import (
RestServlet,
assert_params_in_dict,
parse_and_validate_json_object_from_request,
parse_json_object_from_request,
parse_string,
)
from synapse.http.site import SynapseRequest
from synapse.metrics import threepid_send_requests
from synapse.push.mailer import Mailer
from synapse.rest.client.models import AuthenticationData, EmailRequestTokenBody
from synapse.rest.models import RequestBodyModel
from synapse.types import JsonDict
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.stringutils import assert_valid_client_secret, random_string
@ -82,32 +86,16 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
400, "Email-based password resets have been disabled on this server"
)
body = parse_json_object_from_request(request)
body = parse_and_validate_json_object_from_request(
request, EmailRequestTokenBody
)
assert_params_in_dict(body, ["client_secret", "email", "send_attempt"])
# Extract params from body
client_secret = body["client_secret"]
assert_valid_client_secret(client_secret)
# Canonicalise the email address. The addresses are all stored canonicalised
# in the database. This allows the user to reset their password without having
# to know the exact spelling (e.g. upper and lower case) of the address in the
# database. An address stored as "foo@bar.com" would otherwise cause requests
# for "FOO@bar.com" to raise a Not Found error.
try:
email = validate_email(body["email"])
except ValueError as e:
raise SynapseError(400, str(e))
send_attempt = body["send_attempt"]
next_link = body.get("next_link") # Optional param
if next_link:
if body.next_link:
# Raise if the provided next_link value isn't valid
assert_valid_next_link(self.hs, next_link)
assert_valid_next_link(self.hs, body.next_link)
await self.identity_handler.ratelimit_request_token_requests(
request, "email", email
request, "email", body.email
)
# The email will be sent to the stored address.
@ -115,7 +103,7 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
# an email address which is controlled by the attacker but which, after
# canonicalisation, matches the one in our database.
existing_user_id = await self.hs.get_datastores().main.get_user_id_by_threepid(
"email", email
"email", body.email
)
if existing_user_id is None:
@ -135,26 +123,26 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
# Have the configured identity server handle the request
ret = await self.identity_handler.request_email_token(
self.hs.config.registration.account_threepid_delegate_email,
email,
client_secret,
send_attempt,
next_link,
body.email,
body.client_secret,
body.send_attempt,
body.next_link,
)
else:
# Send password reset emails from Synapse
sid = await self.identity_handler.send_threepid_validation(
email,
client_secret,
send_attempt,
body.email,
body.client_secret,
body.send_attempt,
self.mailer.send_password_reset_mail,
next_link,
body.next_link,
)
# Wrap the session id in a JSON object
ret = {"sid": sid}
threepid_send_requests.labels(type="email", reason="password_reset").observe(
send_attempt
body.send_attempt
)
return 200, ret
@ -172,16 +160,23 @@ class PasswordRestServlet(RestServlet):
self.password_policy_handler = hs.get_password_policy_handler()
self._set_password_handler = hs.get_set_password_handler()
class PostBody(RequestBodyModel):
auth: Optional[AuthenticationData] = None
logout_devices: StrictBool = True
if TYPE_CHECKING:
# workaround for https://github.com/samuelcolvin/pydantic/issues/156
new_password: Optional[StrictStr] = None
else:
new_password: Optional[constr(max_length=512, strict=True)] = None
@interactive_auth_handler
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
body = parse_json_object_from_request(request)
body = parse_and_validate_json_object_from_request(request, self.PostBody)
# we do basic sanity checks here because the auth layer will store these
# in sessions. Pull out the new password provided to us.
new_password = body.pop("new_password", None)
new_password = body.new_password
if new_password is not None:
if not isinstance(new_password, str) or len(new_password) > 512:
raise SynapseError(400, "Invalid password")
self.password_policy_handler.validate_password(new_password)
# there are two possibilities here. Either the user does not have an
@ -201,7 +196,7 @@ class PasswordRestServlet(RestServlet):
params, session_id = await self.auth_handler.validate_user_via_ui_auth(
requester,
request,
body,
body.dict(exclude_unset=True),
"modify your account password",
)
except InteractiveAuthIncompleteError as e:
@ -224,7 +219,7 @@ class PasswordRestServlet(RestServlet):
result, params, session_id = await self.auth_handler.check_ui_auth(
[[LoginType.EMAIL_IDENTITY]],
request,
body,
body.dict(exclude_unset=True),
"modify your account password",
)
except InteractiveAuthIncompleteError as e:
@ -299,37 +294,33 @@ class DeactivateAccountRestServlet(RestServlet):
self.auth_handler = hs.get_auth_handler()
self._deactivate_account_handler = hs.get_deactivate_account_handler()
class PostBody(RequestBodyModel):
auth: Optional[AuthenticationData] = None
id_server: Optional[StrictStr] = None
# Not specced, see https://github.com/matrix-org/matrix-spec/issues/297
erase: StrictBool = False
@interactive_auth_handler
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
body = parse_json_object_from_request(request)
erase = body.get("erase", False)
if not isinstance(erase, bool):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Param 'erase' must be a boolean, if given",
Codes.BAD_JSON,
)
body = parse_and_validate_json_object_from_request(request, self.PostBody)
requester = await self.auth.get_user_by_req(request)
# allow ASes to deactivate their own users
if requester.app_service:
await self._deactivate_account_handler.deactivate_account(
requester.user.to_string(), erase, requester
requester.user.to_string(), body.erase, requester
)
return 200, {}
await self.auth_handler.validate_user_via_ui_auth(
requester,
request,
body,
body.dict(exclude_unset=True),
"deactivate your account",
)
result = await self._deactivate_account_handler.deactivate_account(
requester.user.to_string(),
erase,
requester,
id_server=body.get("id_server"),
requester.user.to_string(), body.erase, requester, id_server=body.id_server
)
if result:
id_server_unbind_result = "success"
@ -364,28 +355,15 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
"Adding emails have been disabled due to lack of an email config"
)
raise SynapseError(
400, "Adding an email to your account is disabled on this server"
400,
"Adding an email to your account is disabled on this server",
)
body = parse_json_object_from_request(request)
assert_params_in_dict(body, ["client_secret", "email", "send_attempt"])
client_secret = body["client_secret"]
assert_valid_client_secret(client_secret)
body = parse_and_validate_json_object_from_request(
request, EmailRequestTokenBody
)
# Canonicalise the email address. The addresses are all stored canonicalised
# in the database.
# This ensures that the validation email is sent to the canonicalised address
# as it will later be entered into the database.
# Otherwise the email will be sent to "FOO@bar.com" and stored as
# "foo@bar.com" in database.
try:
email = validate_email(body["email"])
except ValueError as e:
raise SynapseError(400, str(e))
send_attempt = body["send_attempt"]
next_link = body.get("next_link") # Optional param
if not await check_3pid_allowed(self.hs, "email", email):
if not await check_3pid_allowed(self.hs, "email", body.email):
raise SynapseError(
403,
"Your email domain is not authorized on this server",
@ -393,14 +371,14 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
)
await self.identity_handler.ratelimit_request_token_requests(
request, "email", email
request, "email", body.email
)
if next_link:
if body.next_link:
# Raise if the provided next_link value isn't valid
assert_valid_next_link(self.hs, next_link)
assert_valid_next_link(self.hs, body.next_link)
existing_user_id = await self.store.get_user_id_by_threepid("email", email)
existing_user_id = await self.store.get_user_id_by_threepid("email", body.email)
if existing_user_id is not None:
if self.config.server.request_token_inhibit_3pid_errors:
@ -419,26 +397,26 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
# Have the configured identity server handle the request
ret = await self.identity_handler.request_email_token(
self.hs.config.registration.account_threepid_delegate_email,
email,
client_secret,
send_attempt,
next_link,
body.email,
body.client_secret,
body.send_attempt,
body.next_link,
)
else:
# Send threepid validation emails from Synapse
sid = await self.identity_handler.send_threepid_validation(
email,
client_secret,
send_attempt,
body.email,
body.client_secret,
body.send_attempt,
self.mailer.send_add_threepid_mail,
next_link,
body.next_link,
)
# Wrap the session id in a JSON object
ret = {"sid": sid}
threepid_send_requests.labels(type="email", reason="add_threepid").observe(
send_attempt
body.send_attempt
)
return 200, ret

View file

@ -42,12 +42,26 @@ class DevicesRestServlet(RestServlet):
self.hs = hs
self.auth = hs.get_auth()
self.device_handler = hs.get_device_handler()
self._msc3852_enabled = hs.config.experimental.msc3852_enabled
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request, allow_guest=True)
devices = await self.device_handler.get_devices_by_user(
requester.user.to_string()
)
# If MSC3852 is disabled, then the "last_seen_user_agent" field will be
# removed from each device. If it is enabled, then the field name will
# be replaced by the unstable identifier.
#
# When MSC3852 is accepted, this block of code can just be removed to
# expose "last_seen_user_agent" to clients.
for device in devices:
last_seen_user_agent = device["last_seen_user_agent"]
del device["last_seen_user_agent"]
if self._msc3852_enabled:
device["org.matrix.msc3852.last_seen_user_agent"] = last_seen_user_agent
return 200, {"devices": devices}
@ -108,6 +122,7 @@ class DeviceRestServlet(RestServlet):
self.auth = hs.get_auth()
self.device_handler = hs.get_device_handler()
self.auth_handler = hs.get_auth_handler()
self._msc3852_enabled = hs.config.experimental.msc3852_enabled
async def on_GET(
self, request: SynapseRequest, device_id: str
@ -118,6 +133,18 @@ class DeviceRestServlet(RestServlet):
)
if device is None:
raise NotFoundError("No device found")
# If MSC3852 is disabled, then the "last_seen_user_agent" field will be
# removed from each device. If it is enabled, then the field name will
# be replaced by the unstable identifier.
#
# When MSC3852 is accepted, this block of code can just be removed to
# expose "last_seen_user_agent" to clients.
last_seen_user_agent = device["last_seen_user_agent"]
del device["last_seen_user_agent"]
if self._msc3852_enabled:
device["org.matrix.msc3852.last_seen_user_agent"] = last_seen_user_agent
return 200, device
@interactive_auth_handler
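Since the same strip-or-rename block appears in both `DevicesRestServlet` and `DeviceRestServlet`, it could be factored into a shared helper. A hedged sketch (hypothetical function, not part of this commit):

from typing import Any, Dict

def _expose_last_seen_user_agent(
    device: Dict[str, Any], msc3852_enabled: bool
) -> None:
    """Drop last_seen_user_agent, or re-expose it under the unstable name."""
    last_seen_user_agent = device.pop("last_seen_user_agent", None)
    if msc3852_enabled and last_seen_user_agent is not None:
        device["org.matrix.msc3852.last_seen_user_agent"] = last_seen_user_agent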

View file

@ -26,7 +26,7 @@ from synapse.http.servlet import (
parse_string,
)
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import log_kv, set_tag, trace_with_opname
from synapse.logging.opentracing import log_kv, set_tag
from synapse.types import JsonDict, StreamToken
from ._base import client_patterns, interactive_auth_handler
@ -71,7 +71,6 @@ class KeyUploadServlet(RestServlet):
self.e2e_keys_handler = hs.get_e2e_keys_handler()
self.device_handler = hs.get_device_handler()
@trace_with_opname("upload_keys")
async def on_POST(
self, request: SynapseRequest, device_id: Optional[str]
) -> Tuple[int, JsonDict]:

View file

@ -0,0 +1,69 @@
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING, Dict, Optional
from pydantic import Extra, StrictInt, StrictStr, constr, validator
from synapse.rest.models import RequestBodyModel
from synapse.util.threepids import validate_email
class AuthenticationData(RequestBodyModel):
"""
Data used during user-interactive authentication.
(The name "Authentication Data" is taken directly from the spec.)
Additional keys will be present, depending on the `type` field. Use `.dict()` to
access them.
"""
class Config:
extra = Extra.allow
session: Optional[StrictStr] = None
type: Optional[StrictStr] = None
class EmailRequestTokenBody(RequestBodyModel):
if TYPE_CHECKING:
client_secret: StrictStr
else:
# See also assert_valid_client_secret()
client_secret: constr(
regex="[0-9a-zA-Z.=_-]", # noqa: F722
min_length=0,
max_length=255,
strict=True,
)
email: StrictStr
id_server: Optional[StrictStr]
id_access_token: Optional[StrictStr]
next_link: Optional[StrictStr]
send_attempt: StrictInt
@validator("id_access_token", always=True)
def token_required_for_identity_server(
cls, token: Optional[str], values: Dict[str, object]
) -> Optional[str]:
if values.get("id_server") is not None and token is None:
raise ValueError("id_access_token is required if an id_server is supplied.")
return token
# Canonicalise the email address. The addresses are all stored canonicalised
# in the database. This allows the user to reset their password without having
# to know the exact spelling (e.g. upper and lower case) of the address in the database.
# Without this, an email stored in the database as "foo@bar.com" would cause
# user requests for "FOO@bar.com" to raise a Not Found error.
_email_validator = validator("email", allow_reuse=True)(validate_email)
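A hedged usage sketch for this model, assuming plain pydantic v1 behaviour and the canonicalisation described in the comment above; only classes defined in this file are used:

from pydantic import ValidationError

body = EmailRequestTokenBody.parse_obj(
    {
        "client_secret": "abc123",
        "email": "FOO@bar.com",  # canonicalised by the email validator
        "send_attempt": 1,
    }
)
assert body.email == "foo@bar.com"

try:
    EmailRequestTokenBody.parse_obj(
        {
            "client_secret": "abc123",
            "email": "foo@bar.com",
            "send_attempt": 1,
            "id_server": "id.example.com",  # no id_access_token -> rejected
        }
    )
except ValidationError as exc:
    print(exc)  # "id_access_token is required if an id_server is supplied."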

View file

@ -66,7 +66,7 @@ class ProfileDisplaynameRestServlet(RestServlet):
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request, allow_guest=True)
user = UserID.from_string(user_id)
is_admin = await self.auth.is_server_admin(requester.user)
is_admin = await self.auth.is_server_admin(requester)
content = parse_json_object_from_request(request)
@ -123,7 +123,7 @@ class ProfileAvatarURLRestServlet(RestServlet):
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
user = UserID.from_string(user_id)
is_admin = await self.auth.is_server_admin(requester.user)
is_admin = await self.auth.is_server_admin(requester)
content = parse_json_object_from_request(request)
try:

View file

@ -484,9 +484,6 @@ class RegisterRestServlet(RestServlet):
"Appservice token must be provided when using a type of m.login.application_service",
)
# Verify the AS
self.auth.get_appservice_by_req(request)
# Set the desired user according to the AS API (which uses the
# 'user' key not 'username'). Since this is a new addition, we'll
# fallback to 'username' if they gave one.

View file

@ -16,9 +16,12 @@
""" This module contains REST servlets to do with rooms: /rooms/<paths> """
import logging
import re
from enum import Enum
from typing import TYPE_CHECKING, Awaitable, Dict, List, Optional, Tuple
from urllib import parse as urlparse
from prometheus_client.core import Histogram
from twisted.web.server import Request
from synapse import event_auth
@ -46,6 +49,7 @@ from synapse.http.servlet import (
parse_strings_from_args,
)
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import set_tag
from synapse.rest.client._base import client_patterns
from synapse.rest.client.transactions import HttpTransactionCache
@ -61,6 +65,70 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
class _RoomSize(Enum):
"""
Enum to differentiate sizes of rooms. This is a pretty good approximation
of how hard it will be to get events in the room. We could also look at
room "complexity".
"""
# This doesn't necessarily mean the room is a DM, just that it has a
# DM-sized number of members.
DM_SIZE = "direct_message_size"
SMALL = "small"
SUBSTANTIAL = "substantial"
LARGE = "large"
@staticmethod
def from_member_count(member_count: int) -> "_RoomSize":
if member_count <= 2:
return _RoomSize.DM_SIZE
elif member_count < 100:
return _RoomSize.SMALL
elif member_count < 1000:
return _RoomSize.SUBSTANTIAL
else:
return _RoomSize.LARGE
# This is an extra metric on top of `synapse_http_server_response_time_seconds`
# which times the same sort of thing, but this one lets us see values
# greater than 10s. We use a separate, dedicated histogram with its own buckets
# so that we don't increase the cardinality of the general one, which is
# multiplied across hundreds of servlets.
messages_response_timer = Histogram(
"synapse_room_message_list_rest_servlet_response_time_seconds",
"sec",
# We have a label for room size so we can try to see a more realistic
# picture of /messages response time for bigger rooms. We don't want tiny
# rooms, which can always respond fast, to skew our results when we're
# trying to optimize the bigger cases.
["room_size"],
buckets=(
0.005,
0.01,
0.025,
0.05,
0.1,
0.25,
0.5,
1.0,
2.5,
5.0,
10.0,
20.0,
30.0,
60.0,
80.0,
100.0,
120.0,
150.0,
180.0,
"+Inf",
),
)
class TransactionRestServlet(RestServlet):
def __init__(self, hs: "HomeServer"):
super().__init__()
@ -165,7 +233,7 @@ class RoomStateEventRestServlet(TransactionRestServlet):
msg_handler = self.message_handler
data = await msg_handler.get_room_data(
user_id=requester.user.to_string(),
requester=requester,
room_id=room_id,
event_type=event_type,
state_key=state_key,
@ -514,7 +582,7 @@ class RoomMemberListRestServlet(RestServlet):
events = await handler.get_state_events(
room_id=room_id,
user_id=requester.user.to_string(),
requester=requester,
at_token=at_token,
state_filter=StateFilter.from_types([(EventTypes.Member, None)]),
)
@ -560,6 +628,7 @@ class RoomMessageListRestServlet(RestServlet):
def __init__(self, hs: "HomeServer"):
super().__init__()
self._hs = hs
self.clock = hs.get_clock()
self.pagination_handler = hs.get_pagination_handler()
self.auth = hs.get_auth()
self.store = hs.get_datastores().main
@ -567,6 +636,18 @@ class RoomMessageListRestServlet(RestServlet):
async def on_GET(
self, request: SynapseRequest, room_id: str
) -> Tuple[int, JsonDict]:
processing_start_time = self.clock.time_msec()
# Fire off and hope that we get a result by the end.
#
# We're using the mypy type ignore comment because the `@cached`
# decorator on `get_number_joined_users_in_room` doesn't play well with
# the type system. Maybe in the future, it can use some ParamSpec
# wizardry to fix it up.
room_member_count_deferred = run_in_background( # type: ignore[call-arg]
self.store.get_number_joined_users_in_room,
room_id, # type: ignore[arg-type]
)
requester = await self.auth.get_user_by_req(request, allow_guest=True)
pagination_config = await PaginationConfig.from_request(
self.store, request, default_limit=10
@ -597,6 +678,12 @@ class RoomMessageListRestServlet(RestServlet):
event_filter=event_filter,
)
processing_end_time = self.clock.time_msec()
room_member_count = await make_deferred_yieldable(room_member_count_deferred)
messages_response_timer.labels(
room_size=_RoomSize.from_member_count(room_member_count)
).observe((processing_end_time - processing_start_time) / 1000)
return 200, msgs
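The fire-early/await-late shape above is general Twisted practice; Synapse's `run_in_background` and `make_deferred_yieldable` add logcontext bookkeeping on top. A standalone sketch of the same shape with plain Deferreds (no Synapse helpers assumed):

from twisted.internet import defer, reactor, task

def slow_lookup():
    # Stand-in for store.get_number_joined_users_in_room.
    return task.deferLater(reactor, 0.1, lambda: 42)

@defer.inlineCallbacks
def handle_request():
    d = slow_lookup()   # kick off early, but don't wait yet
    # ... main request processing would happen here ...
    count = yield d     # block only once the result is actually needed
    print("members:", count)
    reactor.stop()

handle_request()
reactor.run()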
@ -617,8 +704,7 @@ class RoomStateRestServlet(RestServlet):
# Get all the current state for this room
events = await self.message_handler.get_state_events(
room_id=room_id,
user_id=requester.user.to_string(),
is_guest=requester.is_guest,
requester=requester,
)
return 200, events
@ -676,7 +762,7 @@ class RoomEventServlet(RestServlet):
== "true"
)
if include_unredacted_content and not await self.auth.is_server_admin(
requester.user
requester
):
power_level_event = (
await self._storage_controllers.state.get_current_state_event(
@ -1181,9 +1267,7 @@ class TimestampLookupRestServlet(RestServlet):
self, request: SynapseRequest, room_id: str
) -> Tuple[int, JsonDict]:
requester = await self._auth.get_user_by_req(request)
await self._auth.check_user_in_room_or_world_readable(
room_id, requester.user.to_string()
)
await self._auth.check_user_in_room_or_world_readable(room_id, requester)
timestamp = parse_integer(request, "ts", required=True)
direction = parse_string(request, "dir", default="f", allowed_values=["f", "b"])

View file

@ -19,7 +19,7 @@ from synapse.http import servlet
from synapse.http.server import HttpServer
from synapse.http.servlet import assert_params_in_dict, parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import set_tag, trace_with_opname
from synapse.logging.opentracing import set_tag
from synapse.rest.client.transactions import HttpTransactionCache
from synapse.types import JsonDict
@ -43,7 +43,6 @@ class SendToDeviceRestServlet(servlet.RestServlet):
self.txns = HttpTransactionCache(hs)
self.device_message_handler = hs.get_device_message_handler()
@trace_with_opname("sendToDevice")
def on_PUT(
self, request: SynapseRequest, message_type: str, txn_id: str
) -> Awaitable[Tuple[int, JsonDict]]:

synapse/rest/models.py (new file, 23 lines)
View file

@ -0,0 +1,23 @@
from pydantic import BaseModel, Extra
class RequestBodyModel(BaseModel):
"""A custom version of Pydantic's BaseModel which
- ignores unknown fields and
- does not allow fields to be overwritten after construction,
but otherwise uses Pydantic's default behaviour.
Ignoring unknown fields is a useful default. It means that clients can provide
unstable fields not known to the server without the request being refused outright.
Subclassing in this way is recommended by
https://pydantic-docs.helpmanual.io/usage/model_config/#change-behaviour-globally
"""
class Config:
# By default, ignore fields that we don't recognise.
extra = Extra.ignore
# By default, don't allow fields to be reassigned after parsing.
allow_mutation = False
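A small sketch of how this base class behaves, assuming pydantic v1 semantics for `Extra.ignore` and `allow_mutation` (the subclass is hypothetical):

from pydantic import StrictStr

class ExampleBody(RequestBodyModel):
    device_id: StrictStr

body = ExampleBody.parse_obj(
    {"device_id": "ABCDEF", "org.example.unstable": "quietly dropped"}
)
assert body.dict() == {"device_id": "ABCDEF"}  # unknown field ignored

try:
    body.device_id = "other"
except TypeError:
    print("fields cannot be reassigned after parsing")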

View file

@ -244,7 +244,7 @@ class ServerNoticesManager:
assert self.server_notices_mxid is not None
notice_user_data_in_room = await self._message_handler.get_room_data(
self.server_notices_mxid,
create_requester(self.server_notices_mxid),
room_id,
EventTypes.Member,
self.server_notices_mxid,

View file

@ -3,7 +3,8 @@
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title> Login </title>
<meta name='viewport' content='width=device-width, initial-scale=1, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="style.css">
<script src="js/jquery-3.4.1.min.js"></script>
<script src="js/login.js"></script>

View file

@ -2,7 +2,8 @@
<html>
<head>
<title> Registration </title>
<meta name='viewport' content='width=device-width, initial-scale=1, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="style.css">
<script src="js/jquery-3.4.1.min.js"></script>
<script src="https://www.recaptcha.net/recaptcha/api/js/recaptcha_ajax.js"></script>

View file

@ -45,8 +45,14 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.logging import opentracing
from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
from synapse.logging.opentracing import (
SynapseTags,
active_span,
set_tag,
start_active_span_follows_from,
trace,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.controllers.state import StateStorageController
from synapse.storage.databases import Databases
@ -223,7 +229,7 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
queue.append(end_item)
# also add our active opentracing span to the item so that we get a link back
span = opentracing.active_span()
span = active_span()
if span:
end_item.parent_opentracing_span_contexts.append(span.context)
@ -234,7 +240,7 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
res = await make_deferred_yieldable(end_item.deferred.observe())
# add another opentracing span which links to the persist trace.
with opentracing.start_active_span_follows_from(
with start_active_span_follows_from(
f"{task.name}_complete", (end_item.opentracing_span_context,)
):
pass
@ -266,7 +272,7 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
queue = self._get_drainining_queue(room_id)
for item in queue:
try:
with opentracing.start_active_span_follows_from(
with start_active_span_follows_from(
item.task.name,
item.parent_opentracing_span_contexts,
inherit_force_tracing=True,
@ -355,7 +361,7 @@ class EventsPersistenceStorageController:
f"Found an unexpected task type in event persistence queue: {task}"
)
@opentracing.trace
@trace
async def persist_events(
self,
events_and_contexts: Iterable[Tuple[EventBase, EventContext]],
@ -380,9 +386,21 @@ class EventsPersistenceStorageController:
PartialStateConflictError: if attempting to persist a partial state event in
a room that has been un-partial stated.
"""
event_ids: List[str] = []
partitioned: Dict[str, List[Tuple[EventBase, EventContext]]] = {}
for event, ctx in events_and_contexts:
partitioned.setdefault(event.room_id, []).append((event, ctx))
event_ids.append(event.event_id)
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids",
str(event_ids),
)
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
str(len(event_ids)),
)
set_tag(SynapseTags.FUNC_ARG_PREFIX + "backfilled", str(backfilled))
async def enqueue(
item: Tuple[str, List[Tuple[EventBase, EventContext]]]
@ -418,7 +436,7 @@ class EventsPersistenceStorageController:
self.main_store.get_room_max_token(),
)
@opentracing.trace
@trace
async def persist_event(
self, event: EventBase, context: EventContext, backfilled: bool = False
) -> Tuple[EventBase, PersistedEventPosition, RoomStreamToken]:

View file

@ -29,7 +29,8 @@ from typing import (
from synapse.api.constants import EventTypes
from synapse.events import EventBase
from synapse.logging.opentracing import trace
from synapse.logging.opentracing import tag_args, trace
from synapse.storage.roommember import ProfileInfo
from synapse.storage.state import StateFilter
from synapse.storage.util.partial_state_events_tracker import (
PartialCurrentStateTracker,
@ -228,10 +229,12 @@ class StateStorageController:
return {event: event_to_state[event] for event in event_ids}
@trace
@tag_args
async def get_state_ids_for_events(
self,
event_ids: Collection[str],
state_filter: Optional[StateFilter] = None,
await_full_state: bool = True,
) -> Dict[str, StateMap[str]]:
"""
Get the state dicts corresponding to a list of events, containing the event_ids
@ -240,6 +243,9 @@ class StateStorageController:
Args:
event_ids: events whose state should be returned
state_filter: The state filter used to fetch state from the database.
await_full_state: if `True`, will block if we do not yet have complete state
at these events and `state_filter` is not satisfied by partial state.
Defaults to `True`.
Returns:
A dict from event_id -> (type, state_key) -> event_id
@ -248,8 +254,12 @@ class StateStorageController:
RuntimeError if we don't have a state group for one or more of the events
(ie they are outliers or unknown)
"""
await_full_state = True
if state_filter and not state_filter.must_await_full_state(self._is_mine_id):
if (
await_full_state
and state_filter
and not state_filter.must_await_full_state(self._is_mine_id)
):
# Full state is not required if the state filter is restrictive enough.
await_full_state = False
event_to_groups = await self.get_state_group_for_events(
@ -292,7 +302,10 @@ class StateStorageController:
@trace
async def get_state_ids_for_event(
self, event_id: str, state_filter: Optional[StateFilter] = None
self,
event_id: str,
state_filter: Optional[StateFilter] = None,
await_full_state: bool = True,
) -> StateMap[str]:
"""
Get the state dict corresponding to a particular event
@ -300,6 +313,9 @@ class StateStorageController:
Args:
event_id: event whose state should be returned
state_filter: The state filter used to fetch state from the database.
await_full_state: if `True`, will block if we do not yet have complete state
at the event and `state_filter` is not satisfied by partial state.
Defaults to `True`.
Returns:
A dict from (type, state_key) -> state_event_id
@ -309,7 +325,9 @@ class StateStorageController:
outlier or is unknown)
"""
state_map = await self.get_state_ids_for_events(
[event_id], state_filter or StateFilter.all()
[event_id],
state_filter or StateFilter.all(),
await_full_state=await_full_state,
)
return state_map[event_id]
@ -332,6 +350,7 @@ class StateStorageController:
)
@trace
@tag_args
async def get_state_group_for_events(
self,
event_ids: Collection[str],
@ -473,6 +492,7 @@ class StateStorageController:
prev_stream_id, max_stream_id
)
@trace
async def get_current_state(
self, room_id: str, state_filter: Optional[StateFilter] = None
) -> StateMap[EventBase]:
@ -506,3 +526,15 @@ class StateStorageController:
await self._partial_state_room_tracker.await_full_state(room_id)
return await self.stores.main.get_current_hosts_in_room(room_id)
async def get_users_in_room_with_profiles(
self, room_id: str
) -> Dict[str, ProfileInfo]:
"""
Get the current users in the room with their profiles.
If the room is currently partial-stated, this will block until the room has
full state.
"""
await self._partial_state_room_tracker.await_full_state(room_id)
return await self.stores.main.get_users_in_room_with_profiles(room_id)

View file

@ -33,6 +33,7 @@ from synapse.api.constants import MAX_DEPTH, EventTypes
from synapse.api.errors import StoreError
from synapse.api.room_versions import EventFormatVersions, RoomVersion
from synapse.events import EventBase, make_event_from_dict
from synapse.logging.opentracing import tag_args, trace
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
from synapse.storage.database import (
@ -126,6 +127,8 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
)
return await self.get_events_as_list(event_ids)
@trace
@tag_args
async def get_auth_chain_ids(
self,
room_id: str,
@ -709,6 +712,8 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
# Return all events where not all sets can reach them.
return {eid for eid, n in event_to_missing_sets.items() if n}
@trace
@tag_args
async def get_oldest_event_ids_with_depth_in_room(
self, room_id: str
) -> List[Tuple[str, int]]:
@ -767,6 +772,7 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
room_id,
)
@trace
async def get_insertion_event_backward_extremities_in_room(
self, room_id: str
) -> List[Tuple[str, int]]:
@ -1339,6 +1345,8 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
event_results.reverse()
return event_results
@trace
@tag_args
async def get_successor_events(self, event_id: str) -> List[str]:
"""Fetch all events that have the given event as a prev event
@ -1375,6 +1383,7 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
_delete_old_forward_extrem_cache_txn,
)
@trace
async def insert_insertion_extremity(self, event_id: str, room_id: str) -> None:
await self.db_pool.simple_upsert(
table="insertion_event_extremities",

View file

@ -74,7 +74,17 @@ receipt.
"""
import logging
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union, cast
from typing import (
TYPE_CHECKING,
Collection,
Dict,
List,
Mapping,
Optional,
Tuple,
Union,
cast,
)
import attr
@ -154,7 +164,9 @@ class NotifCounts:
highlight_count: int = 0
def _serialize_action(actions: List[Union[dict, str]], is_highlight: bool) -> str:
def _serialize_action(
actions: Collection[Union[Mapping, str]], is_highlight: bool
) -> str:
"""Custom serializer for actions. This allows us to "compress" common actions.
We use the fact that most users have the same actions for notifs (and for
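To illustrate the "compress" idea (most users share the default actions, so the serializer can store a short sentinel instead of full JSON), a hedged sketch; the sentinel values and default action lists here are illustrative, not Synapse's actual encoding:

import json
from typing import Collection, Mapping, Union

# Illustrative defaults; the real module derives these from the base rules.
DEFAULT_NOTIF = ["notify", {"set_tweak": "highlight", "value": False}]
DEFAULT_HIGHLIGHT = ["notify", {"set_tweak": "highlight"}]

def serialize_action(
    actions: Collection[Union[Mapping, str]], is_highlight: bool
) -> str:
    """Store a one-character sentinel when the actions match the defaults."""
    as_list = list(actions)
    if is_highlight and as_list == DEFAULT_HIGHLIGHT:
        return "1"
    if not is_highlight and as_list == DEFAULT_NOTIF:
        return "2"
    return json.dumps(as_list)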
@ -227,7 +239,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
user_id: str,
) -> NotifCounts:
"""Get the notification count, the highlight count and the unread message count
for a given user in a given room after the given read receipt.
for a given user in a given room after their latest read receipt.
Note that this function assumes the user to be a current member of the room,
since it's either called by the sync handler to handle joined room entries, or by
@ -238,9 +250,8 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
user_id: The user to retrieve the counts for.
Returns
A dict containing the counts mentioned earlier in this docstring,
respectively under the keys "notify_count", "highlight_count" and
"unread_count".
A NotifCounts object containing the notification count, the highlight count
and the unread message count.
"""
return await self.db_pool.runInteraction(
"get_unread_event_push_actions_by_room",
@ -255,6 +266,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
room_id: str,
user_id: str,
) -> NotifCounts:
# Get the stream ordering of the user's latest receipt in the room.
result = self.get_last_receipt_for_user_txn(
txn,
user_id,
@ -266,13 +278,11 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
),
)
stream_ordering = None
if result:
_, stream_ordering = result
if stream_ordering is None:
# Either last_read_event_id is None, or it's an event we don't have (e.g.
# because it's been purged), in which case retrieve the stream ordering for
else:
# If the user has no receipts in the room, retrieve the stream ordering for
# the latest membership event from this user in this room (which we assume is
# a join).
event_id = self.db_pool.simple_select_one_onecol_txn(
@ -289,10 +299,26 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
)
def _get_unread_counts_by_pos_txn(
self, txn: LoggingTransaction, room_id: str, user_id: str, stream_ordering: int
self,
txn: LoggingTransaction,
room_id: str,
user_id: str,
receipt_stream_ordering: int,
) -> NotifCounts:
"""Get the number of unread messages for a user/room that have happened
since the given stream ordering.
Args:
txn: The database transaction.
room_id: The room ID to get unread counts for.
user_id: The user ID to get unread counts for.
receipt_stream_ordering: The stream ordering of the user's latest
receipt in the room. If there are no receipts, the stream ordering
of the user's join event.
Returns:
A NotifCounts object containing the notification count, the highlight count
and the unread message count.
"""
counts = NotifCounts()
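
Concretely (illustrative numbers, not from the source): if the user's latest read receipt points at an event with stream ordering 120, everything this function counts starts strictly after 120; if the user has never sent a receipt in the room, counting starts after the stream ordering of their join event, so they are not notified about history from before they joined.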
@ -320,7 +346,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
OR last_receipt_stream_ordering = ?
)
""",
(room_id, user_id, stream_ordering, stream_ordering),
(room_id, user_id, receipt_stream_ordering, receipt_stream_ordering),
)
row = txn.fetchone()
@ -338,17 +364,20 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
AND stream_ordering > ?
AND highlight = 1
"""
txn.execute(sql, (user_id, room_id, stream_ordering))
txn.execute(sql, (user_id, room_id, receipt_stream_ordering))
row = txn.fetchone()
if row:
counts.highlight_count += row[0]
# Finally we need to count push actions that aren't included in the
# summary returned above, e.g. recent events that haven't been
# summarised yet, or the summary is empty due to a recent read receipt.
stream_ordering = max(stream_ordering, summary_stream_ordering)
# summary returned above. This might be because recent events haven't
# been summarised yet, or because the summary is out of date after a
# recent read receipt.
start_unread_stream_ordering = max(
receipt_stream_ordering, summary_stream_ordering
)
notify_count, unread_count = self._get_notif_unread_count_for_user_room(
txn, room_id, user_id, stream_ordering
txn, room_id, user_id, start_unread_stream_ordering
)
counts.notify_count += notify_count
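
Why the max()? A worked example with illustrative numbers: suppose the rolled-up summary covers everything up to stream ordering 100 and the user's receipt points at ordering 120. Unsummarised actions must then be counted from max(120, 100) = 120, otherwise events 101..120, which the user has already read, would be double-counted. Conversely, with a receipt at ordering 80, counting starts at 100, because everything up to 100 is already folded into the summary row.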
@ -733,7 +762,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
async def add_push_actions_to_staging(
self,
event_id: str,
user_id_actions: Dict[str, List[Union[dict, str]]],
user_id_actions: Dict[str, Collection[Union[Mapping, str]]],
count_as_unread: bool,
) -> None:
"""Add the push actions for the event to the push action staging area.
@ -750,7 +779,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
# This is a helper function for generating the necessary tuple that
# can be used to insert into the `event_push_actions_staging` table.
def _gen_entry(
user_id: str, actions: List[Union[dict, str]]
user_id: str, actions: Collection[Union[Mapping, str]]
) -> Tuple[str, str, str, int, int, int]:
is_highlight = 1 if _action_has_highlight(actions) else 0
notif = 1 if "notify" in actions else 0
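
Putting the pieces of _gen_entry together, a hedged sketch of the staging row it builds for one user (the highlight check and the trailing unread flag are simplified assumptions here; the real code also applies the action compression described earlier):

import json

def gen_entry_sketch(event_id: str, user_id: str, actions) -> tuple:
    # One row per (event, user) destined for event_push_actions_staging:
    # (event_id, user_id, serialised actions, notif, highlight, unread).
    is_highlight = 1 if any(
        isinstance(a, dict) and a.get("set_tweak") == "highlight" and a.get("value", True)
        for a in actions
    ) else 0
    notif = 1 if "notify" in actions else 0
    return (event_id, user_id, json.dumps(list(actions)), notif, is_highlight, 0)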
@ -1151,8 +1180,6 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
txn: The database transaction.
old_rotate_stream_ordering: The previous maximum event stream ordering.
rotate_to_stream_ordering: The new maximum event stream ordering to summarise.
Returns whether the archiving process has caught up or not.
"""
# Calculate the new counts that should be upserted into event_push_summary
@ -1238,9 +1265,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
(rotate_to_stream_ordering,),
)
async def _remove_old_push_actions_that_have_rotated(
self,
) -> None:
async def _remove_old_push_actions_that_have_rotated(self) -> None:
"""Clear out old push actions that have been summarised."""
# We want to clear out anything that is older than a day that *has* already
@ -1397,7 +1422,7 @@ class EventPushActionsStore(EventPushActionsWorkerStore):
]
def _action_has_highlight(actions: List[Union[dict, str]]) -> bool:
def _action_has_highlight(actions: Collection[Union[Mapping, str]]) -> bool:
for action in actions:
if not isinstance(action, dict):
continue
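
The diff shows only the start of the loop; a plausible completion, consistent with the {"set_tweak": "highlight"} format Matrix push rules use (the body past the continue is an assumption based on that format):

def action_has_highlight_sketch(actions) -> bool:
    # Scan the action list for a highlight tweak whose value is truthy;
    # a missing "value" defaults to True per the push-rule format.
    for action in actions:
        if not isinstance(action, dict):
            continue
        if action.get("set_tweak") == "highlight":
            return bool(action.get("value", True))
    return False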

Some files were not shown because too many files have changed in this diff