
Merge remote-tracking branch 'upstream/release-v1.64'

Tulir Asokan 2022-07-28 10:49:41 +03:00
commit b0f213fd3d
176 changed files with 4539 additions and 2643 deletions

View file

@@ -69,7 +69,7 @@ with open('pyproject.toml', 'w') as f:
 "
 python3 -c "$REMOVE_DEV_DEPENDENCIES"
-pipx install poetry==1.1.12
+pipx install poetry==1.1.14
 ~/.local/bin/poetry lock
 echo "::group::Patched pyproject.toml"

View file

@@ -1,3 +1,16 @@
+# Commits in this file will be removed from GitHub blame results.
+#
+# To use this file locally, use:
+#   git blame --ignore-revs-file="path/to/.git-blame-ignore-revs" <files>
+#
+# or configure the `blame.ignoreRevsFile` option in your git config.
+#
+# If ignoring a pull request that was not squash merged, only the merge
+# commit needs to be put here. Child commits will be resolved from it.
+
+# Run black (#3679).
+8b3d9b6b199abb87246f982d5db356f1966db925
+
 # Black reformatting (#5482).
 32e7c9e7f20b57dd081023ac42d6931a8da9b3a3
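To use the ignore list without passing `--ignore-revs-file` on every invocation, the `blame.ignoreRevsFile` option mentioned above can be set once per clone. A minimal sketch (the path matches the file added in this diff):

```sh
# Make git blame consult the ignore list by default in this clone.
git config blame.ignoreRevsFile .git-blame-ignore-revs
```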

View file

@@ -328,9 +328,6 @@ jobs:
           - arrangement: monolith
             database: Postgres
 
-          - arrangement: workers
-            database: Postgres
-
     steps:
       - name: Run actions/checkout@v2 for synapse
         uses: actions/checkout@v2
@@ -346,6 +343,30 @@ jobs:
         shell: bash
         name: Run Complement Tests
 
+  # XXX When complement with workers is stable, move this back into the standard
+  # "complement" matrix above.
+  #
+  # See https://github.com/matrix-org/synapse/issues/13161
+  complement-workers:
+    if: "${{ !failure() && !cancelled() }}"
+    needs: linting-done
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Run actions/checkout@v2 for synapse
+        uses: actions/checkout@v2
+        with:
+          path: synapse
+      - name: Prepare Complement's Prerequisites
+        run: synapse/.ci/scripts/setup_complement_prerequisites.sh
+      - run: |
+          set -o pipefail
+          POSTGRES=1 WORKERS=1 COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | gotestfmt
+        shell: bash
+        name: Run Complement Tests
+
   # a job which marks all the other jobs as complete, thus allowing PRs to be merged.
   tests-done:
     if: ${{ always() }}

View file

@@ -127,12 +127,12 @@ jobs:
           run: |
             set -x
             DEBIAN_FRONTEND=noninteractive sudo apt-get install -yqq python3 pipx
-            pipx install poetry==1.1.12
+            pipx install poetry==1.1.14
 
             poetry remove -n twisted
             poetry add -n --extras tls git+https://github.com/twisted/twisted.git#trunk
             poetry lock --no-update
-            # NOT IN 1.1.12 poetry lock --check
+            # NOT IN 1.1.14 poetry lock --check
         working-directory: synapse
       - run: |

View file

@@ -1,3 +1,113 @@
+Synapse 1.64.0rc1 (2022-07-26)
+==============================
+
+As of this release, Synapse no longer allows the tasks of verifying email address ownership, and password reset confirmation, to be delegated to an identity server. For more information, see the [upgrade notes](https://matrix-org.github.io/synapse/v1.64/upgrade.html#upgrading-to-v1640).
+
+We have also stopped building `.deb` packages for Ubuntu 21.10 as it is no longer an active version of Ubuntu.
+
+Features
+--------
+
+- Improve error messages when media thumbnails cannot be served. ([\#13038](https://github.com/matrix-org/synapse/issues/13038))
+- Allow pagination from remote event after discovering it from [MSC3030](https://github.com/matrix-org/matrix-spec-proposals/pull/3030) `/timestamp_to_event`. ([\#13205](https://github.com/matrix-org/synapse/issues/13205))
+- Add a `room_type` field in the responses for the list room and room details admin APIs. Contributed by @andrewdoh. ([\#13208](https://github.com/matrix-org/synapse/issues/13208))
+- Add support for room version 10. ([\#13220](https://github.com/matrix-org/synapse/issues/13220))
+- Add per-room rate limiting for room joins. For each room, Synapse now monitors the rate of join events in that room, and throttles additional joins if that rate grows too large. ([\#13253](https://github.com/matrix-org/synapse/issues/13253), [\#13254](https://github.com/matrix-org/synapse/issues/13254), [\#13255](https://github.com/matrix-org/synapse/issues/13255), [\#13276](https://github.com/matrix-org/synapse/issues/13276))
+- Support Implicit TLS (TLS without using a STARTTLS upgrade, typically on port 465) for sending emails, enabled by the new option `force_tls`. Contributed by Jan Schär. ([\#13317](https://github.com/matrix-org/synapse/issues/13317))
+
+Bugfixes
+--------
+
+- Fix a bug introduced in Synapse 1.15.0 where adding a user through the Synapse Admin API with a phone number would fail if the `enable_email_notifs` and `email_notifs_for_new_users` options were enabled. Contributed by @thomasweston12. ([\#13263](https://github.com/matrix-org/synapse/issues/13263))
+- Fix a bug introduced in Synapse 1.40.0 where a user invited to a restricted room would be briefly unable to join. ([\#13270](https://github.com/matrix-org/synapse/issues/13270))
+- Fix a long-standing bug where, in rare instances, Synapse could store the incorrect state for a room after a state resolution. ([\#13278](https://github.com/matrix-org/synapse/issues/13278))
+- Fix a bug introduced in v1.18.0 where the `synapse_pushers` metric would overcount pushers when they are replaced. ([\#13296](https://github.com/matrix-org/synapse/issues/13296))
+- Disable autocorrection and autocapitalisation on the username text field shown during registration when using SSO. ([\#13350](https://github.com/matrix-org/synapse/issues/13350))
+- Update locked version of `frozendict` to 2.3.3, which has fixes for memory leaks affecting `/sync`. ([\#13284](https://github.com/matrix-org/synapse/issues/13284), [\#13352](https://github.com/matrix-org/synapse/issues/13352))
+
+Improved Documentation
+----------------------
+
+- Provide an example of using the Admin API. Contributed by @jejo86. ([\#13231](https://github.com/matrix-org/synapse/issues/13231))
+- Move the documentation for how URL previews work to the URL preview module. ([\#13233](https://github.com/matrix-org/synapse/issues/13233), [\#13261](https://github.com/matrix-org/synapse/issues/13261))
+- Add another `contrib` script to help set up worker processes. Contributed by @villepeh. ([\#13271](https://github.com/matrix-org/synapse/issues/13271))
+- Document that certain config options were added or changed in Synapse 1.62. Contributed by @behrmann. ([\#13314](https://github.com/matrix-org/synapse/issues/13314))
+- Document the new `rc_invites.per_issuer` throttling option added in Synapse 1.63. ([\#13333](https://github.com/matrix-org/synapse/issues/13333))
+- Mention that BuildKit is needed when building Docker images for tests. ([\#13338](https://github.com/matrix-org/synapse/issues/13338))
+- Improve Caddy reverse proxy documentation. ([\#13344](https://github.com/matrix-org/synapse/issues/13344))
+
+Deprecations and Removals
+-------------------------
+
+- Drop tables that were formerly used for groups/communities. ([\#12967](https://github.com/matrix-org/synapse/issues/12967))
+- Drop support for delegating email verification to an external server. ([\#13192](https://github.com/matrix-org/synapse/issues/13192))
+- Drop support for calling `/_matrix/client/v3/account/3pid/bind` without an `id_access_token`, which was not permitted by the spec. Contributed by @Vetchu. ([\#13239](https://github.com/matrix-org/synapse/issues/13239))
+- Stop building `.deb` packages for Ubuntu 21.10 (Impish Indri), which has reached end of life. ([\#13326](https://github.com/matrix-org/synapse/issues/13326))
+
+Internal Changes
+----------------
+
+- Use lower transaction isolation level when purging rooms to avoid serialization errors. Contributed by Nick @ Beeper. ([\#12942](https://github.com/matrix-org/synapse/issues/12942))
+- Remove code which incorrectly attempted to reconcile state with remote servers when processing incoming events. ([\#12943](https://github.com/matrix-org/synapse/issues/12943))
+- Make the AS login method call `Auth.get_user_by_req` for checking the AS token. ([\#13094](https://github.com/matrix-org/synapse/issues/13094))
+- Always use a version of canonicaljson that supports the C implementation of frozendict. ([\#13172](https://github.com/matrix-org/synapse/issues/13172))
+- Add prometheus counters for ephemeral events and to device messages pushed to app services. Contributed by Brad @ Beeper. ([\#13175](https://github.com/matrix-org/synapse/issues/13175))
+- Refactor receipts servlet logic to avoid duplicated code. ([\#13198](https://github.com/matrix-org/synapse/issues/13198))
+- Preparation for database schema simplifications: populate `state_key` and `rejection_reason` for existing rows in the `events` table. ([\#13215](https://github.com/matrix-org/synapse/issues/13215))
+- Remove unused database table `event_reference_hashes`. ([\#13218](https://github.com/matrix-org/synapse/issues/13218))
+- Further reduce queries used sending events when creating new rooms. Contributed by Nick @ Beeper (@fizzadar). ([\#13224](https://github.com/matrix-org/synapse/issues/13224))
+- Call the v2 identity service `/3pid/unbind` endpoint, rather than v1. Contributed by @Vetchu. ([\#13240](https://github.com/matrix-org/synapse/issues/13240))
+- Use an asynchronous cache wrapper for the get event cache. Contributed by Nick @ Beeper (@fizzadar). ([\#13242](https://github.com/matrix-org/synapse/issues/13242), [\#13308](https://github.com/matrix-org/synapse/issues/13308))
+- Optimise federation sender and appservice pusher event stream processing queries. Contributed by Nick @ Beeper (@fizzadar). ([\#13251](https://github.com/matrix-org/synapse/issues/13251))
+- Log the stack when waiting for an entire room to be un-partial stated. ([\#13257](https://github.com/matrix-org/synapse/issues/13257))
+- Fix spurious warning when fetching state after a missing prev event. ([\#13258](https://github.com/matrix-org/synapse/issues/13258))
+- Clean-up tests for notifications. ([\#13260](https://github.com/matrix-org/synapse/issues/13260))
+- Do not fail build if complement with workers fails. ([\#13266](https://github.com/matrix-org/synapse/issues/13266))
+- Don't pull out state in `compute_event_context` for unconflicted state. ([\#13267](https://github.com/matrix-org/synapse/issues/13267), [\#13274](https://github.com/matrix-org/synapse/issues/13274))
+- Reduce the rebuild time for the complement-synapse docker image. ([\#13279](https://github.com/matrix-org/synapse/issues/13279))
+- Don't pull out the full state when creating an event. ([\#13281](https://github.com/matrix-org/synapse/issues/13281), [\#13307](https://github.com/matrix-org/synapse/issues/13307))
+- Upgrade from Poetry 1.1.12 to 1.1.14, to fix bugs when locking packages. ([\#13285](https://github.com/matrix-org/synapse/issues/13285))
+- Make `DictionaryCache` expire full entries if they haven't been queried in a while, even if specific keys have been queried recently. ([\#13292](https://github.com/matrix-org/synapse/issues/13292))
+- Use `HTTPStatus` constants in place of literals in tests. ([\#13297](https://github.com/matrix-org/synapse/issues/13297))
+- Improve performance of query `_get_subset_users_in_room_with_profiles`. ([\#13299](https://github.com/matrix-org/synapse/issues/13299))
+- Up batch size of `bulk_get_push_rules` and `_get_joined_profiles_from_event_ids`. ([\#13300](https://github.com/matrix-org/synapse/issues/13300))
+- Remove unnecessary `json.dumps` from tests. ([\#13303](https://github.com/matrix-org/synapse/issues/13303))
+- Reduce memory usage of sending dummy events. ([\#13310](https://github.com/matrix-org/synapse/issues/13310))
+- Prevent formatting changes of [#3679](https://github.com/matrix-org/synapse/pull/3679) from appearing in `git blame`. ([\#13311](https://github.com/matrix-org/synapse/issues/13311))
+- Change `get_users_in_room` and `get_rooms_for_user` caches to enable pruning of old entries. ([\#13313](https://github.com/matrix-org/synapse/issues/13313))
+- Validate federation destinations and log an error if a destination is invalid. ([\#13318](https://github.com/matrix-org/synapse/issues/13318))
+- Fix `FederationClient.get_pdu()` returning events from the cache as `outliers` instead of original events we saw over federation. ([\#13320](https://github.com/matrix-org/synapse/issues/13320))
+- Reduce memory usage of state caches. ([\#13323](https://github.com/matrix-org/synapse/issues/13323))
+- Reduce the amount of state we store in the `state_cache`. ([\#13324](https://github.com/matrix-org/synapse/issues/13324))
+- Add missing type hints to open tracing module. ([\#13328](https://github.com/matrix-org/synapse/issues/13328), [\#13345](https://github.com/matrix-org/synapse/issues/13345), [\#13362](https://github.com/matrix-org/synapse/issues/13362))
+- Remove old base slaved store and de-duplicate cache ID generators. Contributed by Nick @ Beeper (@fizzadar). ([\#13329](https://github.com/matrix-org/synapse/issues/13329), [\#13349](https://github.com/matrix-org/synapse/issues/13349))
+- When reporting metrics is enabled, use ~8x less data to describe DB transaction metrics. ([\#13342](https://github.com/matrix-org/synapse/issues/13342))
+- Faster room joins: skip soft fail checks while Synapse only has partial room state, since the current membership of event senders may not be accurately known. ([\#13354](https://github.com/matrix-org/synapse/issues/13354))
+
+
+Synapse 1.63.1 (2022-07-20)
+===========================
+
+Bugfixes
+--------
+
+- Fix a bug introduced in Synapse 1.63.0 where push actions were incorrectly calculated for appservice users. This caused performance issues on servers with large numbers of appservices. ([\#13332](https://github.com/matrix-org/synapse/issues/13332))
+
+
+Synapse 1.63.0 (2022-07-19)
+===========================
+
+Improved Documentation
+----------------------
+
+- Clarify that homeserver server names are included in the reported data when the `report_stats` config option is enabled. ([\#13321](https://github.com/matrix-org/synapse/issues/13321))
+
+
 Synapse 1.63.0rc1 (2022-07-12)
 ==============================
@@ -6,7 +116,7 @@ Features
 
 - Add a rate limit for local users sending invites. ([\#13125](https://github.com/matrix-org/synapse/issues/13125))
 - Implement [MSC3827](https://github.com/matrix-org/matrix-spec-proposals/pull/3827): Filtering of `/publicRooms` by room type. ([\#13031](https://github.com/matrix-org/synapse/issues/13031))
-- Improve validation logic in Synapse's REST endpoints. ([\#13148](https://github.com/matrix-org/synapse/issues/13148))
+- Improve validation logic in the account data REST endpoints. ([\#13148](https://github.com/matrix-org/synapse/issues/13148))
 
 Bugfixes
@@ -34,7 +144,7 @@ Improved Documentation
 
 - Add an explanation of the `--report-stats` argument to the docs. ([\#13029](https://github.com/matrix-org/synapse/issues/13029))
 - Add a helpful example bash script to the contrib directory for creating multiple worker configuration files of the same type. Contributed by @villepeh. ([\#13032](https://github.com/matrix-org/synapse/issues/13032))
 - Add missing links to config options. ([\#13166](https://github.com/matrix-org/synapse/issues/13166))
-- Add documentation for anonymised homeserver statistics collection. ([\#13086](https://github.com/matrix-org/synapse/issues/13086))
+- Add documentation for homeserver usage statistics collection. ([\#13086](https://github.com/matrix-org/synapse/issues/13086))
 - Add documentation for the existing `databases` option in the homeserver configuration manual. ([\#13212](https://github.com/matrix-org/synapse/issues/13212))
 - Clean up references to sample configuration and redirect users to the configuration manual instead. ([\#13077](https://github.com/matrix-org/synapse/issues/13077), [\#13139](https://github.com/matrix-org/synapse/issues/13139))
 - Document how the Synapse team does reviews. ([\#13132](https://github.com/matrix-org/synapse/issues/13132))
@@ -73,7 +183,6 @@ Internal Changes
 
 - More aggressively rotate push actions. ([\#13211](https://github.com/matrix-org/synapse/issues/13211))
 - Add `max_line_length` setting for Python files to the `.editorconfig`. Contributed by @sumnerevans @ Beeper. ([\#13228](https://github.com/matrix-org/synapse/issues/13228))
-
 
 Synapse 1.62.0 (2022-07-05)
 ===========================
@@ -81,7 +190,6 @@ No significant changes since 1.62.0rc3.
 
 Authors of spam-checker plugins should consult the [upgrade notes](https://github.com/matrix-org/synapse/blob/release-v1.62/docs/upgrade.md#upgrading-to-v1620) to learn about the enriched signatures for spam checker callbacks, which are supported with this release of Synapse.
-
 
 Synapse 1.62.0rc3 (2022-07-04)
 ==============================
@@ -121,7 +229,7 @@ Bugfixes
 
 - Update [MSC3786](https://github.com/matrix-org/matrix-spec-proposals/pull/3786) implementation to check `state_key`. ([\#12939](https://github.com/matrix-org/synapse/issues/12939))
 - Fix a bug introduced in Synapse 1.58 where Synapse would not report full version information when installed from a git checkout. This is a best-effort affair and not guaranteed to be stable. ([\#12973](https://github.com/matrix-org/synapse/issues/12973))
 - Fix a bug introduced in Synapse 1.60 where Synapse would fail to start if the `sqlite3` module was not available. ([\#12979](https://github.com/matrix-org/synapse/issues/12979))
 - Fix a bug where non-standard information was required when requesting the `/hierarchy` API over federation. Introduced
   in Synapse v1.41.0. ([\#12991](https://github.com/matrix-org/synapse/issues/12991))
 - Fix a long-standing bug which meant that rate limiting was not restrictive enough in some cases. ([\#13018](https://github.com/matrix-org/synapse/issues/13018))
 - Fix a bug introduced in Synapse 1.58 where profile requests for a malformed user ID would cause an internal error. Synapse now returns 400 Bad Request in this situation. ([\#13041](https://github.com/matrix-org/synapse/issues/13041))

View file

@@ -1,4 +1,4 @@
-# Creating multiple workers with a bash script
+# Creating multiple generic workers with a bash script
 
 Setting up multiple worker configuration files manually can be time-consuming.
 You can alternatively create multiple worker configuration files with a simple `bash` script. For example:

View file

@@ -0,0 +1,145 @@
# Creating multiple stream writers with a bash script

This script creates multiple [stream writer](https://github.com/matrix-org/synapse/blob/develop/docs/workers.md#stream-writers) workers.

Stream writers require both replication and HTTP listeners.

It also prints out the example lines for the Synapse main configuration file.

Remember to route the necessary endpoints directly to the worker associated with each stream.

If you run the script as-is, it will create workers with the replication listener starting from port 8034 and another, regular HTTP listener starting from port 8044. If you don't need all of the stream writers listed in the script, just remove them from the ```STREAM_WRITERS``` array.
```sh
#!/bin/bash

# Start with these replication and http ports.
# The script loop starts with the exact port and then increments it by one.
REP_START_PORT=8034
HTTP_START_PORT=8044

# Stream writer workers to generate. Feel free to add or remove them as you wish.
# Event persister ("events") isn't included here as it does not require its
# own HTTP listener.
STREAM_WRITERS+=( "presence" "typing" "receipts" "to_device" "account_data" )

NUM_WRITERS=$(expr ${#STREAM_WRITERS[@]})

i=0

while [ $i -lt "$NUM_WRITERS" ]
do
cat << EOF > ${STREAM_WRITERS[$i]}_stream_writer.yaml
worker_app: synapse.app.generic_worker
worker_name: ${STREAM_WRITERS[$i]}_stream_writer

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: $(expr $REP_START_PORT + $i)
    resources:
      - names: [replication]

  - type: http
    port: $(expr $HTTP_START_PORT + $i)
    resources:
      - names: [client]

worker_log_config: /etc/matrix-synapse/stream-writer-log.yaml
EOF
HOMESERVER_YAML_INSTANCE_MAP+=$"  ${STREAM_WRITERS[$i]}_stream_writer:
    host: 127.0.0.1
    port: $(expr $REP_START_PORT + $i)
"
HOMESERVER_YAML_STREAM_WRITERS+=$"  ${STREAM_WRITERS[$i]}: ${STREAM_WRITERS[$i]}_stream_writer
"

((i++))
done

cat << EXAMPLECONFIG
# Add these lines to your homeserver.yaml.
# Don't forget to configure your reverse proxy and
# necessary endpoints to their respective worker.

# See https://github.com/matrix-org/synapse/blob/develop/docs/workers.md
# for more information.

# Remember: Under NO circumstances should the replication
# listener be exposed to the public internet;
# it has no authentication and is unencrypted.

instance_map:
$HOMESERVER_YAML_INSTANCE_MAP
stream_writers:
$HOMESERVER_YAML_STREAM_WRITERS
EXAMPLECONFIG
```
Copy the code above and save it to a file called ```create_stream_writers.sh``` (for example).

Make the script executable by running ```chmod +x create_stream_writers.sh```.

## Run the script to create workers and print out a sample configuration

Simply run the script to create YAML files in the current folder and print out the required configuration for ```homeserver.yaml```.
```console
$ ./create_stream_writers.sh

# Add these lines to your homeserver.yaml.
# Don't forget to configure your reverse proxy and
# necessary endpoints to their respective worker.

# See https://github.com/matrix-org/synapse/blob/develop/docs/workers.md
# for more information.

# Remember: Under NO circumstances should the replication
# listener be exposed to the public internet;
# it has no authentication and is unencrypted.

instance_map:
  presence_stream_writer:
    host: 127.0.0.1
    port: 8034
  typing_stream_writer:
    host: 127.0.0.1
    port: 8035
  receipts_stream_writer:
    host: 127.0.0.1
    port: 8036
  to_device_stream_writer:
    host: 127.0.0.1
    port: 8037
  account_data_stream_writer:
    host: 127.0.0.1
    port: 8038

stream_writers:
  presence: presence_stream_writer
  typing: typing_stream_writer
  receipts: receipts_stream_writer
  to_device: to_device_stream_writer
  account_data: account_data_stream_writer
```
Simply copy-and-paste the output to an appropriate place in your Synapse main configuration file.
## Write directly to the Synapse configuration file

You could also write the output directly to your homeserver's main configuration file. **This, however, is not recommended** as even a small typo (such as replacing >> with >) can erase the entire ```homeserver.yaml```.

If you do this, back up your original configuration file first:
```console
# Back up homeserver.yaml first
cp /etc/matrix-synapse/homeserver.yaml /etc/matrix-synapse/homeserver.yaml.bak
# Create workers and write output to your homeserver.yaml
./create_stream_writers.sh >> /etc/matrix-synapse/homeserver.yaml
```

debian/changelog
View file

@@ -1,3 +1,23 @@
+matrix-synapse-py3 (1.64.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.64.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 26 Jul 2022 12:11:49 +0100
+
+matrix-synapse-py3 (1.63.1) stable; urgency=medium
+
+  * New Synapse release 1.63.1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 20 Jul 2022 13:36:52 +0100
+
+matrix-synapse-py3 (1.63.0) stable; urgency=medium
+
+  * Clarify that homeserver server names are included in the data reported
+    by opt-in server stats reporting (`report_stats` homeserver config option).
+  * New Synapse release 1.63.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 19 Jul 2022 14:42:24 +0200
+
 matrix-synapse-py3 (1.63.0~rc1) stable; urgency=medium
 
   * New Synapse release 1.63.0rc1.

View file

@@ -31,7 +31,7 @@ EOF
 # This file is autogenerated, and will be recreated on upgrade if it is deleted.
 # Any changes you make will be preserved.
 
-# Whether to report anonymized homeserver usage statistics.
+# Whether to report homeserver usage statistics.
 report_stats: false
 EOF
 fi

View file

@@ -37,7 +37,7 @@ msgstr ""
 #. Type: boolean
 #. Description
 #: ../templates:2001
-msgid "Report anonymous statistics?"
+msgid "Report homeserver usage statistics?"
 msgstr ""
 
 #. Type: boolean
@@ -45,11 +45,11 @@ msgstr ""
 #: ../templates:2001
 msgid ""
 "Developers of Matrix and Synapse really appreciate helping the project out "
-"by reporting anonymized usage statistics from this homeserver. Only very "
-"basic aggregate data (e.g. number of users) will be reported, but it helps "
-"track the growth of the Matrix community, and helps in making Matrix a "
-"success, as well as to convince other networks that they should peer with "
-"Matrix."
+"by reporting homeserver usage statistics from this homeserver. Your "
+"homeserver's server name, along with very basic aggregate data (e.g. "
+"number of users) will be reported. But it helps track the growth of the "
+"Matrix community, and helps in making Matrix a success, as well as to "
+"convince other networks that they should peer with Matrix."
 msgstr ""
 
 #. Type: boolean

debian/templates
View file

@@ -10,12 +10,13 @@ _Description: Name of the server:
 Template: matrix-synapse/report-stats
 Type: boolean
 Default: false
-_Description: Report anonymous statistics?
+_Description: Report homeserver usage statistics?
  Developers of Matrix and Synapse really appreciate helping the
- project out by reporting anonymized usage statistics from this
- homeserver. Only very basic aggregate data (e.g. number of users)
- will be reported, but it helps track the growth of the Matrix
- community, and helps in making Matrix a success, as well as to
- convince other networks that they should peer with Matrix.
+ project out by reporting homeserver usage statistics from this
+ homeserver. Your homeserver's server name, along with very basic
+ aggregate data (e.g. number of users) will be reported. But it
+ helps track the growth of the Matrix community, and helps in
+ making Matrix a success, as well as to convince other networks
+ that they should peer with Matrix.
 .
 Thank you.

View file

@@ -45,7 +45,7 @@ RUN \
 
 # We install poetry in its own build stage to avoid its dependencies conflicting with
 # synapse's dependencies.
-# We use a specific commit from poetry's master branch instead of our usual 1.1.12,
+# We use a specific commit from poetry's master branch instead of our usual 1.1.14,
 # to incorporate fixes to some bugs in `poetry export`. This commit corresponds to
 # https://github.com/python-poetry/poetry/pull/5156 and
 # https://github.com/python-poetry/poetry/issues/5141 ;

View file

@@ -1,3 +1,4 @@
+# syntax=docker/dockerfile:1
 # Inherit from the official Synapse docker image
 ARG SYNAPSE_VERSION=latest
 FROM matrixdotorg/synapse:$SYNAPSE_VERSION

View file

@@ -22,6 +22,10 @@ Consult the [contributing guide][guideComplementSh] for instructions on how to u
 Under some circumstances, you may wish to build the images manually.
 The instructions below will lead you to doing that.
 
+Note that these images can only be built using [BuildKit](https://docs.docker.com/develop/develop-images/build_enhancements/),
+therefore BuildKit needs to be enabled when calling `docker build`. This can be done by
+setting `DOCKER_BUILDKIT=1` in your environment.
+
 Start by building the base Synapse docker image. If you wish to run tests with the latest
 release of Synapse, instead of your current checkout, you can skip this step. From the
 root of the repository:
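As a hedged illustration of the BuildKit note above (the image tag and Dockerfile path are assumptions based on this README's build instructions):

```sh
# Enable BuildKit for this invocation and build the base Synapse image
# from the root of the synapse checkout.
DOCKER_BUILDKIT=1 docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```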

View file

@@ -1,45 +1,62 @@
+# syntax=docker/dockerfile:1
 # This dockerfile builds on top of 'docker/Dockerfile-workers' in matrix-org/synapse
 # by including a built-in postgres instance, as well as setting up the homeserver so
 # that it is ready for testing via Complement.
 #
 # Instructions for building this image from those it depends on is detailed in this guide:
 # https://github.com/matrix-org/synapse/blob/develop/docker/README-testing.md#testing-with-postgresql-and-single-or-multi-process-synapse
 
 ARG SYNAPSE_VERSION=latest
 
+# first of all, we create a base image with a postgres server and database,
+# which we can copy into the target image. For repeated rebuilds, this is
+# much faster than apt installing postgres each time.
+#
+# This trick only works because (a) the Synapse image happens to have all the
+# shared libraries that postgres wants, (b) we use a postgres image based on
+# the same debian version as Synapse's docker image (so the versions of the
+# shared libraries match).
+FROM postgres:13-bullseye AS postgres_base
+
+# initialise the database cluster in /var/lib/postgresql
+RUN gosu postgres initdb --locale=C --encoding=UTF-8 --auth-host password
+
+# Configure a password and create a database for Synapse
+RUN echo "ALTER USER postgres PASSWORD 'somesecret'" | gosu postgres postgres --single
+RUN echo "CREATE DATABASE synapse" | gosu postgres postgres --single
+
+# now build the final image, based on the Synapse image.
 FROM matrixdotorg/synapse-workers:$SYNAPSE_VERSION
 
-# Install postgresql
-RUN apt-get update && \
-    DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -yqq postgresql-13
-
-# Configure a user and create a database for Synapse
-RUN pg_ctlcluster 13 main start && su postgres -c "echo \
-    \"ALTER USER postgres PASSWORD 'somesecret'; \
-    CREATE DATABASE synapse \
-    ENCODING 'UTF8' \
-    LC_COLLATE='C' \
-    LC_CTYPE='C' \
-    template=template0;\" | psql" && pg_ctlcluster 13 main stop
+# copy the postgres installation over from the image we built above
+RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
+COPY --from=postgres_base /var/lib/postgresql /var/lib/postgresql
+COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql
+COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql
+RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql
+ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
+ENV PGDATA=/var/lib/postgresql/data
 
 # Extend the shared homeserver config to disable rate-limiting,
 # set Complement's static shared secret, enable registration, amongst other
 # tweaks to get Synapse ready for testing.
 # To do this, we copy the old template out of the way and then include it
 # with Jinja2.
 RUN mv /conf/shared.yaml.j2 /conf/shared-orig.yaml.j2
 COPY conf/workers-shared-extra.yaml.j2 /conf/shared.yaml.j2
 
 WORKDIR /data
 
 COPY conf/postgres.supervisord.conf /etc/supervisor/conf.d/postgres.conf
 
 # Copy the entrypoint
 COPY conf/start_for_complement.sh /
 
 # Expose nginx's listener ports
 EXPOSE 8008 8448
 
 ENTRYPOINT ["/start_for_complement.sh"]
 
 # Update the healthcheck to have a shorter check interval
 HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \
     CMD /bin/sh /healthcheck.sh

View file

@@ -1,5 +1,5 @@
 [program:postgres]
-command=/usr/local/bin/prefix-log /usr/bin/pg_ctlcluster 13 main start --foreground
+command=/usr/local/bin/prefix-log gosu postgres postgres
 
 # Only start if START_POSTGRES=1
 autostart=%(ENV_START_POSTGRES)s

View file

@@ -67,6 +67,10 @@ rc_joins:
   per_second: 9999
   burst_count: 9999
 
+rc_joins_per_room:
+  per_second: 9999
+  burst_count: 9999
+
 rc_3pid_validation:
   per_second: 1000
   burst_count: 1000
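The Complement test config above sets the new `rc_joins_per_room` limits high enough to effectively disable them during tests. For a real deployment, a hedged sketch of the same option in `homeserver.yaml` (the values are illustrative, not recommendations):

```yaml
# Throttle join events in each room once the rate of joins grows too large.
rc_joins_per_room:
  per_second: 1
  burst_count: 10
```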

View file

@@ -35,7 +35,6 @@
   - [Application Services](application_services.md)
   - [Server Notices](server_notices.md)
   - [Consent Tracking](consent_tracking.md)
-  - [URL Previews](development/url_previews.md)
   - [User Directory](user_directory.md)
   - [Message Retention Policies](message_retention_policies.md)
   - [Pluggable Modules](modules/index.md)
@@ -69,7 +68,7 @@
   - [Federation](usage/administration/admin_api/federation.md)
   - [Manhole](manhole.md)
   - [Monitoring](metrics-howto.md)
-  - [Reporting Anonymised Statistics](usage/administration/monitoring/reporting_anonymised_statistics.md)
+  - [Reporting Homeserver Usage Statistics](usage/administration/monitoring/reporting_homeserver_usage_statistics.md)
   - [Understanding Synapse Through Grafana Graphs](usage/administration/understanding_synapse_through_grafana_graphs.md)
   - [Useful SQL for Admins](usage/administration/useful_sql_for_admins.md)
   - [Database Maintenance Tools](usage/administration/database_maintenance_tools.md)

View file

@@ -59,6 +59,7 @@ The following fields are possible in the JSON response body:
 - `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
 - `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
 - `state_events` - Total number of state_events of a room. Complexity of the room.
+- `room_type` - The type of the room taken from the room's creation event; for example "m.space" if the room is a space. If the room does not define a type, the value will be `null`.
 * `offset` - The current pagination offset in rooms. This parameter should be
   used instead of `next_token` for room offset as `next_token` is
   not intended to be parsed.
@@ -101,7 +102,8 @@ A response body like the following is returned:
       "join_rules": "invite",
       "guest_access": null,
       "history_visibility": "shared",
-      "state_events": 93534
+      "state_events": 93534,
+      "room_type": "m.space"
     },
     ... (8 hidden items) ...
     {
@@ -118,7 +120,8 @@ A response body like the following is returned:
       "join_rules": "invite",
       "guest_access": null,
       "history_visibility": "shared",
-      "state_events": 8345
+      "state_events": 8345,
+      "room_type": null
     }
   ],
   "offset": 0,
@@ -151,7 +154,8 @@ A response body like the following is returned:
       "join_rules": "invite",
       "guest_access": null,
       "history_visibility": "shared",
-      "state_events": 8
+      "state_events": 8,
+      "room_type": null
     }
   ],
   "offset": 0,
@@ -184,7 +188,8 @@ A response body like the following is returned:
       "join_rules": "invite",
       "guest_access": null,
       "history_visibility": "shared",
-      "state_events": 93534
+      "state_events": 93534,
+      "room_type": null
     },
     ... (98 hidden items) ...
     {
@@ -201,7 +206,8 @@ A response body like the following is returned:
       "join_rules": "invite",
       "guest_access": null,
       "history_visibility": "shared",
-      "state_events": 8345
+      "state_events": 8345,
+      "room_type": "m.space"
     }
   ],
   "offset": 0,
@@ -238,7 +244,9 @@ A response body like the following is returned:
       "join_rules": "invite",
       "guest_access": null,
       "history_visibility": "shared",
-      "state_events": 93534
+      "state_events": 93534,
+      "room_type": "m.space"
     },
     ... (48 hidden items) ...
     {
@@ -255,7 +263,9 @@ A response body like the following is returned:
       "join_rules": "invite",
       "guest_access": null,
       "history_visibility": "shared",
-      "state_events": 8345
+      "state_events": 8345,
+      "room_type": null
     }
   ],
   "offset": 100,
@@ -290,6 +300,8 @@ The following fields are possible in the JSON response body:
 * `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
 * `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
 * `state_events` - Total number of state_events of a room. Complexity of the room.
+* `room_type` - The type of the room taken from the room's creation event; for example "m.space" if the room is a space.
+  If the room does not define a type, the value will be `null`.
 
 The API is:
@@ -317,7 +329,8 @@ A response body like the following is returned:
   "join_rules": "invite",
   "guest_access": null,
   "history_visibility": "shared",
-  "state_events": 93534
+  "state_events": 93534,
+  "room_type": "m.space"
 }
 ```
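Combining the new `room_type` field with the request style described in the Admin API docs, a hedged sketch of fetching room details (host, port, token, and room ID are placeholders):

```sh
# Fetch the Room Details response shown above, including room_type.
curl --header "Authorization: Bearer <access_token>" \
    http://127.0.0.1:8008/_synapse/admin/v1/rooms/<room_id>
```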

View file

@@ -544,7 +544,7 @@ Gets a list of all local media that a specific `user_id` has created.
 These are media that the user has uploaded themselves
 ([local media](../media_repository.md#local-media)), as well as
 [URL preview images](../media_repository.md#url-previews) requested by the user if the
-[feature is enabled](../development/url_previews.md).
+[feature is enabled](../usage/configuration/config_documentation.md#url_preview_enabled).
 
 By default, the response is ordered by descending creation date and ascending media ID.
 The newest media is on top. You can change the order with parameters

View file

@@ -237,3 +237,28 @@ poetry run pip install build && poetry run python -m build
 because [`build`](https://github.com/pypa/build) is a standardish tool which
 doesn't require poetry. (It's what we use in CI too). However, you could try
 `poetry build` too.
+
+# Troubleshooting
+
+## Check the version of poetry with `poetry --version`.
+
+At the time of writing, the 1.2 series is beta only. We have seen some examples
+where the lockfiles generated by 1.2 prereleases aren't interpreted correctly
+by poetry 1.1.x. For now, use poetry 1.1.14, which includes a critical
+[change](https://github.com/python-poetry/poetry/pull/5973) needed to remain
+[compatible with PyPI](https://github.com/pypi/warehouse/pull/11775).
+
+It can also be useful to check the version of `poetry-core` in use. If you've
+installed `poetry` with `pipx`, try `pipx runpip poetry list | grep poetry-core`.
+
+## Clear caches: `poetry cache clear --all pypi`.
+
+Poetry caches a bunch of information about packages that isn't readily available
+from PyPI. (This is what makes poetry seem slow when doing the first
+`poetry install`.) Try `poetry cache list` and `poetry cache clear --all
+<name of cache>` to see if that fixes things.
+
+## Try `--verbose` or `--dry-run` arguments.
+
+These are sometimes useful for seeing what poetry's internal logic is.
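Taken together, the checks above amount to a short diagnostic session; a sketch (the version numbers in the output are illustrative):

```console
$ poetry --version
Poetry version 1.1.14
$ pipx runpip poetry list | grep poetry-core
poetry-core        1.0.8
$ poetry cache list
pypi
$ poetry cache clear --all pypi
```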

View file

@@ -1,61 +0,0 @@
URL Previews
============

The `GET /_matrix/media/r0/preview_url` endpoint provides a generic preview API
for URLs which outputs [Open Graph](https://ogp.me/) responses (with some Matrix
specific additions).

This does have trade-offs compared to other designs:

* Pros:
  * Simple and flexible; can be used by any clients at any point
* Cons:
  * If each homeserver provides one of these independently, all the HSes in a
    room may needlessly DoS the target URI
  * The URL metadata must be stored somewhere, rather than just using Matrix
    itself to store the media.
  * Matrix cannot be used to distribute the metadata between homeservers.

When Synapse is asked to preview a URL it does the following:

1. Checks against a URL blacklist (defined as `url_preview_url_blacklist` in the
   config).
2. Checks the in-memory cache by URLs and returns the result if it exists. (This
   is also used to de-duplicate processing of multiple in-flight requests at once.)
3. Kicks off a background process to generate a preview:
   1. Checks the database cache by URL and timestamp and returns the result if it
      has not expired and was successful (a 2xx return code).
   2. Checks if the URL matches an [oEmbed](https://oembed.com/) pattern. If it
      does, update the URL to download.
   3. Downloads the URL and stores it into a file via the media storage provider
      and saves the local media metadata.
   4. If the media is an image:
      1. Generates thumbnails.
      2. Generates an Open Graph response based on image properties.
   5. If the media is HTML:
      1. Decodes the HTML via the stored file.
      2. Generates an Open Graph response from the HTML.
      3. If a JSON oEmbed URL was found in the HTML via autodiscovery:
         1. Downloads the URL and stores it into a file via the media storage provider
            and saves the local media metadata.
         2. Convert the oEmbed response to an Open Graph response.
         3. Override any Open Graph data from the HTML with data from oEmbed.
      4. If an image exists in the Open Graph response:
         1. Downloads the URL and stores it into a file via the media storage
            provider and saves the local media metadata.
         2. Generates thumbnails.
         3. Updates the Open Graph response based on image properties.
   6. If the media is JSON and an oEmbed URL was found:
      1. Convert the oEmbed response to an Open Graph response.
      2. If a thumbnail or image is in the oEmbed response:
         1. Downloads the URL and stores it into a file via the media storage
            provider and saves the local media metadata.
         2. Generates thumbnails.
         3. Updates the Open Graph response based on image properties.
   7. Stores the result in the database cache.
4. Returns the result.

The in-memory cache expires after 1 hour.

Expired entries in the database cache (and their associated media files) are
deleted every 10 seconds. The default expiration time is 1 hour from download.

View file

@@ -7,8 +7,7 @@ The media repository
   users.
 * caches avatars, attachments and their thumbnails for media uploaded by remote
   users.
-* caches resources and thumbnails used for
-  [URL previews](development/url_previews.md).
+* caches resources and thumbnails used for URL previews.
 
 All media in Matrix can be identified by a unique
 [MXC URI](https://spec.matrix.org/latest/client-server-api/#matrix-content-mxc-uris),
@@ -59,8 +58,6 @@ remote_thumbnail/matrix.org/aa/bb/cccccccccccccccccccc/128-96-image-jpeg
 Note that `remote_thumbnail/` does not have an `s`.
 
 ## URL Previews
-See [URL Previews](development/url_previews.md) for documentation on the URL preview
-process.
 
 When generating previews for URLs, Synapse may download and cache various
 resources, including images. These resources are assigned temporary media IDs

View file

@@ -79,63 +79,32 @@ server {
 }
 ```
 
-### Caddy v1
-
-```
-matrix.example.com {
-  proxy /_matrix http://localhost:8008 {
-    transparent
-  }
-
-  proxy /_synapse/client http://localhost:8008 {
-    transparent
-  }
-}
-
-example.com:8448 {
-  proxy / http://localhost:8008 {
-    transparent
-  }
-}
-```
-
 ### Caddy v2
 
 ```
 matrix.example.com {
-  reverse_proxy /_matrix/* http://localhost:8008
-  reverse_proxy /_synapse/client/* http://localhost:8008
+  reverse_proxy /_matrix/* localhost:8008
+  reverse_proxy /_synapse/client/* localhost:8008
 }
 
 example.com:8448 {
-  reverse_proxy http://localhost:8008
+  reverse_proxy localhost:8008
 }
 ```
 
 [Delegation](delegate.md) example:
+
 ```
-(matrix-well-known-header) {
-    # Headers
-    header Access-Control-Allow-Origin "*"
-    header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS"
-    header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept, Authorization"
-    header Content-Type "application/json"
-}
-
 example.com {
-    handle /.well-known/matrix/server {
-        import matrix-well-known-header
-        respond `{"m.server":"matrix.example.com:443"}`
-    }
-
-    handle /.well-known/matrix/client {
-        import matrix-well-known-header
-        respond `{"m.homeserver":{"base_url":"https://matrix.example.com"},"m.identity_server":{"base_url":"https://identity.example.com"}}`
-    }
+    header /.well-known/matrix/* Content-Type application/json
+    header /.well-known/matrix/* Access-Control-Allow-Origin *
+    respond /.well-known/matrix/server `{"m.server": "matrix.example.com:443"}`
+    respond /.well-known/matrix/client `{"m.homeserver":{"base_url":"https://matrix.example.com"},"m.identity_server":{"base_url":"https://identity.example.com"}}`
 }
 
 matrix.example.com {
-    reverse_proxy /_matrix/* http://localhost:8008
-    reverse_proxy /_synapse/client/* http://localhost:8008
+    reverse_proxy /_matrix/* localhost:8008
+    reverse_proxy /_synapse/client/* localhost:8008
 }
 ```
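One way to sanity-check the delegation responses above, using the documentation's placeholder hostnames (substitute your own domain):

```console
$ curl https://example.com/.well-known/matrix/server
{"m.server": "matrix.example.com:443"}
$ curl https://example.com/.well-known/matrix/client
{"m.homeserver":{"base_url":"https://matrix.example.com"},"m.identity_server":{"base_url":"https://identity.example.com"}}
```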

View file

@@ -89,6 +89,40 @@ process, for example:
     dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
 ```
 
+# Upgrading to v1.64.0
+
+## Delegation of email validation no longer supported
+
+As of this version, Synapse no longer allows the tasks of verifying email address
+ownership, and password reset confirmation, to be delegated to an identity server.
+
+To continue to allow users to add email addresses to their homeserver accounts,
+and perform password resets, make sure that Synapse is configured with a
+working email server in the `email` configuration section (including, at a
+minimum, a `notif_from` setting.)
+
+Specifying an `email` setting under `account_threepid_delegates` will now cause
+an error at startup.
+
+## Changes to the event replication streams
+
+Synapse now includes a flag indicating if an event is an outlier when
+replicating it to other workers. This is a forwards- and backwards-incompatible
+change: v1.63 workers cannot process events replicated by v1.64 workers, and
+vice versa.
+
+Once all workers are upgraded to v1.64 (or downgraded to v1.63), event
+replication will resume as normal.
+
+## frozendict release
+
+[frozendict 2.3.3](https://github.com/Marco-Sulla/python-frozendict/releases/tag/v2.3.3)
+has recently been released, which fixes a memory leak that occurs during `/sync`
+requests. We advise server administrators who installed Synapse via pip to upgrade
+frozendict with `pip install --upgrade frozendict`. The Docker image
+`matrixdotorg/synapse` and the Debian packages from `packages.matrix.org` already
+include the updated library.
+
 # Upgrading to v1.62.0
 
 ## New signatures for spam checker callbacks
View file

@@ -18,6 +18,11 @@ already on your `$PATH` depending on how Synapse was installed.
 Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
 
 ## Making an Admin API request
+
+For security reasons, we [recommend](reverse_proxy.md#synapse-administration-endpoints)
+that the Admin API (`/_synapse/admin/...`) should be hidden from public view using a
+reverse proxy. This means you should typically query the Admin API from a terminal on
+the machine which runs Synapse.
+
 Once you have your `access_token`, you will need to authenticate each request to an Admin API endpoint by
 providing the token as either a query parameter or a request header. To add it as a request header in cURL:
 
@@ -25,5 +30,17 @@ providing the token as either a query parameter or a request header. To add it a
 curl --header "Authorization: Bearer <access_token>" <the_rest_of_your_API_request>
 ```
 
+For example, suppose we want to
+[query the account](user_admin_api.md#query-user-account) of the user
+`@foo:bar.com`. We need an admin access token (e.g.
+`syt_AjfVef2_L33JNpafeif_0feKJfeaf0CQpoZk`), and we need to know which port
+Synapse's [`client` listener](config_documentation.md#listeners) is listening
+on (e.g. `8008`). Then we can use the following command to request the account
+information from the Admin API.
+
+```sh
+curl --header "Authorization: Bearer syt_AjfVef2_L33JNpafeif_0feKJfeaf0CQpoZk" -X GET http://127.0.0.1:8008/_synapse/admin/v2/users/@foo:bar.com
+```
+
 For more details on access tokens in Matrix, please refer to the complete
 [matrix spec documentation](https://matrix.org/docs/spec/client_server/r0.6.1#using-access-tokens).

View file

@ -1,11 +1,11 @@
-# Reporting Anonymised Statistics
# Reporting Homeserver Usage Statistics

When generating your Synapse configuration file, you are asked whether you
-would like to report anonymised statistics to Matrix.org. These statistics
would like to report usage statistics to Matrix.org. These statistics
provide the foundation a glimpse into the number of Synapse homeservers
participating in the network, as well as statistics such as the number of
rooms being created and messages being sent. This feature is sometimes
-affectionately called "phone-home" stats. Reporting
affectionately called "phone home" stats. Reporting
[is optional](../../configuration/config_documentation.md#report_stats)
and the reporting endpoint
[can be configured](../../configuration/config_documentation.md#report_stats_endpoint),
@ -21,9 +21,9 @@ The following statistics are sent to the configured reporting endpoint:
| Statistic Name             | Type   | Description |
|----------------------------|--------|-------------|
| `homeserver`               | string | The homeserver's server name. |
| `memory_rss`               | int    | The memory usage of the process (in kilobytes on Unix-based systems, bytes on MacOS). |
| `cpu_average`              | int    | CPU time in % of a single core (not % of all cores). |
-| `homeserver`               | string | The homeserver's server name. |
| `server_context`           | string | An arbitrary string used to group statistics from a set of homeservers. |
| `timestamp`                | int    | The current time, represented as the number of seconds since the epoch. |
| `uptime_seconds`           | int    | The number of seconds since the homeserver was last started. |

View file

@ -239,6 +239,8 @@ If this option is provided, it parses the given yaml to json and
serves it on `/.well-known/matrix/client` endpoint
alongside the standard properties.

*Added in Synapse 1.62.0.*

Example configuration:
```yaml
extra_well_known_client_content :
@ -1155,6 +1157,9 @@ Caching can be configured through the following sub-options:
  with intermittent connections, at the cost of higher memory usage.
  A value of zero means that sync responses are not cached.
  Defaults to 2m.

  *Changed in Synapse 1.62.0*: The default was changed from 0 to 2m.

* `cache_autotuning` and its sub-options `max_cache_memory_usage`, `target_cache_memory_usage`, and
  `min_cache_ttl` work in conjunction with each other to maintain a balance between cache memory
  usage and cache entry availability. You must be using [jemalloc](https://github.com/matrix-org/synapse#help-synapse-is-slow-and-eats-all-my-ramcpu)
@ -1471,6 +1476,25 @@ rc_joins:
    per_second: 0.03
    burst_count: 12
```
---
### `rc_joins_per_room`
This option allows admins to ratelimit joins to a room based on the number of recent
joins (local or remote) to that room. It is intended to mitigate mass-join spam
waves which target multiple homeservers.
By default, one join is permitted to a room every second, with an accumulating
buffer of up to ten instantaneous joins.
Example configuration (default values):
```yaml
rc_joins_per_room:
per_second: 1
burst_count: 10
```
_Added in Synapse 1.64.0._
---
### `rc_3pid_validation`
@ -1504,6 +1528,10 @@ The `rc_invites.per_user` limit applies to the *receiver* of the invite, rather
sender, meaning that a `rc_invite.per_user.burst_count` of 5 mandates that a single user
cannot *receive* more than a burst of 5 invites at a time.
In contrast, the `rc_invites.per_issuer` limit applies to the *issuer* of the invite, meaning that a `rc_invite.per_issuer.burst_count` of 5 mandates that single user cannot *send* more than a burst of 5 invites at a time.
_Changed in version 1.63:_ added the `per_issuer` limit.
Example configuration:
```yaml
rc_invites:
@ -1513,7 +1541,11 @@ rc_invites:
  per_user:
    per_second: 0.004
    burst_count: 3
  per_issuer:
    per_second: 0.5
    burst_count: 5
```
---
### `rc_third_party_invite`
@ -2168,30 +2200,26 @@ default_identity_server: https://matrix.org
---
### `account_threepid_delegates`
-Handle threepid (email/phone etc) registration and password resets through a set of
-*trusted* identity servers. Note that this allows the configured identity server to
-reset passwords for accounts!
Delegate verification of phone numbers to an identity server.

When a user wishes to add a phone number to their account, we need to verify that they
actually own that phone number, which requires sending them a text message (SMS).
Currently Synapse does not support sending those texts itself and instead delegates the
task to an identity server. The base URI for the identity server to be used is
specified by the `account_threepid_delegates.msisdn` option.

-Be aware that if `email` is not set, and SMTP options have not been
-configured in the email config block, registration and user password resets via
-email will be globally disabled.
-
-Additionally, if `msisdn` is not set, registration and password resets via msisdn
-will be disabled regardless, and users will not be able to associate an msisdn
-identifier to their account. This is due to Synapse currently not supporting
-any method of sending SMS messages on its own.
If this is left unspecified, Synapse will not allow users to add phone numbers to
their account.

-To enable using an identity server for operations regarding a particular third-party
-identifier type, set the value to the URL of that identity server as shown in the
-examples below.
-
-Servers handling the these requests must answer the `/requestToken` endpoints defined
-by the Matrix Identity Service API [specification](https://matrix.org/docs/spec/identity_service/latest).
(Servers handling the these requests must answer the `/requestToken` endpoints defined
by the Matrix Identity Service API
[specification](https://matrix.org/docs/spec/identity_service/latest).)

*Updated in Synapse 1.64.0*: No longer accepts an `email` option.
Example configuration:
```yaml
account_threepid_delegates:
-    email: https://example.com     # Delegate email sending to example.com
    msisdn: http://localhost:8090  # Delegate SMS sending to this local process
```
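As a rough sketch of what the delegation looks like on the wire (illustrative, not Synapse code; the payload values are made up), Synapse asks the configured identity server to send the verification text via the Identity Service API's `/requestToken` endpoint:

```python
import requests

# Ask the msisdn delegate from the example above to text a verification
# token to a (hypothetical) phone number.
requests.post(
    "http://localhost:8090/_matrix/identity/v2/validate/msisdn/requestToken",
    json={
        "client_secret": "some_client_secret",  # made-up secret
        "country": "GB",
        "phone_number": "07700900001",          # made-up number
        "send_attempt": 1,
    },
)
```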
---
@ -2409,9 +2437,14 @@ metrics_flags:
---
### `report_stats`
-Whether or not to report anonymized homeserver usage statistics. This is originally
Whether or not to report homeserver usage statistics. This is originally
set when generating the config. Set this option to true or false to change the current
-behavior.
behavior. See
[Reporting Homeserver Usage Statistics](../administration/monitoring/reporting_homeserver_usage_statistics.md)
for information on what data is reported.

Statistics will be reported 5 minutes after Synapse starts, and then every 3 hours
after that.
Example configuration:
```yaml
@ -2420,7 +2453,7 @@ report_stats: true
---
### `report_stats_endpoint`

-The endpoint to report the anonymized homeserver usage statistics to.
The endpoint to report homeserver usage statistics to.
Defaults to https://matrix.org/report-usage-stats/push

Example configuration:
@ -3154,9 +3187,17 @@ Server admins can configure custom templates for email content. See
This setting has the following sub-options:
* `smtp_host`: The hostname of the outgoing SMTP server to use. Defaults to 'localhost'.
-* `smtp_port`: The port on the mail server for outgoing SMTP. Defaults to 25.
* `smtp_port`: The port on the mail server for outgoing SMTP. Defaults to 465 if `force_tls` is true, else 25.

  _Changed in Synapse 1.64.0:_ the default port is now aware of `force_tls`.

* `smtp_user` and `smtp_pass`: Username/password for authentication to the SMTP server. By default, no
  authentication is attempted.
* `force_tls`: By default, Synapse connects over plain text and then optionally upgrades
to TLS via STARTTLS. If this option is set to true, TLS is used from the start (Implicit TLS),
and the option `require_transport_security` is ignored.
It is recommended to enable this if supported by your mail server.
_New in Synapse 1.64.0._
* `require_transport_security`: Set to true to require TLS transport security for SMTP.
  By default, Synapse will connect over plain text, and will then switch to
  TLS via STARTTLS *if the SMTP server supports it*. If this option is set,
@ -3221,6 +3262,7 @@ email:
  smtp_port: 587
  smtp_user: "exampleusername"
  smtp_pass: "examplepassword"
  force_tls: true
  require_transport_security: true
  enable_tls: false
  notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
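The difference between the two TLS modes can be sketched with Python's standard `smtplib` (a standalone illustration, not Synapse's implementation; `mail.example.com` and the credentials are placeholders):

```python
import smtplib
import ssl

context = ssl.create_default_context()

# force_tls: true -- implicit TLS: encrypted from the first byte, port 465.
with smtplib.SMTP_SSL("mail.example.com", 465, context=context) as server:
    server.login("exampleusername", "examplepassword")

# force_tls: false -- plain connection, optionally upgraded via STARTTLS.
with smtplib.SMTP("mail.example.com", 587) as server:
    server.starttls(context=context)
    server.login("exampleusername", "examplepassword")
```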

View file

@ -84,9 +84,6 @@ disallow_untyped_defs = False
[mypy-synapse.http.matrixfederationclient]
disallow_untyped_defs = False

-[mypy-synapse.logging.opentracing]
-disallow_untyped_defs = False
-
[mypy-synapse.metrics._reactor_metrics]
disallow_untyped_defs = False

# This module imports select.epoll. That exists on Linux, but doesn't on macOS.

poetry.lock generated
View file

@ -290,7 +290,7 @@ importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
[[package]]
name = "frozendict"
-version = "2.3.0"
version = "2.3.3"
description = "A simple immutable dictionary"
category = "main"
optional = false
@ -1563,7 +1563,7 @@ url_preview = ["lxml"]
[metadata]
lock-version = "1.1"
python-versions = "^3.7.1"
-content-hash = "e96625923122e29b6ea5964379828e321b6cede2b020fc32c6f86c09d86d1ae8"
content-hash = "c24bbcee7e86dbbe7cdbf49f91a25b310bf21095452641e7440129f59b077f78"

[metadata.files]
attrs = [
@ -1753,23 +1753,23 @@ flake8-comprehensions = [
{file = "flake8_comprehensions-3.8.0-py3-none-any.whl", hash = "sha256:9406314803abe1193c064544ab14fdc43c58424c0882f6ff8a581eb73fc9bb58"}, {file = "flake8_comprehensions-3.8.0-py3-none-any.whl", hash = "sha256:9406314803abe1193c064544ab14fdc43c58424c0882f6ff8a581eb73fc9bb58"},
] ]
frozendict = [ frozendict = [
{file = "frozendict-2.3.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e18e2abd144a9433b0a8334582843b2aa0d3b9ac8b209aaa912ad365115fe2e1"}, {file = "frozendict-2.3.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39942914c1217a5a49c7551495a103b3dbd216e19413687e003b859c6b0ebc12"},
{file = "frozendict-2.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96dc7a02e78da5725e5e642269bb7ae792e0c9f13f10f2e02689175ebbfedb35"}, {file = "frozendict-2.3.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5589256058b31f2b91419fa30b8dc62dbdefe7710e688a3fd5b43849161eecc9"},
{file = "frozendict-2.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:752a6dcfaf9bb20a7ecab24980e4dbe041f154509c989207caf185522ef85461"}, {file = "frozendict-2.3.3-cp310-cp310-win_amd64.whl", hash = "sha256:35eb7e59e287c41f4f712d4d3d2333354175b155d217b97c99c201d2d8920790"},
{file = "frozendict-2.3.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:5346d9fc1c936c76d33975a9a9f1a067342963105d9a403a99e787c939cc2bb2"}, {file = "frozendict-2.3.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:310aaf81793abf4f471895e6fe65e0e74a28a2aaf7b25c2ba6ccd4e35af06842"},
{file = "frozendict-2.3.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:60dd2253f1bacb63a7c486ec541a968af4f985ffb06602ee8954a3d39ec6bd2e"}, {file = "frozendict-2.3.3-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c353c11010a986566a0cb37f9a783c560ffff7d67d5e7fd52221fb03757cdc43"},
{file = "frozendict-2.3.0-cp36-cp36m-win_amd64.whl", hash = "sha256:b2e044602ce17e5cd86724add46660fb9d80169545164e763300a3b839cb1b79"}, {file = "frozendict-2.3.3-cp36-cp36m-win_amd64.whl", hash = "sha256:15b5f82aad108125336593cec1b6420c638bf45f449c57e50949fc7654ea5a41"},
{file = "frozendict-2.3.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a27a69b1ac3591e4258325108aee62b53c0eeb6ad0a993ae68d3c7eaea980420"}, {file = "frozendict-2.3.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a4737e5257756bd6b877504ff50185b705db577b5330d53040a6cf6417bb3cdb"},
{file = "frozendict-2.3.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f45ef5f6b184d84744fff97b61f6b9a855e24d36b713ea2352fc723a047afa5"}, {file = "frozendict-2.3.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80a14c11e33e8b0bc09e07bba3732c77a502c39edb8c3959fd9a0e490e031158"},
{file = "frozendict-2.3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:2d3f5016650c0e9a192f5024e68fb4d63f670d0ee58b099ed3f5b4c62ea30ecb"}, {file = "frozendict-2.3.3-cp37-cp37m-win_amd64.whl", hash = "sha256:027952d1698ac9c766ef43711226b178cdd49d2acbdff396936639ad1d2a5615"},
{file = "frozendict-2.3.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6cf605916f50aabaaba5624c81eb270200f6c2c466c46960237a125ec8fe3ae0"}, {file = "frozendict-2.3.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ef818d66c85098a37cf42509545a4ba7dd0c4c679d6262123a8dc14cc474bab7"},
{file = "frozendict-2.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6da06e44904beae4412199d7e49be4f85c6cc168ab06b77c735ea7da5ce3454"}, {file = "frozendict-2.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:812279f2b270c980112dc4e367b168054f937108f8044eced4199e0ab2945a37"},
{file = "frozendict-2.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:1f34793fb409c4fa70ffd25bea87b01f3bd305fb1c6b09e7dff085b126302206"}, {file = "frozendict-2.3.3-cp38-cp38-win_amd64.whl", hash = "sha256:c1fb7efbfebc2075f781be3d9774e4ba6ce4fc399148b02097f68d4b3c4bc00a"},
{file = "frozendict-2.3.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:fd72494a559bdcd28aa71f4aa81860269cd0b7c45fff3e2614a0a053ecfd2a13"}, {file = "frozendict-2.3.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a0b46d4bf95bce843c0151959d54c3e5b8d0ce29cb44794e820b3ec980d63eee"},
{file = "frozendict-2.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:00ea9166aa68cc5feed05986206fdbf35e838a09cb3feef998cf35978ff8a803"}, {file = "frozendict-2.3.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:38c4660f37fcc70a32ff997fe58e40b3fcc60b2017b286e33828efaa16b01308"},
{file = "frozendict-2.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:9ffaf440648b44e0bc694c1a4701801941378ba3ba6541e17750ae4b4aeeb116"}, {file = "frozendict-2.3.3-cp39-cp39-win_amd64.whl", hash = "sha256:919e3609844fece11ab18bcbf28a3ed20f8108ad4149d7927d413687f281c6c9"},
{file = "frozendict-2.3.0-py3-none-any.whl", hash = "sha256:8578fe06815fcdcc672bd5603eebc98361a5317c1c3a13b28c6c810f6ea3b323"}, {file = "frozendict-2.3.3-py3-none-any.whl", hash = "sha256:f988b482d08972a196664718167a993a61c9e9f6fe7b0ca2443570b5f20ca44a"},
{file = "frozendict-2.3.0.tar.gz", hash = "sha256:da4231adefc5928e7810da2732269d3ad7b5616295b3e693746392a8205ea0b5"}, {file = "frozendict-2.3.3.tar.gz", hash = "sha256:398539c52af3c647d103185bbaa1291679f0507ad035fe3bab2a8b0366d52cf1"},
] ]
gitdb = [ gitdb = [
{file = "gitdb-4.0.9-py3-none-any.whl", hash = "sha256:8033ad4e853066ba6ca92050b9df2f89301b8fc8bf7e9324d412a63f8bf1a8fd"}, {file = "gitdb-4.0.9-py3-none-any.whl", hash = "sha256:8033ad4e853066ba6ca92050b9df2f89301b8fc8bf7e9324d412a63f8bf1a8fd"},

View file

@ -54,7 +54,7 @@ skip_gitignore = true
[tool.poetry]
name = "matrix-synapse"
-version = "1.63.0rc1"
version = "1.64.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@ -110,7 +110,9 @@ jsonschema = ">=3.0.0"
frozendict = ">=1,!=2.1.2"
# We require 2.1.0 or higher for type hints. Previous guard was >= 1.1.0
unpaddedbase64 = ">=2.1.0"
-canonicaljson = "^1.4.0"
# We require 1.5.0 to work around an issue when running against the C implementation of
# frozendict: https://github.com/matrix-org/python-canonicaljson/issues/36
canonicaljson = "^1.5.0"
# we use the type definitions added in signedjson 1.1.
signedjson = "^1.1.0"
# validating SSL certs for IP addresses requires service_identity 18.1.

View file

@ -105,24 +105,24 @@ cryptography==36.0.1 \
    --hash=sha256:ca28641954f767f9822c24e927ad894d45d5a1e501767599647259cbf030b903 \
    --hash=sha256:39bdf8e70eee6b1c7b289ec6e5d84d49a6bfa11f8b8646b5b3dfe41219153316 \
    --hash=sha256:53e5c1dc3d7a953de055d77bef2ff607ceef7a2aac0353b5d630ab67f7423638
-frozendict==2.3.0 \
-    --hash=sha256:e18e2abd144a9433b0a8334582843b2aa0d3b9ac8b209aaa912ad365115fe2e1 \
-    --hash=sha256:96dc7a02e78da5725e5e642269bb7ae792e0c9f13f10f2e02689175ebbfedb35 \
-    --hash=sha256:752a6dcfaf9bb20a7ecab24980e4dbe041f154509c989207caf185522ef85461 \
-    --hash=sha256:5346d9fc1c936c76d33975a9a9f1a067342963105d9a403a99e787c939cc2bb2 \
-    --hash=sha256:60dd2253f1bacb63a7c486ec541a968af4f985ffb06602ee8954a3d39ec6bd2e \
-    --hash=sha256:b2e044602ce17e5cd86724add46660fb9d80169545164e763300a3b839cb1b79 \
-    --hash=sha256:a27a69b1ac3591e4258325108aee62b53c0eeb6ad0a993ae68d3c7eaea980420 \
-    --hash=sha256:4f45ef5f6b184d84744fff97b61f6b9a855e24d36b713ea2352fc723a047afa5 \
-    --hash=sha256:2d3f5016650c0e9a192f5024e68fb4d63f670d0ee58b099ed3f5b4c62ea30ecb \
-    --hash=sha256:6cf605916f50aabaaba5624c81eb270200f6c2c466c46960237a125ec8fe3ae0 \
-    --hash=sha256:f6da06e44904beae4412199d7e49be4f85c6cc168ab06b77c735ea7da5ce3454 \
-    --hash=sha256:1f34793fb409c4fa70ffd25bea87b01f3bd305fb1c6b09e7dff085b126302206 \
-    --hash=sha256:fd72494a559bdcd28aa71f4aa81860269cd0b7c45fff3e2614a0a053ecfd2a13 \
-    --hash=sha256:00ea9166aa68cc5feed05986206fdbf35e838a09cb3feef998cf35978ff8a803 \
-    --hash=sha256:9ffaf440648b44e0bc694c1a4701801941378ba3ba6541e17750ae4b4aeeb116 \
-    --hash=sha256:8578fe06815fcdcc672bd5603eebc98361a5317c1c3a13b28c6c810f6ea3b323 \
-    --hash=sha256:da4231adefc5928e7810da2732269d3ad7b5616295b3e693746392a8205ea0b5
frozendict==2.3.3 \
    --hash=sha256:39942914c1217a5a49c7551495a103b3dbd216e19413687e003b859c6b0ebc12 \
    --hash=sha256:5589256058b31f2b91419fa30b8dc62dbdefe7710e688a3fd5b43849161eecc9 \
    --hash=sha256:35eb7e59e287c41f4f712d4d3d2333354175b155d217b97c99c201d2d8920790 \
    --hash=sha256:310aaf81793abf4f471895e6fe65e0e74a28a2aaf7b25c2ba6ccd4e35af06842 \
    --hash=sha256:c353c11010a986566a0cb37f9a783c560ffff7d67d5e7fd52221fb03757cdc43 \
    --hash=sha256:15b5f82aad108125336593cec1b6420c638bf45f449c57e50949fc7654ea5a41 \
    --hash=sha256:a4737e5257756bd6b877504ff50185b705db577b5330d53040a6cf6417bb3cdb \
    --hash=sha256:80a14c11e33e8b0bc09e07bba3732c77a502c39edb8c3959fd9a0e490e031158 \
    --hash=sha256:027952d1698ac9c766ef43711226b178cdd49d2acbdff396936639ad1d2a5615 \
    --hash=sha256:ef818d66c85098a37cf42509545a4ba7dd0c4c679d6262123a8dc14cc474bab7 \
    --hash=sha256:812279f2b270c980112dc4e367b168054f937108f8044eced4199e0ab2945a37 \
    --hash=sha256:c1fb7efbfebc2075f781be3d9774e4ba6ce4fc399148b02097f68d4b3c4bc00a \
    --hash=sha256:a0b46d4bf95bce843c0151959d54c3e5b8d0ce29cb44794e820b3ec980d63eee \
    --hash=sha256:38c4660f37fcc70a32ff997fe58e40b3fcc60b2017b286e33828efaa16b01308 \
    --hash=sha256:919e3609844fece11ab18bcbf28a3ed20f8108ad4149d7927d413687f281c6c9 \
    --hash=sha256:f988b482d08972a196664718167a993a61c9e9f6fe7b0ca2443570b5f20ca44a \
    --hash=sha256:398539c52af3c647d103185bbaa1291679f0507ad035fe3bab2a8b0366d52cf1
hiredis==2.0.0 \
    --hash=sha256:b4c8b0bc5841e578d5fb32a16e0c305359b987b850a06964bd5a62739d688048 \
    --hash=sha256:0adea425b764a08270820531ec2218d0508f8ae15a448568109ffcae050fee26 \

View file

@ -26,7 +26,6 @@ DISTS = (
"debian:bookworm", "debian:bookworm",
"debian:sid", "debian:sid",
"ubuntu:focal", # 20.04 LTS (our EOL forced by Py38 on 2024-10-14) "ubuntu:focal", # 20.04 LTS (our EOL forced by Py38 on 2024-10-14)
"ubuntu:impish", # 21.10 (EOL 2022-07)
"ubuntu:jammy", # 22.04 LTS (EOL 2027-04) "ubuntu:jammy", # 22.04 LTS (EOL 2027-04)
) )

View file

@ -33,7 +33,7 @@ def main() -> None:
    parser.add_argument(
        "--report-stats",
        action="store",
-        help="Whether the generated config reports anonymized usage statistics",
        help="Whether the generated config reports homeserver usage statistics",
        choices=["yes", "no"],
    )

View file

@ -166,22 +166,6 @@ IGNORED_TABLES = {
"ui_auth_sessions", "ui_auth_sessions",
"ui_auth_sessions_credentials", "ui_auth_sessions_credentials",
"ui_auth_sessions_ips", "ui_auth_sessions_ips",
# Groups/communities is no longer supported.
"group_attestations_remote",
"group_attestations_renewals",
"group_invites",
"group_roles",
"group_room_categories",
"group_rooms",
"group_summary_roles",
"group_summary_room_categories",
"group_summary_rooms",
"group_summary_users",
"group_users",
"groups",
"local_group_membership",
"local_group_updates",
"remote_profile_cache",
} }

View file

@ -27,6 +27,33 @@ class Ratelimiter:
""" """
Ratelimit actions marked by arbitrary keys. Ratelimit actions marked by arbitrary keys.
(Note that the source code speaks of "actions" and "burst_count" rather than
"tokens" and a "bucket_size".)
This is a "leaky bucket as a meter". For each key to be tracked there is a bucket
containing some number 0 <= T <= `burst_count` of tokens corresponding to previously
permitted requests for that key. Each bucket starts empty, and gradually leaks
tokens at a rate of `rate_hz`.
Upon an incoming request, we must determine:
- the key that this request falls under (which bucket to inspect), and
- the cost C of this request in tokens.
Then, if there is room in the bucket for C tokens (T + C <= `burst_count`),
the request is permitted and `cost` tokens are added to the bucket.
Otherwise the request is denied, and the bucket continues to hold T tokens.
This means that the limiter enforces an average request frequency of `rate_hz`,
while accumulating a buffer of up to `burst_count` requests which can be consumed
instantaneously.
The tricky bit is the leaking. We do not want to have a periodic process which
leaks every bucket! Instead, we track
- the time point when the bucket was last completely empty, and
- how many tokens have added to the bucket permitted since then.
Then for each incoming request, we can calculate how many tokens have leaked
since this time point, and use that to decide if we should accept or reject the
request.
    Args:
        clock: A homeserver clock, for retrieving the current time
        rate_hz: The long term number of actions that can be performed in a second.
@ -41,14 +68,30 @@ class Ratelimiter:
        self.burst_count = burst_count
        self.store = store

-        # A ordered dictionary keeping track of actions, when they were last
-        # performed and how often. Each entry is a mapping from a key of arbitrary type
-        # to a tuple representing:
-        #   * How many times an action has occurred since a point in time
-        #   * The point in time
-        #   * The rate_hz of this particular entry. This can vary per request
        # An ordered dictionary representing the token buckets tracked by this rate
        # limiter. Each entry maps a key of arbitrary type to a tuple representing:
        #   * The number of tokens currently in the bucket,
        #   * The time point when the bucket was last completely empty, and
        #   * The rate_hz (leak rate) of this particular bucket.
        self.actions: OrderedDict[Hashable, Tuple[float, float, float]] = OrderedDict()
def _get_key(
self, requester: Optional[Requester], key: Optional[Hashable]
) -> Hashable:
"""Use the requester's MXID as a fallback key if no key is provided."""
if key is None:
if not requester:
raise ValueError("Must supply at least one of `requester` or `key`")
key = requester.user.to_string()
return key
def _get_action_counts(
self, key: Hashable, time_now_s: float
) -> Tuple[float, float, float]:
"""Retrieve the action counts, with a fallback representing an empty bucket."""
return self.actions.get(key, (0.0, time_now_s, 0.0))
    async def can_do_action(
        self,
        requester: Optional[Requester],
@ -88,11 +131,7 @@ class Ratelimiter:
              * The reactor timestamp for when the action can be performed next.
                -1 if rate_hz is less than or equal to zero
        """
-        if key is None:
-            if not requester:
-                raise ValueError("Must supply at least one of `requester` or `key`")
-            key = requester.user.to_string()
        key = self._get_key(requester, key)

        if requester:
            # Disable rate limiting of users belonging to any AS that is configured
@ -121,7 +160,7 @@ class Ratelimiter:
        self._prune_message_counts(time_now_s)

        # Check if there is an existing count entry for this key
-        action_count, time_start, _ = self.actions.get(key, (0.0, time_now_s, 0.0))
        action_count, time_start, _ = self._get_action_counts(key, time_now_s)

        # Check whether performing another action is allowed
        time_delta = time_now_s - time_start
@ -164,6 +203,37 @@ class Ratelimiter:
        return allowed, time_allowed
def record_action(
self,
requester: Optional[Requester],
key: Optional[Hashable] = None,
n_actions: int = 1,
_time_now_s: Optional[float] = None,
) -> None:
"""Record that an action(s) took place, even if they violate the rate limit.
This is useful for tracking the frequency of events that happen across
federation which we still want to impose local rate limits on. For instance, if
we are alice.com monitoring a particular room, we cannot prevent bob.com
from joining users to that room. However, we can track the number of recent
joins in the room and refuse to serve new joins ourselves if there have been too
many in the room across both homeservers.
Args:
requester: The requester that is doing the action, if any.
key: An arbitrary key used to classify an action. Defaults to the
requester's user ID.
n_actions: The number of times the user wants to do this action. If the user
cannot do all of the actions, the user's action count is not incremented
at all.
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
"""
key = self._get_key(requester, key)
time_now_s = _time_now_s if _time_now_s is not None else self.clock.time()
action_count, time_start, rate_hz = self._get_action_counts(key, time_now_s)
self.actions[key] = (action_count + n_actions, time_start, rate_hz)
    def _prune_message_counts(self, time_now_s: float) -> None:
        """Remove message count entries that have not exceeded their defined
        rate_hz limit
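The leak arithmetic that the docstring above describes can be re-derived in a few lines (an illustrative toy, not the code in this file):

```python
# Tokens leak out of the bucket at rate_hz, so the effective level at a later
# time is the recorded count minus whatever has leaked since time_start.
def tokens_in_bucket(
    action_count: float, time_start: float, rate_hz: float, time_now_s: float
) -> float:
    leaked = (time_now_s - time_start) * rate_hz
    return max(0.0, action_count - leaked)

# With the rc_joins_per_room defaults (per_second=1, burst_count=10), ten
# instantaneous joins fill the bucket; one second later one slot has freed up.
assert tokens_in_bucket(10.0, 0.0, 1.0, 1.0) == 9.0
```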

View file

@ -84,6 +84,8 @@ class RoomVersion:
    # MSC3787: Adds support for a `knock_restricted` join rule, mixing concepts of
    # knocks and restricted join rules into the same join condition.
    msc3787_knock_restricted_join_rule: bool
    # MSC3667: Enforce integer power levels
    msc3667_int_only_power_levels: bool


class RoomVersions:
@ -103,6 +105,7 @@ class RoomVersions:
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
        msc3667_int_only_power_levels=False,
    )
    V2 = RoomVersion(
        "2",
@ -120,6 +123,7 @@ class RoomVersions:
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
        msc3667_int_only_power_levels=False,
    )
    V3 = RoomVersion(
        "3",
@ -137,6 +141,7 @@ class RoomVersions:
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
        msc3667_int_only_power_levels=False,
    )
    V4 = RoomVersion(
        "4",
@ -154,6 +159,7 @@ class RoomVersions:
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
        msc3667_int_only_power_levels=False,
    )
    V5 = RoomVersion(
        "5",
@ -171,6 +177,7 @@ class RoomVersions:
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
        msc3667_int_only_power_levels=False,
    )
    V6 = RoomVersion(
        "6",
@ -188,6 +195,7 @@ class RoomVersions:
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
        msc3667_int_only_power_levels=False,
    )
    MSC2176 = RoomVersion(
        "org.matrix.msc2176",
@ -205,6 +213,7 @@ class RoomVersions:
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
        msc3667_int_only_power_levels=False,
    )
    V7 = RoomVersion(
        "7",
@ -222,6 +231,7 @@ class RoomVersions:
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
        msc3667_int_only_power_levels=False,
    )
    V8 = RoomVersion(
        "8",
@ -239,6 +249,7 @@ class RoomVersions:
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
        msc3667_int_only_power_levels=False,
    )
    V9 = RoomVersion(
        "9",
@ -256,6 +267,7 @@ class RoomVersions:
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
        msc3667_int_only_power_levels=False,
    )
    MSC2716v3 = RoomVersion(
        "org.matrix.msc2716v3",
@ -273,6 +285,7 @@ class RoomVersions:
        msc2716_historical=True,
        msc2716_redactions=True,
        msc3787_knock_restricted_join_rule=False,
        msc3667_int_only_power_levels=False,
    )
    MSC3787 = RoomVersion(
        "org.matrix.msc3787",
@ -290,6 +303,25 @@ class RoomVersions:
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=True,
        msc3667_int_only_power_levels=False,
    )
    V10 = RoomVersion(
        "10",
        RoomDisposition.STABLE,
        EventFormatVersions.V3,
        StateResolutionVersions.V2,
        enforce_key_validity=True,
        special_case_aliases_auth=False,
        strict_canonicaljson=True,
        limit_notifications_power_levels=True,
        msc2176_redaction_rules=False,
        msc3083_join_rules=True,
        msc3375_redaction_rules=True,
        msc2403_knocking=True,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=True,
        msc3667_int_only_power_levels=True,
    )

@ -308,6 +340,7 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
        RoomVersions.V9,
        RoomVersions.MSC2716v3,
        RoomVersions.MSC3787,
        RoomVersions.V10,
    )
}
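Once a stable version is registered in `KNOWN_ROOM_VERSIONS`, clients can opt into it at room creation time; a hedged sketch against the standard client-server API (the homeserver URL and token are placeholders):

```python
import requests

# Create a room using room version 10, which enforces integer power levels.
requests.post(
    "https://matrix.example.com/_matrix/client/v3/createRoom",
    headers={"Authorization": "Bearer <access_token>"},
    json={"room_version": "10", "name": "v10 test room"},
)
```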

View file

@ -28,19 +28,22 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.events import EventBase
from synapse.handlers.admin import ExfiltrationWriter
-from synapse.replication.slave.storage._base import BaseSlavedStore
-from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
-from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
-from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
-from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
-from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.server import HomeServer
from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
from synapse.storage.databases.main.appservice import (
    ApplicationServiceTransactionWorkerStore,
    ApplicationServiceWorkerStore,
)
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from synapse.storage.databases.main.registration import RegistrationWorkerStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.storage.databases.main.tags import TagsWorkerStore
from synapse.types import StateMap
from synapse.util import SYNAPSE_VERSION
from synapse.util.logcontext import LoggingContext
@ -49,16 +52,17 @@ logger = logging.getLogger("synapse.app.admin_cmd")
class AdminCmdSlavedStore(
-    SlavedReceiptsStore,
-    SlavedAccountDataStore,
-    SlavedApplicationServiceStore,
-    SlavedRegistrationStore,
    SlavedFilteringStore,
-    SlavedDeviceInboxStore,
    SlavedDeviceStore,
    SlavedPushRuleStore,
    SlavedEventStore,
-    BaseSlavedStore,
    TagsWorkerStore,
    DeviceInboxWorkerStore,
    AccountDataWorkerStore,
    ApplicationServiceTransactionWorkerStore,
    ApplicationServiceWorkerStore,
    RegistrationWorkerStore,
    ReceiptsWorkerStore,
    RoomWorkerStore,
):
    def __init__(

View file

@ -48,20 +48,12 @@ from synapse.http.site import SynapseRequest, SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
-from synapse.replication.slave.storage._base import BaseSlavedStore
-from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
-from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
-from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
-from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
-from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
-from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
-from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.client import (
    account_data,
@ -100,8 +92,15 @@ from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import well_known_resource
from synapse.server import HomeServer
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
from synapse.storage.databases.main.appservice import (
    ApplicationServiceTransactionWorkerStore,
    ApplicationServiceWorkerStore,
)
from synapse.storage.databases.main.censor_events import CensorEventsStore
from synapse.storage.databases.main.client_ips import ClientIpWorkerStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
from synapse.storage.databases.main.directory import DirectoryWorkerStore
from synapse.storage.databases.main.e2e_room_keys import EndToEndRoomKeyStore
from synapse.storage.databases.main.lock import LockStore
from synapse.storage.databases.main.media_repository import MediaRepositoryStore
@ -110,11 +109,15 @@ from synapse.storage.databases.main.monthly_active_users import (
    MonthlyActiveUsersWorkerStore,
)
from synapse.storage.databases.main.presence import PresenceStore
from synapse.storage.databases.main.profile import ProfileWorkerStore
from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from synapse.storage.databases.main.registration import RegistrationWorkerStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.storage.databases.main.room_batch import RoomBatchStore
from synapse.storage.databases.main.search import SearchStore
from synapse.storage.databases.main.session import SessionStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.tags import TagsWorkerStore
from synapse.storage.databases.main.transactions import TransactionWorkerStore
from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
from synapse.storage.databases.main.user_directory import UserDirectoryStore
@ -227,11 +230,11 @@ class GenericWorkerSlavedStore(
    UIAuthWorkerStore,
    EndToEndRoomKeyStore,
    PresenceStore,
-    SlavedDeviceInboxStore,
    DeviceInboxWorkerStore,
    SlavedDeviceStore,
-    SlavedReceiptsStore,
    SlavedPushRuleStore,
-    SlavedAccountDataStore,
    TagsWorkerStore,
    AccountDataWorkerStore,
    SlavedPusherStore,
    CensorEventsStore,
    ClientIpWorkerStore,
@ -239,19 +242,20 @@ class GenericWorkerSlavedStore(
    SlavedKeyStore,
    RoomWorkerStore,
    RoomBatchStore,
-    DirectoryStore,
    DirectoryWorkerStore,
-    SlavedApplicationServiceStore,
    ApplicationServiceTransactionWorkerStore,
-    SlavedRegistrationStore,
    ApplicationServiceWorkerStore,
-    SlavedProfileStore,
    ProfileWorkerStore,
    SlavedFilteringStore,
    MonthlyActiveUsersWorkerStore,
    MediaRepositoryStore,
    ServerMetricsStore,
    ReceiptsWorkerStore,
    RegistrationWorkerStore,
    SearchStore,
    TransactionWorkerStore,
    LockStore,
    SessionStore,
-    BaseSlavedStore,
):
    # Properties that multiple storage classes define. Tell mypy what the
    # expected type is.

View file

@ -44,7 +44,6 @@ from synapse.app._base import (
    register_start,
)
from synapse.config._base import ConfigError, format_config_error
-from synapse.config.emailconfig import ThreepidBehaviour
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ListenerConfig
from synapse.federation.transport.server import TransportLayerServer
@ -202,7 +201,7 @@ class SynapseHomeServer(HomeServer):
                }
            )

-            if self.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
            if self.config.email.can_verify_email:
                from synapse.rest.synapse.client.password_reset import (
                    PasswordResetSubmitTokenResource,
                )

View file

@ -53,6 +53,18 @@ sent_events_counter = Counter(
"synapse_appservice_api_sent_events", "Number of events sent to the AS", ["service"] "synapse_appservice_api_sent_events", "Number of events sent to the AS", ["service"]
) )
sent_ephemeral_counter = Counter(
"synapse_appservice_api_sent_ephemeral",
"Number of ephemeral events sent to the AS",
["service"],
)
sent_todevice_counter = Counter(
"synapse_appservice_api_sent_todevice",
"Number of todevice messages sent to the AS",
["service"],
)
HOUR_IN_MS = 60 * 60 * 1000
@ -310,6 +322,8 @@ class ApplicationServiceApi(SimpleHttpClient):
            )
            sent_transactions_counter.labels(service.id).inc()
            sent_events_counter.labels(service.id).inc(len(serialized_events))
            sent_ephemeral_counter.labels(service.id).inc(len(ephemeral))
            sent_todevice_counter.labels(service.id).inc(len(to_device_messages))
            return True
        except CodeMessageException as e:
            logger.warning(
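The counters above follow the stock `prometheus_client` pattern; a self-contained sketch (the metric name is altered and the service id is made up, so as not to imply Synapse's exact runtime values):

```python
from prometheus_client import Counter

# A labelled counter: one time series per appservice id.
sent_ephemeral_example = Counter(
    "example_appservice_sent_ephemeral",
    "Number of ephemeral events sent to the AS",
    ["service"],
)
sent_ephemeral_example.labels("my_bridge").inc(3)
```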

View file

@ -97,16 +97,16 @@ def format_config_error(e: ConfigError) -> Iterator[str]:
# We split these messages out to allow packages to override with package
# specific instructions.
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS = """\
-Please opt in or out of reporting anonymized homeserver usage statistics, by
-setting the `report_stats` key in your config file to either True or False.
Please opt in or out of reporting homeserver usage statistics, by setting
the `report_stats` key in your config file to either True or False.
"""

MISSING_REPORT_STATS_SPIEL = """\
We would really appreciate it if you could help our project out by reporting
-anonymized usage statistics from your homeserver. Only very basic aggregate
-data (e.g. number of users) will be reported, but it helps us to track the
-growth of the Matrix community, and helps us to make Matrix a success, as well
-as to convince other networks that they should peer with us.
homeserver usage statistics from your homeserver. Your homeserver's server name,
along with very basic aggregate data (e.g. number of users) will be reported. But
it helps us to track the growth of the Matrix community, and helps us to make Matrix
a success, as well as to convince other networks that they should peer with us.

Thank you.
"""
@ -621,7 +621,7 @@ class RootConfig:
        generate_group.add_argument(
            "--report-stats",
            action="store",
-            help="Whether the generated config reports anonymized usage statistics.",
            help="Whether the generated config reports homeserver usage statistics.",
            choices=["yes", "no"],
        )
        generate_group.add_argument(

View file

@ -18,7 +18,6 @@
import email.utils
import logging
import os
-from enum import Enum
from typing import Any

import attr
@ -86,14 +85,19 @@ class EmailConfig(Config):
        if email_config is None:
            email_config = {}

        self.force_tls = email_config.get("force_tls", False)
        self.email_smtp_host = email_config.get("smtp_host", "localhost")
-        self.email_smtp_port = email_config.get("smtp_port", 25)
        self.email_smtp_port = email_config.get(
            "smtp_port", 465 if self.force_tls else 25
        )
        self.email_smtp_user = email_config.get("smtp_user", None)
        self.email_smtp_pass = email_config.get("smtp_pass", None)
        self.require_transport_security = email_config.get(
            "require_transport_security", False
        )
        self.enable_smtp_tls = email_config.get("enable_tls", True)
        if self.force_tls and not self.enable_smtp_tls:
            raise ConfigError("email.force_tls requires email.enable_tls to be true")
        if self.require_transport_security and not self.enable_smtp_tls:
            raise ConfigError(
                "email.require_transport_security requires email.enable_tls to be true"
@ -131,41 +135,22 @@ class EmailConfig(Config):
        self.email_enable_notifs = email_config.get("enable_notifs", False)

-        self.threepid_behaviour_email = (
-            # Have Synapse handle the email sending if account_threepid_delegates.email
-            # is not defined
-            # msisdn is currently always remote while Synapse does not support any method of
-            # sending SMS messages
-            ThreepidBehaviour.REMOTE
-            if self.root.registration.account_threepid_delegate_email
-            else ThreepidBehaviour.LOCAL
-        )

        if config.get("trust_identity_server_for_password_resets"):
            raise ConfigError(
                'The config option "trust_identity_server_for_password_resets" '
-                'has been replaced by "account_threepid_delegate". '
-                "Please consult the configuration manual at docs/usage/configuration/config_documentation.md for "
-                "details and update your config file."
                "is no longer supported. Please remove it from the config file."
            )

-        self.local_threepid_handling_disabled_due_to_email_config = False
-        if (
-            self.threepid_behaviour_email == ThreepidBehaviour.LOCAL
-            and email_config == {}
-        ):
-            # We cannot warn the user this has happened here
-            # Instead do so when a user attempts to reset their password
-            self.local_threepid_handling_disabled_due_to_email_config = True
-            self.threepid_behaviour_email = ThreepidBehaviour.OFF
        # If we have email config settings, assume that we can verify ownership of
        # email addresses.
        self.can_verify_email = email_config != {}

        # Get lifetime of a validation token in milliseconds
        self.email_validation_token_lifetime = self.parse_duration(
            email_config.get("validation_token_lifetime", "1h")
        )

-        if self.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
        if self.can_verify_email:
            missing = []
            if not self.email_notif_from:
                missing.append("email.notif_from")
@ -356,18 +341,3 @@ class EmailConfig(Config):
"Config option email.invite_client_location must be a http or https URL", "Config option email.invite_client_location must be a http or https URL",
path=("email", "invite_client_location"), path=("email", "invite_client_location"),
) )
-class ThreepidBehaviour(Enum):
-    """
-    Enum to define the behaviour of Synapse with regards to when it contacts an identity
-    server for 3pid registration and password resets
-
-    REMOTE = use an external server to send tokens
-    LOCAL = send tokens ourselves
-    OFF = disable registration via 3pid and password resets
-    """
-
-    REMOTE = "remote"
-    LOCAL = "local"
-    OFF = "off"

View file

@ -112,6 +112,13 @@ class RatelimitConfig(Config):
defaults={"per_second": 0.01, "burst_count": 10}, defaults={"per_second": 0.01, "burst_count": 10},
) )
# Track the rate of joins to a given room. If there are too many, temporarily
# prevent local joins and remote joins via this server.
self.rc_joins_per_room = RateLimitConfig(
config.get("rc_joins_per_room", {}),
defaults={"per_second": 1, "burst_count": 10},
)
# Ratelimit cross-user key requests: # Ratelimit cross-user key requests:
# * For local requests this is keyed by the sending device. # * For local requests this is keyed by the sending device.
# * For requests received over federation this is keyed by the origin. # * For requests received over federation this is keyed by the origin.

View file

@ -20,6 +20,13 @@ from synapse.config._base import Config, ConfigError
from synapse.types import JsonDict, RoomAlias, UserID
from synapse.util.stringutils import random_string_with_symbols, strtobool

NO_EMAIL_DELEGATE_ERROR = """\
Delegation of email verification to an identity server is no longer supported. To
continue to allow users to add email addresses to their accounts, and use them for
password resets, configure Synapse with an SMTP server via the `email` setting, and
remove `account_threepid_delegates.email`.
"""


class RegistrationConfig(Config):
    section = "registration"
@ -51,7 +58,9 @@ class RegistrationConfig(Config):
self.bcrypt_rounds = config.get("bcrypt_rounds", 12) self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
account_threepid_delegates = config.get("account_threepid_delegates") or {} account_threepid_delegates = config.get("account_threepid_delegates") or {}
self.account_threepid_delegate_email = account_threepid_delegates.get("email") if "email" in account_threepid_delegates:
raise ConfigError(NO_EMAIL_DELEGATE_ERROR)
# self.account_threepid_delegate_email = account_threepid_delegates.get("email")
self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn") self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn")
self.default_identity_server = config.get("default_identity_server") self.default_identity_server = config.get("default_identity_server")
self.allow_guest_access = config.get("allow_guest_access", False) self.allow_guest_access = config.get("allow_guest_access", False)
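To make the new failure mode concrete, a toy rendering of the check above against a plain config dict (hypothetical error type standing in for Synapse's ConfigError plumbing):

config = {"account_threepid_delegates": {"email": "https://id.example.com"}}

account_threepid_delegates = config.get("account_threepid_delegates") or {}
try:
    if "email" in account_threepid_delegates:
        # Mirrors the hard error raised at startup by the real config parser.
        raise ValueError(
            "Delegation of email verification to an identity server is no "
            "longer supported; configure an SMTP server via `email` instead."
        )
except ValueError as e:
    print(e)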

View file

@@ -42,6 +42,16 @@ THUMBNAIL_SIZE_YAML = """\
 #        method: %(method)s
 """

+# A map from the given media type to the type of thumbnail we should generate
+# for it.
+THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP = {
+    "image/jpeg": "jpeg",
+    "image/jpg": "jpeg",
+    "image/webp": "webp",
+    "image/gif": "webp",
+    "image/png": "png",
+}
+
 HTTP_PROXY_SET_WARNING = """\
 The Synapse config url_preview_ip_range_blacklist will be ignored as an HTTP(s) proxy is configured."""

@@ -79,14 +89,26 @@ def parse_thumbnail_requirements(
         width = size["width"]
         height = size["height"]
         method = size["method"]
-        jpeg_thumbnail = ThumbnailRequirement(width, height, method, "image/jpeg")
-        png_thumbnail = ThumbnailRequirement(width, height, method, "image/png")
-        webp_thumbnail = ThumbnailRequirement(width, height, method, "image/webp")
-        requirements.setdefault("image/jpeg", []).append(jpeg_thumbnail)
-        requirements.setdefault("image/jpg", []).append(jpeg_thumbnail)
-        requirements.setdefault("image/webp", []).append(webp_thumbnail)
-        requirements.setdefault("image/gif", []).append(png_thumbnail)
-        requirements.setdefault("image/png", []).append(png_thumbnail)
+
+        for format, thumbnail_format in THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP.items():
+            requirement = requirements.setdefault(format, [])
+            if thumbnail_format == "jpeg":
+                requirement.append(
+                    ThumbnailRequirement(width, height, method, "image/jpeg")
+                )
+            elif thumbnail_format == "png":
+                requirement.append(
+                    ThumbnailRequirement(width, height, method, "image/png")
+                )
+            elif thumbnail_format == "webp":
+                requirement.append(
+                    ThumbnailRequirement(width, height, method, "image/webp")
+                )
+            else:
+                raise Exception(
+                    "Unknown thumbnail mapping from %s to %s. This is a Synapse problem, please report!"
+                    % (format, thumbnail_format)
+                )
+
     return {
         media_type: tuple(thumbnails) for media_type, thumbnails in requirements.items()
     }
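A toy rendering of the fan-out above: every configured (width, height, method) size is produced for each supported source media type, and GIFs are now thumbnailed as WebP rather than PNG. Simplified tuples stand in for ThumbnailRequirement:

THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP = {
    "image/jpeg": "jpeg",
    "image/jpg": "jpeg",
    "image/webp": "webp",
    "image/gif": "webp",
    "image/png": "png",
}
sizes = [{"width": 32, "height": 32, "method": "crop"}]
requirements: dict = {}
for size in sizes:
    for source_type, thumb_format in THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP.items():
        requirements.setdefault(source_type, []).append(
            (size["width"], size["height"], size["method"], f"image/{thumb_format}")
        )
print(requirements["image/gif"])  # [(32, 32, 'crop', 'image/webp')]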

View file

@@ -740,6 +740,32 @@ def _check_power_levels(
         except Exception:
             raise SynapseError(400, "Not a valid power level: %s" % (v,))

+    # Reject events with stringy power levels if required by room version
+    if (
+        event.type == EventTypes.PowerLevels
+        and room_version_obj.msc3667_int_only_power_levels
+    ):
+        for k, v in event.content.items():
+            if k in {
+                "users_default",
+                "events_default",
+                "state_default",
+                "ban",
+                "redact",
+                "kick",
+                "invite",
+            }:
+                if not isinstance(v, int):
+                    raise SynapseError(400, f"{v!r} must be an integer.")
+            if k in {"events", "notifications", "users"}:
+                if not isinstance(v, dict) or not all(
+                    isinstance(v, int) for v in v.values()
+                ):
+                    raise SynapseError(
+                        400,
+                        f"{v!r} must be a dict wherein all the values are integers.",
+                    )
+
     key = (event.type, event.state_key)
     current_state = auth_events.get(key)
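A worked example of the new validation, reduced to a standalone function so the accepted and rejected shapes are easy to see (the real check additionally gates on the room version's msc3667_int_only_power_levels flag):

def validate_power_levels(content: dict) -> None:
    int_keys = {"users_default", "events_default", "state_default",
                "ban", "redact", "kick", "invite"}
    dict_keys = {"events", "notifications", "users"}
    for k, v in content.items():
        if k in int_keys and not isinstance(v, int):
            raise ValueError(f"{v!r} must be an integer.")
        if k in dict_keys and (
            not isinstance(v, dict) or not all(isinstance(x, int) for x in v.values())
        ):
            raise ValueError(f"{v!r} must be a dict wherein all the values are integers.")

validate_power_levels({"ban": 50, "users": {"@a:example.org": 100}})  # accepted
try:
    validate_power_levels({"ban": "50"})  # stringy power level -> rejected
except ValueError as e:
    print(e)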

View file

@@ -24,9 +24,11 @@ from synapse.api.room_versions import (
     RoomVersion,
 )
 from synapse.crypto.event_signing import add_hashes_and_signatures
+from synapse.event_auth import auth_types_for_event
 from synapse.events import EventBase, _EventInternalMetadata, make_event_from_dict
 from synapse.state import StateHandler
 from synapse.storage.databases.main import DataStore
+from synapse.storage.state import StateFilter
 from synapse.types import EventID, JsonDict
 from synapse.util import Clock
 from synapse.util.stringutils import random_string

@@ -120,8 +122,12 @@ class EventBuilder:
             The signed and hashed event.
         """
         if auth_event_ids is None:
-            state_ids = await self._state.get_current_state_ids(
-                self.room_id, prev_event_ids
+            state_ids = await self._state.compute_state_after_events(
+                self.room_id,
+                prev_event_ids,
+                state_filter=StateFilter.from_types(
+                    auth_types_for_event(self.room_version, self)
+                ),
             )
             auth_event_ids = self._event_auth_handler.compute_auth_events(
                 self, state_ids
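An illustrative-only sketch of why the StateFilter above shrinks the state query: auth rules only ever consult a handful of (type, state_key) pairs, so the full room state need not be loaded. The hand-written list below is my assumption of roughly what auth_types_for_event yields for a plain (non-membership) event; the exact output depends on the event:

sender = "@alice:example.org"
auth_types = [
    ("m.room.create", ""),
    ("m.room.power_levels", ""),
    ("m.room.member", sender),
]
full_state = {
    ("m.room.create", ""): "$create",
    ("m.room.power_levels", ""): "$pl",
    ("m.room.member", sender): "$alice_join",
    ("m.room.topic", ""): "$topic",  # irrelevant to auth, filtered out
}
filtered = {k: v for k, v in full_state.items() if k in set(auth_types)}
assert ("m.room.topic", "") not in filtered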

View file

@@ -53,7 +53,7 @@ from synapse.api.room_versions import (
     RoomVersion,
     RoomVersions,
 )
-from synapse.events import EventBase, builder
+from synapse.events import EventBase, builder, make_event_from_dict
 from synapse.federation.federation_base import (
     FederationBase,
     InvalidEventSignatureError,

@@ -217,7 +217,7 @@ class FederationClient(FederationBase):
         )

     async def claim_client_keys(
-        self, destination: str, content: JsonDict, timeout: int
+        self, destination: str, content: JsonDict, timeout: Optional[int]
     ) -> JsonDict:
         """Claims one-time keys for a device hosted on a remote server.

@@ -299,7 +299,8 @@ class FederationClient(FederationBase):
             moving to the next destination. None indicates no timeout.

         Returns:
-            The requested PDU, or None if we were unable to find it.
+            A copy of the requested PDU that is safe to modify, or None if we
+            were unable to find it.

         Raises:
             SynapseError, NotRetryingDestination, FederationDeniedError

@@ -309,7 +310,7 @@ class FederationClient(FederationBase):
         )

         logger.debug(
-            "retrieved event id %s from %s: %r",
+            "get_pdu_from_destination_raw: retrieved event id %s from %s: %r",
             event_id,
             destination,
             transaction_data,

@@ -358,54 +359,92 @@ class FederationClient(FederationBase):
             The requested PDU, or None if we were unable to find it.
         """

+        logger.debug(
+            "get_pdu: event_id=%s from destinations=%s", event_id, destinations
+        )
+
         # TODO: Rate limit the number of times we try and get the same event.

-        ev = self._get_pdu_cache.get(event_id)
-        if ev:
-            return ev
+        # We might need the same event multiple times in quick succession (before
+        # it gets persisted to the database), so we cache the results of the lookup.
+        # Note that this is separate to the regular get_event cache which caches
+        # events once they have been persisted.
+        event = self._get_pdu_cache.get(event_id)

-        pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
+        # If we don't see the event in the cache, go try to fetch it from the
+        # provided remote federated destinations
+        if not event:
+            pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})

-        signed_pdu = None
-        for destination in destinations:
-            now = self._clock.time_msec()
-            last_attempt = pdu_attempts.get(destination, 0)
-            if last_attempt + PDU_RETRY_TIME_MS > now:
-                continue
+            for destination in destinations:
+                now = self._clock.time_msec()
+                last_attempt = pdu_attempts.get(destination, 0)
+                if last_attempt + PDU_RETRY_TIME_MS > now:
+                    logger.debug(
+                        "get_pdu: skipping destination=%s because we tried it recently last_attempt=%s and we only check every %s (now=%s)",
+                        destination,
+                        last_attempt,
+                        PDU_RETRY_TIME_MS,
+                        now,
+                    )
+                    continue

-            try:
-                signed_pdu = await self.get_pdu_from_destination_raw(
-                    destination=destination,
-                    event_id=event_id,
-                    room_version=room_version,
-                    timeout=timeout,
-                )
+                try:
+                    event = await self.get_pdu_from_destination_raw(
+                        destination=destination,
+                        event_id=event_id,
+                        room_version=room_version,
+                        timeout=timeout,
+                    )

-                pdu_attempts[destination] = now
+                    pdu_attempts[destination] = now

-            except SynapseError as e:
-                logger.info(
-                    "Failed to get PDU %s from %s because %s", event_id, destination, e
-                )
-                continue
-            except NotRetryingDestination as e:
-                logger.info(str(e))
-                continue
-            except FederationDeniedError as e:
-                logger.info(str(e))
-                continue
-            except Exception as e:
-                pdu_attempts[destination] = now
+                    if event:
+                        # Prime the cache
+                        self._get_pdu_cache[event.event_id] = event

-                logger.info(
-                    "Failed to get PDU %s from %s because %s", event_id, destination, e
-                )
-                continue
+                        # FIXME: We should add a `break` here to avoid calling every
+                        # destination after we already found a PDU (will follow-up
+                        # in a separate PR)

-        if signed_pdu:
-            self._get_pdu_cache[event_id] = signed_pdu
+                except SynapseError as e:
+                    logger.info(
+                        "Failed to get PDU %s from %s because %s",
+                        event_id,
+                        destination,
+                        e,
+                    )
+                    continue
+                except NotRetryingDestination as e:
+                    logger.info(str(e))
+                    continue
+                except FederationDeniedError as e:
+                    logger.info(str(e))
+                    continue
+                except Exception as e:
+                    pdu_attempts[destination] = now

-        return signed_pdu
+                    logger.info(
+                        "Failed to get PDU %s from %s because %s",
+                        event_id,
+                        destination,
+                        e,
+                    )
+                    continue
+
+        if not event:
+            return None
+
+        # `event` now refers to an object stored in `get_pdu_cache`. Our
+        # callers may need to modify the returned object (eg to set
+        # `event.internal_metadata.outlier = true`), so we return a copy
+        # rather than the original object.
+        event_copy = make_event_from_dict(
+            event.get_pdu_json(),
+            event.room_version,
+        )
+
+        return event_copy

     async def get_room_state_ids(
         self, destination: str, room_id: str, event_id: str
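A minimal sketch of the copy-on-return behaviour that get_pdu adopts above: handing callers the cached object would let per-caller mutations (such as setting an outlier flag) leak into the cache, so each caller gets a fresh object rebuilt from the immutable wire-format JSON. Toy classes, not Synapse's EventBase or make_event_from_dict:

import json

class ToyEvent:
    def __init__(self, pdu_json: str):
        self._pdu_json = pdu_json
        self.outlier = False  # per-caller mutable flag

    def copy(self) -> "ToyEvent":
        return ToyEvent(self._pdu_json)  # rebuild from the immutable JSON

cache = {"$ev": ToyEvent(json.dumps({"type": "m.room.message"}))}
caller_view = cache["$ev"].copy()
caller_view.outlier = True
assert cache["$ev"].outlier is False  # the cached copy stays pristine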

View file

@@ -118,6 +118,7 @@ class FederationServer(FederationBase):
         self._federation_event_handler = hs.get_federation_event_handler()
         self.state = hs.get_state_handler()
         self._event_auth_handler = hs.get_event_auth_handler()
+        self._room_member_handler = hs.get_room_member_handler()

         self._state_storage_controller = hs.get_storage_controllers().state

@@ -621,6 +622,15 @@ class FederationServer(FederationBase):
             )
             raise IncompatibleRoomVersionError(room_version=room_version)

+        # Refuse the request if that room has seen too many joins recently.
+        # This is in addition to the HS-level rate limiting applied by
+        # BaseFederationServlet.
+        # type-ignore: mypy doesn't seem able to deduce the type of the limiter(!?)
+        await self._room_member_handler._join_rate_per_room_limiter.ratelimit(  # type: ignore[has-type]
+            requester=None,
+            key=room_id,
+            update=False,
+        )
         pdu = await self.handler.on_make_join_request(origin, room_id, user_id)
         return {"event": pdu.get_templated_pdu_json(), "room_version": room_version}

@@ -655,6 +665,12 @@ class FederationServer(FederationBase):
         room_id: str,
         caller_supports_partial_state: bool = False,
     ) -> Dict[str, Any]:
+        await self._room_member_handler._join_rate_per_room_limiter.ratelimit(  # type: ignore[has-type]
+            requester=None,
+            key=room_id,
+            update=False,
+        )
+
         event, context = await self._on_send_membership_event(
             origin, content, Membership.JOIN, room_id
         )
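A sketch of the check-without-recording pattern that update=False enables above: these federation endpoints probe the per-room bucket so over-limit requests fail fast, while (as I read it) the token is only actually consumed later when the join is processed. Simplified synchronous stand-in for the real async Ratelimiter:

class Bucket:
    def __init__(self, burst_count: int):
        self.tokens = burst_count

    def ratelimit(self, update: bool = True) -> None:
        if self.tokens < 1:
            raise RuntimeError("429 Too Many Requests")
        if update:
            self.tokens -= 1  # only consume when asked to

bucket = Bucket(burst_count=10)
bucket.ratelimit(update=False)  # probe only; no token consumed
assert bucket.tokens == 10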

View file

@@ -351,7 +351,11 @@ class FederationSender(AbstractFederationSender):
         self._is_processing = True
         while True:
             last_token = await self.store.get_federation_out_pos("events")
-            next_token, events = await self.store.get_all_new_events_stream(
+            (
+                next_token,
+                events,
+                event_to_received_ts,
+            ) = await self.store.get_all_new_events_stream(
                 last_token, self._last_poked_id, limit=100
             )

@@ -476,7 +480,7 @@ class FederationSender(AbstractFederationSender):
                     await self._send_pdu(event, sharded_destinations)

                 now = self.clock.time_msec()
-                ts = await self.store.get_received_ts(event.event_id)
+                ts = event_to_received_ts[event.event_id]
                 assert ts is not None
                 synapse.metrics.event_processing_lag_by_event.labels(
                     "federation_sender"

@@ -509,7 +513,7 @@ class FederationSender(AbstractFederationSender):
             if events:
                 now = self.clock.time_msec()
-                ts = await self.store.get_received_ts(events[-1].event_id)
+                ts = event_to_received_ts[events[-1].event_id]
                 assert ts is not None
                 synapse.metrics.event_processing_lag.labels(

View file

@@ -619,7 +619,7 @@ class TransportLayerClient:
         )

     async def claim_client_keys(
-        self, destination: str, query_content: JsonDict, timeout: int
+        self, destination: str, query_content: JsonDict, timeout: Optional[int]
     ) -> JsonDict:
         """Claim one-time keys for a list of devices hosted on a remote server.

View file

@@ -309,7 +309,7 @@ class BaseFederationServlet:
                 raise

             # update the active opentracing span with the authenticated entity
-            set_tag("authenticated_entity", origin)
+            set_tag("authenticated_entity", str(origin))

             # if the origin is authenticated and whitelisted, use its span context
             # as the parent.

View file

@@ -104,14 +104,15 @@ class ApplicationServicesHandler:
         with Measure(self.clock, "notify_interested_services"):
             self.is_processing = True
             try:
-                limit = 100
                 upper_bound = -1
                 while upper_bound < self.current_max:
+                    last_token = await self.store.get_appservice_last_pos()
                     (
                         upper_bound,
                         events,
-                    ) = await self.store.get_new_events_for_appservice(
-                        self.current_max, limit
+                        event_to_received_ts,
+                    ) = await self.store.get_all_new_events_stream(
+                        last_token, self.current_max, limit=100, get_prev_content=True
                     )

                     events_by_room: Dict[str, List[EventBase]] = {}

@@ -150,7 +151,7 @@ class ApplicationServicesHandler:
                         )

                         now = self.clock.time_msec()
-                        ts = await self.store.get_received_ts(event.event_id)
+                        ts = event_to_received_ts[event.event_id]
                         assert ts is not None
                         synapse.metrics.event_processing_lag_by_event.labels(

@@ -187,7 +188,7 @@ class ApplicationServicesHandler:
                     if events:
                         now = self.clock.time_msec()
-                        ts = await self.store.get_received_ts(events[-1].event_id)
+                        ts = event_to_received_ts[events[-1].event_id]
                         assert ts is not None
                         synapse.metrics.event_processing_lag.labels(

View file

@@ -118,8 +118,8 @@ class DeviceWorkerHandler:
             ips = await self.store.get_last_client_ip_by_device(user_id, device_id)
             _update_device_from_client_ips(device, ips)

-            set_tag("device", device)
-            set_tag("ips", ips)
+            set_tag("device", str(device))
+            set_tag("ips", str(ips))

             return device

@@ -170,7 +170,7 @@ class DeviceWorkerHandler:
         """

         set_tag("user_id", user_id)
-        set_tag("from_token", from_token)
+        set_tag("from_token", str(from_token))

         now_room_key = self.store.get_room_max_token()
         room_ids = await self.store.get_rooms_for_user(user_id)

@@ -795,7 +795,7 @@ class DeviceListUpdater:
         """

        set_tag("origin", origin)
-        set_tag("edu_content", edu_content)
+        set_tag("edu_content", str(edu_content))
        user_id = edu_content.pop("user_id")
        device_id = edu_content.pop("device_id")
        stream_id = str(edu_content.pop("stream_id"))  # They may come as ints

View file

@@ -15,7 +15,7 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple
+from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Mapping, Optional, Tuple

 import attr
 from canonicaljson import encode_canonical_json

@@ -92,7 +92,11 @@ class E2eKeysHandler:

     @trace
     async def query_devices(
-        self, query_body: JsonDict, timeout: int, from_user_id: str, from_device_id: str
+        self,
+        query_body: JsonDict,
+        timeout: int,
+        from_user_id: str,
+        from_device_id: Optional[str],
     ) -> JsonDict:
         """Handle a device key query from a client

@@ -120,9 +124,7 @@ class E2eKeysHandler:
            the number of in-flight queries at a time.
         """
         async with self._query_devices_linearizer.queue((from_user_id, from_device_id)):
-            device_keys_query: Dict[str, Iterable[str]] = query_body.get(
-                "device_keys", {}
-            )
+            device_keys_query: Dict[str, List[str]] = query_body.get("device_keys", {})

             # separate users by domain.
             # make a map from domain to user_id to device_ids

@@ -136,8 +138,8 @@ class E2eKeysHandler:
                 else:
                     remote_queries[user_id] = device_ids

-            set_tag("local_key_query", local_query)
-            set_tag("remote_key_query", remote_queries)
+            set_tag("local_key_query", str(local_query))
+            set_tag("remote_key_query", str(remote_queries))

             # First get local devices.
             # A map of destination -> failure response.

@@ -341,7 +343,7 @@ class E2eKeysHandler:
                 failure = _exception_to_failure(e)
                 failures[destination] = failure
                 set_tag("error", True)
-                set_tag("reason", failure)
+                set_tag("reason", str(failure))

             return

@@ -392,7 +394,7 @@ class E2eKeysHandler:
     @trace
     async def query_local_devices(
-        self, query: Dict[str, Optional[List[str]]]
+        self, query: Mapping[str, Optional[List[str]]]
     ) -> Dict[str, Dict[str, dict]]:
         """Get E2E device keys for local users

@@ -403,7 +405,7 @@ class E2eKeysHandler:
         Returns:
             A map from user_id -> device_id -> device details
         """
-        set_tag("local_query", query)
+        set_tag("local_query", str(query))
         local_query: List[Tuple[str, Optional[str]]] = []
         result_dict: Dict[str, Dict[str, dict]] = {}

@@ -461,7 +463,7 @@ class E2eKeysHandler:
     @trace
     async def claim_one_time_keys(
-        self, query: Dict[str, Dict[str, Dict[str, str]]], timeout: int
+        self, query: Dict[str, Dict[str, Dict[str, str]]], timeout: Optional[int]
     ) -> JsonDict:
         local_query: List[Tuple[str, str, str]] = []
         remote_queries: Dict[str, Dict[str, Dict[str, str]]] = {}

@@ -475,8 +477,8 @@ class E2eKeysHandler:
                 domain = get_domain_from_id(user_id)
                 remote_queries.setdefault(domain, {})[user_id] = one_time_keys

-        set_tag("local_key_query", local_query)
-        set_tag("remote_key_query", remote_queries)
+        set_tag("local_key_query", str(local_query))
+        set_tag("remote_key_query", str(remote_queries))

         results = await self.store.claim_e2e_one_time_keys(local_query)

@@ -506,7 +508,7 @@ class E2eKeysHandler:
                 failure = _exception_to_failure(e)
                 failures[destination] = failure
                 set_tag("error", True)
-                set_tag("reason", failure)
+                set_tag("reason", str(failure))

         await make_deferred_yieldable(
             defer.gatherResults(

@@ -609,7 +611,7 @@ class E2eKeysHandler:

         result = await self.store.count_e2e_one_time_keys(user_id, device_id)

-        set_tag("one_time_key_counts", result)
+        set_tag("one_time_key_counts", str(result))
         return {"one_time_key_counts": result}

     async def _upload_one_time_keys_for_user(

View file

@@ -14,7 +14,7 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING, Dict, Optional
+from typing import TYPE_CHECKING, Dict, Optional, cast

 from typing_extensions import Literal

@@ -97,7 +97,7 @@ class E2eRoomKeysHandler:
                 user_id, version, room_id, session_id
             )
-            log_kv(results)
+            log_kv(cast(JsonDict, results))
             return results

     @trace

View file

@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import collections
 import itertools
 import logging
 from http import HTTPStatus

@@ -347,7 +348,7 @@ class FederationEventHandler:
             event.internal_metadata.send_on_behalf_of = origin

         context = await self._state_handler.compute_event_context(event)
-        context = await self._check_event_auth(origin, event, context)
+        await self._check_event_auth(origin, event, context)
         if context.rejected:
             raise SynapseError(
                 403, f"{event.membership} event was rejected", Codes.FORBIDDEN

@@ -485,7 +486,7 @@ class FederationEventHandler:
             partial_state=partial_state,
         )

-        context = await self._check_event_auth(origin, event, context)
+        await self._check_event_auth(origin, event, context)
         if context.rejected:
             raise SynapseError(400, "Join event was rejected")

@@ -765,10 +766,24 @@ class FederationEventHandler:
         """
         logger.info("Processing pulled event %s", event)

-        # these should not be outliers.
-        assert (
-            not event.internal_metadata.is_outlier()
-        ), "pulled event unexpectedly flagged as outlier"
+        # This function should not be used to persist outliers (use something
+        # else) because this does a bunch of operations that aren't necessary
+        # (extra work; in particular, it makes sure we have all the prev_events
+        # and resolves the state across those prev events). If you happen to run
+        # into a situation where the event you're trying to process/backfill is
+        # marked as an `outlier`, then you should update that spot to return an
+        # `EventBase` copy that doesn't have `outlier` flag set.
+        #
+        # `EventBase` is used to represent both an event we have not yet
+        # persisted, and one that we have persisted and now keep in the cache.
+        # In an ideal world this method would only be called with the first type
+        # of event, but it turns out that's not actually the case and for
+        # example, you could get an event from cache that is marked as an
+        # `outlier` (fix up that spot though).
+        assert not event.internal_metadata.is_outlier(), (
+            "Outlier event passed to _process_pulled_event. "
+            "To persist an event as a non-outlier, make sure to pass in a copy without `event.internal_metadata.outlier = true`."
+        )

         event_id = event.event_id

@@ -778,7 +793,7 @@ class FederationEventHandler:
         if existing:
             if not existing.internal_metadata.is_outlier():
                 logger.info(
-                    "Ignoring received event %s which we have already seen",
+                    "_process_pulled_event: Ignoring received event %s which we have already seen",
                     event_id,
                 )
                 return

@@ -1036,6 +1051,9 @@ class FederationEventHandler:
         # XXX: this doesn't sound right? it means that we'll end up with incomplete
         # state.
         failed_to_fetch = desired_events - event_metadata.keys()
+        # `event_id` could be missing from `event_metadata` because it's not necessarily
+        # a state event. We've already checked that we've fetched it above.
+        failed_to_fetch.discard(event_id)

         if failed_to_fetch:
             logger.warning(
                 "Failed to fetch missing state events for %s %s",

@@ -1116,11 +1134,7 @@ class FederationEventHandler:
             state_ids_before_event=state_ids,
         )
         try:
-            context = await self._check_event_auth(
-                origin,
-                event,
-                context,
-            )
+            await self._check_event_auth(origin, event, context)
         except AuthError as e:
             # This happens only if we couldn't find the auth events. We'll already have
             # logged a warning, so now we just convert to a FederationError.
@@ -1315,6 +1329,53 @@ class FederationEventHandler:
                 marker_event,
             )
async def backfill_event_id(
self, destination: str, room_id: str, event_id: str
) -> EventBase:
"""Backfill a single event and persist it as a non-outlier which means
we also pull in all of the state and auth events necessary for it.
Args:
destination: The homeserver to pull the given event_id from.
room_id: The room where the event is from.
event_id: The event ID to backfill.
Raises:
FederationError if we are unable to find the event from the destination
"""
logger.info(
"backfill_event_id: event_id=%s from destination=%s", event_id, destination
)
room_version = await self._store.get_room_version(room_id)
event_from_response = await self._federation_client.get_pdu(
[destination],
event_id,
room_version,
)
if not event_from_response:
raise FederationError(
"ERROR",
404,
"Unable to find event_id=%s from destination=%s to backfill."
% (event_id, destination),
affected=event_id,
)
# Persist the event we just fetched, including pulling all of the state
# and auth events to de-outlier it. This also sets up the necessary
# `state_groups` for the event.
await self._process_pulled_events(
destination,
[event_from_response],
# Prevent notifications going to clients
backfilled=True,
)
return event_from_response
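A hypothetical usage sketch of the new method (the handler variable and the destination/room/event IDs are placeholders of my own, not values from this diff):

from typing import Any

async def fetch_missing_event(handler: Any) -> Any:
    # `handler` stands in for hs.get_federation_event_handler(); the returned
    # event has been persisted with full state, i.e. not as an outlier.
    return await handler.backfill_event_id(
        "matrix.org", "!room:example.org", "$missing_event_id"
    )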
     async def _get_events_and_persist(
         self, destination: str, room_id: str, event_ids: Collection[str]
     ) -> None:
@@ -1495,11 +1556,8 @@ class FederationEventHandler:
         )

     async def _check_event_auth(
-        self,
-        origin: str,
-        event: EventBase,
-        context: EventContext,
-    ) -> EventContext:
+        self, origin: str, event: EventBase, context: EventContext
+    ) -> None:
         """
         Checks whether an event should be rejected (for failing auth checks).

@@ -1509,9 +1567,6 @@ class FederationEventHandler:
             context:
                 The event context.

-        Returns:
-            The updated context object.
-
         Raises:
             AuthError if we were unable to find copies of the event's auth events.
                (Most other failures just cause us to set `context.rejected`.)

@@ -1526,7 +1581,7 @@ class FederationEventHandler:
             logger.warning("While validating received event %r: %s", event, e)
             # TODO: use a different rejected reason here?
             context.rejected = RejectedReason.AUTH_ERROR
-            return context
+            return

         # next, check that we have all of the event's auth events.
         #

@@ -1538,6 +1593,9 @@ class FederationEventHandler:
         )

         # ... and check that the event passes auth at those auth events.
+        # https://spec.matrix.org/v1.3/server-server-api/#checks-performed-on-receipt-of-a-pdu:
+        #  4. Passes authorization rules based on the event's auth events,
+        #     otherwise it is rejected.
         try:
             await check_state_independent_auth_rules(self._store, event)
             check_state_dependent_auth_rules(event, claimed_auth_events)

@@ -1546,55 +1604,90 @@ class FederationEventHandler:
                 "While checking auth of %r against auth_events: %s", event, e
             )
             context.rejected = RejectedReason.AUTH_ERROR
-            return context
+            return

-        # now check auth against what we think the auth events *should* be.
-        event_types = event_auth.auth_types_for_event(event.room_version, event)
-        prev_state_ids = await context.get_prev_state_ids(
-            StateFilter.from_types(event_types)
-        )
+        # now check the auth rules pass against the room state before the event
+        # https://spec.matrix.org/v1.3/server-server-api/#checks-performed-on-receipt-of-a-pdu:
+        #  5. Passes authorization rules based on the state before the event,
+        #     otherwise it is rejected.
+        #
+        # ... however, if we only have partial state for the room, then there is a good
+        # chance that we'll be missing some of the state needed to auth the new event.
+        # So, we state-resolve the auth events that we are given against the state that
+        # we know about, which ensures things like bans are applied. (Note that we'll
+        # already have checked we have all the auth events, in
+        # _load_or_fetch_auth_events_for_event above)
+        if context.partial_state:
+            room_version = await self._store.get_room_version_id(event.room_id)

-        auth_events_ids = self._event_auth_handler.compute_auth_events(
-            event, prev_state_ids, for_verification=True
-        )
-        auth_events_x = await self._store.get_events(auth_events_ids)
-        calculated_auth_event_map = {
-            (e.type, e.state_key): e for e in auth_events_x.values()
-        }
+            local_state_id_map = await context.get_prev_state_ids()
+            claimed_auth_events_id_map = {
+                (ev.type, ev.state_key): ev.event_id for ev in claimed_auth_events
+            }

-        try:
-            updated_auth_events = await self._update_auth_events_for_auth(
-                event,
-                calculated_auth_event_map=calculated_auth_event_map,
+            state_for_auth_id_map = (
+                await self._state_resolution_handler.resolve_events_with_store(
+                    event.room_id,
+                    room_version,
+                    [local_state_id_map, claimed_auth_events_id_map],
+                    event_map=None,
+                    state_res_store=StateResolutionStore(self._store),
+                )
             )
-        except Exception:
-            # We don't really mind if the above fails, so lets not fail
-            # processing if it does. However, it really shouldn't fail so
-            # let's still log as an exception since we'll still want to fix
-            # any bugs.
-            logger.exception(
-                "Failed to double check auth events for %s with remote. "
-                "Ignoring failure and continuing processing of event.",
-                event.event_id,
-            )
-            updated_auth_events = None
-
-        if updated_auth_events:
-            context = await self._update_context_for_auth_events(
-                event, context, updated_auth_events
-            )
-            auth_events_for_auth = updated_auth_events
         else:
-            auth_events_for_auth = calculated_auth_event_map
+            event_types = event_auth.auth_types_for_event(event.room_version, event)
+            state_for_auth_id_map = await context.get_prev_state_ids(
+                StateFilter.from_types(event_types)
+            )
+
+        calculated_auth_event_ids = self._event_auth_handler.compute_auth_events(
+            event, state_for_auth_id_map, for_verification=True
+        )
+
+        # if those are the same, we're done here.
+        if collections.Counter(event.auth_event_ids()) == collections.Counter(
+            calculated_auth_event_ids
+        ):
+            return
+
+        # otherwise, re-run the auth checks based on what we calculated.
+        calculated_auth_events = await self._store.get_events_as_list(
+            calculated_auth_event_ids
+        )
+
+        # log the differences
+
+        claimed_auth_event_map = {(e.type, e.state_key): e for e in claimed_auth_events}
+        calculated_auth_event_map = {
+            (e.type, e.state_key): e for e in calculated_auth_events
+        }
+        logger.info(
+            "event's auth_events are different to our calculated auth_events. "
+            "Claimed but not calculated: %s. Calculated but not claimed: %s",
+            [
+                ev
+                for k, ev in claimed_auth_event_map.items()
+                if k not in calculated_auth_event_map
+                or calculated_auth_event_map[k].event_id != ev.event_id
+            ],
+            [
+                ev
+                for k, ev in calculated_auth_event_map.items()
+                if k not in claimed_auth_event_map
+                or claimed_auth_event_map[k].event_id != ev.event_id
+            ],
+        )

         try:
-            check_state_dependent_auth_rules(event, auth_events_for_auth.values())
+            check_state_dependent_auth_rules(event, calculated_auth_events)
         except AuthError as e:
-            logger.warning("Failed auth resolution for %r because %s", event, e)
+            logger.warning(
+                "While checking auth of %r against room state before the event: %s",
+                event,
+                e,
+            )
             context.rejected = RejectedReason.AUTH_ERROR
-
-        return context

     async def _maybe_kick_guest_users(self, event: EventBase) -> None:
         if event.type != EventTypes.GuestAccess:
             return
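The short-circuit above compares the claimed and calculated auth event IDs as multisets: collections.Counter equality ignores ordering but not multiplicity. A small self-contained illustration:

import collections

claimed = ["$create", "$power_levels", "$alice_join"]
calculated = ["$alice_join", "$create", "$power_levels"]
assert collections.Counter(claimed) == collections.Counter(calculated)  # same events, so done

calculated_after_ban = ["$create", "$power_levels", "$alice_ban"]
assert collections.Counter(claimed) != collections.Counter(calculated_after_ban)
# ...in which case auth is re-run against the calculated events instead.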
@@ -1618,11 +1711,21 @@ class FederationEventHandler:
         """Checks if we should soft fail the event; if so, marks the event as
         such.

+        Does nothing for events in rooms with partial state, since we may not have an
+        accurate membership event for the sender in the current state.
+
         Args:
             event
             state_ids: The state at the event if we don't have all the event's prev events
             origin: The host the event originates from.
         """
+        if await self._store.is_partial_state_room(event.room_id):
+            # We might not know the sender's membership in the current state, so don't
+            # soft fail anything. Even if we do have a membership for the sender in the
+            # current state, it may have been derived from state resolution between
+            # partial and full state and may not be accurate.
+            return
+
         extrem_ids_list = await self._store.get_latest_event_ids_in_room(event.room_id)
         extrem_ids = set(extrem_ids_list)
         prev_event_ids = set(event.prev_event_ids())
@@ -1704,93 +1807,6 @@ class FederationEventHandler:
                 soft_failed_event_counter.inc()
                 event.internal_metadata.soft_failed = True
async def _update_auth_events_for_auth(
self,
event: EventBase,
calculated_auth_event_map: StateMap[EventBase],
) -> Optional[StateMap[EventBase]]:
"""Helper for _check_event_auth. See there for docs.
Checks whether a given event has the expected auth events. If it
doesn't then we talk to the remote server to compare state to see if
we can come to a consensus (e.g. if one server missed some valid
state).
This attempts to resolve any potential divergence of state between
servers, but is not essential and so failures should not block further
processing of the event.
Args:
event:
calculated_auth_event_map:
Our calculated auth_events based on the state of the room
at the event's position in the DAG.
Returns:
updated auth event map, or None if no changes are needed.
"""
assert not event.internal_metadata.outlier
# check for events which are in the event's claimed auth_events, but not
# in our calculated event map.
event_auth_events = set(event.auth_event_ids())
different_auth = event_auth_events.difference(
e.event_id for e in calculated_auth_event_map.values()
)
if not different_auth:
return None
logger.info(
"auth_events refers to events which are not in our calculated auth "
"chain: %s",
different_auth,
)
# XXX: currently this checks for redactions but I'm not convinced that is
# necessary?
different_events = await self._store.get_events_as_list(different_auth)
# double-check they're all in the same room - we should already have checked
# this but it doesn't hurt to check again.
for d in different_events:
assert (
d.room_id == event.room_id
), f"Event {event.event_id} refers to auth_event {d.event_id} which is in a different room"
# now we state-resolve between our own idea of the auth events, and the remote's
# idea of them.
local_state = calculated_auth_event_map.values()
remote_auth_events = dict(calculated_auth_event_map)
remote_auth_events.update({(d.type, d.state_key): d for d in different_events})
remote_state = remote_auth_events.values()
room_version = await self._store.get_room_version_id(event.room_id)
new_state = await self._state_handler.resolve_events(
room_version, (local_state, remote_state), event
)
different_state = {
(d.type, d.state_key): d
for d in new_state.values()
if calculated_auth_event_map.get((d.type, d.state_key)) != d
}
if not different_state:
logger.info("State res returned no new state")
return None
logger.info(
"After state res: updating auth_events with new state %s",
different_state.values(),
)
# take a copy of calculated_auth_event_map before we modify it.
auth_events = dict(calculated_auth_event_map)
auth_events.update(different_state)
return auth_events
     async def _load_or_fetch_auth_events_for_event(
         self, destination: str, event: EventBase
     ) -> Collection[EventBase]:

@@ -1888,61 +1904,6 @@ class FederationEventHandler:
         await self._auth_and_persist_outliers(room_id, remote_auth_events)
async def _update_context_for_auth_events(
self, event: EventBase, context: EventContext, auth_events: StateMap[EventBase]
) -> EventContext:
"""Update the state_ids in an event context after auth event resolution,
storing the changes as a new state group.
Args:
event: The event we're handling the context for
context: initial event context
auth_events: Events to update in the event context.
Returns:
new event context
"""
# exclude the state key of the new event from the current_state in the context.
if event.is_state():
event_key: Optional[Tuple[str, str]] = (event.type, event.state_key)
else:
event_key = None
state_updates = {
k: a.event_id for k, a in auth_events.items() if k != event_key
}
current_state_ids = await context.get_current_state_ids()
current_state_ids = dict(current_state_ids) # type: ignore
current_state_ids.update(state_updates)
prev_state_ids = await context.get_prev_state_ids()
prev_state_ids = dict(prev_state_ids)
prev_state_ids.update({k: a.event_id for k, a in auth_events.items()})
# create a new state group as a delta from the existing one.
prev_group = context.state_group
state_group = await self._state_storage_controller.store_state_group(
event.event_id,
event.room_id,
prev_group=prev_group,
delta_ids=state_updates,
current_state_ids=current_state_ids,
)
return EventContext.with_state(
storage=self._storage_controllers,
state_group=state_group,
state_group_before_event=context.state_group_before_event,
state_delta_due_to_event=state_updates,
prev_group=prev_group,
delta_ids=state_updates,
partial_state=context.partial_state,
)
     async def _run_push_actions_and_persist_event(
         self, event: EventBase, context: EventContext, backfilled: bool = False
     ) -> None:

@@ -2093,6 +2054,10 @@ class FederationEventHandler:
             event, event_pos, max_stream_token, extra_users=extra_users
         )

+        if event.type == EventTypes.Member and event.membership == Membership.JOIN:
+            # TODO retrieve the previous state, and exclude join -> join transitions
+            self._notifier.notify_user_joined_room(event.event_id, event.room_id)
+
     def _sanity_check_event(self, ev: EventBase) -> None:
         """
         Do some early sanity checks of a received event

View file

@@ -26,7 +26,6 @@ from synapse.api.errors import (
     SynapseError,
 )
 from synapse.api.ratelimiting import Ratelimiter
-from synapse.config.emailconfig import ThreepidBehaviour
 from synapse.http import RequestTimedOutError
 from synapse.http.client import SimpleHttpClient
 from synapse.http.site import SynapseRequest

@@ -163,8 +162,7 @@ class IdentityHandler:
         sid: str,
         mxid: str,
         id_server: str,
-        id_access_token: Optional[str] = None,
-        use_v2: bool = True,
+        id_access_token: str,
     ) -> JsonDict:
         """Bind a 3PID to an identity server

@@ -174,8 +172,7 @@ class IdentityHandler:
             mxid: The MXID to bind the 3PID to
             id_server: The domain of the identity server to query
             id_access_token: The access token to authenticate to the identity
-                server with, if necessary. Required if use_v2 is true
-            use_v2: Whether to use v2 Identity Service API endpoints. Defaults to True
+                server with

         Raises:
             SynapseError: On any of the following conditions

@@ -187,24 +184,15 @@ class IdentityHandler:
         """
         logger.debug("Proxying threepid bind request for %s to %s", mxid, id_server)

-        # If an id_access_token is not supplied, force usage of v1
-        if id_access_token is None:
-            use_v2 = False
-
         if not valid_id_server_location(id_server):
             raise SynapseError(
                 400,
                 "id_server must be a valid hostname with optional port and path components",
             )

-        # Decide which API endpoint URLs to use
-        headers = {}
         bind_data = {"sid": sid, "client_secret": client_secret, "mxid": mxid}
-        if use_v2:
-            bind_url = "https://%s/_matrix/identity/v2/3pid/bind" % (id_server,)
-            headers["Authorization"] = create_id_access_token_header(id_access_token)  # type: ignore
-        else:
-            bind_url = "https://%s/_matrix/identity/api/v1/3pid/bind" % (id_server,)
+        bind_url = "https://%s/_matrix/identity/v2/3pid/bind" % (id_server,)
+        headers = {"Authorization": create_id_access_token_header(id_access_token)}

         try:
             # Use the blacklisting http client as this call is only to identity servers

@@ -223,21 +211,14 @@ class IdentityHandler:
             return data
         except HttpResponseException as e:
-            if e.code != 404 or not use_v2:
-                logger.error("3PID bind failed with Matrix error: %r", e)
-                raise e.to_synapse_error()
+            logger.error("3PID bind failed with Matrix error: %r", e)
+            raise e.to_synapse_error()
         except RequestTimedOutError:
             raise SynapseError(500, "Timed out contacting identity server")
         except CodeMessageException as e:
             data = json_decoder.decode(e.msg)  # XXX WAT?
             return data

-        logger.info("Got 404 when POSTing JSON %s, falling back to v1 URL", bind_url)
-        res = await self.bind_threepid(
-            client_secret, sid, mxid, id_server, id_access_token, use_v2=False
-        )
-        return res
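For orientation, the rough shape of the bind request after the v1 fallback removal. The Authorization value shown ("Bearer <token>") is my reading of what create_id_access_token_header produces and should be treated as an assumption; the token and IDs are placeholders:

id_server = "id.example.com"
bind_url = "https://%s/_matrix/identity/v2/3pid/bind" % (id_server,)
headers = {"Authorization": "Bearer syt_exampletoken"}  # placeholder token
bind_data = {
    "sid": "session_id",
    "client_secret": "client_secret",
    "mxid": "@alice:example.org",
}
print(bind_url, headers, bind_data)  # this payload would be POSTed as JSON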
     async def try_unbind_threepid(self, mxid: str, threepid: dict) -> bool:
         """Attempt to remove a 3PID from an identity server, or if one is not provided, all
         identity servers we're aware the binding is present on
@@ -300,8 +281,8 @@ class IdentityHandler:
                 "id_server must be a valid hostname with optional port and path components",
             )

-        url = "https://%s/_matrix/identity/api/v1/3pid/unbind" % (id_server,)
-        url_bytes = b"/_matrix/identity/api/v1/3pid/unbind"
+        url = "https://%s/_matrix/identity/v2/3pid/unbind" % (id_server,)
+        url_bytes = b"/_matrix/identity/v2/3pid/unbind"

         content = {
             "mxid": mxid,
@@ -434,48 +415,6 @@ class IdentityHandler:

         return session_id
async def requestEmailToken(
self,
id_server: str,
email: str,
client_secret: str,
send_attempt: int,
next_link: Optional[str] = None,
) -> JsonDict:
"""
Request an external server send an email on our behalf for the purposes of threepid
validation.
Args:
id_server: The identity server to proxy to
email: The email to send the message to
client_secret: The unique client_secret sends by the user
send_attempt: Which attempt this is
next_link: A link to redirect the user to once they submit the token
Returns:
The json response body from the server
"""
params = {
"email": email,
"client_secret": client_secret,
"send_attempt": send_attempt,
}
if next_link:
params["next_link"] = next_link
try:
data = await self.http_client.post_json_get_json(
id_server + "/_matrix/identity/api/v1/validate/email/requestToken",
params,
)
return data
except HttpResponseException as e:
logger.info("Proxied requestToken failed: %r", e)
raise e.to_synapse_error()
except RequestTimedOutError:
raise SynapseError(500, "Timed out contacting identity server")
     async def requestMsisdnToken(
         self,
         id_server: str,
@@ -549,18 +488,7 @@ class IdentityHandler:
         validation_session = None

         # Try to validate as email
-        if self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
-            # Remote emails will only be used if a valid identity server is provided.
-            assert (
-                self.hs.config.registration.account_threepid_delegate_email is not None
-            )
-
-            # Ask our delegated email identity server
-            validation_session = await self.threepid_from_creds(
-                self.hs.config.registration.account_threepid_delegate_email,
-                threepid_creds,
-            )
-        elif self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
+        if self.hs.config.email.can_verify_email:
             # Get a validated session matching these details
             validation_session = await self.store.get_threepid_validation_session(
                 "email", client_secret, sid=sid, validated=True

View file

@@ -463,6 +463,7 @@ class EventCreationHandler:
         )
         self._events_shard_config = self.config.worker.events_shard_config
         self._instance_name = hs.get_instance_name()
+        self._notifier = hs.get_notifier()

         self.room_prejoin_state_types = self.hs.config.api.room_prejoin_state

@@ -1452,7 +1453,12 @@ class EventCreationHandler:
             if state_entry.state_group in self._external_cache_joined_hosts_updates:
                 return

-            joined_hosts = await self.store.get_joined_hosts(event.room_id, state_entry)
+            state = await state_entry.get_state(
+                self._storage_controllers.state, StateFilter.all()
+            )
+            joined_hosts = await self.store.get_joined_hosts(
+                event.room_id, state, state_entry
+            )

             # Note that the expiry times must be larger than the expiry time in
             # _external_cache_joined_hosts_updates.

@@ -1554,6 +1560,16 @@ class EventCreationHandler:
                     requester, is_admin_redaction=is_admin_redaction
                 )

+        if event.type == EventTypes.Member and event.membership == Membership.JOIN:
+            (
+                current_membership,
+                _,
+            ) = await self.store.get_local_current_membership_for_user_in_room(
+                event.state_key, event.room_id
+            )
+            if current_membership != Membership.JOIN:
+                self._notifier.notify_user_joined_room(event.event_id, event.room_id)
+
         await self._maybe_kick_guest_users(event, context)

         validation_override = event.sender in self.config.meow.validation_override

@@ -1861,13 +1877,8 @@ class EventCreationHandler:

         # For each room we need to find a joined member we can use to send
         # the dummy event with.
-        latest_event_ids = await self.store.get_prev_events_for_room(room_id)
-        members = await self.state.get_current_users_in_room(
-            room_id, latest_event_ids=latest_event_ids
-        )
+        members = await self.store.get_local_users_in_room(room_id)
         for user_id in members:
-            if not self.hs.is_mine_id(user_id):
-                continue
             requester = create_requester(user_id, authenticated_entity=self.server_name)
             try:
                 event, context = await self.create_event(

@@ -1878,7 +1889,6 @@ class EventCreationHandler:
                     "room_id": room_id,
                     "sender": user_id,
                 },
-                prev_event_ids=latest_event_ids,
             )
             event.internal_metadata.proactively_send = False

View file

@@ -34,7 +34,6 @@ from typing import (
     Callable,
     Collection,
     Dict,
-    FrozenSet,
     Generator,
     Iterable,
     List,

@@ -42,7 +41,6 @@ from typing import (
     Set,
     Tuple,
     Type,
-    Union,
 )

 from prometheus_client import Counter

@@ -68,7 +66,6 @@ from synapse.storage.databases.main import DataStore
 from synapse.streams import EventSource
 from synapse.types import JsonDict, StreamKeyType, UserID, get_domain_from_id
 from synapse.util.async_helpers import Linearizer
-from synapse.util.caches.descriptors import _CacheContext, cached
 from synapse.util.metrics import Measure
 from synapse.util.wheel_timer import WheelTimer
@ -1656,15 +1653,18 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
# doesn't return. C.f. #5503. # doesn't return. C.f. #5503.
return [], max_token return [], max_token
# Figure out which other users this user should receive updates for # Figure out which other users this user should explicitly receive
users_interested_in = await self._get_interested_in(user, explicit_room_id) # updates for
additional_users_interested_in = (
await self.get_presence_router().get_interested_users(user.to_string())
)
# We have a set of users that we're interested in the presence of. We want to # We have a set of users that we're interested in the presence of. We want to
# cross-reference that with the users that have actually changed their presence. # cross-reference that with the users that have actually changed their presence.
# Check whether this user should see all user updates # Check whether this user should see all user updates
if users_interested_in == PresenceRouter.ALL_USERS: if additional_users_interested_in == PresenceRouter.ALL_USERS:
# Provide presence state for all users # Provide presence state for all users
presence_updates = await self._filter_all_presence_updates_for_user( presence_updates = await self._filter_all_presence_updates_for_user(
user_id, include_offline, from_key user_id, include_offline, from_key
@ -1673,34 +1673,47 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
return presence_updates, max_token return presence_updates, max_token
# Make mypy happy. users_interested_in should now be a set # Make mypy happy. users_interested_in should now be a set
assert not isinstance(users_interested_in, str) assert not isinstance(additional_users_interested_in, str)
# We always care about our own presence.
additional_users_interested_in.add(user_id)
if explicit_room_id:
user_ids = await self.store.get_users_in_room(explicit_room_id)
additional_users_interested_in.update(user_ids)
# The set of users that we're interested in and that have had a presence update. # The set of users that we're interested in and that have had a presence update.
# We'll actually pull the presence updates for these users at the end. # We'll actually pull the presence updates for these users at the end.
interested_and_updated_users: Union[Set[str], FrozenSet[str]] = set() interested_and_updated_users: Collection[str]
if from_key is not None: if from_key is not None:
# First get all users that have had a presence update # First get all users that have had a presence update
updated_users = stream_change_cache.get_all_entities_changed(from_key) updated_users = stream_change_cache.get_all_entities_changed(from_key)
# Cross-reference users we're interested in with those that have had updates. # Cross-reference users we're interested in with those that have had updates.
# Use a slightly-optimised method for processing smaller sets of updates. if updated_users is not None:
if updated_users is not None and len(updated_users) < 500: # If we have the full list of changes for presence we can
# For small deltas, it's quicker to get all changes and then # simply check which ones share a room with the user.
# cross-reference with the users we're interested in
get_updates_counter.labels("stream").inc() get_updates_counter.labels("stream").inc()
for other_user_id in updated_users:
if other_user_id in users_interested_in: sharing_users = await self.store.do_users_share_a_room(
# mypy thinks this variable could be a FrozenSet as it's possibly set user_id, updated_users
# to one in the `get_entities_changed` call below, and `add()` is not )
# method on a FrozenSet. That doesn't affect us here though, as
# `interested_and_updated_users` is clearly a set() above. interested_and_updated_users = (
interested_and_updated_users.add(other_user_id) # type: ignore sharing_users.union(additional_users_interested_in)
).intersection(updated_users)
else: else:
# Too many possible updates. Find all users we can see and check # Too many possible updates. Find all users we can see and check
# if any of them have changed. # if any of them have changed.
get_updates_counter.labels("full").inc() get_updates_counter.labels("full").inc()
users_interested_in = (
await self.store.get_users_who_share_room_with_user(user_id)
)
users_interested_in.update(additional_users_interested_in)
interested_and_updated_users = ( interested_and_updated_users = (
stream_change_cache.get_entities_changed( stream_change_cache.get_entities_changed(
users_interested_in, from_key users_interested_in, from_key
@ -1709,7 +1722,10 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
else: else:
# No from_key has been specified. Return the presence for all users # No from_key has been specified. Return the presence for all users
# this user is interested in # this user is interested in
interested_and_updated_users = users_interested_in interested_and_updated_users = (
await self.store.get_users_who_share_room_with_user(user_id)
)
interested_and_updated_users.update(additional_users_interested_in)
# Retrieve the current presence state for each user # Retrieve the current presence state for each user
users_to_state = await self.get_presence_handler().current_state_for_users( users_to_state = await self.get_presence_handler().current_state_for_users(
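For reference, the set algebra in the hunk above reduces to a few lines. The sketch below is illustrative only: plain sets stand in for the store and the stream-change cache, and all names are ours rather than Synapse's.

    from typing import Set

    def presence_users_to_fetch(
        additional_users_interested_in: Set[str],
        sharing_users: Set[str],
        updated_users: Set[str],
    ) -> Set[str]:
        # Everyone we care about (room-sharers plus explicit additions such as
        # ourselves, explicit_room_id members and presence-router users),
        # narrowed to the users that actually changed since from_key.
        return (sharing_users | additional_users_interested_in) & updated_users

    assert presence_users_to_fetch(
        {"@me:hs"}, {"@friend:hs", "@quiet:hs"}, {"@friend:hs", "@stranger:hs"}
    ) == {"@friend:hs"}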
@ -1804,62 +1820,6 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
def get_current_key(self) -> int: def get_current_key(self) -> int:
return self.store.get_current_presence_token() return self.store.get_current_presence_token()
@cached(num_args=2, cache_context=True)
async def _get_interested_in(
self,
user: UserID,
explicit_room_id: Optional[str] = None,
cache_context: Optional[_CacheContext] = None,
) -> Union[Set[str], str]:
"""Returns the set of users that the given user should see presence
updates for.
Args:
user: The user to retrieve presence updates for.
explicit_room_id: The users that are in the room will be returned.
Returns:
A set of user IDs to return presence updates for, or "ALL" to return all
known updates.
"""
user_id = user.to_string()
users_interested_in = set()
users_interested_in.add(user_id) # So that we receive our own presence
# cache_context isn't likely to ever be None due to the @cached decorator,
# but we can't have a non-optional argument after the optional argument
# explicit_room_id either. Assert cache_context is not None so we can use it
# without mypy complaining.
assert cache_context
# Check with the presence router whether we should poll additional users for
# their presence information
additional_users = await self.get_presence_router().get_interested_users(
user.to_string()
)
if additional_users == PresenceRouter.ALL_USERS:
# If the module requested that this user see the presence updates of *all*
# users, then simply return that instead of calculating what rooms this
# user shares
return PresenceRouter.ALL_USERS
# Add the additional users from the router
users_interested_in.update(additional_users)
# Find the users who share a room with this user
users_who_share_room = await self.store.get_users_who_share_room_with_user(
user_id, on_invalidate=cache_context.invalidate
)
users_interested_in.update(users_who_share_room)
if explicit_room_id:
user_ids = await self.store.get_users_in_room(
explicit_room_id, on_invalidate=cache_context.invalidate
)
users_interested_in.update(user_ids)
return users_interested_in
def handle_timeouts( def handle_timeouts(
user_states: List[UserPresenceState], user_states: List[UserPresenceState],

View file

@ -901,7 +901,11 @@ class RoomCreationHandler:
# override any attempt to set room versions via the creation_content # override any attempt to set room versions via the creation_content
creation_content["room_version"] = room_version.identifier creation_content["room_version"] = room_version.identifier
last_stream_id = await self._send_events_for_new_room( (
last_stream_id,
last_sent_event_id,
depth,
) = await self._send_events_for_new_room(
requester, requester,
room_id, room_id,
preset_config=preset_config, preset_config=preset_config,
@ -917,7 +921,7 @@ class RoomCreationHandler:
if "name" in config: if "name" in config:
name = config["name"] name = config["name"]
( (
_, name_event,
last_stream_id, last_stream_id,
) = await self.event_creation_handler.create_and_send_nonmember_event( ) = await self.event_creation_handler.create_and_send_nonmember_event(
requester, requester,
@ -929,12 +933,16 @@ class RoomCreationHandler:
"content": {"name": name}, "content": {"name": name},
}, },
ratelimit=False, ratelimit=False,
prev_event_ids=[last_sent_event_id],
depth=depth,
) )
last_sent_event_id = name_event.event_id
depth += 1
if "topic" in config: if "topic" in config:
topic = config["topic"] topic = config["topic"]
( (
_, topic_event,
last_stream_id, last_stream_id,
) = await self.event_creation_handler.create_and_send_nonmember_event( ) = await self.event_creation_handler.create_and_send_nonmember_event(
requester, requester,
@ -946,7 +954,11 @@ class RoomCreationHandler:
"content": {"topic": topic}, "content": {"topic": topic},
}, },
ratelimit=False, ratelimit=False,
prev_event_ids=[last_sent_event_id],
depth=depth,
) )
last_sent_event_id = topic_event.event_id
depth += 1
# we avoid dropping the lock between invites, as otherwise joins can # we avoid dropping the lock between invites, as otherwise joins can
# start coming in and making the createRoom slow. # start coming in and making the createRoom slow.
@ -961,7 +973,7 @@ class RoomCreationHandler:
for invitee in invite_list: for invitee in invite_list:
( (
_, member_event_id,
last_stream_id, last_stream_id,
) = await self.room_member_handler.update_membership_locked( ) = await self.room_member_handler.update_membership_locked(
requester, requester,
@ -971,7 +983,11 @@ class RoomCreationHandler:
ratelimit=False, ratelimit=False,
content=content, content=content,
new_room=True, new_room=True,
prev_event_ids=[last_sent_event_id],
depth=depth,
) )
last_sent_event_id = member_event_id
depth += 1
for invite_3pid in invite_3pid_list: for invite_3pid in invite_3pid_list:
id_server = invite_3pid["id_server"] id_server = invite_3pid["id_server"]
@ -980,7 +996,10 @@ class RoomCreationHandler:
medium = invite_3pid["medium"] medium = invite_3pid["medium"]
# Note that do_3pid_invite can raise a ShadowBanError, but this was # Note that do_3pid_invite can raise a ShadowBanError, but this was
# handled above by emptying invite_3pid_list. # handled above by emptying invite_3pid_list.
last_stream_id = await self.hs.get_room_member_handler().do_3pid_invite( (
member_event_id,
last_stream_id,
) = await self.hs.get_room_member_handler().do_3pid_invite(
room_id, room_id,
requester.user, requester.user,
medium, medium,
@ -989,7 +1008,11 @@ class RoomCreationHandler:
requester, requester,
txn_id=None, txn_id=None,
id_access_token=id_access_token, id_access_token=id_access_token,
prev_event_ids=[last_sent_event_id],
depth=depth,
) )
last_sent_event_id = member_event_id
depth += 1
result = {"room_id": room_id} result = {"room_id": room_id}
@ -1017,20 +1040,22 @@ class RoomCreationHandler:
power_level_content_override: Optional[JsonDict] = None, power_level_content_override: Optional[JsonDict] = None,
creator_join_profile: Optional[JsonDict] = None, creator_join_profile: Optional[JsonDict] = None,
ratelimit: bool = True, ratelimit: bool = True,
) -> int: ) -> Tuple[int, str, int]:
"""Sends the initial events into a new room. """Sends the initial events into a new room.
`power_level_content_override` doesn't apply when initial state has `power_level_content_override` doesn't apply when initial state has
power level state event content. power level state event content.
Returns: Returns:
The stream_id of the last event persisted. A tuple containing the stream ID, event ID and depth of the last
event sent to the room.
""" """
creator_id = creator.user.to_string() creator_id = creator.user.to_string()
event_keys = {"room_id": room_id, "sender": creator_id, "state_key": ""} event_keys = {"room_id": room_id, "sender": creator_id, "state_key": ""}
depth = 1
last_sent_event_id: Optional[str] = None last_sent_event_id: Optional[str] = None
def create(etype: str, content: JsonDict, **kwargs: Any) -> JsonDict: def create(etype: str, content: JsonDict, **kwargs: Any) -> JsonDict:
@ -1043,6 +1068,7 @@ class RoomCreationHandler:
async def send(etype: str, content: JsonDict, **kwargs: Any) -> int: async def send(etype: str, content: JsonDict, **kwargs: Any) -> int:
nonlocal last_sent_event_id nonlocal last_sent_event_id
nonlocal depth
event = create(etype, content, **kwargs) event = create(etype, content, **kwargs)
logger.debug("Sending %s in new room", etype) logger.debug("Sending %s in new room", etype)
@ -1059,9 +1085,11 @@ class RoomCreationHandler:
# Note: we don't pass state_event_ids here because this triggers # Note: we don't pass state_event_ids here because this triggers
# an additional query per event to look them up from the events table. # an additional query per event to look them up from the events table.
prev_event_ids=[last_sent_event_id] if last_sent_event_id else [], prev_event_ids=[last_sent_event_id] if last_sent_event_id else [],
depth=depth,
) )
last_sent_event_id = sent_event.event_id last_sent_event_id = sent_event.event_id
depth += 1
return last_stream_id return last_stream_id
@ -1087,6 +1115,7 @@ class RoomCreationHandler:
content=creator_join_profile, content=creator_join_profile,
new_room=True, new_room=True,
prev_event_ids=[last_sent_event_id], prev_event_ids=[last_sent_event_id],
depth=depth,
) )
last_sent_event_id = member_event_id last_sent_event_id = member_event_id
@ -1180,7 +1209,7 @@ class RoomCreationHandler:
content={"algorithm": RoomEncryptionAlgorithms.DEFAULT}, content={"algorithm": RoomEncryptionAlgorithms.DEFAULT},
) )
return last_sent_stream_id return last_sent_stream_id, last_sent_event_id, depth
def _generate_room_id(self) -> str: def _generate_room_id(self) -> str:
"""Generates a random room ID. """Generates a random room ID.
@ -1367,6 +1396,7 @@ class TimestampLookupHandler:
self.store = hs.get_datastores().main self.store = hs.get_datastores().main
self.state_handler = hs.get_state_handler() self.state_handler = hs.get_state_handler()
self.federation_client = hs.get_federation_client() self.federation_client = hs.get_federation_client()
self.federation_event_handler = hs.get_federation_event_handler()
self._storage_controllers = hs.get_storage_controllers() self._storage_controllers = hs.get_storage_controllers()
async def get_event_for_timestamp( async def get_event_for_timestamp(
@ -1462,38 +1492,68 @@ class TimestampLookupHandler:
remote_response, remote_response,
) )
# TODO: Do we want to persist this as an extremity?
# TODO: I think ideally, we would try to backfill from
# this event and run this whole
# `get_event_for_timestamp` function again to make sure
# they didn't give us an event from their gappy history.
remote_event_id = remote_response.event_id remote_event_id = remote_response.event_id
origin_server_ts = remote_response.origin_server_ts remote_origin_server_ts = remote_response.origin_server_ts
# Backfill this event so we can get a pagination token for
# it with `/context` and paginate `/messages` from this
# point.
#
# TODO: The requested timestamp may lie in a part of the
# event graph that the remote server *also* didn't have,
# in which case they will have returned another event
# which may be nowhere near the requested timestamp. In
# the future, we may need to reconcile that gap and ask
# other homeservers, and/or extend `/timestamp_to_event`
# to return events on *both* sides of the timestamp to
# help reconcile the gap faster.
remote_event = (
await self.federation_event_handler.backfill_event_id(
domain, room_id, remote_event_id
)
)
# XXX: When we see that the remote server is not trustworthy,
# maybe we should not ask them first in the future.
if remote_origin_server_ts != remote_event.origin_server_ts:
logger.info(
"get_event_for_timestamp: Remote server (%s) claimed that remote_event_id=%s occured at remote_origin_server_ts=%s but that isn't true (actually occured at %s). Their claims are dubious and we should consider not trusting them.",
domain,
remote_event_id,
remote_origin_server_ts,
remote_event.origin_server_ts,
)
# Only return the remote event if it's closer than the local event # Only return the remote event if it's closer than the local event
if not local_event or ( if not local_event or (
abs(origin_server_ts - timestamp) abs(remote_event.origin_server_ts - timestamp)
< abs(local_event.origin_server_ts - timestamp) < abs(local_event.origin_server_ts - timestamp)
): ):
return remote_event_id, origin_server_ts logger.info(
"get_event_for_timestamp: returning remote_event_id=%s (%s) since it's closer to timestamp=%s than local_event=%s (%s)",
remote_event_id,
remote_event.origin_server_ts,
timestamp,
local_event.event_id if local_event else None,
local_event.origin_server_ts if local_event else None,
)
return remote_event_id, remote_origin_server_ts
except (HttpResponseException, InvalidResponseError) as ex: except (HttpResponseException, InvalidResponseError) as ex:
# Let's not put a high priority on some other homeserver # Let's not put a high priority on some other homeserver
# failing to respond or giving a random response # failing to respond or giving a random response
logger.debug( logger.debug(
"Failed to fetch /timestamp_to_event from %s because of exception(%s) %s args=%s", "get_event_for_timestamp: Failed to fetch /timestamp_to_event from %s because of exception(%s) %s args=%s",
domain, domain,
type(ex).__name__, type(ex).__name__,
ex, ex,
ex.args, ex.args,
) )
except Exception as ex: except Exception:
# But we do want to see some exceptions in our code # But we do want to see some exceptions in our code
logger.warning( logger.warning(
"Failed to fetch /timestamp_to_event from %s because of exception(%s) %s args=%s", "get_event_for_timestamp: Failed to fetch /timestamp_to_event from %s because of exception",
domain, domain,
type(ex).__name__, exc_info=True,
ex,
ex.args,
) )
# To appease mypy, we have to add both of these conditions to check for # To appease mypy, we have to add both of these conditions to check for
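The comparison logic above boils down to "the candidate nearest the requested timestamp wins, with the remote event only displacing the local one when it is strictly closer". A toy version, assuming (event_id, origin_server_ts) pairs:

    from typing import Optional, Tuple

    Candidate = Tuple[str, int]  # (event_id, origin_server_ts)

    def pick_closer(
        timestamp: int, local: Optional[Candidate], remote: Optional[Candidate]
    ) -> Optional[Candidate]:
        if remote is None:
            return local
        if local is None or abs(remote[1] - timestamp) < abs(local[1] - timestamp):
            return remote
        return local

    assert pick_closer(1000, ("$local", 1500), ("$remote", 1100)) == ("$remote", 1100)
    assert pick_closer(1000, ("$local", 1050), ("$remote", 1100)) == ("$local", 1050)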

View file

@ -94,12 +94,29 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
rate_hz=hs.config.ratelimiting.rc_joins_local.per_second, rate_hz=hs.config.ratelimiting.rc_joins_local.per_second,
burst_count=hs.config.ratelimiting.rc_joins_local.burst_count, burst_count=hs.config.ratelimiting.rc_joins_local.burst_count,
) )
# Tracks joins from local users to rooms this server isn't a member of.
# I.e. joins this server makes by requesting /make_join /send_join from
# another server.
self._join_rate_limiter_remote = Ratelimiter( self._join_rate_limiter_remote = Ratelimiter(
store=self.store, store=self.store,
clock=self.clock, clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_joins_remote.per_second, rate_hz=hs.config.ratelimiting.rc_joins_remote.per_second,
burst_count=hs.config.ratelimiting.rc_joins_remote.burst_count, burst_count=hs.config.ratelimiting.rc_joins_remote.burst_count,
) )
# TODO: find a better place to keep this Ratelimiter.
# It needs to be
# - written to by event persistence code
# - written to by something which can snoop on replication streams
# - read by the RoomMemberHandler to rate limit joins from local users
# - read by the FederationServer to rate limit make_joins and send_joins from
# other homeservers
# I wonder if a homeserver-wide collection of rate limiters might be cleaner?
self._join_rate_per_room_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_joins_per_room.per_second,
burst_count=hs.config.ratelimiting.rc_joins_per_room.burst_count,
)
# Ratelimiter for invites, keyed by room (across all issuers, all # Ratelimiter for invites, keyed by room (across all issuers, all
# recipients). # recipients).
@ -136,6 +153,18 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
) )
self.request_ratelimiter = hs.get_request_ratelimiter() self.request_ratelimiter = hs.get_request_ratelimiter()
hs.get_notifier().add_new_join_in_room_callback(self._on_user_joined_room)
def _on_user_joined_room(self, event_id: str, room_id: str) -> None:
"""Notify the rate limiter that a room join has occurred.
Use this to inform the RoomMemberHandler about joins that have either
- taken place on another homeserver, or
- been actioned on another worker in this homeserver.
Joins actioned by this worker should use the usual `ratelimit` method, which
checks the limit and increments the counter in one go.
"""
self._join_rate_per_room_limiter.record_action(requester=None, key=room_id)
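The distinction drawn in the docstring — record_action() only increments, while ratelimit() checks and (optionally) increments in one go — can be illustrated with a cut-down per-key leaky bucket. This is not Synapse's Ratelimiter API, just a sketch of the two operations:

    import time
    from typing import Dict, Tuple

    class TinyPerKeyLimiter:
        """Leaky-bucket per key; illustrative only, not Synapse's Ratelimiter."""

        def __init__(self, rate_hz: float, burst_count: int) -> None:
            self.rate_hz = rate_hz
            self.burst_count = burst_count
            self._buckets: Dict[str, Tuple[float, float]] = {}

        def _tokens(self, key: str) -> float:
            tokens, last = self._buckets.get(key, (0.0, time.monotonic()))
            return max(0.0, tokens - (time.monotonic() - last) * self.rate_hz)

        def record_action(self, key: str) -> None:
            # Increment only: joins performed elsewhere (another worker or a
            # remote homeserver) still count against the room's bucket.
            self._buckets[key] = (self._tokens(key) + 1, time.monotonic())

        def ratelimit(self, key: str, update: bool = True) -> None:
            # Check, and optionally increment, in one call.
            tokens = self._tokens(key)
            if tokens + 1 > self.burst_count:
                raise RuntimeError(f"join rate limit exceeded in {key}")
            if update:
                self._buckets[key] = (tokens + 1, time.monotonic())

    limiter = TinyPerKeyLimiter(rate_hz=0.01, burst_count=2)
    limiter.ratelimit("!room:hs", update=False)  # pre-flight check, as in the diff
    limiter.record_action("!room:hs")            # the persisted join is then recorded
    limiter.record_action("!room:hs")
    # a further limiter.ratelimit("!room:hs", update=False) would now raise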
@abc.abstractmethod @abc.abstractmethod
async def _remote_join( async def _remote_join(
@ -285,6 +314,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
allow_no_prev_events: bool = False, allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None, prev_event_ids: Optional[List[str]] = None,
state_event_ids: Optional[List[str]] = None, state_event_ids: Optional[List[str]] = None,
depth: Optional[int] = None,
txn_id: Optional[str] = None, txn_id: Optional[str] = None,
ratelimit: bool = True, ratelimit: bool = True,
content: Optional[dict] = None, content: Optional[dict] = None,
@ -315,6 +345,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
prev_events are set so we need to set them ourself via this argument. prev_events are set so we need to set them ourself via this argument.
This should normally be left as None, which will cause the auth_event_ids This should normally be left as None, which will cause the auth_event_ids
to be calculated based on the room state at the prev_events. to be calculated based on the room state at the prev_events.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
txn_id: txn_id:
ratelimit: ratelimit:
@ -370,6 +403,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
allow_no_prev_events=allow_no_prev_events, allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids, prev_event_ids=prev_event_ids,
state_event_ids=state_event_ids, state_event_ids=state_event_ids,
depth=depth,
require_consent=require_consent, require_consent=require_consent,
outlier=outlier, outlier=outlier,
historical=historical, historical=historical,
@ -391,6 +425,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
# up blocking profile updates. # up blocking profile updates.
if newly_joined and ratelimit: if newly_joined and ratelimit:
await self._join_rate_limiter_local.ratelimit(requester) await self._join_rate_limiter_local.ratelimit(requester)
await self._join_rate_per_room_limiter.ratelimit(
requester, key=room_id, update=False
)
result_event = await self.event_creation_handler.handle_new_client_event( result_event = await self.event_creation_handler.handle_new_client_event(
requester, requester,
@ -466,6 +503,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
allow_no_prev_events: bool = False, allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None, prev_event_ids: Optional[List[str]] = None,
state_event_ids: Optional[List[str]] = None, state_event_ids: Optional[List[str]] = None,
depth: Optional[int] = None,
) -> Tuple[str, int]: ) -> Tuple[str, int]:
"""Update a user's membership in a room. """Update a user's membership in a room.
@ -501,6 +539,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
prev_events are set so we need to set them ourself via this argument. prev_events are set so we need to set them ourself via this argument.
This should normally be left as None, which will cause the auth_event_ids This should normally be left as None, which will cause the auth_event_ids
to be calculated based on the room state at the prev_events. to be calculated based on the room state at the prev_events.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
Returns: Returns:
A tuple of the new event ID and stream ID. A tuple of the new event ID and stream ID.
@ -540,6 +581,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
allow_no_prev_events=allow_no_prev_events, allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids, prev_event_ids=prev_event_ids,
state_event_ids=state_event_ids, state_event_ids=state_event_ids,
depth=depth,
) )
return result return result
@ -562,6 +604,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
allow_no_prev_events: bool = False, allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None, prev_event_ids: Optional[List[str]] = None,
state_event_ids: Optional[List[str]] = None, state_event_ids: Optional[List[str]] = None,
depth: Optional[int] = None,
) -> Tuple[str, int]: ) -> Tuple[str, int]:
"""Helper for update_membership. """Helper for update_membership.
@ -599,6 +642,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
prev_events are set so we need to set them ourself via this argument. prev_events are set so we need to set them ourself via this argument.
This should normally be left as None, which will cause the auth_event_ids This should normally be left as None, which will cause the auth_event_ids
to be calculated based on the room state at the prev_events. to be calculated based on the room state at the prev_events.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
Returns: Returns:
A tuple of the new event ID and stream ID. A tuple of the new event ID and stream ID.
@ -732,6 +778,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
allow_no_prev_events=allow_no_prev_events, allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids, prev_event_ids=prev_event_ids,
state_event_ids=state_event_ids, state_event_ids=state_event_ids,
depth=depth,
content=content, content=content,
require_consent=require_consent, require_consent=require_consent,
outlier=outlier, outlier=outlier,
@ -740,14 +787,14 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
latest_event_ids = await self.store.get_prev_events_for_room(room_id) latest_event_ids = await self.store.get_prev_events_for_room(room_id)
current_state_ids = await self.state_handler.get_current_state_ids( state_before_join = await self.state_handler.compute_state_after_events(
room_id, latest_event_ids=latest_event_ids room_id, latest_event_ids
) )
# TODO: Refactor into dictionary of explicitly allowed transitions # TODO: Refactor into dictionary of explicitly allowed transitions
# between old and new state, with specific error messages for some # between old and new state, with specific error messages for some
# transitions and generic otherwise # transitions and generic otherwise
old_state_id = current_state_ids.get((EventTypes.Member, target.to_string())) old_state_id = state_before_join.get((EventTypes.Member, target.to_string()))
if old_state_id: if old_state_id:
old_state = await self.store.get_event(old_state_id, allow_none=True) old_state = await self.store.get_event(old_state_id, allow_none=True)
old_membership = old_state.content.get("membership") if old_state else None old_membership = old_state.content.get("membership") if old_state else None
@ -798,11 +845,11 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
if action == "kick": if action == "kick":
raise AuthError(403, "The target user is not in the room") raise AuthError(403, "The target user is not in the room")
is_host_in_room = await self._is_host_in_room(current_state_ids) is_host_in_room = await self._is_host_in_room(state_before_join)
if effective_membership_state == Membership.JOIN: if effective_membership_state == Membership.JOIN:
if requester.is_guest: if requester.is_guest:
guest_can_join = await self._can_guest_join(current_state_ids) guest_can_join = await self._can_guest_join(state_before_join)
if not guest_can_join: if not guest_can_join:
# This should be an auth check, but guests are a local concept, # This should be an auth check, but guests are a local concept,
# so don't really fit into the general auth process. # so don't really fit into the general auth process.
@ -840,13 +887,23 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
# Check if a remote join should be performed. # Check if a remote join should be performed.
remote_join, remote_room_hosts = await self._should_perform_remote_join( remote_join, remote_room_hosts = await self._should_perform_remote_join(
target.to_string(), room_id, remote_room_hosts, content, is_host_in_room target.to_string(),
room_id,
remote_room_hosts,
content,
is_host_in_room,
state_before_join,
) )
if remote_join: if remote_join:
if ratelimit: if ratelimit:
await self._join_rate_limiter_remote.ratelimit( await self._join_rate_limiter_remote.ratelimit(
requester, requester,
) )
await self._join_rate_per_room_limiter.ratelimit(
requester,
key=room_id,
update=False,
)
inviter = await self._get_inviter(target.to_string(), room_id) inviter = await self._get_inviter(target.to_string(), room_id)
if inviter and not self.hs.is_mine(inviter): if inviter and not self.hs.is_mine(inviter):
@ -967,6 +1024,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
ratelimit=ratelimit, ratelimit=ratelimit,
prev_event_ids=latest_event_ids, prev_event_ids=latest_event_ids,
state_event_ids=state_event_ids, state_event_ids=state_event_ids,
depth=depth,
content=content, content=content,
require_consent=require_consent, require_consent=require_consent,
outlier=outlier, outlier=outlier,
@ -979,6 +1037,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
remote_room_hosts: List[str], remote_room_hosts: List[str],
content: JsonDict, content: JsonDict,
is_host_in_room: bool, is_host_in_room: bool,
state_before_join: StateMap[str],
) -> Tuple[bool, List[str]]: ) -> Tuple[bool, List[str]]:
""" """
Check whether the server should do a remote join (as opposed to a local Check whether the server should do a remote join (as opposed to a local
@ -998,6 +1057,8 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
content: The content to use as the event body of the join. This may content: The content to use as the event body of the join. This may
be modified. be modified.
is_host_in_room: True if the host is in the room. is_host_in_room: True if the host is in the room.
state_before_join: The state before the join event (i.e. the resolution of
the states after its parent events).
Returns: Returns:
A tuple of: A tuple of:
@ -1014,20 +1075,17 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
# If the host is in the room, but not one of the authorised hosts # If the host is in the room, but not one of the authorised hosts
# for restricted join rules, a remote join must be used. # for restricted join rules, a remote join must be used.
room_version = await self.store.get_room_version(room_id) room_version = await self.store.get_room_version(room_id)
current_state_ids = await self._storage_controllers.state.get_current_state_ids(
room_id
)
# If restricted join rules are not being used, a local join can always # If restricted join rules are not being used, a local join can always
# be used. # be used.
if not await self.event_auth_handler.has_restricted_join_rules( if not await self.event_auth_handler.has_restricted_join_rules(
current_state_ids, room_version state_before_join, room_version
): ):
return False, [] return False, []
# If the user is invited to the room or already joined, the join # If the user is invited to the room or already joined, the join
# event can always be issued locally. # event can always be issued locally.
prev_member_event_id = current_state_ids.get((EventTypes.Member, user_id), None) prev_member_event_id = state_before_join.get((EventTypes.Member, user_id), None)
prev_member_event = None prev_member_event = None
if prev_member_event_id: if prev_member_event_id:
prev_member_event = await self.store.get_event(prev_member_event_id) prev_member_event = await self.store.get_event(prev_member_event_id)
@ -1042,10 +1100,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
# #
# If not, generate a new list of remote hosts based on which # If not, generate a new list of remote hosts based on which
# can issue invites. # can issue invites.
event_map = await self.store.get_events(current_state_ids.values()) event_map = await self.store.get_events(state_before_join.values())
current_state = { current_state = {
state_key: event_map[event_id] state_key: event_map[event_id]
for state_key, event_id in current_state_ids.items() for state_key, event_id in state_before_join.items()
} }
allowed_servers = get_servers_from_users( allowed_servers = get_servers_from_users(
get_users_which_can_issue_invite(current_state) get_users_which_can_issue_invite(current_state)
@ -1059,7 +1117,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
# Ensure the member should be allowed access via membership in a room. # Ensure the member should be allowed access via membership in a room.
await self.event_auth_handler.check_restricted_join_rules( await self.event_auth_handler.check_restricted_join_rules(
current_state_ids, room_version, user_id, prev_member_event state_before_join, room_version, user_id, prev_member_event
) )
# If this is going to be a local join, additional information must # If this is going to be a local join, additional information must
@ -1069,7 +1127,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
EventContentFields.AUTHORISING_USER EventContentFields.AUTHORISING_USER
] = await self.event_auth_handler.get_user_which_could_invite( ] = await self.event_auth_handler.get_user_which_could_invite(
room_id, room_id,
current_state_ids, state_before_join,
) )
return False, [] return False, []
@ -1322,7 +1380,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
requester: Requester, requester: Requester,
txn_id: Optional[str], txn_id: Optional[str],
id_access_token: Optional[str] = None, id_access_token: Optional[str] = None,
) -> int: prev_event_ids: Optional[List[str]] = None,
depth: Optional[int] = None,
) -> Tuple[str, int]:
"""Invite a 3PID to a room. """Invite a 3PID to a room.
Args: Args:
@ -1335,9 +1395,13 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
txn_id: The transaction ID this is part of, or None if this is not txn_id: The transaction ID this is part of, or None if this is not
part of a transaction. part of a transaction.
id_access_token: The optional identity server access token. id_access_token: The optional identity server access token.
prev_event_ids: The event IDs to use as the prev events.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
Returns: Returns:
The new stream ID. A tuple of the new event ID and stream ordering position.
Raises: Raises:
ShadowBanError if the requester has been shadow-banned. ShadowBanError if the requester has been shadow-banned.
@ -1383,7 +1447,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
# We don't check the invite against the spamchecker(s) here (through # We don't check the invite against the spamchecker(s) here (through
# user_may_invite) because we'll do it further down the line anyway (in # user_may_invite) because we'll do it further down the line anyway (in
# update_membership_locked). # update_membership_locked).
_, stream_id = await self.update_membership( event_id, stream_id = await self.update_membership(
requester, UserID.from_string(invitee), room_id, "invite", txn_id=txn_id requester, UserID.from_string(invitee), room_id, "invite", txn_id=txn_id
) )
else: else:
@ -1402,7 +1466,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
additional_fields=spam_check[1], additional_fields=spam_check[1],
) )
stream_id = await self._make_and_store_3pid_invite( event, stream_id = await self._make_and_store_3pid_invite(
requester, requester,
id_server, id_server,
medium, medium,
@ -1411,9 +1475,12 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
inviter, inviter,
txn_id=txn_id, txn_id=txn_id,
id_access_token=id_access_token, id_access_token=id_access_token,
prev_event_ids=prev_event_ids,
depth=depth,
) )
event_id = event.event_id
return stream_id return event_id, stream_id
async def _make_and_store_3pid_invite( async def _make_and_store_3pid_invite(
self, self,
@ -1425,7 +1492,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
user: UserID, user: UserID,
txn_id: Optional[str], txn_id: Optional[str],
id_access_token: Optional[str] = None, id_access_token: Optional[str] = None,
) -> int: prev_event_ids: Optional[List[str]] = None,
depth: Optional[int] = None,
) -> Tuple[EventBase, int]:
room_state = await self._storage_controllers.state.get_current_state( room_state = await self._storage_controllers.state.get_current_state(
room_id, room_id,
StateFilter.from_types( StateFilter.from_types(
@ -1518,8 +1587,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
}, },
ratelimit=False, ratelimit=False,
txn_id=txn_id, txn_id=txn_id,
prev_event_ids=prev_event_ids,
depth=depth,
) )
return stream_id return event, stream_id
async def _is_host_in_room(self, current_state_ids: StateMap[str]) -> bool: async def _is_host_in_room(self, current_state_ids: StateMap[str]) -> bool:
# Have we just created the room, and is this about to be the very # Have we just created the room, and is this about to be the very

View file

@ -23,10 +23,12 @@ from pkg_resources import parse_version
import twisted import twisted
from twisted.internet.defer import Deferred from twisted.internet.defer import Deferred
from twisted.internet.interfaces import IOpenSSLContextFactory, IReactorTCP from twisted.internet.interfaces import IOpenSSLContextFactory
from twisted.internet.ssl import optionsForClientTLS
from twisted.mail.smtp import ESMTPSender, ESMTPSenderFactory from twisted.mail.smtp import ESMTPSender, ESMTPSenderFactory
from synapse.logging.context import make_deferred_yieldable from synapse.logging.context import make_deferred_yieldable
from synapse.types import ISynapseReactor
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.server import HomeServer from synapse.server import HomeServer
@ -48,7 +50,7 @@ class _NoTLSESMTPSender(ESMTPSender):
async def _sendmail( async def _sendmail(
reactor: IReactorTCP, reactor: ISynapseReactor,
smtphost: str, smtphost: str,
smtpport: int, smtpport: int,
from_addr: str, from_addr: str,
@ -59,6 +61,7 @@ async def _sendmail(
require_auth: bool = False, require_auth: bool = False,
require_tls: bool = False, require_tls: bool = False,
enable_tls: bool = True, enable_tls: bool = True,
force_tls: bool = False,
) -> None: ) -> None:
"""A simple wrapper around ESMTPSenderFactory, to allow substitution in tests """A simple wrapper around ESMTPSenderFactory, to allow substitution in tests
@ -73,8 +76,9 @@ async def _sendmail(
password: password to give when authenticating password: password to give when authenticating
require_auth: if auth is not offered, fail the request require_auth: if auth is not offered, fail the request
require_tls: if TLS is not offered, fail the request require_tls: if TLS is not offered, fail the request
enable_tls: True to enable TLS. If this is False and require_tls is True, enable_tls: True to enable STARTTLS. If this is False and require_tls is True,
the request will fail. the request will fail.
force_tls: True to enable Implicit TLS.
""" """
msg = BytesIO(msg_bytes) msg = BytesIO(msg_bytes)
d: "Deferred[object]" = Deferred() d: "Deferred[object]" = Deferred()
@ -105,13 +109,23 @@ async def _sendmail(
# set to enable TLS. # set to enable TLS.
factory = build_sender_factory(hostname=smtphost if enable_tls else None) factory = build_sender_factory(hostname=smtphost if enable_tls else None)
reactor.connectTCP( if force_tls:
smtphost, reactor.connectSSL(
smtpport, smtphost,
factory, smtpport,
timeout=30, factory,
bindAddress=None, optionsForClientTLS(smtphost),
) timeout=30,
bindAddress=None,
)
else:
reactor.connectTCP(
smtphost,
smtpport,
factory,
timeout=30,
bindAddress=None,
)
await make_deferred_yieldable(d) await make_deferred_yieldable(d)
@ -132,6 +146,7 @@ class SendEmailHandler:
self._smtp_pass = passwd.encode("utf-8") if passwd is not None else None self._smtp_pass = passwd.encode("utf-8") if passwd is not None else None
self._require_transport_security = hs.config.email.require_transport_security self._require_transport_security = hs.config.email.require_transport_security
self._enable_tls = hs.config.email.enable_smtp_tls self._enable_tls = hs.config.email.enable_smtp_tls
self._force_tls = hs.config.email.force_tls
self._sendmail = _sendmail self._sendmail = _sendmail
@ -189,4 +204,5 @@ class SendEmailHandler:
require_auth=self._smtp_user is not None, require_auth=self._smtp_user is not None,
require_tls=self._require_transport_security, require_tls=self._require_transport_security,
enable_tls=self._enable_tls, enable_tls=self._enable_tls,
force_tls=self._force_tls,
) )
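The resulting TLS behaviour has three modes, selected by the two flags. A small truth-table sketch (the mapping is our reading of the code above: force_tls takes precedence and dials implicit TLS via connectSSL, otherwise enable_tls selects STARTTLS):

    def smtp_connection_mode(enable_tls: bool, force_tls: bool) -> str:
        if force_tls:
            return "implicit-tls"  # reactor.connectSSL: TLS from the first byte
        if enable_tls:
            return "starttls"      # plain TCP, upgraded via STARTTLS if offered
        return "plaintext"         # no TLS at all

    assert smtp_connection_mode(enable_tls=True, force_tls=False) == "starttls"
    assert smtp_connection_mode(enable_tls=False, force_tls=True) == "implicit-tls"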

View file

@ -19,7 +19,6 @@ from twisted.web.client import PartialDownloadError
from synapse.api.constants import LoginType from synapse.api.constants import LoginType
from synapse.api.errors import Codes, LoginError, SynapseError from synapse.api.errors import Codes, LoginError, SynapseError
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.util import json_decoder from synapse.util import json_decoder
if TYPE_CHECKING: if TYPE_CHECKING:
@ -153,7 +152,7 @@ class _BaseThreepidAuthChecker:
logger.info("Getting validated threepid. threepidcreds: %r", (threepid_creds,)) logger.info("Getting validated threepid. threepidcreds: %r", (threepid_creds,))
# msisdns are currently always ThreepidBehaviour.REMOTE # msisdns are currently always verified via the IS
if medium == "msisdn": if medium == "msisdn":
if not self.hs.config.registration.account_threepid_delegate_msisdn: if not self.hs.config.registration.account_threepid_delegate_msisdn:
raise SynapseError( raise SynapseError(
@ -164,18 +163,7 @@ class _BaseThreepidAuthChecker:
threepid_creds, threepid_creds,
) )
elif medium == "email": elif medium == "email":
if ( if self.hs.config.email.can_verify_email:
self.hs.config.email.threepid_behaviour_email
== ThreepidBehaviour.REMOTE
):
assert self.hs.config.registration.account_threepid_delegate_email
threepid = await identity_handler.threepid_from_creds(
self.hs.config.registration.account_threepid_delegate_email,
threepid_creds,
)
elif (
self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL
):
threepid = None threepid = None
row = await self.store.get_threepid_validation_session( row = await self.store.get_threepid_validation_session(
medium, medium,
@ -227,10 +215,7 @@ class EmailIdentityAuthChecker(UserInteractiveAuthChecker, _BaseThreepidAuthChec
_BaseThreepidAuthChecker.__init__(self, hs) _BaseThreepidAuthChecker.__init__(self, hs)
def is_enabled(self) -> bool: def is_enabled(self) -> bool:
return self.hs.config.email.threepid_behaviour_email in ( return self.hs.config.email.can_verify_email
ThreepidBehaviour.REMOTE,
ThreepidBehaviour.LOCAL,
)
async def check_auth(self, authdict: dict, clientip: str) -> Any: async def check_auth(self, authdict: dict, clientip: str) -> Any:
return await self._check_threepid("email", authdict) return await self._check_threepid("email", authdict)

View file

@ -79,6 +79,7 @@ from synapse.types import JsonDict
from synapse.util import json_decoder from synapse.util import json_decoder
from synapse.util.async_helpers import AwakenableSleeper, timeout_deferred from synapse.util.async_helpers import AwakenableSleeper, timeout_deferred
from synapse.util.metrics import Measure from synapse.util.metrics import Measure
from synapse.util.stringutils import parse_and_validate_server_name
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.server import HomeServer from synapse.server import HomeServer
@ -479,6 +480,14 @@ class MatrixFederationHttpClient:
RequestSendFailed: If there were problems connecting to the RequestSendFailed: If there were problems connecting to the
remote, due to e.g. DNS failures, connection timeouts etc. remote, due to e.g. DNS failures, connection timeouts etc.
""" """
# Validate server name and log if it is an invalid destination, this is
# partially to help track down code paths where we haven't validated before here
try:
parse_and_validate_server_name(request.destination)
except ValueError:
logger.exception(f"Invalid destination: {request.destination}.")
raise FederationDeniedError(request.destination)
if timeout: if timeout:
_sec_timeout = timeout / 1000 _sec_timeout = timeout / 1000
else: else:
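The new guard rejects malformed destinations before any connection is attempted. The sketch below inlines a much-simplified validator so it runs stand-alone; Synapse's parse_and_validate_server_name is stricter (it also validates hostname characters), so treat this only as an illustration of the shape of the check:

    from typing import Optional, Tuple

    def parse_server_name(server_name: str) -> Tuple[str, Optional[int]]:
        if server_name and server_name[-1] == "]":
            return server_name, None  # IPv6 literal without a port
        host, sep, port = server_name.rpartition(":")
        if not sep:
            return server_name, None  # no port given
        if not port.isdigit():
            raise ValueError(f"Invalid server name: {server_name!r}")
        return host, int(port)

    def check_destination(destination: str) -> None:
        # stand-in for the FederationDeniedError raised in the real code
        host, _port = parse_server_name(destination)
        if not host:
            raise ValueError(f"Invalid destination: {destination!r}")

    check_destination("matrix.org:8448")  # passes silently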

View file

@ -84,14 +84,13 @@ the function becomes the operation name for the span.
return something_usual_and_useful return something_usual_and_useful
Operation names can be explicitly set for a function by passing the Operation names can be explicitly set for a function by using ``trace_with_opname``:
operation name to ``trace``
.. code-block:: python .. code-block:: python
from synapse.logging.opentracing import trace from synapse.logging.opentracing import trace_with_opname
@trace(opname="a_better_operation_name") @trace_with_opname("a_better_operation_name")
def interesting_badly_named_function(*args, **kwargs): def interesting_badly_named_function(*args, **kwargs):
# Does all kinds of cool and expected things # Does all kinds of cool and expected things
return something_usual_and_useful return something_usual_and_useful
@ -183,6 +182,8 @@ from typing import (
Type, Type,
TypeVar, TypeVar,
Union, Union,
cast,
overload,
) )
import attr import attr
@ -329,6 +330,7 @@ class _Sentinel(enum.Enum):
P = ParamSpec("P") P = ParamSpec("P")
R = TypeVar("R") R = TypeVar("R")
T = TypeVar("T")
def only_if_tracing(func: Callable[P, R]) -> Callable[P, Optional[R]]: def only_if_tracing(func: Callable[P, R]) -> Callable[P, Optional[R]]:
@ -344,22 +346,43 @@ def only_if_tracing(func: Callable[P, R]) -> Callable[P, Optional[R]]:
return _only_if_tracing_inner return _only_if_tracing_inner
def ensure_active_span(message: str, ret=None): @overload
def ensure_active_span(
message: str,
) -> Callable[[Callable[P, R]], Callable[P, Optional[R]]]:
...
@overload
def ensure_active_span(
message: str, ret: T
) -> Callable[[Callable[P, R]], Callable[P, Union[T, R]]]:
...
def ensure_active_span(
message: str, ret: Optional[T] = None
) -> Callable[[Callable[P, R]], Callable[P, Union[Optional[T], R]]]:
"""Executes the operation only if opentracing is enabled and there is an active span. """Executes the operation only if opentracing is enabled and there is an active span.
If there is no active span it logs message at the error level. If there is no active span it logs message at the error level.
Args: Args:
message: Message which fills in "There was no active span when trying to %s" message: Message which fills in "There was no active span when trying to %s"
in the error log if there is no active span and opentracing is enabled. in the error log if there is no active span and opentracing is enabled.
ret (object): return value if opentracing is None or there is no active span. ret: return value if opentracing is None or there is no active span.
Returns (object): The result of the func or ret if opentracing is disabled or there Returns:
The result of the func, falling back to ret if opentracing is disabled or there
was no active span. was no active span.
""" """
def ensure_active_span_inner_1(func): def ensure_active_span_inner_1(
func: Callable[P, R]
) -> Callable[P, Union[Optional[T], R]]:
@wraps(func) @wraps(func)
def ensure_active_span_inner_2(*args, **kwargs): def ensure_active_span_inner_2(
*args: P.args, **kwargs: P.kwargs
) -> Union[Optional[T], R]:
if not opentracing: if not opentracing:
return ret return ret
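The overloads above exist so that callers get a precise return type: the decorated function returns its own R when a span is active, and the typed fallback ret otherwise. A self-contained miniature of that behaviour (assumed semantics, with a module flag standing in for "opentracing is configured and a span is active"):

    from functools import wraps
    from typing import Any, Callable, Optional, TypeVar, Union

    T = TypeVar("T")
    R = TypeVar("R")
    TRACING_ACTIVE = False  # stand-in for an active opentracing span

    def ensure_active(
        message: str, ret: Optional[T] = None
    ) -> Callable[[Callable[..., R]], Callable[..., Union[Optional[T], R]]]:
        def decorator(func: Callable[..., R]) -> Callable[..., Union[Optional[T], R]]:
            @wraps(func)
            def wrapper(*args: Any, **kwargs: Any) -> Union[Optional[T], R]:
                if not TRACING_ACTIVE:
                    return ret  # typed fallback, e.g. {} for the span-map helper
                return func(*args, **kwargs)
            return wrapper
        return decorator

    @ensure_active("get the active span context as a dict", ret={})
    def span_map() -> dict:
        return {"trace_id": "abc"}

    assert span_map() == {}  # tracing inactive: the fallback is returned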
@ -465,7 +488,7 @@ def start_active_span(
finish_on_close: bool = True, finish_on_close: bool = True,
*, *,
tracer: Optional["opentracing.Tracer"] = None, tracer: Optional["opentracing.Tracer"] = None,
): ) -> "opentracing.Scope":
"""Starts an active opentracing span. """Starts an active opentracing span.
Records the start time for the span, and sets it as the "active span" in the Records the start time for the span, and sets it as the "active span" in the
@ -503,7 +526,7 @@ def start_active_span_follows_from(
*, *,
inherit_force_tracing: bool = False, inherit_force_tracing: bool = False,
tracer: Optional["opentracing.Tracer"] = None, tracer: Optional["opentracing.Tracer"] = None,
): ) -> "opentracing.Scope":
"""Starts an active opentracing span, with additional references to previous spans """Starts an active opentracing span, with additional references to previous spans
Args: Args:
@ -718,7 +741,9 @@ def inject_response_headers(response_headers: Headers) -> None:
response_headers.addRawHeader("Synapse-Trace-Id", f"{trace_id:x}") response_headers.addRawHeader("Synapse-Trace-Id", f"{trace_id:x}")
@ensure_active_span("get the active span context as a dict", ret={}) @ensure_active_span(
"get the active span context as a dict", ret=cast(Dict[str, str], {})
)
def get_active_span_text_map(destination: Optional[str] = None) -> Dict[str, str]: def get_active_span_text_map(destination: Optional[str] = None) -> Dict[str, str]:
""" """
Gets a span context as a dict. This can be used instead of manually Gets a span context as a dict. This can be used instead of manually
@ -798,33 +823,31 @@ def extract_text_map(carrier: Dict[str, str]) -> Optional["opentracing.SpanConte
# Tracing decorators # Tracing decorators
def trace(func=None, opname: Optional[str] = None): def trace_with_opname(opname: str) -> Callable[[Callable[P, R]], Callable[P, R]]:
""" """
Decorator to trace a function. Decorator to trace a function with a custom opname.
Sets the operation name to that of the function's or that given
as operation_name. See the module's doc string for usage See the module's doc string for usage examples.
examples.
""" """
def decorator(func): def decorator(func: Callable[P, R]) -> Callable[P, R]:
if opentracing is None: if opentracing is None:
return func # type: ignore[unreachable] return func # type: ignore[unreachable]
_opname = opname if opname else func.__name__
if inspect.iscoroutinefunction(func): if inspect.iscoroutinefunction(func):
@wraps(func) @wraps(func)
async def _trace_inner(*args, **kwargs): async def _trace_inner(*args: P.args, **kwargs: P.kwargs) -> R:
with start_active_span(_opname): with start_active_span(opname):
return await func(*args, **kwargs) return await func(*args, **kwargs) # type: ignore[misc]
else: else:
# The other case here handles both sync functions and those # The other case here handles both sync functions and those
# decorated with inlineDeferred. # decorated with inlineDeferred.
@wraps(func) @wraps(func)
def _trace_inner(*args, **kwargs): def _trace_inner(*args: P.args, **kwargs: P.kwargs) -> R:
scope = start_active_span(_opname) scope = start_active_span(opname)
scope.__enter__() scope.__enter__()
try: try:
@ -858,12 +881,21 @@ def trace(func=None, opname: Optional[str] = None):
scope.__exit__(type(e), None, e.__traceback__) scope.__exit__(type(e), None, e.__traceback__)
raise raise
return _trace_inner return _trace_inner # type: ignore[return-value]
if func: return decorator
return decorator(func)
else:
return decorator def trace(func: Callable[P, R]) -> Callable[P, R]:
"""
Decorator to trace a function.
Sets the operation name to that of the function's name.
See the module's doc string for usage examples.
"""
return trace_with_opname(func.__name__)(func)
def tag_args(func: Callable[P, R]) -> Callable[P, R]: def tag_args(func: Callable[P, R]) -> Callable[P, R]:
@ -878,9 +910,9 @@ def tag_args(func: Callable[P, R]) -> Callable[P, R]:
def _tag_args_inner(*args: P.args, **kwargs: P.kwargs) -> R: def _tag_args_inner(*args: P.args, **kwargs: P.kwargs) -> R:
argspec = inspect.getfullargspec(func) argspec = inspect.getfullargspec(func)
for i, arg in enumerate(argspec.args[1:]): for i, arg in enumerate(argspec.args[1:]):
set_tag("ARG_" + arg, args[i]) # type: ignore[index] set_tag("ARG_" + arg, str(args[i])) # type: ignore[index]
set_tag("args", args[len(argspec.args) :]) # type: ignore[index] set_tag("args", str(args[len(argspec.args) :])) # type: ignore[index]
set_tag("kwargs", kwargs) set_tag("kwargs", str(kwargs))
return func(*args, **kwargs) return func(*args, **kwargs)
return _tag_args_inner return _tag_args_inner
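For contrast, usage of the two decorators after this refactor (this assumes a Synapse checkout on the path; when opentracing is not installed both decorators degrade to no-ops, so the snippet still runs):

    from synapse.logging.opentracing import trace, trace_with_opname

    @trace
    def compute_summary(room_id: str) -> str:
        # span operation name: "compute_summary"
        return f"summary of {room_id}"

    @trace_with_opname("a_better_operation_name")
    def interesting_badly_named_function(room_id: str) -> str:
        # span operation name: "a_better_operation_name"
        return f"summary of {room_id}"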

View file

@ -235,7 +235,7 @@ def run_as_background_process(
f"bgproc.{desc}", tags={SynapseTags.REQUEST_ID: str(context)} f"bgproc.{desc}", tags={SynapseTags.REQUEST_ID: str(context)}
) )
else: else:
ctx = nullcontext() ctx = nullcontext() # type: ignore[assignment]
with ctx: with ctx:
return await func(*args, **kwargs) return await func(*args, **kwargs)
except Exception: except Exception:

View file

@ -228,6 +228,7 @@ class Notifier:
# Called when there are new things to stream over replication # Called when there are new things to stream over replication
self.replication_callbacks: List[Callable[[], None]] = [] self.replication_callbacks: List[Callable[[], None]] = []
self._new_join_in_room_callbacks: List[Callable[[str, str], None]] = []
self._federation_client = hs.get_federation_http_client() self._federation_client = hs.get_federation_http_client()
@ -280,6 +281,19 @@ class Notifier:
""" """
self.replication_callbacks.append(cb) self.replication_callbacks.append(cb)
def add_new_join_in_room_callback(self, cb: Callable[[str, str], None]) -> None:
"""Add a callback that will be called when a user joins a room.
This only fires on genuine membership changes, e.g. "invite" -> "join".
Membership transitions like "join" -> "join" (for e.g. displayname changes) do
not trigger the callback.
When called, the callback receives two arguments: the event ID and the room ID.
It should *not* return a Deferred - if it needs to do any asynchronous work, a
background thread should be started and wrapped with run_as_background_process.
"""
self._new_join_in_room_callbacks.append(cb)
async def on_new_room_event( async def on_new_room_event(
self, self,
event: EventBase, event: EventBase,
@ -723,6 +737,10 @@ class Notifier:
for cb in self.replication_callbacks: for cb in self.replication_callbacks:
cb() cb()
def notify_user_joined_room(self, event_id: str, room_id: str) -> None:
for cb in self._new_join_in_room_callbacks:
cb(event_id, room_id)
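A minimal sketch of wiring a callback through this hook, following the contract in the docstring above (synchronous, no Deferred; hs is assumed to be an initialised HomeServer):

    def on_join(event_id: str, room_id: str) -> None:
        # Fires for genuine membership changes only, e.g. invite -> join.
        print(f"join in {room_id}: {event_id}")

    # hs.get_notifier().add_new_join_in_room_callback(on_join)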
def notify_remote_server_up(self, server: str) -> None: def notify_remote_server_up(self, server: str) -> None:
"""Notify any replication that a remote server has come back up""" """Notify any replication that a remote server has come back up"""
# We call federation_sender directly rather than registering as a # We call federation_sender directly rather than registering as a

View file

@ -131,6 +131,13 @@ class BulkPushRuleEvaluator:
local_users = await self.store.get_local_users_in_room(event.room_id) local_users = await self.store.get_local_users_in_room(event.room_id)
# Filter out appservice users.
local_users = [
u
for u in local_users
if not self.store.get_if_app_services_interested_in_user(u)
]
# if this event is an invite event, we may need to run rules for the user # if this event is an invite event, we may need to run rules for the user
# who's been invited, otherwise they won't get told they've been invited # who's been invited, otherwise they won't get told they've been invited
if event.type == EventTypes.Member and event.membership == Membership.INVITE: if event.type == EventTypes.Member and event.membership == Membership.INVITE:
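The new filter simply drops any local user an application service is interested in before push rules are evaluated for them. Reduced to its essence (the predicate here is illustrative; the real check is the store's get_if_app_services_interested_in_user):

    from typing import Callable, List

    def drop_appservice_users(
        local_users: List[str], interested: Callable[[str], bool]
    ) -> List[str]:
        return [u for u in local_users if not interested(u)]

    assert drop_appservice_users(
        ["@alice:hs", "@bridge_bot:hs"],
        lambda u: u.startswith("@bridge_"),
    ) == ["@alice:hs"]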

View file

@ -328,7 +328,7 @@ class PusherPool:
return None return None
try: try:
p = self.pusher_factory.create_pusher(pusher_config) pusher = self.pusher_factory.create_pusher(pusher_config)
except PusherConfigException as e: except PusherConfigException as e:
logger.warning( logger.warning(
"Pusher incorrectly configured id=%i, user=%s, appid=%s, pushkey=%s: %s", "Pusher incorrectly configured id=%i, user=%s, appid=%s, pushkey=%s: %s",
@ -346,23 +346,28 @@ class PusherPool:
) )
return None return None
if not p: if not pusher:
return None return None
appid_pushkey = "%s:%s" % (pusher_config.app_id, pusher_config.pushkey) appid_pushkey = "%s:%s" % (pusher.app_id, pusher.pushkey)
byuser = self.pushers.setdefault(pusher_config.user_name, {}) byuser = self.pushers.setdefault(pusher.user_id, {})
if appid_pushkey in byuser: if appid_pushkey in byuser:
byuser[appid_pushkey].on_stop() previous_pusher = byuser[appid_pushkey]
byuser[appid_pushkey] = p previous_pusher.on_stop()
synapse_pushers.labels(type(p).__name__, p.app_id).inc() synapse_pushers.labels(
type(previous_pusher).__name__, previous_pusher.app_id
).dec()
byuser[appid_pushkey] = pusher
synapse_pushers.labels(type(pusher).__name__, pusher.app_id).inc()
# Check if there *may* be push to process. We do this as this check is a # Check if there *may* be push to process. We do this as this check is a
# lot cheaper to do than actually fetching the exact rows we need to # lot cheaper to do than actually fetching the exact rows we need to
# push. # push.
user_id = pusher_config.user_name user_id = pusher.user_id
last_stream_ordering = pusher_config.last_stream_ordering last_stream_ordering = pusher.last_stream_ordering
if last_stream_ordering: if last_stream_ordering:
have_notifs = await self.store.get_if_maybe_push_in_range_for_user( have_notifs = await self.store.get_if_maybe_push_in_range_for_user(
user_id, last_stream_ordering user_id, last_stream_ordering
@ -372,9 +377,9 @@ class PusherPool:
# risk missing push. # risk missing push.
have_notifs = True have_notifs = True
p.on_started(have_notifs) pusher.on_started(have_notifs)
return p return pusher
async def remove_pusher(self, app_id: str, pushkey: str, user_id: str) -> None: async def remove_pusher(self, app_id: str, pushkey: str, user_id: str) -> None:
appid_pushkey = "%s:%s" % (app_id, pushkey) appid_pushkey = "%s:%s" % (app_id, pushkey)

View file

@ -29,7 +29,7 @@ from synapse.http import RequestTimedOutError
from synapse.http.server import HttpServer, is_method_cancellable from synapse.http.server import HttpServer, is_method_cancellable
from synapse.http.site import SynapseRequest from synapse.http.site import SynapseRequest
from synapse.logging import opentracing from synapse.logging import opentracing
from synapse.logging.opentracing import trace from synapse.logging.opentracing import trace_with_opname
from synapse.types import JsonDict from synapse.types import JsonDict
from synapse.util.caches.response_cache import ResponseCache from synapse.util.caches.response_cache import ResponseCache
from synapse.util.stringutils import random_string from synapse.util.stringutils import random_string
@ -196,7 +196,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
"ascii" "ascii"
) )
@trace(opname="outgoing_replication_request") @trace_with_opname("outgoing_replication_request")
async def send_request(*, instance_name: str = "master", **kwargs: Any) -> Any: async def send_request(*, instance_name: str = "master", **kwargs: Any) -> Any:
with outgoing_gauge.track_inprogress(): with outgoing_gauge.track_inprogress():
if instance_name == local_instance_name: if instance_name == local_instance_name:

View file

@ -1,58 +0,0 @@
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Optional
from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
from synapse.storage.engines import PostgresEngine
from synapse.storage.util.id_generators import MultiWriterIdGenerator
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class BaseSlavedStore(CacheInvalidationWorkerStore):
def __init__(
self,
database: DatabasePool,
db_conn: LoggingDatabaseConnection,
hs: "HomeServer",
):
super().__init__(database, db_conn, hs)
if isinstance(self.database_engine, PostgresEngine):
self._cache_id_gen: Optional[
MultiWriterIdGenerator
] = MultiWriterIdGenerator(
db_conn,
database,
stream_name="caches",
instance_name=hs.get_instance_name(),
tables=[
(
"cache_invalidation_stream_by_instance",
"instance_name",
"stream_id",
)
],
sequence_name="cache_invalidation_stream_seq",
writers=[],
)
else:
self._cache_id_gen = None
self.hs = hs

View file

@ -1,22 +0,0 @@
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
from synapse.storage.databases.main.tags import TagsWorkerStore
class SlavedAccountDataStore(TagsWorkerStore, AccountDataWorkerStore, BaseSlavedStore):
pass

View file

@ -1,25 +0,0 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.storage.databases.main.appservice import (
ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore,
)
class SlavedApplicationServiceStore(
ApplicationServiceTransactionWorkerStore, ApplicationServiceWorkerStore
):
pass

View file

@ -1,20 +0,0 @@
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
class SlavedDeviceInboxStore(DeviceInboxWorkerStore, BaseSlavedStore):
pass

View file

@ -14,7 +14,6 @@
from typing import TYPE_CHECKING, Any, Iterable from typing import TYPE_CHECKING, Any, Iterable
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
from synapse.replication.tcp.streams._base import DeviceListsStream, UserSignatureStream from synapse.replication.tcp.streams._base import DeviceListsStream, UserSignatureStream
from synapse.storage.database import DatabasePool, LoggingDatabaseConnection from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
@ -24,7 +23,7 @@ if TYPE_CHECKING:
from synapse.server import HomeServer from synapse.server import HomeServer
class SlavedDeviceStore(DeviceWorkerStore, BaseSlavedStore): class SlavedDeviceStore(DeviceWorkerStore):
def __init__( def __init__(
self, self,
database: DatabasePool, database: DatabasePool,

View file

@ -1,21 +0,0 @@
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.storage.databases.main.directory import DirectoryWorkerStore
from ._base import BaseSlavedStore
class DirectoryStore(DirectoryWorkerStore, BaseSlavedStore):
pass

View file

@ -29,8 +29,6 @@ from synapse.storage.databases.main.stream import StreamWorkerStore
from synapse.storage.databases.main.user_erasure_store import UserErasureWorkerStore from synapse.storage.databases.main.user_erasure_store import UserErasureWorkerStore
from synapse.util.caches.stream_change_cache import StreamChangeCache from synapse.util.caches.stream_change_cache import StreamChangeCache
from ._base import BaseSlavedStore
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.server import HomeServer from synapse.server import HomeServer
@ -56,7 +54,6 @@ class SlavedEventStore(
EventsWorkerStore, EventsWorkerStore,
UserErasureWorkerStore, UserErasureWorkerStore,
RelationsWorkerStore, RelationsWorkerStore,
BaseSlavedStore,
): ):
def __init__( def __init__(
self, self,

View file

@ -14,16 +14,15 @@
from typing import TYPE_CHECKING from typing import TYPE_CHECKING
from synapse.storage._base import SQLBaseStore
from synapse.storage.database import DatabasePool, LoggingDatabaseConnection from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
from synapse.storage.databases.main.filtering import FilteringStore from synapse.storage.databases.main.filtering import FilteringStore
from ._base import BaseSlavedStore
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.server import HomeServer from synapse.server import HomeServer
class SlavedFilteringStore(BaseSlavedStore): class SlavedFilteringStore(SQLBaseStore):
def __init__( def __init__(
self, self,
database: DatabasePool, database: DatabasePool,

View file

@ -1,20 +0,0 @@
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.storage.databases.main.profile import ProfileWorkerStore
class SlavedProfileStore(ProfileWorkerStore, BaseSlavedStore):
pass

View file

@ -18,14 +18,13 @@ from synapse.replication.tcp.streams import PushersStream
from synapse.storage.database import DatabasePool, LoggingDatabaseConnection from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
from synapse.storage.databases.main.pusher import PusherWorkerStore from synapse.storage.databases.main.pusher import PusherWorkerStore
from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker from ._slaved_id_tracker import SlavedIdTracker
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.server import HomeServer from synapse.server import HomeServer
class SlavedPusherStore(PusherWorkerStore, BaseSlavedStore): class SlavedPusherStore(PusherWorkerStore):
def __init__( def __init__(
self, self,
database: DatabasePool, database: DatabasePool,

View file

@ -1,22 +0,0 @@
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from ._base import BaseSlavedStore
class SlavedReceiptsStore(ReceiptsWorkerStore, BaseSlavedStore):
pass

View file

@ -1,21 +0,0 @@
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.storage.databases.main.registration import RegistrationWorkerStore
from ._base import BaseSlavedStore
class SlavedRegistrationStore(RegistrationWorkerStore, BaseSlavedStore):
pass

View file

@ -21,7 +21,7 @@ from twisted.internet.interfaces import IAddress, IConnector
from twisted.internet.protocol import ReconnectingClientFactory from twisted.internet.protocol import ReconnectingClientFactory
from twisted.python.failure import Failure from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, ReceiptTypes from synapse.api.constants import EventTypes, Membership, ReceiptTypes
from synapse.federation import send_queue from synapse.federation import send_queue
from synapse.federation.sender import FederationSender from synapse.federation.sender import FederationSender
from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
@ -219,6 +219,21 @@ class ReplicationDataHandler:
membership=row.data.membership, membership=row.data.membership,
) )
# If this event is a join, make a note of it so we have an accurate
# cross-worker room rate limit.
# TODO: Erik said we should exclude rows that came from ex_outliers
# here, but I don't see how we can determine that. I guess we could
# add a flag to row.data?
if (
row.data.type == EventTypes.Member
and row.data.membership == Membership.JOIN
and not row.data.outlier
):
# TODO retrieve the previous state, and exclude join -> join transitions
self.notifier.notify_user_joined_room(
row.data.event_id, row.data.room_id
)
await self._presence_handler.process_replication_rows( await self._presence_handler.process_replication_rows(
stream_name, instance_name, token, rows stream_name, instance_name, token, rows
) )
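The new block above feeds joins observed on the replication stream into the notifier so that each worker's room-join rate limiter also accounts for joins made on other workers. A toy sketch of that bookkeeping idea (this is not Synapse's actual limiter; all names here are invented):

```python
from collections import defaultdict
from typing import Dict

class RoomJoinCounter:
    """Toy cross-worker join accounting: every worker feeds the joins it
    sees on the replication stream into a per-room counter, so a local
    rate limiter can consider joins that happened on other workers."""

    def __init__(self) -> None:
        self._joins_by_room: Dict[str, int] = defaultdict(int)

    def on_replication_row(
        self, event_type: str, membership: str, outlier: bool, room_id: str
    ) -> None:
        # Mirrors the condition in the diff: only non-outlier join
        # membership events count towards the room's join rate.
        if event_type == "m.room.member" and membership == "join" and not outlier:
            self._joins_by_room[room_id] += 1
```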

View file

@ -98,6 +98,7 @@ class EventsStreamEventRow(BaseEventsStreamRow):
relates_to: Optional[str] relates_to: Optional[str]
membership: Optional[str] membership: Optional[str]
rejected: bool rejected: bool
outlier: bool
@attr.s(slots=True, frozen=True, auto_attribs=True) @attr.s(slots=True, frozen=True, auto_attribs=True)

View file

@ -138,7 +138,7 @@
<div class="username_input" id="username_input"> <div class="username_input" id="username_input">
<label for="field-username">Username (required)</label> <label for="field-username">Username (required)</label>
<div class="prefix">@</div> <div class="prefix">@</div>
<input type="text" name="username" id="field-username" value="{{ user_attributes.localpart }}" autofocus> <input type="text" name="username" id="field-username" value="{{ user_attributes.localpart }}" autofocus autocorrect="off" autocapitalize="none">
<div class="postfix">:{{ server_name }}</div> <div class="postfix">:{{ server_name }}</div>
</div> </div>
<output for="username_input" id="field-username-output"></output> <output for="username_input" id="field-username-output"></output>

View file

@ -373,6 +373,7 @@ class UserRestServletV2(RestServlet):
if ( if (
self.hs.config.email.email_enable_notifs self.hs.config.email.email_enable_notifs
and self.hs.config.email.email_notif_for_new_users and self.hs.config.email.email_notif_for_new_users
and medium == "email"
): ):
await self.pusher_pool.add_pusher( await self.pusher_pool.add_pusher(
user_id=user_id, user_id=user_id,

View file

@ -28,7 +28,6 @@ from synapse.api.errors import (
SynapseError, SynapseError,
ThreepidValidationError, ThreepidValidationError,
) )
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.handlers.ui_auth import UIAuthSessionDataConstants from synapse.handlers.ui_auth import UIAuthSessionDataConstants
from synapse.http.server import HttpServer, finish_request, respond_with_html from synapse.http.server import HttpServer, finish_request, respond_with_html
from synapse.http.servlet import ( from synapse.http.servlet import (
@ -64,7 +63,7 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
self.config = hs.config self.config = hs.config
self.identity_handler = hs.get_identity_handler() self.identity_handler = hs.get_identity_handler()
if self.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL: if self.config.email.can_verify_email:
self.mailer = Mailer( self.mailer = Mailer(
hs=self.hs, hs=self.hs,
app_name=self.config.email.email_app_name, app_name=self.config.email.email_app_name,
@ -73,11 +72,10 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
) )
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]: async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
-        if self.config.email.threepid_behaviour_email == ThreepidBehaviour.OFF:
-            if self.config.email.local_threepid_handling_disabled_due_to_email_config:
-                logger.warning(
-                    "User password resets have been disabled due to lack of email config"
-                )
+        if not self.config.email.can_verify_email:
+            logger.warning(
+                "User password resets have been disabled due to lack of email config"
+            )
             raise SynapseError(
                 400, "Email-based password resets have been disabled on this server"
             )
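Here and in the servlets further down, the three-way `ThreepidBehaviour` switch collapses into a single `can_verify_email` flag now that delegation to an identity server is gone. A rough sketch of what such a flag amounts to (the property name comes from the diff; the derivation shown is an assumption, not the exact code in `synapse/config/emailconfig.py`):

```python
from typing import Optional

class EmailConfigSketch:
    """Illustrative stand-in for the real email config class."""

    def __init__(self, smtp_host: Optional[str]) -> None:
        self.email_smtp_host = smtp_host

    @property
    def can_verify_email(self) -> bool:
        # Assumption: Synapse can verify email ownership iff it is able
        # to send mail itself, i.e. SMTP is configured.
        return self.email_smtp_host is not None
```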
@ -129,35 +127,21 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
raise SynapseError(400, "Email not found", Codes.THREEPID_NOT_FOUND) raise SynapseError(400, "Email not found", Codes.THREEPID_NOT_FOUND)
-        if self.config.email.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
-            assert self.hs.config.registration.account_threepid_delegate_email
-
-            # Have the configured identity server handle the request
-            ret = await self.identity_handler.requestEmailToken(
-                self.hs.config.registration.account_threepid_delegate_email,
-                email,
-                client_secret,
-                send_attempt,
-                next_link,
-            )
-        else:
-            # Send password reset emails from Synapse
-            sid = await self.identity_handler.send_threepid_validation(
-                email,
-                client_secret,
-                send_attempt,
-                self.mailer.send_password_reset_mail,
-                next_link,
-            )
-
-            # Wrap the session id in a JSON object
-            ret = {"sid": sid}
+        # Send password reset emails from Synapse
+        sid = await self.identity_handler.send_threepid_validation(
+            email,
+            client_secret,
+            send_attempt,
+            self.mailer.send_password_reset_mail,
+            next_link,
+        )

         threepid_send_requests.labels(type="email", reason="password_reset").observe(
             send_attempt
         )

-        return 200, ret
+        # Wrap the session id in a JSON object
+        return 200, {"sid": sid}
class PasswordRestServlet(RestServlet): class PasswordRestServlet(RestServlet):
@ -349,7 +333,7 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
self.identity_handler = hs.get_identity_handler() self.identity_handler = hs.get_identity_handler()
self.store = self.hs.get_datastores().main self.store = self.hs.get_datastores().main
if self.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL: if self.config.email.can_verify_email:
self.mailer = Mailer( self.mailer = Mailer(
hs=self.hs, hs=self.hs,
app_name=self.config.email.email_app_name, app_name=self.config.email.email_app_name,
@ -358,11 +342,10 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
) )
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]: async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
-        if self.config.email.threepid_behaviour_email == ThreepidBehaviour.OFF:
-            if self.config.email.local_threepid_handling_disabled_due_to_email_config:
-                logger.warning(
-                    "Adding emails have been disabled due to lack of an email config"
-                )
+        if not self.config.email.can_verify_email:
+            logger.warning(
+                "Adding emails have been disabled due to lack of an email config"
+            )
             raise SynapseError(
                 400, "Adding an email to your account is disabled on this server"
             )
@ -413,35 +396,20 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE) raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE)
-        if self.config.email.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
-            assert self.hs.config.registration.account_threepid_delegate_email
-
-            # Have the configured identity server handle the request
-            ret = await self.identity_handler.requestEmailToken(
-                self.hs.config.registration.account_threepid_delegate_email,
-                email,
-                client_secret,
-                send_attempt,
-                next_link,
-            )
-        else:
-            # Send threepid validation emails from Synapse
-            sid = await self.identity_handler.send_threepid_validation(
-                email,
-                client_secret,
-                send_attempt,
-                self.mailer.send_add_threepid_mail,
-                next_link,
-            )
-
-            # Wrap the session id in a JSON object
-            ret = {"sid": sid}
+        sid = await self.identity_handler.send_threepid_validation(
+            email,
+            client_secret,
+            send_attempt,
+            self.mailer.send_add_threepid_mail,
+            next_link,
+        )

         threepid_send_requests.labels(type="email", reason="add_threepid").observe(
             send_attempt
         )

-        return 200, ret
+        # Wrap the session id in a JSON object
+        return 200, {"sid": sid}
class MsisdnThreepidRequestTokenRestServlet(RestServlet): class MsisdnThreepidRequestTokenRestServlet(RestServlet):
@ -534,26 +502,19 @@ class AddThreepidEmailSubmitTokenServlet(RestServlet):
self.config = hs.config self.config = hs.config
self.clock = hs.get_clock() self.clock = hs.get_clock()
self.store = hs.get_datastores().main self.store = hs.get_datastores().main
if self.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL: if self.config.email.can_verify_email:
self._failure_email_template = ( self._failure_email_template = (
self.config.email.email_add_threepid_template_failure_html self.config.email.email_add_threepid_template_failure_html
) )
async def on_GET(self, request: Request) -> None: async def on_GET(self, request: Request) -> None:
-        if self.config.email.threepid_behaviour_email == ThreepidBehaviour.OFF:
-            if self.config.email.local_threepid_handling_disabled_due_to_email_config:
-                logger.warning(
-                    "Adding emails have been disabled due to lack of an email config"
-                )
+        if not self.config.email.can_verify_email:
+            logger.warning(
+                "Adding emails have been disabled due to lack of an email config"
+            )
             raise SynapseError(
                 400, "Adding an email to your account is disabled on this server"
             )
-        elif self.config.email.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
-            raise SynapseError(
-                400,
-                "This homeserver is not validating threepids. Use an identity server "
-                "instead.",
-            )
sid = parse_string(request, "sid", required=True) sid = parse_string(request, "sid", required=True)
token = parse_string(request, "token", required=True) token = parse_string(request, "token", required=True)
@ -743,10 +704,12 @@ class ThreepidBindRestServlet(RestServlet):
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]: async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
body = parse_json_object_from_request(request) body = parse_json_object_from_request(request)
-        assert_params_in_dict(body, ["id_server", "sid", "client_secret"])
+        assert_params_in_dict(
+            body, ["id_server", "sid", "id_access_token", "client_secret"]
+        )

         id_server = body["id_server"]
         sid = body["sid"]
-        id_access_token = body.get("id_access_token")  # optional
+        id_access_token = body["id_access_token"]
client_secret = body["client_secret"] client_secret = body["client_secret"]
assert_valid_client_secret(client_secret) assert_valid_client_secret(client_secret)
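With `id_access_token` added to `assert_params_in_dict`, a bind request must now carry all four fields. An illustrative body for `POST /_matrix/client/v3/account/3pid/bind` (all values invented):

```python
# Omitting id_access_token previously fell back to the deprecated v1
# identity-server API; after this change it yields a 400 instead.
bind_body = {
    "id_server": "id.example.com",
    "id_access_token": "abc123",  # previously optional
    "sid": "1234",
    "client_secret": "d0nttellanyone",
}
```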

View file

@ -26,7 +26,7 @@ from synapse.http.servlet import (
parse_string, parse_string,
) )
from synapse.http.site import SynapseRequest from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import log_kv, set_tag, trace from synapse.logging.opentracing import log_kv, set_tag, trace_with_opname
from synapse.types import JsonDict, StreamToken from synapse.types import JsonDict, StreamToken
from ._base import client_patterns, interactive_auth_handler from ._base import client_patterns, interactive_auth_handler
@ -71,7 +71,7 @@ class KeyUploadServlet(RestServlet):
self.e2e_keys_handler = hs.get_e2e_keys_handler() self.e2e_keys_handler = hs.get_e2e_keys_handler()
self.device_handler = hs.get_device_handler() self.device_handler = hs.get_device_handler()
@trace(opname="upload_keys") @trace_with_opname("upload_keys")
async def on_POST( async def on_POST(
self, request: SynapseRequest, device_id: Optional[str] self, request: SynapseRequest, device_id: Optional[str]
) -> Tuple[int, JsonDict]: ) -> Tuple[int, JsonDict]:
@ -208,7 +208,9 @@ class KeyChangesServlet(RestServlet):
# We want to enforce they do pass us one, but we ignore it and return # We want to enforce they do pass us one, but we ignore it and return
# changes after the "to" as well as before. # changes after the "to" as well as before.
-        set_tag("to", parse_string(request, "to"))
+        #
+        # XXX This does not enforce that "to" is passed.
+        set_tag("to", str(parse_string(request, "to")))
from_token = await StreamToken.from_string(self.store, from_token_string) from_token = await StreamToken.from_string(self.store, from_token_string)

View file

@ -28,7 +28,7 @@ from typing import (
from typing_extensions import TypedDict from typing_extensions import TypedDict
from synapse.api.errors import Codes, LoginError, SynapseError from synapse.api.errors import Codes, InvalidClientTokenError, LoginError, SynapseError
from synapse.api.ratelimiting import Ratelimiter from synapse.api.ratelimiting import Ratelimiter
from synapse.api.urls import CLIENT_API_PREFIX from synapse.api.urls import CLIENT_API_PREFIX
from synapse.appservice import ApplicationService from synapse.appservice import ApplicationService
@ -172,7 +172,13 @@ class LoginRestServlet(RestServlet):
try: try:
if login_submission["type"] == LoginRestServlet.APPSERVICE_TYPE: if login_submission["type"] == LoginRestServlet.APPSERVICE_TYPE:
-                appservice = self.auth.get_appservice_by_req(request)
+                requester = await self.auth.get_user_by_req(request)
+                appservice = requester.app_service
+
+                if appservice is None:
+                    raise InvalidClientTokenError(
+                        "This login method is only valid for application services"
+                    )
if appservice.is_rate_limited(): if appservice.is_rate_limited():
await self._address_ratelimiter.ratelimit( await self._address_ratelimiter.ratelimit(
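Appservice login now goes through the regular `get_user_by_req` path and then checks that the authenticated requester is an application service. In client terms the `as_token` must be supplied as an ordinary access token; for example (token and user are placeholders, and the `requests` package is used purely for illustration):

```python
import requests

resp = requests.post(
    "https://homeserver.example/_matrix/client/v3/login",
    headers={"Authorization": "Bearer <as_token>"},  # appservice token
    json={
        "type": "m.login.application_service",
        "identifier": {"type": "m.id.user", "user": "_bridge_alice"},
    },
)
# Without a valid appservice token this now fails with an invalid-token
# error (InvalidClientTokenError) rather than being special-cased.
```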

View file

@ -40,6 +40,12 @@ class ReadMarkerRestServlet(RestServlet):
self.read_marker_handler = hs.get_read_marker_handler() self.read_marker_handler = hs.get_read_marker_handler()
self.presence_handler = hs.get_presence_handler() self.presence_handler = hs.get_presence_handler()
self._known_receipt_types = {ReceiptTypes.READ, ReceiptTypes.FULLY_READ}
if hs.config.experimental.msc2285_enabled:
self._known_receipt_types.add(ReceiptTypes.READ_PRIVATE)
self._known_receipt_types.add("com.beeper.read.extra")
self._known_receipt_types.add("com.beeper.fully_read.extra")
async def on_POST( async def on_POST(
self, request: SynapseRequest, room_id: str self, request: SynapseRequest, room_id: str
) -> Tuple[int, JsonDict]: ) -> Tuple[int, JsonDict]:
@ -49,15 +55,7 @@ class ReadMarkerRestServlet(RestServlet):
body = parse_json_object_from_request(request) body = parse_json_object_from_request(request)
-        valid_receipt_types = {
-            ReceiptTypes.READ,
-            ReceiptTypes.FULLY_READ,
-            ReceiptTypes.READ_PRIVATE,
-            "com.beeper.read.extra",
-            "com.beeper.fully_read.extra",
-        }
-
-        unrecognized_types = set(body.keys()) - valid_receipt_types
+        unrecognized_types = set(body.keys()) - self._known_receipt_types
if unrecognized_types: if unrecognized_types:
# It's fine if there are unrecognized receipt types, but let's log # It's fine if there are unrecognized receipt types, but let's log
# it to help debug clients that have typoed the receipt type. # it to help debug clients that have typoed the receipt type.
@ -67,36 +65,28 @@ class ReadMarkerRestServlet(RestServlet):
# types. # types.
logger.info("Ignoring unrecognized receipt types: %s", unrecognized_types) logger.info("Ignoring unrecognized receipt types: %s", unrecognized_types)
-        read_event_id = body.get(ReceiptTypes.READ, None)
-        read_extra = body.get("com.beeper.read.extra", None)
-
-        if read_event_id:
-            await self.receipts_handler.received_client_receipt(
-                room_id,
-                ReceiptTypes.READ,
-                user_id=requester.user.to_string(),
-                event_id=read_event_id,
-                extra_content=read_extra,
-            )
-
-        read_private_event_id = body.get(ReceiptTypes.READ_PRIVATE, None)
-        if read_private_event_id and self.config.experimental.msc2285_enabled:
-            await self.receipts_handler.received_client_receipt(
-                room_id,
-                ReceiptTypes.READ_PRIVATE,
-                user_id=requester.user.to_string(),
-                event_id=read_private_event_id,
-            )
-
-        read_marker_event_id = body.get(ReceiptTypes.FULLY_READ, None)
-        read_marker_extra = body.get("com.beeper.fully_read.extra", None)
-        if read_marker_event_id:
-            await self.read_marker_handler.received_client_read_marker(
-                room_id,
-                user_id=requester.user.to_string(),
-                event_id=read_marker_event_id,
-                extra_content=read_marker_extra,
-            )
+        for receipt_type in self._known_receipt_types:
+            event_id = body.get(receipt_type, None)
+            # TODO Add validation to reject non-string event IDs.
+            if not event_id:
+                continue
+            extra_content = body.get(receipt_type.replace("m.", "com.beeper.") + ".extra", None)
+
+            if receipt_type == ReceiptTypes.FULLY_READ:
+                await self.read_marker_handler.received_client_read_marker(
+                    room_id,
+                    user_id=requester.user.to_string(),
+                    event_id=event_id,
+                    extra_content=extra_content,
+                )
+            else:
+                await self.receipts_handler.received_client_receipt(
+                    room_id,
+                    receipt_type,
+                    user_id=requester.user.to_string(),
+                    event_id=event_id,
+                    extra_content=extra_content,
+                )

         return 200, {}
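The rewrite folds the three hard-coded branches into one loop over `_known_receipt_types`: `m.fully_read` goes to the read-marker handler, everything else to the receipts handler, and each type picks up its matching `com.beeper.*.extra` content. An illustrative `/read_markers` request body handled by that loop (event IDs invented):

```python
read_markers_body = {
    "m.fully_read": "$event_a:example.com",  # routed to read_marker_handler
    "m.read": "$event_b:example.com",        # routed to receipts_handler
    # Extra content keyed per the receipt_type.replace("m.", "com.beeper.")
    # convention in the loop above:
    "com.beeper.read.extra": {"some": "metadata"},
}
```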

View file

@ -39,31 +39,27 @@ class ReceiptRestServlet(RestServlet):
def __init__(self, hs: "HomeServer"): def __init__(self, hs: "HomeServer"):
super().__init__() super().__init__()
self.hs = hs
self.auth = hs.get_auth() self.auth = hs.get_auth()
self.receipts_handler = hs.get_receipts_handler() self.receipts_handler = hs.get_receipts_handler()
self.read_marker_handler = hs.get_read_marker_handler() self.read_marker_handler = hs.get_read_marker_handler()
self.presence_handler = hs.get_presence_handler() self.presence_handler = hs.get_presence_handler()
self._known_receipt_types = {ReceiptTypes.READ}
if hs.config.experimental.msc2285_enabled:
self._known_receipt_types.update(
(ReceiptTypes.READ_PRIVATE, ReceiptTypes.FULLY_READ)
)
async def on_POST( async def on_POST(
self, request: SynapseRequest, room_id: str, receipt_type: str, event_id: str self, request: SynapseRequest, room_id: str, receipt_type: str, event_id: str
) -> Tuple[int, JsonDict]: ) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request) requester = await self.auth.get_user_by_req(request)
-        if self.hs.config.experimental.msc2285_enabled and receipt_type not in [
-            ReceiptTypes.READ,
-            ReceiptTypes.READ_PRIVATE,
-            ReceiptTypes.FULLY_READ,
-        ]:
+        if receipt_type not in self._known_receipt_types:
             raise SynapseError(
                 400,
-                "Receipt type must be 'm.read', 'org.matrix.msc2285.read.private' or 'm.fully_read'",
+                f"Receipt type must be {', '.join(self._known_receipt_types)}",
             )
-        elif (
-            not self.hs.config.experimental.msc2285_enabled
-            and receipt_type != ReceiptTypes.READ
-        ):
-            raise SynapseError(400, "Receipt type must be 'm.read'")
body = parse_json_object_from_request(request, allow_empty_body=False) body = parse_json_object_from_request(request, allow_empty_body=False)

View file

@ -31,7 +31,6 @@ from synapse.api.errors import (
) )
from synapse.api.ratelimiting import Ratelimiter from synapse.api.ratelimiting import Ratelimiter
from synapse.config import ConfigError from synapse.config import ConfigError
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.config.homeserver import HomeServerConfig from synapse.config.homeserver import HomeServerConfig
from synapse.config.ratelimiting import FederationRateLimitConfig from synapse.config.ratelimiting import FederationRateLimitConfig
from synapse.config.server import is_threepid_reserved from synapse.config.server import is_threepid_reserved
@ -74,7 +73,7 @@ class EmailRegisterRequestTokenRestServlet(RestServlet):
self.identity_handler = hs.get_identity_handler() self.identity_handler = hs.get_identity_handler()
self.config = hs.config self.config = hs.config
if self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL: if self.hs.config.email.can_verify_email:
self.mailer = Mailer( self.mailer = Mailer(
hs=self.hs, hs=self.hs,
app_name=self.config.email.email_app_name, app_name=self.config.email.email_app_name,
@ -83,13 +82,10 @@ class EmailRegisterRequestTokenRestServlet(RestServlet):
) )
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]: async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
-        if self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.OFF:
-            if (
-                self.hs.config.email.local_threepid_handling_disabled_due_to_email_config
-            ):
-                logger.warning(
-                    "Email registration has been disabled due to lack of email config"
-                )
+        if not self.hs.config.email.can_verify_email:
+            logger.warning(
+                "Email registration has been disabled due to lack of email config"
+            )
             raise SynapseError(
                 400, "Email-based registration has been disabled on this server"
             )
@ -138,35 +134,21 @@ class EmailRegisterRequestTokenRestServlet(RestServlet):
raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE) raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE)
-        if self.config.email.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
-            assert self.hs.config.registration.account_threepid_delegate_email
-
-            # Have the configured identity server handle the request
-            ret = await self.identity_handler.requestEmailToken(
-                self.hs.config.registration.account_threepid_delegate_email,
-                email,
-                client_secret,
-                send_attempt,
-                next_link,
-            )
-        else:
-            # Send registration emails from Synapse
-            sid = await self.identity_handler.send_threepid_validation(
-                email,
-                client_secret,
-                send_attempt,
-                self.mailer.send_registration_mail,
-                next_link,
-            )
-
-            # Wrap the session id in a JSON object
-            ret = {"sid": sid}
+        # Send registration emails from Synapse
+        sid = await self.identity_handler.send_threepid_validation(
+            email,
+            client_secret,
+            send_attempt,
+            self.mailer.send_registration_mail,
+            next_link,
+        )

         threepid_send_requests.labels(type="email", reason="register").observe(
             send_attempt
         )

-        return 200, ret
+        # Wrap the session id in a JSON object
+        return 200, {"sid": sid}
class MsisdnRegisterRequestTokenRestServlet(RestServlet): class MsisdnRegisterRequestTokenRestServlet(RestServlet):
@ -260,7 +242,7 @@ class RegistrationSubmitTokenServlet(RestServlet):
self.clock = hs.get_clock() self.clock = hs.get_clock()
self.store = hs.get_datastores().main self.store = hs.get_datastores().main
if self.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL: if self.config.email.can_verify_email:
self._failure_email_template = ( self._failure_email_template = (
self.config.email.email_registration_template_failure_html self.config.email.email_registration_template_failure_html
) )
@ -270,11 +252,10 @@ class RegistrationSubmitTokenServlet(RestServlet):
raise SynapseError( raise SynapseError(
400, "This medium is currently not supported for registration" 400, "This medium is currently not supported for registration"
) )
-        if self.config.email.threepid_behaviour_email == ThreepidBehaviour.OFF:
-            if self.config.email.local_threepid_handling_disabled_due_to_email_config:
-                logger.warning(
-                    "User registration via email has been disabled due to lack of email config"
-                )
+        if not self.config.email.can_verify_email:
+            logger.warning(
+                "User registration via email has been disabled due to lack of email config"
+            )
             raise SynapseError(
                 400, "Email-based registration is disabled on this server"
             )
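As in the account servlets, registration-token emails are now always sent by Synapse itself, and the response is the locally generated session id. For reference, the unchanged client-facing request this servlet handles (values invented):

```python
# POST /_matrix/client/v3/register/email/requestToken
request_token_body = {
    "email": "alice@example.com",
    "client_secret": "d0nttellanyone",
    "send_attempt": 1,
}
# Response (the sid is now always generated locally): {"sid": "<session id>"}
```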

View file

@ -13,7 +13,7 @@
# limitations under the License. # limitations under the License.
import logging import logging
from typing import TYPE_CHECKING, Optional, Tuple from typing import TYPE_CHECKING, Optional, Tuple, cast
from synapse.api.errors import Codes, NotFoundError, SynapseError from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.http.server import HttpServer from synapse.http.server import HttpServer
@ -127,7 +127,7 @@ class RoomKeysServlet(RestServlet):
requester = await self.auth.get_user_by_req(request, allow_guest=False) requester = await self.auth.get_user_by_req(request, allow_guest=False)
user_id = requester.user.to_string() user_id = requester.user.to_string()
body = parse_json_object_from_request(request) body = parse_json_object_from_request(request)
version = parse_string(request, "version") version = parse_string(request, "version", required=True)
if session_id: if session_id:
body = {"sessions": {session_id: body}} body = {"sessions": {session_id: body}}
@ -196,8 +196,11 @@ class RoomKeysServlet(RestServlet):
user_id = requester.user.to_string() user_id = requester.user.to_string()
version = parse_string(request, "version", required=True) version = parse_string(request, "version", required=True)
-        room_keys = await self.e2e_room_keys_handler.get_room_keys(
-            user_id, version, room_id, session_id
-        )
+        room_keys = cast(
+            JsonDict,
+            await self.e2e_room_keys_handler.get_room_keys(
+                user_id, version, room_id, session_id
+            ),
+        )
# Convert room_keys to the right format to return. # Convert room_keys to the right format to return.
@ -240,7 +243,7 @@ class RoomKeysServlet(RestServlet):
requester = await self.auth.get_user_by_req(request, allow_guest=False) requester = await self.auth.get_user_by_req(request, allow_guest=False)
user_id = requester.user.to_string() user_id = requester.user.to_string()
version = parse_string(request, "version") version = parse_string(request, "version", required=True)
ret = await self.e2e_room_keys_handler.delete_room_keys( ret = await self.e2e_room_keys_handler.delete_room_keys(
user_id, version, room_id, session_id user_id, version, room_id, session_id
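Both remaining call sites now mark `version` as required, so a missing query parameter fails fast. Assuming `parse_string`'s usual behaviour of raising a 400 `M_MISSING_PARAM` error when a required parameter is absent, the change amounts to:

```python
from synapse.http.servlet import parse_string
from synapse.http.site import SynapseRequest

def get_version(request: SynapseRequest) -> str:
    # A request without ?version=... previously produced version=None and
    # failed deeper in the handler; it is now rejected up front with a
    # 400 M_MISSING_PARAM error.
    return parse_string(request, "version", required=True)
```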

View file

@ -19,7 +19,7 @@ from synapse.http import servlet
from synapse.http.server import HttpServer from synapse.http.server import HttpServer
from synapse.http.servlet import assert_params_in_dict, parse_json_object_from_request from synapse.http.servlet import assert_params_in_dict, parse_json_object_from_request
from synapse.http.site import SynapseRequest from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import set_tag, trace from synapse.logging.opentracing import set_tag, trace_with_opname
from synapse.rest.client.transactions import HttpTransactionCache from synapse.rest.client.transactions import HttpTransactionCache
from synapse.types import JsonDict from synapse.types import JsonDict
@ -43,7 +43,7 @@ class SendToDeviceRestServlet(servlet.RestServlet):
self.txns = HttpTransactionCache(hs) self.txns = HttpTransactionCache(hs)
self.device_message_handler = hs.get_device_message_handler() self.device_message_handler = hs.get_device_message_handler()
@trace(opname="sendToDevice") @trace_with_opname("sendToDevice")
def on_PUT( def on_PUT(
self, request: SynapseRequest, message_type: str, txn_id: str self, request: SynapseRequest, message_type: str, txn_id: str
) -> Awaitable[Tuple[int, JsonDict]]: ) -> Awaitable[Tuple[int, JsonDict]]:

View file

@ -37,7 +37,7 @@ from synapse.handlers.sync import (
from synapse.http.server import HttpServer from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_boolean, parse_integer, parse_string from synapse.http.servlet import RestServlet, parse_boolean, parse_integer, parse_string
from synapse.http.site import SynapseRequest from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import trace from synapse.logging.opentracing import trace_with_opname
from synapse.types import JsonDict, StreamToken from synapse.types import JsonDict, StreamToken
from synapse.util import json_decoder from synapse.util import json_decoder
@ -210,7 +210,7 @@ class SyncRestServlet(RestServlet):
logger.debug("Event formatting complete") logger.debug("Event formatting complete")
return 200, response_content return 200, response_content
@trace(opname="sync.encode_response") @trace_with_opname("sync.encode_response")
async def encode_response( async def encode_response(
self, self,
time_now: int, time_now: int,
@ -315,7 +315,7 @@ class SyncRestServlet(RestServlet):
] ]
} }
@trace(opname="sync.encode_joined") @trace_with_opname("sync.encode_joined")
async def encode_joined( async def encode_joined(
self, self,
rooms: List[JoinedSyncResult], rooms: List[JoinedSyncResult],
@ -340,7 +340,7 @@ class SyncRestServlet(RestServlet):
return joined return joined
@trace(opname="sync.encode_invited") @trace_with_opname("sync.encode_invited")
async def encode_invited( async def encode_invited(
self, self,
rooms: List[InvitedSyncResult], rooms: List[InvitedSyncResult],
@ -371,7 +371,7 @@ class SyncRestServlet(RestServlet):
return invited return invited
@trace(opname="sync.encode_knocked") @trace_with_opname("sync.encode_knocked")
async def encode_knocked( async def encode_knocked(
self, self,
rooms: List[KnockedSyncResult], rooms: List[KnockedSyncResult],
@ -420,7 +420,7 @@ class SyncRestServlet(RestServlet):
return knocked return knocked
@trace(opname="sync.encode_archived") @trace_with_opname("sync.encode_archived")
async def encode_archived( async def encode_archived(
self, self,
rooms: List[ArchivedSyncResult], rooms: List[ArchivedSyncResult],

View file

@ -108,10 +108,64 @@ class MediaInfo:
class PreviewUrlResource(DirectServeJsonResource): class PreviewUrlResource(DirectServeJsonResource):
""" """
-    Generating URL previews is a complicated task which many potential pitfalls.
-
-    See docs/development/url_previews.md for discussion of the design and
-    algorithm followed in this module.
+    The `GET /_matrix/media/r0/preview_url` endpoint provides a generic preview API
+    for URLs which outputs Open Graph (https://ogp.me/) responses (with some Matrix
+    specific additions).
+
+    This does have trade-offs compared to other designs:
* Pros:
* Simple and flexible; can be used by any clients at any point
* Cons:
* If each homeserver provides one of these independently, all the homeservers in a
room may needlessly DoS the target URI
* The URL metadata must be stored somewhere, rather than just using Matrix
itself to store the media.
* Matrix cannot be used to distribute the metadata between homeservers.
When Synapse is asked to preview a URL it does the following:
1. Checks against a URL blacklist (defined as `url_preview_url_blacklist` in the
config).
2. Checks the URL against an in-memory cache and returns the result if it exists. (This
is also used to de-duplicate processing of multiple in-flight requests at once.)
3. Kicks off a background process to generate a preview:
1. Checks URL and timestamp against the database cache and returns the result if it
has not expired and was successful (a 2xx return code).
2. Checks if the URL matches an oEmbed (https://oembed.com/) pattern. If it
does, update the URL to download.
3. Downloads the URL and stores it into a file via the media storage provider
and saves the local media metadata.
4. If the media is an image:
1. Generates thumbnails.
2. Generates an Open Graph response based on image properties.
5. If the media is HTML:
1. Decodes the HTML via the stored file.
2. Generates an Open Graph response from the HTML.
3. If a JSON oEmbed URL was found in the HTML via autodiscovery:
1. Downloads the URL and stores it into a file via the media storage provider
and saves the local media metadata.
2. Convert the oEmbed response to an Open Graph response.
3. Override any Open Graph data from the HTML with data from oEmbed.
4. If an image exists in the Open Graph response:
1. Downloads the URL and stores it into a file via the media storage
provider and saves the local media metadata.
2. Generates thumbnails.
3. Updates the Open Graph response based on image properties.
6. If the media is JSON and an oEmbed URL was found:
1. Convert the oEmbed response to an Open Graph response.
2. If a thumbnail or image is in the oEmbed response:
1. Downloads the URL and stores it into a file via the media storage
provider and saves the local media metadata.
2. Generates thumbnails.
3. Updates the Open Graph response based on image properties.
7. Stores the result in the database cache.
4. Returns the result.
The in-memory cache expires after 1 hour.
Expired entries in the database cache (and their associated media files) are
deleted every 10 seconds. The default expiration time is 1 hour from download.
""" """
isLeaf = True isLeaf = True
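The new docstring spells out the preview pipeline inline. One detail worth illustrating is step 2: the in-memory cache doubles as de-duplication of concurrent requests for the same URL. A toy equivalent of that idea (not Synapse's actual cache classes):

```python
import asyncio
import time
from typing import Any, Awaitable, Callable, Dict, Tuple

class DedupingPreviewCache:
    """Toy model of step 2 above: concurrent requests for one URL share a
    single in-flight task, and completed results are served from memory
    until they expire (Synapse uses a 1 hour expiry)."""

    def __init__(self, ttl: float = 3600.0) -> None:
        self._ttl = ttl
        self._entries: Dict[str, Tuple[float, "asyncio.Task[Any]"]] = {}

    async def get(self, url: str, generate: Callable[[str], Awaitable[Any]]) -> Any:
        now = time.monotonic()
        entry = self._entries.get(url)
        if entry is not None and now - entry[0] < self._ttl:
            # Awaiting a finished task returns its cached result; awaiting
            # a pending one de-duplicates the in-flight work.
            return await asyncio.shield(entry[1])
        task = asyncio.ensure_future(generate(url))
        self._entries[url] = (now, task)
        return await asyncio.shield(task)
```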

View file

@ -17,9 +17,11 @@
import logging import logging
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple
from synapse.api.errors import SynapseError from synapse.api.errors import Codes, SynapseError, cs_error
from synapse.config.repository import THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP
from synapse.http.server import ( from synapse.http.server import (
DirectServeJsonResource, DirectServeJsonResource,
respond_with_json,
set_corp_headers, set_corp_headers,
set_cors_headers, set_cors_headers,
) )
@ -309,6 +311,19 @@ class ThumbnailResource(DirectServeJsonResource):
url_cache: True if this is from a URL cache. url_cache: True if this is from a URL cache.
server_name: The server name, if this is a remote thumbnail. server_name: The server name, if this is a remote thumbnail.
""" """
logger.debug(
"_select_and_respond_with_thumbnail: media_id=%s desired=%sx%s (%s) thumbnail_infos=%s",
media_id,
desired_width,
desired_height,
desired_method,
thumbnail_infos,
)
# If `dynamic_thumbnails` is enabled, we expect Synapse to go down a
# different code path to handle it.
assert not self.dynamic_thumbnails
if thumbnail_infos: if thumbnail_infos:
file_info = self._select_thumbnail( file_info = self._select_thumbnail(
desired_width, desired_width,
@ -384,8 +399,29 @@ class ThumbnailResource(DirectServeJsonResource):
file_info.thumbnail.length, file_info.thumbnail.length,
) )
else: else:
# This might be because:
# 1. We can't create thumbnails for the given media (corrupted or
# unsupported file type), or
# 2. The thumbnailing process never ran or errored out initially
# when the media was first uploaded (these bugs should be
# reported and fixed).
# Note that we don't attempt to generate a thumbnail now because
# `dynamic_thumbnails` is disabled.
logger.info("Failed to find any generated thumbnails") logger.info("Failed to find any generated thumbnails")
respond_404(request)
respond_with_json(
request,
400,
cs_error(
"Cannot find any thumbnails for the requested media (%r). This might mean the media is not a supported_media_format=(%s) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)"
% (
request.postpath,
", ".join(THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP.keys()),
),
code=Codes.UNKNOWN,
),
send_cors=True,
)
def _select_thumbnail( def _select_thumbnail(
self, self,

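Where a missing thumbnail used to produce a bare 404, clients now receive an explanatory 400. Given `cs_error`'s standard Matrix error shape, the response body looks roughly like this (message abbreviated):

```python
# Approximate JSON body now returned when no thumbnail can be served:
error_response = {
    "errcode": "M_UNKNOWN",
    "error": "Cannot find any thumbnails for the requested media (...). "
             "(Dynamic thumbnails are disabled on this server.)",
}
```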
View file

@ -17,7 +17,6 @@ from typing import TYPE_CHECKING, Tuple
from twisted.web.server import Request from twisted.web.server import Request
from synapse.api.errors import ThreepidValidationError from synapse.api.errors import ThreepidValidationError
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.http.server import DirectServeHtmlResource from synapse.http.server import DirectServeHtmlResource
from synapse.http.servlet import parse_string from synapse.http.servlet import parse_string
from synapse.util.stringutils import assert_valid_client_secret from synapse.util.stringutils import assert_valid_client_secret
@ -46,9 +45,6 @@ class PasswordResetSubmitTokenResource(DirectServeHtmlResource):
self.clock = hs.get_clock() self.clock = hs.get_clock()
self.store = hs.get_datastores().main self.store = hs.get_datastores().main
self._local_threepid_handling_disabled_due_to_email_config = (
hs.config.email.local_threepid_handling_disabled_due_to_email_config
)
self._confirmation_email_template = ( self._confirmation_email_template = (
hs.config.email.email_password_reset_template_confirmation_html hs.config.email.email_password_reset_template_confirmation_html
) )
@ -59,8 +55,8 @@ class PasswordResetSubmitTokenResource(DirectServeHtmlResource):
hs.config.email.email_password_reset_template_failure_html hs.config.email.email_password_reset_template_failure_html
) )
# This resource should not be mounted if threepid behaviour is not LOCAL # This resource should only be mounted if email validation is enabled
assert hs.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL assert hs.config.email.can_verify_email
async def _async_render_GET(self, request: Request) -> Tuple[int, bytes]: async def _async_render_GET(self, request: Request) -> Tuple[int, bytes]:
sid = parse_string(request, "sid", required=True) sid = parse_string(request, "sid", required=True)

Some files were not shown because too many files have changed in this diff.