Merge tag 'v1.37.0' into babolivier/dinsic_1.41.0

Synapse 1.37.0 (2021-06-29)
===========================

This release deprecates the current spam checker interface. See the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#deprecation-of-the-current-spam-checker-interface) for more information on how to update to the new generic module interface.

This release also removes support for fetching and renewing TLS certificates using the ACME v1 protocol, which was fully decommissioned by Let's Encrypt on June 1st 2021. Admins previously using this feature should use a [reverse proxy](https://matrix-org.github.io/synapse/develop/reverse_proxy.html) to handle TLS termination, or use an external ACME client (such as [certbot](https://certbot.eff.org/)) to retrieve a certificate and key and provide them to Synapse using the `tls_certificate_path` and `tls_private_key_path` configuration settings.
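
For example, after provisioning a certificate with certbot, the two settings can point at the files it produces. A minimal sketch (the paths below are illustrative, not defaults):

```yaml
tls_certificate_path: "/etc/letsencrypt/live/example.com/fullchain.pem"
tls_private_key_path: "/etc/letsencrypt/live/example.com/privkey.pem"
```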

Synapse 1.37.0rc1 (2021-06-24)
==============================

Features
--------

- Implement "room knocking" as per [MSC2403](https://github.com/matrix-org/matrix-doc/pull/2403). Contributed by @Sorunome and anoa. ([\#6739](https://github.com/matrix-org/synapse/issues/6739), [\#9359](https://github.com/matrix-org/synapse/issues/9359), [\#10167](https://github.com/matrix-org/synapse/issues/10167), [\#10212](https://github.com/matrix-org/synapse/issues/10212), [\#10227](https://github.com/matrix-org/synapse/issues/10227))
- Add experimental support for backfilling history into rooms ([MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716)). ([\#9247](https://github.com/matrix-org/synapse/issues/9247))
- Implement a generic interface for third-party plugin modules. ([\#10062](https://github.com/matrix-org/synapse/issues/10062), [\#10206](https://github.com/matrix-org/synapse/issues/10206))
- Implement the `sso.update_profile_information` config option to sync SSO users' profile information with the identity provider each time they log in. Currently only the display name is supported (see the sketch after this list). ([\#10108](https://github.com/matrix-org/synapse/issues/10108))
- Ensure that errors during startup are written to the logs and the console. ([\#10191](https://github.com/matrix-org/synapse/issues/10191))
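
A sketch of the `sso.update_profile_information` option above, assuming from its dotted name that it sits under the `sso` section of `homeserver.yaml`:

```yaml
sso:
  update_profile_information: true
```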

Bugfixes
--------

- Fix a bug introduced in Synapse v1.25.0 that prevented the `ip_range_whitelist` configuration option from working for federation and identity servers. Contributed by @mikure. ([\#10115](https://github.com/matrix-org/synapse/issues/10115))
- Remove a broken import line in Synapse's `admin_cmd` worker. This bug was introduced in Synapse v1.33.0. ([\#10154](https://github.com/matrix-org/synapse/issues/10154))
- Fix a bug introduced in Synapse v1.21.0 which could cause `/sync` to return immediately with an empty response. ([\#10157](https://github.com/matrix-org/synapse/issues/10157), [\#10158](https://github.com/matrix-org/synapse/issues/10158))
- Fix a minor bug in the response to `/_matrix/client/r0/user/{user}/openid/request_token` causing `expires_in` to be a float instead of an integer. Contributed by @lukaslihotzki. ([\#10175](https://github.com/matrix-org/synapse/issues/10175))
- Always require users to re-authenticate for dangerous operations: deactivating an account, modifying an account password, and adding 3PIDs. ([\#10184](https://github.com/matrix-org/synapse/issues/10184))
- Fix a bug introduced in Synapse v1.7.2 where remote server count metrics collection would be incorrectly delayed on startup. Found by @heftig. ([\#10195](https://github.com/matrix-org/synapse/issues/10195))
- Fix a bug introduced in Synapse v1.35.1 where the `allow` key of an `m.room.join_rules` event could be applied for incorrect room versions and configurations. ([\#10208](https://github.com/matrix-org/synapse/issues/10208))
- Fix a performance regression, introduced in Synapse v1.34.0rc1, in responding to user key requests over federation. ([\#10221](https://github.com/matrix-org/synapse/issues/10221))

Improved Documentation
----------------------

- Add a new guide to decoding request logs. ([\#8436](https://github.com/matrix-org/synapse/issues/8436))
- Mention in the sample homeserver config that you may need to configure the maximum upload size in your reverse proxy. Contributed by @aaronraimist. ([\#10122](https://github.com/matrix-org/synapse/issues/10122))
- Fix broken links in documentation. ([\#10180](https://github.com/matrix-org/synapse/issues/10180))
- Deploy a snapshot of the documentation website upon each new Synapse release. ([\#10198](https://github.com/matrix-org/synapse/issues/10198))

Deprecations and Removals
-------------------------

- The current spam checker interface is deprecated in favour of a new generic modules system. See the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#deprecation-of-the-current-spam-checker-interface) for more information on how to update to the new system. ([\#10062](https://github.com/matrix-org/synapse/issues/10062), [\#10210](https://github.com/matrix-org/synapse/issues/10210), [\#10238](https://github.com/matrix-org/synapse/issues/10238))
- Stop supporting the unstable spaces prefixes from MSC1772. ([\#10161](https://github.com/matrix-org/synapse/issues/10161))
- Remove Synapse's support for automatically fetching and renewing certificates using the ACME v1 protocol. This protocol was fully turned off by Let's Encrypt for existing installations on June 1st 2021. Admins previously using this feature should use a [reverse proxy](https://matrix-org.github.io/synapse/develop/reverse_proxy.html) to handle TLS termination, or use an external ACME client (such as [certbot](https://certbot.eff.org/)) to retrieve a certificate and key and provide them to Synapse using the `tls_certificate_path` and `tls_private_key_path` configuration settings. ([\#10194](https://github.com/matrix-org/synapse/issues/10194))

Internal Changes
----------------

- Update the database schema versioning to support gradual migration away from legacy tables. ([\#9933](https://github.com/matrix-org/synapse/issues/9933))
- Add type hints to the federation servlets. ([\#10080](https://github.com/matrix-org/synapse/issues/10080))
- Improve OpenTracing for event persistence. ([\#10134](https://github.com/matrix-org/synapse/issues/10134), [\#10193](https://github.com/matrix-org/synapse/issues/10193))
- Clean up the interface for injecting OpenTracing over HTTP. ([\#10143](https://github.com/matrix-org/synapse/issues/10143))
- Limit the number of in-flight `/keys/query` requests from a single device. ([\#10144](https://github.com/matrix-org/synapse/issues/10144))
- Refactor `EventPersistenceQueue`. ([\#10145](https://github.com/matrix-org/synapse/issues/10145))
- Document the `SYNAPSE_TEST_LOG_LEVEL` environment variable, which controls logger output when running tests. ([\#10148](https://github.com/matrix-org/synapse/issues/10148))
- Update the Complement build tags in GitHub Actions to test currently experimental features. ([\#10155](https://github.com/matrix-org/synapse/issues/10155))
- Add a `synapse_federation_soft_failed_events_total` metric to track how often events are soft failed. ([\#10156](https://github.com/matrix-org/synapse/issues/10156))
- Fetch the corresponding Complement branch when performing CI. ([\#10160](https://github.com/matrix-org/synapse/issues/10160))
- Add some developer documentation about boolean columns in database schemas. ([\#10164](https://github.com/matrix-org/synapse/issues/10164))
- Add extra logging fields to better debug where events are being soft failed. ([\#10168](https://github.com/matrix-org/synapse/issues/10168))
- Add debug logging for when we enter and exit `Measure` blocks. ([\#10183](https://github.com/matrix-org/synapse/issues/10183))
- Improve comments in structured logging code. ([\#10188](https://github.com/matrix-org/synapse/issues/10188))
- Update [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083) support with modifications from the MSC. ([\#10189](https://github.com/matrix-org/synapse/issues/10189))
- Remove redundant DNS lookup limiter. ([\#10190](https://github.com/matrix-org/synapse/issues/10190))
- Upgrade `black` linting tool to 21.6b0. ([\#10197](https://github.com/matrix-org/synapse/issues/10197))
- Expose the OpenTracing trace ID in response headers. ([\#10199](https://github.com/matrix-org/synapse/issues/10199))
Commit cba616de41 by Brendan Abolivier, 2021-09-01 10:40:29 +01:00
135 changed files with 3386 additions and 2059 deletions

@@ -3,7 +3,10 @@ name: Deploy the documentation
on:
push:
branches:
# For bleeding-edge documentation
- develop
# For documentation specific to a release
- 'release-v*'
workflow_dispatch:
@@ -22,6 +25,7 @@ jobs:
- name: Build the documentation
run: mdbook build
# Deploy to the latest documentation directories
- name: Deploy latest documentation
uses: peaceiris/actions-gh-pages@068dc23d9710f1ba62e86896f84735d869951305 # v3.8.0
with:
@@ -29,3 +33,32 @@ jobs:
keep_files: true
publish_dir: ./book
destination_dir: ./develop
- name: Get the current Synapse version
id: vars
# The $GITHUB_REF value for a branch looks like `refs/heads/release-v1.2`. We do some
# shell magic to remove the "refs/heads/release-v" bit from this, to end up with "1.2",
# our major/minor version number, and set this to a var called `branch-version`.
#
# We then use some python to get Synapse's full version string, which may look
# like "1.2.3rc4". We set this to a var called `synapse-version`. We use this
# to determine if this release is still an RC, and if so block deployment.
run: |
echo ::set-output name=branch-version::${GITHUB_REF#refs/heads/release-v}
echo ::set-output name=synapse-version::`python3 -c 'import synapse; print(synapse.__version__)'`
# Deploy to the version-specific directory
- name: Deploy release-specific documentation
# We only carry out this step if we're running on a release branch,
# and the current Synapse version does not have "rc" in the name.
#
# The result is that only full releases are deployed, but can be
# updated if the release branch gets retroactive fixes.
if: ${{ startsWith( github.ref, 'refs/heads/release-v' ) && !contains( steps.vars.outputs.synapse-version, 'rc') }}
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
keep_files: true
publish_dir: ./book
# The resulting documentation will end up in a directory named `vX.Y`.
destination_dir: ./v${{ steps.vars.outputs.branch-version }}

@@ -305,11 +305,29 @@ jobs:
with:
path: synapse
- name: Run actions/checkout@v2 for complement
uses: actions/checkout@v2
with:
repository: "matrix-org/complement"
path: complement
# Attempt to check out the same branch of Complement as the PR. If it
# doesn't exist, fallback to master.
- name: Checkout complement
shell: bash
run: |
mkdir -p complement
# Attempt to use the version of complement which best matches the current
# build. Depending on whether this is a PR or release, etc. we need to
# use different fallbacks.
#
# 1. First check if there's a similarly named branch (GITHUB_HEAD_REF
# for pull requests, otherwise GITHUB_REF).
# 2. Attempt to use the base branch, e.g. when merging into release-vX.Y
# (GITHUB_BASE_REF for pull requests).
# 3. Use the default complement branch ("master").
for BRANCH_NAME in "$GITHUB_HEAD_REF" "$GITHUB_BASE_REF" "${GITHUB_REF#refs/heads/}" "master"; do
# Skip empty branch names and merge commits.
if [[ -z "$BRANCH_NAME" || $BRANCH_NAME =~ ^refs/pull/.* ]]; then
continue
fi
(wget -O - "https://github.com/matrix-org/complement/archive/$BRANCH_NAME.tar.gz" | tar -xz --strip-components=1 -C complement) && break
done
# Build initial Synapse image
- run: docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
@@ -322,7 +340,7 @@ jobs:
working-directory: complement/dockerfiles
# Run Complement
- run: go test -v -tags synapse_blacklist ./tests
- run: go test -v -tags synapse_blacklist,msc2403,msc2946,msc3083 ./tests
env:
COMPLEMENT_BASE_IMAGE: complement-synapse:latest
working-directory: complement

@@ -173,12 +173,19 @@ source ./env/bin/activate
trial tests.rest.admin.test_room tests.handlers.test_admin.ExfiltrateData.test_invite
```
If your tests fail, you may wish to look at the logs:
If your tests fail, you may wish to look at the logs (the default log level is `ERROR`):
```sh
less _trial_temp/test.log
```
To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`:
```sh
SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests
```
## Run the integration tests.
The integration tests are a more comprehensive suite of tests. They

@@ -442,10 +442,7 @@ so, you will need to edit `homeserver.yaml`, as follows:
- You will also need to uncomment the `tls_certificate_path` and
`tls_private_key_path` lines under the `TLS` section. You will need to manage
provisioning of these certificates yourself — Synapse had built-in ACME
support, but the ACMEv1 protocol Synapse implements is deprecated, not
allowed by LetsEncrypt for new sites, and will break for existing sites in
late 2020. See [ACME.md](docs/ACME.md).
provisioning of these certificates yourself.
If you are using your own certificate, be sure to use a `.pem` file that
includes the full certificate chain including any intermediate certificates

@@ -142,13 +142,6 @@ the form of::
As when logging in, you will need to specify a "Custom server". Specify your
desired ``localpart`` in the 'User name' box.
ACME setup
==========
For details on having Synapse manage your federation TLS certificates
automatically, please see `<docs/ACME.md>`_.
Security note
=============
@@ -293,18 +286,6 @@ try installing the failing modules individually::
pip install -e "module-name"
Once this is done, you may wish to run Synapse's unit tests to
check that everything is installed correctly::
python -m twisted.trial tests
This should end with a 'PASSED' result (note that exact numbers will
differ)::
Ran 1337 tests in 716.064s
PASSED (skips=15, successes=1322)
We recommend using the demo which starts 3 federated instances running on ports `8080` - `8082`
./demo/start.sh
@@ -324,6 +305,23 @@ If you just want to start a single instance of the app and run it directly::
python -m synapse.app.homeserver --config-path homeserver.yaml
Running the unit tests
======================
After getting up and running, you may wish to run Synapse's unit tests to
check that everything is installed correctly::
trial tests
This should end with a 'PASSED' result (note that exact numbers will
differ)::
Ran 1337 tests in 716.064s
PASSED (skips=15, successes=1322)
For more tips on running the unit tests, like running a specific test or
to see the logging output, see the `CONTRIBUTING doc <CONTRIBUTING.md#run-the-unit-tests>`_.
Running the Integration Tests

@@ -85,6 +85,23 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.37.0
====================
Deprecation of the current spam checker interface
-------------------------------------------------
The current spam checker interface is deprecated in favour of a new generic modules system.
Authors of spam checker modules can refer to `this documentation <https://matrix-org.github.io/synapse/develop/modules.html#porting-an-existing-module-that-uses-the-old-interface>`_
to update their modules. Synapse administrators can refer to `this documentation <https://matrix-org.github.io/synapse/develop/modules.html#using-modules>`_
to update their configuration once the modules they are using have been updated.
We plan to remove support for the current spam checker interface in August 2021.
More module interfaces will be ported over to this new generic system in future versions
of Synapse.
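
As a purely illustrative sketch (the module path and option below are hypothetical), a ported
module would be enabled through the new generic ``modules`` section of the homeserver
configuration::

    modules:
      - module: my_project.MySpamChecker
        config:
          example_option: true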
Upgrading to v1.34.0
====================

@@ -46,14 +46,14 @@ class CursesStdIO:
self.callback = callback
def fileno(self):
""" We want to select on FD 0 """
"""We want to select on FD 0"""
return 0
def connectionLost(self, reason):
self.close()
def print_line(self, text):
""" add a line to the internal list of lines"""
"""add a line to the internal list of lines"""
self.lines.append(text)
self.redraw()
@@ -92,7 +92,7 @@
)
def doRead(self):
""" Input is ready! """
"""Input is ready!"""
curses.noecho()
c = self.stdscr.getch() # read a character
@@ -132,7 +132,7 @@
return "CursesStdIO"
def close(self):
""" clean up """
"""clean up"""
curses.nocbreak()
self.stdscr.keypad(0)

debian/changelog
@@ -1,3 +1,9 @@
matrix-synapse-py3 (1.37.0) stable; urgency=medium
* New synapse release 1.37.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 29 Jun 2021 10:15:25 +0100
matrix-synapse-py3 (1.36.0) stable; urgency=medium
* New synapse release 1.36.0.

@@ -7,12 +7,6 @@
tls_certificate_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.crt"
tls_private_key_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.key"
{% if SYNAPSE_ACME %}
acme:
enabled: true
port: 8009
{% endif %}
{% endif %}
## Server ##

@@ -1,161 +0,0 @@
# ACME
From version 1.0 (June 2019) onwards, Synapse requires valid TLS
certificates for communication between servers (by default on port
`8448`) in addition to those that are client-facing (port `443`). To
help homeserver admins fulfil this new requirement, Synapse v0.99.0
introduced support for automatically provisioning certificates through
[Let's Encrypt](https://letsencrypt.org/) using the ACME protocol.
## Deprecation of ACME v1
In [March 2019](https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430),
Let's Encrypt announced that they were deprecating version 1 of the ACME
protocol, with the plan to disable the use of it for new accounts in
November 2019, for new domains in June 2020, and for existing accounts and
domains in June 2021.
Synapse doesn't currently support version 2 of the ACME protocol, which
means that:
* for existing installs, Synapse's built-in ACME support will continue
to work until June 2021.
* for new installs, this feature will not work at all.
Either way, it is recommended to move from Synapse's ACME support
feature to an external automated tool such as [certbot](https://github.com/certbot/certbot)
(or browse [this list](https://letsencrypt.org/fr/docs/client-options/)
for an alternative ACME client).
It's also recommended to use a reverse proxy for the server-facing
communications (more documentation about this can be found
[here](/docs/reverse_proxy.md)) as well as the client-facing ones and
have it serve the certificates.
In case you can't do that and need Synapse to serve them itself, make
sure to set the `tls_certificate_path` configuration setting to the path
of the certificate (make sure to use the certificate containing the full
certification chain, e.g. `fullchain.pem` if using certbot) and
`tls_private_key_path` to the path of the matching private key. Note
that in this case you will need to restart Synapse after each
certificate renewal so that Synapse stops using the old certificate.
If you still want to use Synapse's built-in ACME support, the rest of
this document explains how to set it up.
## Initial setup
In the case that your `server_name` config variable is the same as
the hostname that the client connects to, then the same certificate can be
used between client and federation ports without issue.
If your configuration file does not already have an `acme` section, you can
generate an example config by running the `generate_config` executable. For
example:
```
~/synapse/env3/bin/generate_config
```
You will need to provide Let's Encrypt (or another ACME provider) access to
your Synapse ACME challenge responder on port 80, at the domain of your
homeserver. This requires you to either change the port of the ACME listener
provided by Synapse to a high port and reverse proxy to it, or use a tool
like `authbind` to allow Synapse to listen on port 80 without root access.
(Do not run Synapse with root permissions!) Detailed instructions are
available under "ACME setup" below.
If you already have certificates, you will need to back up or delete them
(files `example.com.tls.crt` and `example.com.tls.key` in Synapse's root
directory), Synapse's ACME implementation will not overwrite them.
## ACME setup
The main steps for enabling ACME support in short summary are:
1. Allow Synapse to listen for incoming ACME challenges.
1. Enable ACME support in `homeserver.yaml`.
1. Move your old certificates (files `example.com.tls.crt` and `example.com.tls.key`) out of the way if they currently exist at the paths specified in `homeserver.yaml`.
1. Restart Synapse.
Detailed instructions for each step are provided below.
### Listening on port 80
In order for Synapse to complete the ACME challenge to provision a
certificate, it needs access to port 80. Typically listening on port 80 is
only granted to applications running as root. There are thus two solutions to
this problem.
#### Using a reverse proxy
A reverse proxy such as Apache or nginx allows a single process (the web
server) to listen on port 80 and proxy traffic to the appropriate program
running on your server. It is the recommended method for setting up ACME as
it allows you to use your existing webserver while also allowing Synapse to
provision certificates as needed.
For nginx users, add the following line to your existing `server` block:
```
location /.well-known/acme-challenge {
proxy_pass http://localhost:8009;
}
```
For Apache, add the following to your existing webserver config:
```
ProxyPass /.well-known/acme-challenge http://localhost:8009/.well-known/acme-challenge
```
Make sure to restart/reload your webserver after making changes.
Now make the relevant changes in `homeserver.yaml` to enable ACME support:
```
acme:
enabled: true
port: 8009
```
#### Authbind
`authbind` allows a program which does not run as root to bind to
low-numbered ports in a controlled way. The setup is simpler, but requires a
webserver not to already be running on port 80. **This includes every time
Synapse renews a certificate**, which may be cumbersome if you usually run a
web server on port 80. Nevertheless, if you're sure port 80 is not being used
for any other purpose then all that is necessary is the following:
Install `authbind`. For example, on Debian/Ubuntu:
```
sudo apt-get install authbind
```
Allow `authbind` to bind port 80:
```
sudo touch /etc/authbind/byport/80
sudo chmod 777 /etc/authbind/byport/80
```
When Synapse is started, use the following syntax:
```
authbind --deep <synapse start command>
```
Make the relevant changes in `homeserver.yaml` to enable ACME support:
```
acme:
enabled: true
```
### (Re)starting synapse
Ensure that the certificate paths specified in `homeserver.yaml` (`tls_certificate_path` and `tls_private_key_path`) do not currently point to any files. Synapse will not provision certificates if files exist, as it does not want to overwrite existing certificates.
Finally, start/restart Synapse.

@@ -101,15 +101,6 @@ In this case, your `server_name` points to the host where your Synapse is
running. There is no need to create a `.well-known` URI or an SRV record, but
you will need to give Synapse a valid, signed, certificate.
The easiest way to do that is with Synapse's built-in ACME (Let's Encrypt)
support. Full details are in [ACME.md](./ACME.md) but, in a nutshell:
1. Allow Synapse to listen on port 80 with `authbind`, or forward it from a
reverse proxy.
2. Enable acme support in `homeserver.yaml`.
3. Move your old certificates out of the way.
4. Restart Synapse.
### If you do have an SRV record currently
If you are using an SRV record, your matrix domain (`server_name`) may not
@@ -130,15 +121,9 @@ In this situation, you have three choices for how to proceed:
#### Option 1: give Synapse a certificate for your matrix domain
Synapse 1.0 will expect your server to present a TLS certificate for your
`server_name` (`example.com` in the above example). You can achieve this by
doing one of the following:
* Acquire a certificate for the `server_name` yourself (for example, using
`certbot`), and give it and the key to Synapse via `tls_certificate_path`
and `tls_private_key_path`, or:
* Use Synapse's [ACME support](./ACME.md), and forward port 80 on the
`server_name` domain to your Synapse instance.
`server_name` (`example.com` in the above example). You can achieve this by acquiring a
certificate for the `server_name` yourself (for example, using `certbot`), and giving it
and the key to Synapse via `tls_certificate_path` and `tls_private_key_path`.
#### Option 2: run Synapse behind a reverse proxy
@@ -161,10 +146,9 @@ You can do this with a `.well-known` file as follows:
with Synapse 0.34 and earlier.
2. Give Synapse a certificate corresponding to the target domain
(`customer.example.net` in the above example). You can either use Synapse's
built-in [ACME support](./ACME.md) for this (via the `domain` parameter in
the `acme` section), or acquire a certificate yourself and give it to
Synapse via `tls_certificate_path` and `tls_private_key_path`.
(`customer.example.net` in the above example). You can do this by acquiring a
certificate for the target domain and giving it to Synapse via `tls_certificate_path`
and `tls_private_key_path`.
3. Restart Synapse to ensure the new certificate is loaded.

@@ -35,7 +35,7 @@
- [URL Previews](url_previews.md)
- [User Directory](user_directory.md)
- [Message Retention Policies](message_retention_policies.md)
- [Pluggable Modules]()
- [Pluggable Modules](modules.md)
- [Third Party Rules]()
- [Spam Checker](spam_checker.md)
- [Presence Router](presence_router_module.md)
@@ -61,6 +61,7 @@
- [Server Version](admin_api/version_api.md)
- [Manhole](manhole.md)
- [Monitoring](metrics-howto.md)
- [Request log format](usage/administration/request_log.md)
- [Scripts]()
# Development
@@ -69,6 +70,7 @@
- [Git Usage](dev/git.md)
- [Testing]()
- [OpenTracing](opentracing.md)
- [Database Schemas](development/database_schema.md)
- [Synapse Architecture]()
- [Log Contexts](log_contexts.md)
- [Replication](replication.md)
@@ -84,4 +86,4 @@
- [Scripts]()
# Other
- [Dependency Deprecation Policy](deprecation_policy.md)
- [Dependency Deprecation Policy](deprecation_policy.md)

@@ -2,7 +2,7 @@ Admin APIs
==========
**Note**: The latest documentation can be viewed `here <https://matrix-org.github.io/synapse>`_.
See `docs/README.md <../docs/README.md>`_ for more information.
See `docs/README.md <../README.md>`_ for more information.
**Please update links to point to the website instead.** Existing files in this directory
are preserved to maintain historical links, but may be moved in the future.
@@ -10,5 +10,5 @@ are preserved to maintain historical links, but may be moved in the future.
This directory includes documentation for the various synapse specific admin
APIs available. Updates to the existing Admin API documentation should still
be made to these files, but any new documentation files should instead be placed under
`docs/usage/administration/admin_api <../docs/usage/administration/admin_api>`_.
`docs/usage/administration/admin_api <../usage/administration/admin_api>`_.

@@ -11,4 +11,4 @@ POST /_synapse/admin/v1/delete_group/<group_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../../usage/administration/admin_api).
server admin: see [Admin API](../usage/administration/admin_api).

@@ -7,7 +7,7 @@ The api is:
GET /_synapse/admin/v1/event_reports?from=0&limit=10
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../../usage/administration/admin_api).
server admin: see [Admin API](../usage/administration/admin_api).
It returns a JSON body like the following:
@@ -95,7 +95,7 @@ The api is:
GET /_synapse/admin/v1/event_reports/<report_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../../usage/administration/admin_api).
server admin: see [Admin API](../usage/administration/admin_api).
It returns a JSON body like the following:

@@ -28,7 +28,7 @@ The API is:
GET /_synapse/admin/v1/room/<room_id>/media
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../../usage/administration/admin_api).
server admin: see [Admin API](../usage/administration/admin_api).
The API returns a JSON body like the following:
```json
@@ -311,7 +311,7 @@ The following fields are returned in the JSON response body:
* `deleted`: integer - The number of media items successfully deleted
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../../usage/administration/admin_api).
server admin: see [Admin API](../usage/administration/admin_api).
If the user re-requests purged remote media, synapse will re-request the media
from the originating server.

@@ -17,7 +17,7 @@ POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
By default, events sent by local users are not deleted, as they may represent
the only copies of this content in existence. (Events sent by remote users are

@@ -24,7 +24,7 @@ POST /_synapse/admin/v1/join/<room_id_or_alias>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../../usage/administration/admin_api).
server admin: see [Admin API](../usage/administration/admin_api).
Response:

@@ -443,7 +443,7 @@ with a body of:
```
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see [Admin API](../../usage/administration/admin_api).
server admin: see [Admin API](../usage/administration/admin_api).
A response body like the following is returned:

@@ -10,7 +10,7 @@ GET /_synapse/admin/v1/statistics/users/media
```
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../../usage/administration/admin_api).
for a server admin: see [Admin API](../usage/administration/admin_api).
A response body like the following is returned:

@@ -11,7 +11,7 @@ GET /_synapse/admin/v2/users/<user_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
It returns a JSON body like the following:
@@ -78,7 +78,7 @@ with a body of:
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
URL parameters:
@@ -119,7 +119,7 @@ GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
@@ -237,7 +237,7 @@ See also: [Client Server
API Whois](https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid).
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
It returns a JSON body like the following:
@@ -294,7 +294,7 @@ with a body of:
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
The erase parameter is optional and defaults to `false`.
An empty body may be passed for backwards compatibility.
@@ -339,7 +339,7 @@ with a body of:
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
The parameter `new_password` is required.
The parameter `logout_devices` is optional and defaults to `true`.
@@ -354,7 +354,7 @@ GET /_synapse/admin/v1/users/<user_id>/admin
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
@@ -384,7 +384,7 @@ with a body of:
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
## List room memberships of a user
@@ -398,7 +398,7 @@ GET /_synapse/admin/v1/users/<user_id>/joined_rooms
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
@@ -443,7 +443,7 @@ GET /_synapse/admin/v1/users/<user_id>/media
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
@@ -591,7 +591,7 @@ GET /_synapse/admin/v2/users/<user_id>/devices
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
@@ -659,7 +659,7 @@ POST /_synapse/admin/v2/users/<user_id>/delete_devices
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
An empty JSON dict is returned.
@@ -683,7 +683,7 @@ GET /_synapse/admin/v2/users/<user_id>/devices/<device_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
@@ -731,7 +731,7 @@ PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
An empty JSON dict is returned.
@@ -760,7 +760,7 @@ DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
An empty JSON dict is returned.
@@ -781,7 +781,7 @@ GET /_synapse/admin/v1/users/<user_id>/pushers
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
@@ -872,7 +872,7 @@ POST /_synapse/admin/v1/users/<user_id>/shadow_ban
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
An empty JSON dict is returned.
@@ -897,7 +897,7 @@ GET /_synapse/admin/v1/users/<user_id>/override_ratelimit
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
@@ -939,7 +939,7 @@ POST /_synapse/admin/v1/users/<user_id>/override_ratelimit
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
@@ -984,7 +984,7 @@ DELETE /_synapse/admin/v1/users/<user_id>/override_ratelimit
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../../usage/administration/admin_api)
server admin: [Admin API](../usage/administration/admin_api)
An empty JSON dict is returned.

@@ -24,8 +24,8 @@ To enable this, first create templates for the policy and success pages.
These should be stored on the local filesystem.
These templates use the [Jinja2](http://jinja.pocoo.org) templating language,
and [docs/privacy_policy_templates](privacy_policy_templates) gives
examples of the sort of thing that can be done.
and [docs/privacy_policy_templates](https://github.com/matrix-org/synapse/tree/develop/docs/privacy_policy_templates/)
gives examples of the sort of thing that can be done.
Note that the templates must be stored under a name giving the language of the
template - currently this must always be `en` (for "English");
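
As a sketch, assuming an English policy template versioned `1.0` plus a success page are stored as `en/1.0.html` and `en/success.html` under the template directory, the corresponding `user_consent` section of `homeserver.yaml` might look like:

```yaml
user_consent:
  template_dir: res/templates/privacy
  version: 1.0
```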

@@ -0,0 +1,137 @@
# Synapse database schema files
Synapse's database schema is stored in the `synapse.storage.schema` module.
## Logical databases
Synapse supports splitting its datastore across multiple physical databases (which can
be useful for large installations), and the schema files are therefore split according
to the logical database they apply to.
At the time of writing, the following "logical" databases are supported:
* `state` - used to store Matrix room state (more specifically, `state_groups`,
their relationships and contents).
* `main` - stores everything else.
Additionally, the `common` directory contains schema files for tables which must be
present on *all* physical databases.
## Synapse schema versions
Synapse manages its database schema via "schema versions". These are mainly used to
help avoid confusion if the Synapse codebase is rolled back after the database is
updated. They work as follows:
* The Synapse codebase defines a constant `synapse.storage.schema.SCHEMA_VERSION`
which represents the expectations made about the database by that version. For
example, as of Synapse v1.36, this is `59`.
* The database stores a "compatibility version" in
`schema_compat_version.compat_version` which defines the `SCHEMA_VERSION` of the
oldest version of Synapse which will work with the database. On startup, if
`compat_version` is found to be newer than `SCHEMA_VERSION`, Synapse will refuse to
start.
Synapse automatically updates this field from
`synapse.storage.schema.SCHEMA_COMPAT_VERSION`.
* Whenever a backwards-incompatible change is made to the database format (normally
via a `delta` file), `synapse.storage.schema.SCHEMA_COMPAT_VERSION` is also updated
so that administrators cannot accidentally roll back to a too-old version of Synapse.
Generally, the goal is to maintain compatibility with at least one or two previous
releases of Synapse, so any substantial change tends to require multiple releases and a
bit of forward-planning to get right.
As a worked example: we want to remove the `room_stats_historical` table. Here is how it
might pan out.
1. Replace any code that *reads* from `room_stats_historical` with alternative
implementations, but keep writing to it in case of rollback to an earlier version.
Also, increase `synapse.storage.schema.SCHEMA_VERSION`. In this
instance, there is no existing code which reads from `room_stats_historical`, so
our starting point is:
v1.36.0: `SCHEMA_VERSION=59`, `SCHEMA_COMPAT_VERSION=59`
2. Next (say in Synapse v1.37.0): remove the code that *writes* to
`room_stats_historical`, but don't yet remove the table in case of rollback to
v1.36.0. Again, we increase `synapse.storage.schema.SCHEMA_VERSION`, but
because we have not broken compatibility with v1.36, we do not yet update
`SCHEMA_COMPAT_VERSION`. We now have:
v1.37.0: `SCHEMA_VERSION=60`, `SCHEMA_COMPAT_VERSION=59`.
3. Later (say in Synapse v1.38.0): we can remove the table altogether. This will
break compatibility with v1.36.0, so we must update `SCHEMA_COMPAT_VERSION` accordingly.
There is no need to update `synapse.storage.schema.SCHEMA_VERSION`, since there is no
change to the Synapse codebase here. So we end up with:
v1.38.0: `SCHEMA_VERSION=60`, `SCHEMA_COMPAT_VERSION=60`.
If in doubt about whether to update `SCHEMA_VERSION` or not, it is generally best to
lean towards doing so.
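To make the rollback protection concrete, here is a minimal sketch of the startup comparison described above. It is illustrative only: the constant values and the `check_schema_compat` helper are assumptions, not Synapse's actual code.
```python
# Illustrative sketch of the rollback check, not Synapse's actual code.
SCHEMA_VERSION = 60         # what this version of the codebase expects
SCHEMA_COMPAT_VERSION = 59  # oldest codebase that can still use this database


def check_schema_compat(db_compat_version: int) -> None:
    """Refuse to start if the database was upgraded past what we support.

    db_compat_version is read from schema_compat_version.compat_version.
    """
    if db_compat_version > SCHEMA_VERSION:
        raise RuntimeError(
            "Database schema compat version %d is newer than supported "
            "version %d; refusing to start"
            % (db_compat_version, SCHEMA_VERSION)
        )
```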
## Full schema dumps
In the `full_schemas` directories, only the most recently-numbered snapshot is used
(`54` at the time of writing). Older snapshots (e.g. `16`) are present for historical
reference only.
### Building full schema dumps
If you want to recreate these schemas, they need to be made from a database that
has had all background updates run.
To do so, use `scripts-dev/make_full_schema.sh`. This will produce new
`full.sql.postgres` and `full.sql.sqlite` files.
Ensure postgres is installed, then run:
./scripts-dev/make_full_schema.sh -p postgres_username -o output_dir/
NB: at the time of writing, this script predates the split into separate `state`/`main`
databases, so it will require updates to handle that correctly.
## Boolean columns
Boolean columns require special treatment, since SQLite treats booleans the
same as integers.
There are three separate aspects to this:
* Any new boolean column must be added to the `BOOLEAN_COLUMNS` list in
`scripts/synapse_port_db`. This tells the port script to cast the integer
value from SQLite to a boolean before writing the value to the postgres
database.
* Before SQLite 3.23, `TRUE` and `FALSE` were not recognised as constants by
SQLite, and the `IS [NOT] TRUE`/`IS [NOT] FALSE` operators were not
supported. This makes it necessary to avoid using `TRUE` and `FALSE`
constants in SQL commands.
For example, to insert a `TRUE` value into the database, write:
```python
txn.execute("INSERT INTO tbl(col) VALUES (?)", (True, ))
```
* Default values for new boolean columns present a particular
difficulty. Generally it is best to create separate schema files for
Postgres and SQLite. For example:
```sql
# in 00delta.sql.postgres:
ALTER TABLE tbl ADD COLUMN col BOOLEAN DEFAULT FALSE;
```
```sql
# in 00delta.sql.sqlite:
ALTER TABLE tbl ADD COLUMN col BOOLEAN DEFAULT 0;
```
Note that there is a particularly insidious failure mode here: the Postgres
flavour will be accepted by SQLite 3.22, but will give a column whose
default value is the **string** `"FALSE"` - which, when cast back to a boolean
in Python, evaluates to `True`.
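The pitfall is easy to demonstrate in plain Python:
```python
# Any non-empty string is truthy in Python -- including "FALSE" -- so a
# column whose default came back as the string "FALSE" reads as True.
assert bool("FALSE") is True
assert bool("") is False
```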


@ -14,7 +14,7 @@ you set the `server_name` to match your machine's public DNS hostname.
For this default configuration to work, you will need to listen for TLS
connections on port 8448. The preferred way to do that is by using a
reverse proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions
reverse proxy: see [reverse_proxy.md](reverse_proxy.md) for instructions
on how to correctly set one up.
In some cases you might not want to run Synapse on the machine that has
@ -44,7 +44,7 @@ a complicated dance which requires connections in both directions).
Another common problem is that people on other servers can't join rooms that
you invite them to. This can be caused by an incorrectly-configured reverse
proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions on how to correctly
proxy: see [reverse_proxy.md](reverse_proxy.md) for instructions on how to correctly
configure a reverse proxy.
### Known issues
@ -63,4 +63,4 @@ release of Synapse.
If you want to get up and running quickly with a trio of homeservers in a
private federation, there is a script in the `demo` directory. This is mainly
useful just for development purposes. See [demo/README](<../demo/README>).
useful just for development purposes. See [demo/README](https://github.com/matrix-org/synapse/tree/develop/demo/).


@ -51,7 +51,7 @@ clients.
Support for this feature can be enabled and configured in the
`retention` section of the Synapse configuration file (see the
[sample file](https://github.com/matrix-org/synapse/blob/v1.7.3/docs/sample_config.yaml#L332-L393)).
[sample file](https://github.com/matrix-org/synapse/blob/v1.36.0/docs/sample_config.yaml#L451-L518)).
To enable support for message retention policies, set the setting
`enabled` in this section to `true`.
@ -87,7 +87,7 @@ expired events from the database. They are only run if support for
message retention policies is enabled in the server's configuration. If
no configuration for purge jobs is configured by the server admin,
Synapse will use a default configuration, which is described in the
[sample configuration file](https://github.com/matrix-org/synapse/blob/master/docs/sample_config.yaml#L332-L393).
[sample configuration file](https://github.com/matrix-org/synapse/blob/v1.36.0/docs/sample_config.yaml#L451-L518).
Some server admins might want a finer control on when events are removed
depending on an event's room's policy. This can be done by setting the


@ -72,8 +72,7 @@
## Monitoring workers
To monitor a Synapse installation using
[workers](https://github.com/matrix-org/synapse/blob/master/docs/workers.md),
To monitor a Synapse installation using [workers](workers.md),
every worker needs to be monitored independently, in addition to
the main homeserver process. This is because workers don't send
their metrics to the main homeserver process, but expose them

docs/modules.md

@ -0,0 +1,258 @@
# Modules
Synapse supports extending its functionality by configuring external modules.
## Using modules
To use a module on Synapse, add it to the `modules` section of the configuration file:
```yaml
modules:
  - module: my_super_module.MySuperClass
    config:
      do_thing: true
  - module: my_other_super_module.SomeClass
    config: {}
```
Each module is defined by a path to a Python class as well as a configuration. This
information for a given module should be available in the module's own documentation.
**Note**: When using third-party modules, you effectively allow someone else to run
custom code on your Synapse homeserver. Server admins are encouraged to verify the
provenance of the modules they use on their homeserver and make sure the modules aren't
running malicious code on their instance.
Also note that we are currently in the process of migrating module interfaces to this
system. While some interfaces might be compatible with it, others still require
configuring modules in another part of Synapse's configuration file. Currently, only the
spam checker interface is compatible with this new system.
## Writing a module
A module is a Python class that uses Synapse's module API to interact with the
homeserver. It can register callbacks that Synapse will call on specific operations, as
well as web resources to attach to Synapse's web server.
When instantiated, a module is given its parsed configuration as well as an instance of
the `synapse.module_api.ModuleApi` class. The configuration is a dictionary, and is
either the output of the module's `parse_config` static method (see below), or the
configuration associated with the module in Synapse's configuration file.
See the documentation for the `ModuleApi` class
[here](https://github.com/matrix-org/synapse/blob/master/synapse/module_api/__init__.py).
### Handling the module's configuration
A module can implement the following static method:
```python
@staticmethod
def parse_config(config: dict) -> dict
```
This method is given a dictionary resulting from parsing the YAML configuration for the
module. It may modify it (for example, by parsing durations expressed as strings such
as "5d" into milliseconds) and return the modified dictionary. It may also verify
that the configuration is correct, and raise an instance of
`synapse.module_api.errors.ConfigError` if not.
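As an illustration, here is a minimal sketch of a `parse_config` implementation. The `retention_period` option and its `"5d"` format are hypothetical, chosen only to show the parse-and-validate pattern:
```python
from synapse.module_api.errors import ConfigError


class MySuperModule:
    @staticmethod
    def parse_config(config: dict) -> dict:
        # "retention_period" is a hypothetical option, for illustration only.
        period = config.get("retention_period", "5d")
        if not (isinstance(period, str) and period.endswith("d")):
            raise ConfigError("retention_period must be a string like '5d'")
        try:
            days = int(period[:-1])
        except ValueError:
            raise ConfigError("retention_period must be a string like '5d'")
        # Store the parsed value in milliseconds for later use by the module.
        config["retention_period_ms"] = days * 24 * 60 * 60 * 1000
        return config
```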
### Registering a web resource
Modules can register web resources onto Synapse's web server using the following module
API method:
```python
def ModuleApi.register_web_resource(path: str, resource: IResource)
```
The path is the full absolute path to register the resource at. For example, if you
register a resource for the path `/_synapse/client/my_super_module/say_hello`, Synapse
will serve it at `http(s)://[HS_URL]/_synapse/client/my_super_module/say_hello`. Note
that Synapse does not allow registering resources for several sub-paths in the `/_matrix`
namespace (such as anything under `/_matrix/client` for example). It is strongly
recommended that modules register their web resources under the `/_synapse/client`
namespace.
The provided resource is a Python class that implements Twisted's [IResource](https://twistedmatrix.com/documents/current/api/twisted.web.resource.IResource.html)
interface (such as [Resource](https://twistedmatrix.com/documents/current/api/twisted.web.resource.Resource.html)).
Only one resource can be registered for a given path. If several modules attempt to
register a resource for the same path, the module that appears first in Synapse's
configuration file takes priority.
Modules **must** register their web resources in their `__init__` method.
### Registering a callback
Modules can use Synapse's module API to register callbacks. Callbacks are functions that
Synapse will call when performing specific actions. Callbacks must be asynchronous, and
are split into categories. A single module may implement callbacks from multiple categories,
and is under no obligation to implement all callbacks from the categories it registers
callbacks for.
#### Spam checker callbacks
To register one of the callbacks described in this section, a module needs to use the
module API's `register_spam_checker_callbacks` method. The callback functions are passed
to `register_spam_checker_callbacks` as keyword arguments, with the callback name as the
argument name and the function as its value. This is demonstrated in the example below.
The available spam checker callbacks are:
```python
async def check_event_for_spam(event: "synapse.events.EventBase") -> Union[bool, str]
```
Called when receiving an event from a client or via federation. The module can return
either a `bool` to indicate whether the event must be rejected because of spam, or a `str`
to indicate the event must be rejected because of spam and to give a rejection reason to
forward to clients.
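For instance, a module might implement it along these lines (a sketch; the keyword filter is invented for illustration):
```python
from typing import Union


async def check_event_for_spam(event: "synapse.events.EventBase") -> Union[bool, str]:
    body = event.content.get("body", "")
    if "buy cheap pills" in body.lower():
        # Returning a string rejects the event and forwards this reason to clients.
        return "Message rejected as spam"
    # Returning False lets the event through.
    return False
```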
```python
async def user_may_invite(inviter: str, invitee: str, room_id: str) -> bool
```
Called when processing an invitation. The module must return a `bool` indicating whether
the inviter can invite the invitee to the given room. Both inviter and invitee are
represented by their Matrix user ID (i.e. `@alice:example.com`).
```python
async def user_may_create_room(user: str) -> bool
```
Called when processing a room creation request. The module must return a `bool` indicating
whether the given user (represented by their Matrix user ID) is allowed to create a room.
```python
async def user_may_create_room_alias(user: str, room_alias: "synapse.types.RoomAlias") -> bool
```
Called when trying to associate an alias with an existing room. The module must return a
`bool` indicating whether the given user (represented by their Matrix user ID) is allowed
to set the given alias.
```python
async def user_may_publish_room(user: str, room_id: str) -> bool
```
Called when trying to publish a room to the homeserver's public rooms directory. The
module must return a `bool` indicating whether the given user (represented by their
Matrix user ID) is allowed to publish the given room.
```python
async def check_username_for_spam(user_profile: Dict[str, str]) -> bool
```
Called when computing search results in the user directory. The module must return a
`bool` indicating whether the given user profile can appear in search results. The profile
is represented as a dictionary with the following keys:
* `user_id`: The Matrix ID for this user.
* `display_name`: The user's display name.
* `avatar_url`: The `mxc://` URL to the user's avatar.
The module is given a copy of the original dictionary, so modifying it from within the
module cannot modify a user's profile when included in user directory search results.
```python
async def check_registration_for_spam(
    email_threepid: Optional[dict],
    username: Optional[str],
    request_info: Collection[Tuple[str, str]],
    auth_provider_id: Optional[str] = None,
) -> "synapse.spam_checker_api.RegistrationBehaviour"
```
Called when registering a new user. The module must return a `RegistrationBehaviour`
indicating whether the registration can go through or must be denied, or whether the user
may be allowed to register but will be shadow banned.
The arguments passed to this callback are:
* `email_threepid`: The email address used for registering, if any.
* `username`: The username the user would like to register. Can be `None`, meaning that
Synapse will generate one later.
* `request_info`: A collection of tuples, each containing a user agent as its first
item and an IP address as its second. These are the user agents and IP addresses
used during the registration process.
* `auth_provider_id`: The identifier of the SSO authentication provider, if any.
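A sketch of a callback using these arguments follows; the blocked word and email domain are invented for illustration:
```python
from typing import Collection, Optional, Tuple

from synapse.spam_checker_api import RegistrationBehaviour


async def check_registration_for_spam(
    email_threepid: Optional[dict],
    username: Optional[str],
    request_info: Collection[Tuple[str, str]],
    auth_provider_id: Optional[str] = None,
) -> RegistrationBehaviour:
    # Invented rule: shadow-ban usernames containing a blocked word.
    if username and "spam" in username.lower():
        return RegistrationBehaviour.SHADOW_BAN
    # Invented rule: deny registrations using a throwaway email domain.
    if email_threepid and email_threepid.get("address", "").endswith("@disposable.example"):
        return RegistrationBehaviour.DENY
    return RegistrationBehaviour.ALLOW
```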
```python
async def check_media_file_for_spam(
    file_wrapper: "synapse.rest.media.v1.media_storage.ReadableFileWrapper",
    file_info: "synapse.rest.media.v1._base.FileInfo"
) -> bool
```
Called when storing a local or remote file. The module must return a boolean indicating
whether the given file can be stored in the homeserver's media store.
### Porting an existing module that uses the old interface
In order to port a module that uses Synapse's old module interface, its author needs to:
* ensure the module's callbacks are all asynchronous.
* register their callbacks using one or more of the `register_[...]_callbacks` methods
from the `ModuleApi` class in the module's `__init__` method (see [this section](#registering-a-callback)
for more info).
Additionally, if the module is packaged with an additional web resource, the module
should register this resource in its `__init__` method using the `register_web_resource`
method from the `ModuleApi` class (see [this section](#registering-a-web-resource) for
more info).
The module's author should also update any example in the module's configuration to only
use the new `modules` section in Synapse's configuration file (see [this section](#using-modules)
for more info).
### Example
The example below is a module that implements the spam checker callback
`user_may_create_room` to deny room creation to user `@evilguy:example.com`, and registers
a web resource to the path `/_synapse/client/demo/hello` that returns a JSON object.
```python
import json

from twisted.web.resource import Resource
from twisted.web.server import Request

from synapse.module_api import ModuleApi


class DemoResource(Resource):
    def __init__(self, config):
        super(DemoResource, self).__init__()
        self.config = config

    def render_GET(self, request: Request):
        # Query arguments arrive as bytes; decode before serialising, and fall
        # back to a default if no "name" argument was supplied.
        name = request.args.get(b"name", [b"world"])[0].decode("utf-8")
        request.setHeader(b"Content-Type", b"application/json")
        # render_GET must return bytes, so encode the JSON string.
        return json.dumps({"hello": name}).encode("utf-8")


class DemoModule:
    def __init__(self, config: dict, api: ModuleApi):
        self.config = config
        self.api = api

        self.api.register_web_resource(
            path="/_synapse/client/demo/hello",
            resource=DemoResource(self.config),
        )

        self.api.register_spam_checker_callbacks(
            user_may_create_room=self.user_may_create_room,
        )

    @staticmethod
    def parse_config(config):
        return config

    async def user_may_create_room(self, user: str) -> bool:
        if user == "@evilguy:example.com":
            return False
        return True
```


@ -30,7 +30,7 @@ presence to (for those users that the receiving user is considered interested in
It does not include state for users who are currently offline, and it can only be
called on workers that support sending federation. Additionally, this method must
only be called from the process that has been configured to write to the
the [presence stream](https://github.com/matrix-org/synapse/blob/master/docs/workers.md#stream-writers).
the [presence stream](workers.md#stream-writers).
By default, this is the main process, but another worker can be configured to do
so.


@ -21,7 +21,7 @@ port 8448. Where these are different, we refer to the 'client port' and the
'federation port'. See [the Matrix
specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
for more details of the algorithm used for federation connections, and
[delegate.md](<delegate.md>) for instructions on setting up delegation.
[delegate.md](delegate.md) for instructions on setting up delegation.
**NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
the requested URI in any way (for example, by decoding `%xx` escapes).


@ -31,6 +31,22 @@
#
# [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
## Modules ##
# Server admins can expand Synapse's functionality with external modules.
#
# See https://matrix-org.github.io/synapse/develop/modules.html for more
# documentation on how to configure or create custom modules for Synapse.
#
modules:
# - module: my_super_module.MySuperClass
# config:
# do_thing: true
# - module: my_other_super_module.SomeClass
# config: {}
## Server ##
# The public-facing domain of the server
@ -620,13 +636,9 @@ retention:
# This certificate, as of Synapse 1.0, will need to be a valid and verifiable
# certificate, signed by a recognised Certificate Authority.
#
# See 'ACME support' below to enable auto-provisioning this certificate via
# Let's Encrypt.
#
# If supplying your own, be sure to use a `.pem` file that includes the
# full certificate chain including any intermediate certificates (for
# instance, if using certbot, use `fullchain.pem` as your certificate,
# not `cert.pem`).
# Be sure to use a `.pem` file that includes the full certificate chain including
# any intermediate certificates (for instance, if using certbot, use
# `fullchain.pem` as your certificate, not `cert.pem`).
#
#tls_certificate_path: "CONFDIR/SERVERNAME.tls.crt"
@ -677,80 +689,6 @@ retention:
# - myCA2.pem
# - myCA3.pem
# ACME support: This will configure Synapse to request a valid TLS certificate
# for your configured `server_name` via Let's Encrypt.
#
# Note that ACME v1 is now deprecated, and Synapse currently doesn't support
# ACME v2. This means that this feature currently won't work with installs set
# up after November 2019. For more info, and alternative solutions, see
# https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
#
# Note that provisioning a certificate in this way requires port 80 to be
# routed to Synapse so that it can complete the http-01 ACME challenge.
# By default, if you enable ACME support, Synapse will attempt to listen on
# port 80 for incoming http-01 challenges - however, this will likely fail
# with 'Permission denied' or a similar error.
#
# There are a couple of potential solutions to this:
#
# * If you already have an Apache, Nginx, or similar listening on port 80,
# you can configure Synapse to use an alternate port, and have your web
# server forward the requests. For example, assuming you set 'port: 8009'
# below, on Apache, you would write:
#
# ProxyPass /.well-known/acme-challenge http://localhost:8009/.well-known/acme-challenge
#
# * Alternatively, you can use something like `authbind` to give Synapse
# permission to listen on port 80.
#
acme:
# ACME support is disabled by default. Set this to `true` and uncomment
# tls_certificate_path and tls_private_key_path above to enable it.
#
enabled: false
# Endpoint to use to request certificates. If you only want to test,
# use Let's Encrypt's staging url:
# https://acme-staging.api.letsencrypt.org/directory
#
#url: https://acme-v01.api.letsencrypt.org/directory
# Port number to listen on for the HTTP-01 challenge. Change this if
# you are forwarding connections through Apache/Nginx/etc.
#
port: 80
# Local addresses to listen on for incoming connections.
# Again, you may want to change this if you are forwarding connections
# through Apache/Nginx/etc.
#
bind_addresses: ['::', '0.0.0.0']
# How many days remaining on a certificate before it is renewed.
#
reprovision_threshold: 30
# The domain that the certificate should be for. Normally this
# should be the same as your Matrix domain (i.e., 'server_name'), but,
# by putting a file at 'https://<server_name>/.well-known/matrix/server',
# you can delegate incoming traffic to another server. If you do that,
# you should give the target of the delegation here.
#
# For example: if your 'server_name' is 'example.com', but
# 'https://example.com/.well-known/matrix/server' delegates to
# 'matrix.example.com', you should put 'matrix.example.com' here.
#
# If not set, defaults to your 'server_name'.
#
domain: matrix.example.com
# file to use for the account key. This will be generated if it doesn't
# exist.
#
# If unspecified, we will use CONFDIR/client.key.
#
account_key_file: DATADIR/acme_account.key
## Federation ##
@ -1028,6 +966,10 @@ media_store_path: "DATADIR/media_store"
# The largest allowed upload size in bytes
#
# If you are using a reverse proxy you may also need to set this value in
# your reverse proxy's config. Notably Nginx has a small max body size by default.
# See https://matrix-org.github.io/synapse/develop/reverse_proxy.html.
#
#max_upload_size: 50M
# The largest allowed size for a user avatar. If not defined, no
@ -2274,6 +2216,17 @@ sso:
# - https://riot.im/develop
# - https://my.custom.client/
# Uncomment to keep a user's profile fields in sync with information from
# the identity provider. Currently only syncing the displayname is
# supported. Fields are checked on every SSO login, and are updated
# if necessary.
#
# Note that enabling this option will override user profile information,
# regardless of whether users have opted-out of syncing that
# information when first signing in. Defaults to false.
#
#update_profile_information: true
# Directory in which Synapse will try to find the template files below.
# If not set, or the files named below are not found within the template
# directory, default templates from within the Synapse package will be used.
@ -2555,6 +2508,10 @@ ui_auth:
# the user-interactive authentication process, by allowing for multiple
# (and potentially different) operations to use the same validation session.
#
# This is ignored for potentially "dangerous" operations (including
# deactivating an account, modifying an account password, and
# adding a 3PID).
#
# Uncomment below to allow for credential validation to last for 15
# seconds.
#
@ -2802,19 +2759,6 @@ push:
#group_unread_count_by_room: false
# Spam checkers are third-party modules that can block specific actions
# of local users, such as creating rooms and registering undesirable
# usernames, as well as remote users by redacting incoming events.
#
spam_checker:
#- module: "my_custom_project.SuperSpamChecker"
# config:
# example_option: 'things'
#- module: "some_other_project.BadEventStopper"
# config:
# example_stop_events_from: ['@bad:example.com']
## Rooms ##
# Controls whether locally-created rooms should be end-to-end encrypted by


@ -1,3 +1,7 @@
**Note: this page of the Synapse documentation is now deprecated. For up to date
documentation on setting up or writing a spam checker module, please see
[this page](https://matrix-org.github.io/synapse/develop/modules.html).**
# Handling spam in Synapse
Synapse has support to customize spam checking behavior. It can plug into a


@ -108,7 +108,7 @@ A custom mapping provider must specify the following methods:
Synapse has a built-in OpenID mapping provider if a custom provider isn't
specified in the config. It is located at
[`synapse.handlers.oidc.JinjaOidcMappingProvider`](../synapse/handlers/oidc.py).
[`synapse.handlers.oidc.JinjaOidcMappingProvider`](https://github.com/matrix-org/synapse/blob/develop/synapse/handlers/oidc.py).
## SAML Mapping Providers
@ -194,4 +194,4 @@ A custom mapping provider must specify the following methods:
Synapse has a built-in SAML mapping provider if a custom provider isn't
specified in the config. It is located at
[`synapse.handlers.saml.DefaultSamlMappingProvider`](../synapse/handlers/saml.py).
[`synapse.handlers.saml.DefaultSamlMappingProvider`](https://github.com/matrix-org/synapse/blob/develop/synapse/handlers/saml.py).


@ -6,16 +6,18 @@ well as a `matrix-synapse-worker@` service template for any workers you
require. Additionally, to group the required services, it sets up a
`matrix-synapse.target`.
See the folder [system](system) for the systemd unit files.
See the folder [system](https://github.com/matrix-org/synapse/tree/develop/docs/systemd-with-workers/system/)
for the systemd unit files.
The folder [workers](workers) contains an example configuration for the
`federation_reader` worker.
The folder [workers](https://github.com/matrix-org/synapse/tree/develop/docs/systemd-with-workers/workers/)
contains an example configuration for the `federation_reader` worker.
## Synapse configuration files
See [workers.md](../workers.md) for information on how to set up the
configuration files and reverse-proxy correctly. You can find an example worker
config in the [workers](workers) folder.
config in the [workers](https://github.com/matrix-org/synapse/tree/develop/docs/systemd-with-workers/workers/)
folder.
Systemd manages daemonization itself, so ensure that none of the configuration
files set either `daemonize` or `worker_daemonize`.
@ -29,8 +31,8 @@ There is no need for a separate configuration file for the master process.
## Set up
1. Adjust synapse configuration files as above.
1. Copy the `*.service` and `*.target` files in [system](system) to
`/etc/systemd/system`.
1. Copy the `*.service` and `*.target` files in [system](https://github.com/matrix-org/synapse/tree/develop/docs/systemd-with-workers/system/)
to `/etc/systemd/system`.
1. Run `systemctl daemon-reload` to tell systemd to load the new unit files.
1. Run `systemctl enable matrix-synapse.service`. This will configure the
synapse master process to be started as part of the `matrix-synapse.target`


@ -0,0 +1,44 @@
# Request log format
HTTP request logs are written by synapse (see [`site.py`](../synapse/http/site.py) for details).
See the following for how to decode the dense data available from the default logging configuration.
```
2020-10-01 12:00:00,000 - synapse.access.http.8008 - 311 - INFO - PUT-1000- 192.168.0.1 - 8008 - {another-matrix-server.com} Processed request: 0.100sec/-0.000sec (0.000sec, 0.000sec) (0.001sec/0.090sec/3) 11B !200 "PUT /_matrix/federation/v1/send/1600000000000 HTTP/1.1" "Synapse/1.20.1" [0 dbevts]
-AAAAAAAAAAAAAAAAAAAAA- -BBBBBBBBBBBBBBBBBBBBBB- -C- -DD- -EEEEEE- -FFFFFFFFF- -GG- -HHHHHHHHHHHHHHHHHHHHHHH- -IIIIII- -JJJJJJJ- -KKKKKK-, -LLLLLL- -MMMMMMM- -NNNNNN- O -P- -QQ- -RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR- -SSSSSSSSSSSS- -TTTTTT-
```
| Part | Explanation |
| ----- | ------------ |
| AAAA | Timestamp request was logged (not received) |
| BBBB | Logger name (`synapse.access.(http\|https).<tag>`, where 'tag' is defined in the `listeners` config section, normally the port) |
| CCCC | Line number in code |
| DDDD | Log Level |
| EEEE | Request Identifier (This identifier is shared by related log lines)|
| FFFF | Source IP (Or X-Forwarded-For if enabled) |
| GGGG | Server Port |
| HHHH | Federated Server or Local User making request (blank if unauthenticated or not supplied) |
| IIII | Total Time to process the request |
| JJJJ | Time to send response over network once generated (this may be negative if the socket is closed before the response is generated)|
| KKKK | Userland CPU time |
| LLLL | System CPU time |
| MMMM | Total time waiting for a free DB connection from the pool across all parallel DB work from this request |
| NNNN | Total time waiting for response to DB queries across all parallel DB work from this request |
| OOOO | Count of DB transactions performed |
| PPPP | Response body size |
| QQQQ | Response status code (prefixed with ! if the socket was closed before the response was generated) |
| RRRR | Request |
| SSSS | User-agent |
| TTTT | Events fetched from DB to service this request (note that this does not include events fetched from the cache) |
MMMM / NNNN can be greater than IIII if there are multiple slow database queries
running in parallel.
Some actions can result in multiple identical HTTP requests, which will return
the same data, but only the first request reports time/transactions in
`KKKK`/`LLLL`/`MMMM`/`NNNN`/`OOOO`; the others wait for the first request's
response and return at the same time as it does, but with very small processing
times.
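If you need to pull these fields apart programmatically, the coarse `" - "`-delimited prefix can be split off directly; the sketch below is an illustration, not a parser that Synapse ships:
```python
# Split the " - "-delimited fields off the front of a default-format line.
# The letters refer to the parts in the table above.
line = (
    "2020-10-01 12:00:00,000 - synapse.access.http.8008 - 311 - INFO "
    "- PUT-1000- 192.168.0.1 - 8008 - {another-matrix-server.com} "
    "Processed request: 0.100sec/-0.000sec ..."
)

timestamp, logger_name, code_line, level, rest = line.split(" - ", 4)
print(timestamp)    # AAAA: when the request was logged
print(logger_name)  # BBBB: synapse.access.(http|https).<tag>
print(code_line)    # CCCC: line number in code
print(level)        # DDDD: log level
print(rest)         # EEEE onwards: request id, source IP, port, and details
```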


@ -16,7 +16,7 @@ workers only work with PostgreSQL-based Synapse deployments. SQLite should only
be used for demo purposes and any admin considering workers should already be
running PostgreSQL.
See also https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-scalability
See also [Matrix.org blog post](https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-scalability)
for a higher level overview.
## Main process/worker communication


@ -176,9 +176,6 @@ ignore_missing_imports = True
[mypy-josepy.*]
ignore_missing_imports = True
[mypy-txacme.*]
ignore_missing_imports = True
[mypy-pympler.*]
ignore_missing_imports = True


@ -65,4 +65,4 @@ if [[ -n "$1" ]]; then
fi
# Run the tests!
go test -v -tags synapse_blacklist,msc2946,msc3083 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2716 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests


@ -97,7 +97,7 @@ CONDITIONAL_REQUIREMENTS["all"] = list(ALL_OPTIONAL_REQUIREMENTS)
# We pin black so that our tests don't start failing on new releases.
CONDITIONAL_REQUIREMENTS["lint"] = [
"isort==5.7.0",
"black==20.8b1",
"black==21.6b0",
"flake8-comprehensions",
"flake8-bugbear==21.3.2",
"flake8",


@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.36.0"
__version__ = "1.37.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@ -92,11 +92,8 @@ class Auth:
async def check_from_context(
self, room_version: str, event, context, do_sig_check=True
) -> None:
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = self.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events_by_id = await self.store.get_events(auth_events_ids)
auth_event_ids = event.auth_event_ids()
auth_events_by_id = await self.store.get_events(auth_event_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events_by_id.values()}
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
@ -207,7 +204,7 @@ class Auth:
request.requester = user_id
if user_id in self._force_tracing_for_users:
opentracing.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1)
opentracing.force_tracing()
opentracing.set_tag("authenticated_entity", user_id)
opentracing.set_tag("user_id", user_id)
opentracing.set_tag("appservice_id", app_service.id)
@ -260,7 +257,7 @@ class Auth:
request.requester = requester
if user_info.token_owner in self._force_tracing_for_users:
opentracing.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1)
opentracing.force_tracing()
opentracing.set_tag("authenticated_entity", user_info.token_owner)
opentracing.set_tag("user_id", user_info.user_id)
if device_id:


@ -65,6 +65,12 @@ class JoinRules:
MSC3083_RESTRICTED = "restricted"
class RestrictedJoinRuleTypes:
"""Understood types for the allow rules in restricted join rules."""
ROOM_MEMBERSHIP = "m.room_membership"
class LoginType:
PASSWORD = "m.login.password"
EMAIL_IDENTITY = "m.login.email.identity"
@ -112,8 +118,9 @@ class EventTypes:
SpaceChild = "m.space.child"
SpaceParent = "m.space.parent"
MSC1772_SPACE_CHILD = "org.matrix.msc1772.space.child"
MSC1772_SPACE_PARENT = "org.matrix.msc1772.space.parent"
MSC2716_INSERTION = "org.matrix.msc2716.insertion"
MSC2716_MARKER = "org.matrix.msc2716.marker"
class ToDeviceEventTypes:
@ -180,7 +187,18 @@ class EventContentFields:
# cf https://github.com/matrix-org/matrix-doc/pull/1772
ROOM_TYPE = "type"
MSC1772_ROOM_TYPE = "org.matrix.msc1772.type"
# Used on normal messages to indicate they were historically imported after the fact
MSC2716_HISTORICAL = "org.matrix.msc2716.historical"
# For "insertion" events
MSC2716_NEXT_CHUNK_ID = "org.matrix.msc2716.next_chunk_id"
# Used on normal message events to indicate where the chunk connects to
MSC2716_CHUNK_ID = "org.matrix.msc2716.chunk_id"
# For "marker" events
MSC2716_MARKER_INSERTION = "org.matrix.msc2716.marker.insertion"
MSC2716_MARKER_INSERTION_PREV_EVENTS = (
"org.matrix.msc2716.marker.insertion_prev_events"
)
class RoomEncryptionAlgorithms:


@ -450,7 +450,7 @@ class IncompatibleRoomVersionError(SynapseError):
super().__init__(
code=400,
msg="Your homeserver does not support the features required to "
"join this room",
"interact with this room",
errcode=Codes.INCOMPATIBLE_ROOM_VERSION,
)


@ -56,7 +56,7 @@ class RoomVersion:
state_res = attr.ib(type=int) # one of the StateResolutionVersions
enforce_key_validity = attr.ib(type=bool)
# Before MSC2261/MSC2432, m.room.aliases had special auth rules and redaction rules
# Before MSC2432, m.room.aliases had special auth rules and redaction rules
special_case_aliases_auth = attr.ib(type=bool)
# Strictly enforce canonicaljson, do not allow:
# * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]
@ -72,7 +72,7 @@ class RoomVersion:
msc3083_join_rules = attr.ib(type=bool)
# MSC2403: Allows join_rules to be set to 'knock', changes auth rules to allow sending
# m.room.membership event with membership 'knock'.
allow_knocking = attr.ib(type=bool)
msc2403_knocking = attr.ib(type=bool)
class RoomVersions:
@ -86,8 +86,8 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
allow_knocking=False,
msc3083_join_rules=False,
msc2403_knocking=False,
)
V2 = RoomVersion(
"2",
@ -99,8 +99,8 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
allow_knocking=False,
msc3083_join_rules=False,
msc2403_knocking=False,
)
V3 = RoomVersion(
"3",
@ -112,8 +112,8 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
allow_knocking=False,
msc3083_join_rules=False,
msc2403_knocking=False,
)
V4 = RoomVersion(
"4",
@ -125,8 +125,8 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
allow_knocking=False,
msc3083_join_rules=False,
msc2403_knocking=False,
)
V5 = RoomVersion(
"5",
@ -138,8 +138,8 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
allow_knocking=False,
msc3083_join_rules=False,
msc2403_knocking=False,
)
V6 = RoomVersion(
"6",
@ -151,21 +151,8 @@ class RoomVersions:
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
allow_knocking=False,
msc3083_join_rules=False,
)
V7 = RoomVersion(
"7",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
allow_knocking=True,
msc3083_join_rules=False,
msc2403_knocking=False,
)
MSC2176 = RoomVersion(
"org.matrix.msc2176",
@ -178,6 +165,7 @@ class RoomVersions:
limit_notifications_power_levels=True,
msc2176_redaction_rules=True,
msc3083_join_rules=False,
msc2403_knocking=False,
)
MSC3083 = RoomVersion(
"org.matrix.msc3083",
@ -190,7 +178,20 @@ class RoomVersions:
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
allow_knocking=False,
msc2403_knocking=False,
)
V7 = RoomVersion(
"7",
RoomDisposition.STABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=True,
)
@ -206,5 +207,7 @@ KNOWN_ROOM_VERSIONS = {
RoomVersions.V7,
RoomVersions.MSC2176,
RoomVersions.MSC3083,
RoomVersions.V7,
)
# Note that we do not include MSC2043 here unless it is enabled in the config.
} # type: Dict[str, RoomVersion]


@ -26,7 +26,9 @@ from typing import Awaitable, Callable, Iterable
from cryptography.utils import CryptographyDeprecationWarning
from typing_extensions import NoReturn
import twisted
from twisted.internet import defer, error, reactor
from twisted.logger import LoggingFile, LogLevel
from twisted.protocols.tls import TLSMemoryBIOFactory
import synapse
@ -35,10 +37,10 @@ from synapse.app import check_bind_error
from synapse.app.phone_stats_home import start_phone_stats_home
from synapse.config.homeserver import HomeServerConfig
from synapse.crypto import context_factory
from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.logging.context import PreserveLoggingContext
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.util.async_helpers import Linearizer
from synapse.util.daemonize import daemonize_process
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
@ -112,8 +114,6 @@ def start_reactor(
run_command (Callable[]): callable that actually runs the reactor
"""
install_dns_limiter(reactor)
def run():
logger.info("Running")
setup_jemalloc_stats()
@ -141,7 +141,7 @@ def start_reactor(
def quit_with_error(error_string: str) -> NoReturn:
message_lines = error_string.split("\n")
line_length = max(len(line) for line in message_lines if len(line) < 80) + 2
line_length = min(max(len(line) for line in message_lines), 80) + 2
sys.stderr.write("*" * line_length + "\n")
for line in message_lines:
sys.stderr.write(" %s\n" % (line.rstrip(),))
@ -149,6 +149,30 @@ def quit_with_error(error_string: str) -> NoReturn:
sys.exit(1)
def handle_startup_exception(e: Exception) -> NoReturn:
# Exceptions that occur between setting up the logging and forking or starting
# the reactor are written to the logs, followed by a summary to stderr.
logger.exception("Exception during startup")
quit_with_error(
f"Error during initialisation:\n {e}\nThere may be more information in the logs."
)
def redirect_stdio_to_logs() -> None:
streams = [("stdout", LogLevel.info), ("stderr", LogLevel.error)]
for (stream, level) in streams:
oldStream = getattr(sys, stream)
loggingFile = LoggingFile(
logger=twisted.logger.Logger(namespace=stream),
level=level,
encoding=getattr(oldStream, "encoding", None),
)
setattr(sys, stream, loggingFile)
print("Redirected stdout/stderr to logs")
def register_start(cb: Callable[..., Awaitable], *args, **kwargs) -> None:
"""Register a callback with the reactor, to be called once it is running
@ -292,8 +316,7 @@ async def start(hs: "synapse.server.HomeServer"):
"""
Start a Synapse server or worker.
Should be called once the reactor is running and (if we're using ACME) the
TLS certificates are in place.
Should be called once the reactor is running.
Will start the main HTTP listeners and do some other startup tasks, and then
notify systemd.
@ -334,6 +357,14 @@ async def start(hs: "synapse.server.HomeServer"):
# Start the tracer
synapse.logging.opentracing.init_tracer(hs) # type: ignore[attr-defined] # noqa
# Instantiate the modules so they can register their web resources to the module API
# before we start the listeners.
module_api = hs.get_module_api()
for module, config in hs.config.modules.loaded_modules:
module(config=config, api=module_api)
load_legacy_spam_checkers(hs)
# It is now safe to start your Synapse.
hs.start_listening()
hs.get_datastore().db_pool.start_profiling()
@ -398,107 +429,6 @@ def setup_sdnotify(hs):
)
def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
"""Replaces the resolver with one that limits the number of in flight DNS
requests.
This is to workaround https://twistedmatrix.com/trac/ticket/9620, where we
can run out of file descriptors and infinite loop if we attempt to do too
many DNS queries at once
XXX: I'm confused by this. reactor.nameResolver does not use twisted.names unless
you explicitly install twisted.names as the resolver; rather it uses a GAIResolver
backed by the reactor's default threadpool (which is limited to 10 threads). So
(a) I don't understand why twisted ticket 9620 is relevant, and (b) I don't
understand why we would run out of FDs if we did too many lookups at once.
-- richvdh 2020/08/29
"""
new_resolver = _LimitedHostnameResolver(
reactor.nameResolver, max_dns_requests_in_flight
)
reactor.installNameResolver(new_resolver)
class _LimitedHostnameResolver:
"""Wraps a IHostnameResolver, limiting the number of in-flight DNS lookups."""
def __init__(self, resolver, max_dns_requests_in_flight):
self._resolver = resolver
self._limiter = Linearizer(
name="dns_client_limiter", max_count=max_dns_requests_in_flight
)
def resolveHostName(
self,
resolutionReceiver,
hostName,
portNumber=0,
addressTypes=None,
transportSemantics="TCP",
):
# We need this function to return `resolutionReceiver` so we do all the
# actual logic involving deferreds in a separate function.
# even though this is happening within the depths of twisted, we need to drop
# our logcontext before starting _resolve, otherwise: (a) _resolve will drop
# the logcontext if it returns an incomplete deferred; (b) _resolve will
# call the resolutionReceiver *with* a logcontext, which it won't be expecting.
with PreserveLoggingContext():
self._resolve(
resolutionReceiver,
hostName,
portNumber,
addressTypes,
transportSemantics,
)
return resolutionReceiver
@defer.inlineCallbacks
def _resolve(
self,
resolutionReceiver,
hostName,
portNumber=0,
addressTypes=None,
transportSemantics="TCP",
):
with (yield self._limiter.queue(())):
# resolveHostName doesn't return a Deferred, so we need to hook into
# the receiver interface to get told when resolution has finished.
deferred = defer.Deferred()
receiver = _DeferredResolutionReceiver(resolutionReceiver, deferred)
self._resolver.resolveHostName(
receiver, hostName, portNumber, addressTypes, transportSemantics
)
yield deferred
class _DeferredResolutionReceiver:
"""Wraps a IResolutionReceiver and simply resolves the given deferred when
resolution is complete
"""
def __init__(self, receiver, deferred):
self._receiver = receiver
self._deferred = deferred
def resolutionBegan(self, resolutionInProgress):
self._receiver.resolutionBegan(resolutionInProgress)
def addressResolved(self, address):
self._receiver.addressResolved(address)
def resolutionComplete(self):
self._deferred.callback(())
self._receiver.resolutionComplete()
sdnotify_sockaddr = os.getenv("NOTIFY_SOCKET")


@ -36,7 +36,6 @@ from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
@ -54,7 +53,6 @@ class AdminCmdSlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedFilteringStore,
SlavedPresenceStore,
SlavedGroupServerStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,


@ -32,7 +32,12 @@ from synapse.api.urls import (
SERVER_KEY_V2_PREFIX,
)
from synapse.app import _base
from synapse.app._base import max_request_body_size, register_start
from synapse.app._base import (
handle_startup_exception,
max_request_body_size,
redirect_stdio_to_logs,
register_start,
)
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
@ -354,6 +359,10 @@ class GenericWorkerServer(HomeServer):
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationRestResource(self)
# Attach additional resources registered by modules.
resources.update(self._module_web_resources)
self._module_web_resources_consumed = True
root_resource = create_resource_tree(resources, OptionsResource())
_base.listen_tcp(
@ -465,14 +474,21 @@ def start(config_options):
setup_logging(hs, config, use_worker_options=True)
hs.setup()
try:
hs.setup()
# Ensure the replication streamer is always started in case we write to any
# streams. Will no-op if no streams can be written to by this worker.
hs.get_replication_streamer()
# Ensure the replication streamer is always started in case we write to any
# streams. Will no-op if no streams can be written to by this worker.
hs.get_replication_streamer()
except Exception as e:
handle_startup_exception(e)
register_start(_base.start, hs)
# redirect stdio to the logs, if configured.
if not hs.config.no_redirect_stdio:
redirect_stdio_to_logs()
_base.start_worker_reactor("synapse-generic-worker", config)


@ -37,10 +37,11 @@ from synapse.api.urls import (
)
from synapse.app import _base
from synapse.app._base import (
handle_startup_exception,
listen_ssl,
listen_tcp,
max_request_body_size,
quit_with_error,
redirect_stdio_to_logs,
register_start,
)
from synapse.config._base import ConfigError
@ -69,8 +70,6 @@ from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import WellKnownResource
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import IncorrectDatabaseSetup
from synapse.storage.prepare_database import UpgradeDatabaseException
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.module_loader import load_module
from synapse.util.versionstring import get_version_string
@ -124,6 +123,10 @@ class SynapseHomeServer(HomeServer):
)
resources[path] = resource
# Attach additional resources registered by modules.
resources.update(self._module_web_resources)
self._module_web_resources_consumed = True
# try to find something useful to redirect '/' to
if WEB_CLIENT_PREFIX in resources:
root_resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
@ -358,60 +361,10 @@ def setup(config_options):
try:
hs.setup()
except IncorrectDatabaseSetup as e:
quit_with_error(str(e))
except UpgradeDatabaseException as e:
quit_with_error("Failed to upgrade database: %s" % (e,))
async def do_acme() -> bool:
"""
Reprovision an ACME certificate, if it's required.
Returns:
Whether the cert has been updated.
"""
acme = hs.get_acme_handler()
# Check how long the certificate is active for.
cert_days_remaining = hs.config.is_disk_cert_valid(allow_self_signed=False)
# We want to reprovision if cert_days_remaining is None (meaning no
# certificate exists), or the days remaining number it returns
# is less than our re-registration threshold.
provision = False
if (
cert_days_remaining is None
or cert_days_remaining < hs.config.acme_reprovision_threshold
):
provision = True
if provision:
await acme.provision_certificate()
return provision
async def reprovision_acme():
"""
Provision a certificate from ACME, if required, and reload the TLS
certificate if it's renewed.
"""
reprovisioned = await do_acme()
if reprovisioned:
_base.refresh_certificate(hs)
except Exception as e:
handle_startup_exception(e)
async def start():
# Run the ACME provisioning code, if it's enabled.
if hs.config.acme_enabled:
acme = hs.get_acme_handler()
# Start up the webservices which we will respond to ACME
# challenges with, and then provision.
await acme.start_listening()
await do_acme()
# Check if it needs to be reprovisioned every day.
hs.get_clock().looping_call(reprovision_acme, 24 * 60 * 60 * 1000)
# Load the OIDC provider metadatas, if OIDC is enabled.
if hs.config.oidc_enabled:
oidc = hs.get_oidc_handler()
@ -500,6 +453,11 @@ def main():
# check base requirements
check_requirements()
hs = setup(sys.argv[1:])
# redirect stdio to the logs, if configured.
if not hs.config.no_redirect_stdio:
redirect_stdio_to_logs()
run(hs)


@ -405,7 +405,6 @@ class RootConfig:
listeners=None,
tls_certificate_path=None,
tls_private_key_path=None,
acme_domain=None,
):
"""
Build a default configuration file
@ -457,9 +456,6 @@ class RootConfig:
tls_private_key_path (str|None): The path to the tls private key.
acme_domain (str|None): The domain acme will try to validate. If
specified acme will be enabled.
Returns:
str: the yaml config file
"""
@ -477,7 +473,6 @@ class RootConfig:
listeners=listeners,
tls_certificate_path=tls_certificate_path,
tls_private_key_path=tls_private_key_path,
acme_domain=acme_domain,
).values()
)


@ -11,11 +11,13 @@ from synapse.config import (
database,
emailconfig,
experimental,
federation,
groups,
jwt,
key,
logger,
metrics,
modules,
oidc,
password_auth_providers,
push,
@ -85,6 +87,8 @@ class RootConfig:
thirdpartyrules: third_party_event_rules.ThirdPartyRulesConfig
tracer: tracer.TracerConfig
redis: redis.RedisConfig
modules: modules.ModulesConfig
federation: federation.FederationConfig
config_classes: List = ...
def __init__(self) -> None: ...
@ -111,7 +115,6 @@ class RootConfig:
database_conf: Optional[Any] = ...,
tls_certificate_path: Optional[str] = ...,
tls_private_key_path: Optional[str] = ...,
acme_domain: Optional[str] = ...,
): ...
@classmethod
def load_or_generate_config(cls, description: Any, argv: Any): ...


@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -103,6 +103,10 @@ class AuthConfig(Config):
# the user-interactive authentication process, by allowing for multiple
# (and potentially different) operations to use the same validation session.
#
# This is ignored for potentially "dangerous" operations (including
# deactivating an account, modifying an account password, and
# adding a 3PID).
#
# Uncomment below to allow for credential validation to last for 15
# seconds.
#


@ -35,3 +35,6 @@ class ExperimentalConfig(Config):
# MSC3026 (busy presence state)
self.msc3026_enabled = experimental.get("msc3026_enabled", False) # type: bool
# MSC2716 (backfill existing history)
self.msc2716_enabled = experimental.get("msc2716_enabled", False) # type: bool


@ -1,5 +1,4 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -30,6 +29,7 @@ from .jwt import JWTConfig
from .key import KeyConfig
from .logger import LoggingConfig
from .metrics import MetricsConfig
from .modules import ModulesConfig
from .oidc import OIDCConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .push import PushConfig
@ -56,6 +56,7 @@ from .workers import WorkerConfig
class HomeServerConfig(RootConfig):
config_classes = [
ModulesConfig,
ServerConfig,
TlsConfig,
FederationConfig,


@ -259,9 +259,7 @@ def _setup_stdlib_logging(config, log_config_path, logBeginner: LogBeginner) ->
finally:
threadlocal.active = False
logBeginner.beginLoggingTo([_log], redirectStandardIO=not config.no_redirect_stdio)
if not config.no_redirect_stdio:
print("Redirected stdout/stderr to logs")
logBeginner.beginLoggingTo([_log], redirectStandardIO=False)
def _load_logging_config(log_config_path: str) -> None:

synapse/config/modules.py

@ -0,0 +1,49 @@
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Dict, List, Tuple
from synapse.config._base import Config, ConfigError
from synapse.util.module_loader import load_module
class ModulesConfig(Config):
section = "modules"
def read_config(self, config: dict, **kwargs):
self.loaded_modules: List[Tuple[Any, Dict]] = []
configured_modules = config.get("modules") or []
for i, module in enumerate(configured_modules):
config_path = ("modules", "<item %i>" % i)
if not isinstance(module, dict):
raise ConfigError("expected a mapping", config_path)
self.loaded_modules.append(load_module(module, config_path))
def generate_config_section(self, **kwargs):
return """
## Modules ##
# Server admins can expand Synapse's functionality with external modules.
#
# See https://matrix-org.github.io/synapse/develop/modules.html for more
# documentation on how to configure or create custom modules for Synapse.
#
modules:
# - module: my_super_module.MySuperClass
# config:
# do_thing: true
# - module: my_other_super_module.SomeClass
# config: {}
"""


@ -254,6 +254,10 @@ class ContentRepositoryConfig(Config):
# The largest allowed upload size in bytes
#
# If you are using a reverse proxy you may also need to set this value in
# your reverse proxy's config. Notably Nginx has a small max body size by default.
# See https://matrix-org.github.io/synapse/develop/reverse_proxy.html.
#
#max_upload_size: 50M
# The largest allowed size for a user avatar. If not defined, no


@ -397,19 +397,22 @@ class ServerConfig(Config):
self.ip_range_whitelist = generate_ip_set(
config.get("ip_range_whitelist", ()), config_path=("ip_range_whitelist",)
)
# The federation_ip_range_blacklist is used for backwards-compatibility
# and only applies to federation and identity servers. If it is not given,
# default to ip_range_blacklist.
federation_ip_range_blacklist = config.get(
"federation_ip_range_blacklist", ip_range_blacklist
)
# Always blacklist 0.0.0.0, ::
self.federation_ip_range_blacklist = generate_ip_set(
federation_ip_range_blacklist,
["0.0.0.0", "::"],
config_path=("federation_ip_range_blacklist",),
)
# and only applies to federation and identity servers.
if "federation_ip_range_blacklist" in config:
# Always blacklist 0.0.0.0, ::
self.federation_ip_range_blacklist = generate_ip_set(
config["federation_ip_range_blacklist"],
["0.0.0.0", "::"],
config_path=("federation_ip_range_blacklist",),
)
# 'federation_ip_range_whitelist' was never a supported configuration option.
self.federation_ip_range_whitelist = None
else:
# No backwards-compatibility required, as federation_ip_range_blacklist
# is not given. Default to ip_range_blacklist and ip_range_whitelist.
self.federation_ip_range_blacklist = self.ip_range_blacklist
self.federation_ip_range_whitelist = self.ip_range_whitelist
# (undocumented) option for torturing the worker-mode replication a bit,
# for testing. The value defines the number of milliseconds to pause before

synapse/config/spam_checker.py

@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Any, Dict, List, Tuple
from synapse.config import ConfigError
@ -19,6 +20,15 @@ from synapse.util.module_loader import load_module
from ._base import Config
logger = logging.getLogger(__name__)
LEGACY_SPAM_CHECKER_WARNING = """
This server is using a spam checker module that is implementing the deprecated spam
checker interface. Please check with the module's maintainer to see if a new version
supporting Synapse's generic modules system is available.
For more information, please see https://matrix-org.github.io/synapse/develop/modules.html
---------------------------------------------------------------------------------------"""
class SpamCheckerConfig(Config):
section = "spamchecker"
@ -43,17 +53,7 @@ class SpamCheckerConfig(Config):
else:
raise ConfigError("spam_checker syntax is incorrect")
def generate_config_section(self, **kwargs):
return """\
# Spam checkers are third-party modules that can block specific actions
# of local users, such as creating rooms and registering undesirable
# usernames, as well as remote users by redacting incoming events.
#
spam_checker:
#- module: "my_custom_project.SuperSpamChecker"
# config:
# example_option: 'things'
#- module: "some_other_project.BadEventStopper"
# config:
# example_stop_events_from: ['@bad:example.com']
"""
# If this configuration is being used in any way, warn the admin that it is going
# away soon.
if self.spam_checkers:
logger.warning(LEGACY_SPAM_CHECKER_WARNING)
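Modules written against the deprecated interface can usually be ported by registering their existing methods through the module API instead. A hedged sketch (the class name and spam heuristic are illustrative; `register_spam_checker_callbacks` is the registration entry point added with the generic module interface):

```python
# Illustrative port of a legacy spam checker to the generic module interface
from synapse.module_api import ModuleApi


class ExampleSpamChecker:
    def __init__(self, config: dict, api: ModuleApi):
        api.register_spam_checker_callbacks(
            check_event_for_spam=self.check_event_for_spam,
        )

    async def check_event_for_spam(self, event):
        # Returning a string flags the event as spam and uses the string as
        # the error message shown to the user; False means "not spam".
        if "free money" in event.content.get("body", ""):
            return "This message looks like spam."
        return False
```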

synapse/config/sso.py

@ -74,6 +74,10 @@ class SSOConfig(Config):
self.sso_client_whitelist = sso_config.get("client_whitelist") or []
self.sso_update_profile_information = (
sso_config.get("update_profile_information") or False
)
# Attempt to also whitelist the server's login fallback, since that fallback sets
# the redirect URL to itself (so it can process the login token then return
# gracefully to the client). This would make it pointless to ask the user for
@ -111,6 +115,17 @@ class SSOConfig(Config):
# - https://riot.im/develop
# - https://my.custom.client/
# Uncomment to keep a user's profile fields in sync with information from
# the identity provider. Currently only syncing the displayname is
# supported. Fields are checked on every SSO login, and are updated
# if necessary.
#
# Note that enabling this option will override user profile information,
# regardless of whether users have opted-out of syncing that
# information when first signing in. Defaults to false.
#
#update_profile_information: true
# Directory in which Synapse will try to find the template files below.
# If not set, or the files named below are not found within the template
# directory, default templates from within the Synapse package will be used.

synapse/config/tls.py

@ -14,7 +14,6 @@
import logging
import os
import warnings
from datetime import datetime
from typing import List, Optional, Pattern
@ -26,45 +25,12 @@ from synapse.util import glob_to_regex
logger = logging.getLogger(__name__)
ACME_SUPPORT_ENABLED_WARN = """\
This server uses Synapse's built-in ACME support. Note that ACME v1 has been
deprecated by Let's Encrypt, and that Synapse doesn't currently support ACME v2,
which means that this feature will not work with Synapse installs set up after
November 2019, and that it may stop working on June 2020 for installs set up
before that date.
For more info and alternative solutions, see
https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
--------------------------------------------------------------------------------"""
class TlsConfig(Config):
section = "tls"
def read_config(self, config: dict, config_dir_path: str, **kwargs):
acme_config = config.get("acme", None)
if acme_config is None:
acme_config = {}
self.acme_enabled = acme_config.get("enabled", False)
if self.acme_enabled:
logger.warning(ACME_SUPPORT_ENABLED_WARN)
# hyperlink complains on py2 if this is not a Unicode
self.acme_url = str(
acme_config.get("url", "https://acme-v01.api.letsencrypt.org/directory")
)
self.acme_port = acme_config.get("port", 80)
self.acme_bind_addresses = acme_config.get("bind_addresses", ["::", "0.0.0.0"])
self.acme_reprovision_threshold = acme_config.get("reprovision_threshold", 30)
self.acme_domain = acme_config.get("domain", config.get("server_name"))
self.acme_account_key_file = self.abspath(
acme_config.get("account_key_file", config_dir_path + "/client.key")
)
self.tls_certificate_file = self.abspath(config.get("tls_certificate_path"))
self.tls_private_key_file = self.abspath(config.get("tls_private_key_path"))
@ -229,11 +195,9 @@ class TlsConfig(Config):
data_dir_path,
tls_certificate_path,
tls_private_key_path,
acme_domain,
**kwargs,
):
"""If the acme_domain is specified acme will be enabled.
If the TLS paths are not specified the default will be certs in the
"""If the TLS paths are not specified the default will be certs in the
config directory"""
base_key_name = os.path.join(config_dir_path, server_name)
@ -243,28 +207,15 @@ class TlsConfig(Config):
"Please specify both a cert path and a key path or neither."
)
tls_enabled = (
"" if tls_certificate_path and tls_private_key_path or acme_domain else "#"
)
tls_enabled = "" if tls_certificate_path and tls_private_key_path else "#"
if not tls_certificate_path:
tls_certificate_path = base_key_name + ".tls.crt"
if not tls_private_key_path:
tls_private_key_path = base_key_name + ".tls.key"
acme_enabled = bool(acme_domain)
acme_domain = "matrix.example.com"
default_acme_account_file = os.path.join(data_dir_path, "acme_account.key")
# this is to avoid the max line length. Sorrynotsorry
proxypassline = (
"ProxyPass /.well-known/acme-challenge "
"http://localhost:8009/.well-known/acme-challenge"
)
# flake8 doesn't recognise that variables are used in the below string
_ = tls_enabled, proxypassline, acme_enabled, default_acme_account_file
_ = tls_enabled
return (
"""\
@ -274,13 +225,9 @@ class TlsConfig(Config):
# This certificate, as of Synapse 1.0, will need to be a valid and verifiable
# certificate, signed by a recognised Certificate Authority.
#
# See 'ACME support' below to enable auto-provisioning this certificate via
# Let's Encrypt.
#
# If supplying your own, be sure to use a `.pem` file that includes the
# full certificate chain including any intermediate certificates (for
# instance, if using certbot, use `fullchain.pem` as your certificate,
# not `cert.pem`).
# Be sure to use a `.pem` file that includes the full certificate chain including
# any intermediate certificates (for instance, if using certbot, use
# `fullchain.pem` as your certificate, not `cert.pem`).
#
%(tls_enabled)stls_certificate_path: "%(tls_certificate_path)s"
@ -330,80 +277,6 @@ class TlsConfig(Config):
# - myCA1.pem
# - myCA2.pem
# - myCA3.pem
# ACME support: This will configure Synapse to request a valid TLS certificate
# for your configured `server_name` via Let's Encrypt.
#
# Note that ACME v1 is now deprecated, and Synapse currently doesn't support
# ACME v2. This means that this feature currently won't work with installs set
# up after November 2019. For more info, and alternative solutions, see
# https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
#
# Note that provisioning a certificate in this way requires port 80 to be
# routed to Synapse so that it can complete the http-01 ACME challenge.
# By default, if you enable ACME support, Synapse will attempt to listen on
# port 80 for incoming http-01 challenges - however, this will likely fail
# with 'Permission denied' or a similar error.
#
# There are a couple of potential solutions to this:
#
# * If you already have an Apache, Nginx, or similar listening on port 80,
# you can configure Synapse to use an alternate port, and have your web
# server forward the requests. For example, assuming you set 'port: 8009'
# below, on Apache, you would write:
#
# %(proxypassline)s
#
# * Alternatively, you can use something like `authbind` to give Synapse
# permission to listen on port 80.
#
acme:
# ACME support is disabled by default. Set this to `true` and uncomment
# tls_certificate_path and tls_private_key_path above to enable it.
#
enabled: %(acme_enabled)s
# Endpoint to use to request certificates. If you only want to test,
# use Let's Encrypt's staging url:
# https://acme-staging.api.letsencrypt.org/directory
#
#url: https://acme-v01.api.letsencrypt.org/directory
# Port number to listen on for the HTTP-01 challenge. Change this if
# you are forwarding connections through Apache/Nginx/etc.
#
port: 80
# Local addresses to listen on for incoming connections.
# Again, you may want to change this if you are forwarding connections
# through Apache/Nginx/etc.
#
bind_addresses: ['::', '0.0.0.0']
# How many days remaining on a certificate before it is renewed.
#
reprovision_threshold: 30
# The domain that the certificate should be for. Normally this
# should be the same as your Matrix domain (i.e., 'server_name'), but,
# by putting a file at 'https://<server_name>/.well-known/matrix/server',
# you can delegate incoming traffic to another server. If you do that,
# you should give the target of the delegation here.
#
# For example: if your 'server_name' is 'example.com', but
# 'https://example.com/.well-known/matrix/server' delegates to
# 'matrix.example.com', you should put 'matrix.example.com' here.
#
# If not set, defaults to your 'server_name'.
#
domain: %(acme_domain)s
# file to use for the account key. This will be generated if it doesn't
# exist.
#
# If unspecified, we will use CONFDIR/client.key.
#
account_key_file: %(default_acme_account_file)s
"""
# Lowercase the string representation of boolean values
% {
@ -415,8 +288,6 @@ class TlsConfig(Config):
def read_tls_certificate(self) -> crypto.X509:
"""Reads the TLS certificate from the configured file, and returns it
Also checks if it is self-signed, and warns if so
Returns:
The certificate
"""
@ -425,16 +296,6 @@ class TlsConfig(Config):
cert_pem = self.read_file(cert_path, "tls_certificate_path")
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
# Check if it is self-signed, and issue a warning if so.
if cert.get_issuer() == cert.get_subject():
warnings.warn(
(
"Self-signed TLS certificates will not be accepted by Synapse 1.0. "
"Please either provide a valid certificate, or use Synapse's ACME "
"support to provision one."
)
)
return cert
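Admins who still want the removed self-signed warning can reproduce the same check out of band; a standalone sketch using pyOpenSSL (the file path argument is illustrative):

```python
# Standalone version of the self-signed check removed above (illustrative)
from OpenSSL import crypto


def is_self_signed(pem_path: str) -> bool:
    with open(pem_path, "rb") as f:
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())
    # A certificate whose issuer equals its subject is self-signed.
    return cert.get_issuer() == cert.get_subject()
```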
def read_tls_private_key(self) -> crypto.PKey:

synapse/event_auth.py

@ -258,7 +258,11 @@ def _is_membership_change_allowed(
caller_in_room = caller and caller.membership == Membership.JOIN
caller_invited = caller and caller.membership == Membership.INVITE
caller_knocked = caller and caller.membership == Membership.KNOCK
caller_knocked = (
caller
and room_version.msc2403_knocking
and caller.membership == Membership.KNOCK
)
# get info about the target
key = (EventTypes.Member, target_user_id)
@ -285,6 +289,7 @@ def _is_membership_change_allowed(
{
"caller_in_room": caller_in_room,
"caller_invited": caller_invited,
"caller_knocked": caller_knocked,
"target_banned": target_banned,
"target_in_room": target_in_room,
"membership": membership,
@ -302,7 +307,9 @@ def _is_membership_change_allowed(
return
# Require the user to be in the room for membership changes other than join/knock.
if Membership.JOIN != membership and Membership.KNOCK != membership:
if Membership.JOIN != membership and (
RoomVersion.msc2403_knocking and Membership.KNOCK != membership
):
# If the user has been invited or has knocked, they are allowed to change their
# membership event to leave
if (
@ -344,7 +351,9 @@ def _is_membership_change_allowed(
and join_rule == JoinRules.MSC3083_RESTRICTED
):
pass
elif join_rule in (JoinRules.INVITE, JoinRules.KNOCK):
elif join_rule == JoinRules.INVITE or (
room_version.msc2403_knocking and join_rule == JoinRules.KNOCK
):
if not caller_in_room and not caller_invited:
raise AuthError(403, "You are not invited to this room.")
else:
@ -363,7 +372,7 @@ def _is_membership_change_allowed(
elif Membership.BAN == membership:
if user_level < ban_level or user_level <= target_level:
raise AuthError(403, "You don't have permission to ban")
elif Membership.KNOCK == membership:
elif room_version.msc2403_knocking and Membership.KNOCK == membership:
if join_rule != JoinRules.KNOCK:
raise AuthError(403, "You don't have permission to knock")
elif target_user_id != event.user_id:

synapse/events/__init__.py

@ -119,6 +119,7 @@ class _EventInternalMetadata:
redacted = DictProperty("redacted") # type: bool
txn_id = DictProperty("txn_id") # type: str
token_id = DictProperty("token_id") # type: str
historical = DictProperty("historical") # type: bool
# XXX: These are set by StreamWorkerStore._set_before_and_after.
# I'm pretty sure that these are never persisted to the database, so shouldn't
@ -204,6 +205,14 @@ class _EventInternalMetadata:
"""
return self._dict.get("redacted", False)
def is_historical(self) -> bool:
"""Whether this is a historical message.
This is used by the batch send historical message endpoint and
is needed to mark the event as backfilled and skip some checks,
like push notifications.
"""
return self._dict.get("historical", False)
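For illustration only, the flag is an ordinary entry in the event's internal metadata dict, so hypothetical call sites would set and read it like this:

```python
from synapse.events import EventBase


def mark_historical(event: EventBase) -> None:
    # The flag is a plain entry in the internal metadata dict...
    event.internal_metadata.historical = True


def should_skip_push(event: EventBase) -> bool:
    # ...and is read back through the accessor defined above.
    return event.internal_metadata.is_historical()
```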
class EventBase(metaclass=abc.ABCMeta):
@property

synapse/events/builder.py

@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Any, Dict, List, Optional, Tuple, Union
import attr
@ -33,6 +34,8 @@ from synapse.types import EventID, JsonDict
from synapse.util import Clock
from synapse.util.stringutils import random_string
logger = logging.getLogger(__name__)
@attr.s(slots=True, cmp=False, frozen=True)
class EventBuilder:
@ -100,6 +103,7 @@ class EventBuilder:
self,
prev_event_ids: List[str],
auth_event_ids: Optional[List[str]],
depth: Optional[int] = None,
) -> EventBase:
"""Transform into a fully signed and hashed event
@ -108,6 +112,9 @@ class EventBuilder:
auth_event_ids: The event IDs to use as the auth events.
Should normally be set to None, which will cause them to be calculated
based on the room state at the prev_events.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
Returns:
The signed and hashed event.
@ -131,8 +138,14 @@ class EventBuilder:
auth_events = auth_event_ids
prev_events = prev_event_ids
old_depth = await self._store.get_max_depth_of(prev_event_ids)
depth = old_depth + 1
# Otherwise, progress the depth as normal
if depth is None:
(
_,
most_recent_prev_event_depth,
) = await self._store.get_max_depth_of(prev_event_ids)
depth = most_recent_prev_event_depth + 1
# we cap depth of generated events, to ensure that they are not
# rejected by other servers (and so that they can be persisted in
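As a usage sketch of the new parameter (names are illustrative; `inherited_depth` would come from an existing event, as in MSC2716-style backfill):

```python
from typing import List

from synapse.events.builder import EventBuilder


async def build_with_inherited_depth(
    builder: EventBuilder, prev_events: List[str], inherited_depth: int
):
    # Passing depth pins the event at an existing position in the DAG;
    # passing None keeps the old behaviour (max depth of prev_events + 1).
    return await builder.build(
        prev_event_ids=prev_events,
        auth_event_ids=None,
        depth=inherited_depth,
    )
```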

synapse/events/spamcheck.py

@ -15,7 +15,18 @@
import inspect
import logging
from typing import TYPE_CHECKING, Any, Collection, Dict, List, Optional, Tuple, Union
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Callable,
Collection,
Dict,
List,
Optional,
Tuple,
Union,
)
from synapse.rest.media.v1._base import FileInfo
from synapse.rest.media.v1.media_storage import ReadableFileWrapper
@ -29,20 +40,200 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
CHECK_EVENT_FOR_SPAM_CALLBACK = Callable[
["synapse.events.EventBase"],
Awaitable[Union[bool, str]],
]
# FIXME: Callback signature differs from mainline
USER_MAY_INVITE_CALLBACK = Callable[[str, str, str, str, bool, bool], Awaitable[bool]]
# FIXME: Callback signature differs from mainline
USER_MAY_CREATE_ROOM_CALLBACK = Callable[
[str, List[str], List[dict], bool],
Awaitable[bool]
]
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[[str, RoomAlias], Awaitable[bool]]
USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[[str, str], Awaitable[bool]]
CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[[Dict[str, str]], Awaitable[bool]]
LEGACY_CHECK_REGISTRATION_FOR_SPAM_CALLBACK = Callable[
[
Optional[dict],
Optional[str],
Collection[Tuple[str, str]],
],
Awaitable[RegistrationBehaviour],
]
CHECK_REGISTRATION_FOR_SPAM_CALLBACK = Callable[
[
Optional[dict],
Optional[str],
Collection[Tuple[str, str]],
Optional[str],
],
Awaitable[RegistrationBehaviour],
]
CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK = Callable[
[ReadableFileWrapper, FileInfo],
Awaitable[bool],
]
# FIXME: This callback only exists on the DINUM fork and not in mainline.
USER_MAY_JOIN_ROOM_CALLBACK = Callable[[str, str, bool], Awaitable[bool]]
def load_legacy_spam_checkers(hs: "synapse.server.HomeServer"):
"""Wrapper that loads spam checkers configured using the old configuration, and
registers the spam checker hooks they implement.
"""
spam_checkers = [] # type: List[Any]
api = hs.get_module_api()
for module, config in hs.config.spam_checkers:
# Older spam checkers don't accept the `api` argument, so we
# try and detect support.
spam_args = inspect.getfullargspec(module)
if "api" in spam_args.args:
spam_checkers.append(module(config=config, api=api))
else:
spam_checkers.append(module(config=config))
# The known spam checker hooks. If a spam checker module implements a method
# whose name appears in this set, we'll want to register it.
spam_checker_methods = {
"check_event_for_spam",
"user_may_invite",
"user_may_create_room",
"user_may_create_room_alias",
"user_may_publish_room",
"check_username_for_spam",
"check_registration_for_spam",
"check_media_file_for_spam",
"user_may_join_room",
}
for spam_checker in spam_checkers:
# Methods on legacy spam checkers might not be async, so we wrap them in a
# wrapper that calls maybe_awaitable on the result.
def async_wrapper(f: Optional[Callable]) -> Optional[Callable[..., Awaitable]]:
# f might be None if the callback isn't implemented by the module. In this
# case we don't want to register a callback at all so we return None.
if f is None:
return None
wrapped_func = f
if f.__name__ == "check_registration_for_spam":
checker_args = inspect.signature(f)
if len(checker_args.parameters) == 3:
# Backwards compatibility; some modules might implement a hook that
# doesn't expect a 4th argument. In this case, wrap it in a function
# that gives it only 3 arguments and drops the auth_provider_id on
# the floor.
def wrapper(
email_threepid: Optional[dict],
username: Optional[str],
request_info: Collection[Tuple[str, str]],
auth_provider_id: Optional[str],
) -> Union[Awaitable[RegistrationBehaviour], RegistrationBehaviour]:
# We've already made sure f is not None above, but mypy doesn't
# do well across function boundaries so we need to tell it f is
# definitely not None.
assert f is not None
return f(
email_threepid,
username,
request_info,
)
wrapped_func = wrapper
elif len(checker_args.parameters) != 4:
raise RuntimeError(
"Bad signature for callback check_registration_for_spam",
)
def run(*args, **kwargs):
# mypy doesn't do well across function boundaries so we need to tell it
# wrapped_func is definitely not None.
assert wrapped_func is not None
return maybe_awaitable(wrapped_func(*args, **kwargs))
return run
# Register the hooks through the module API.
hooks = {
hook: async_wrapper(getattr(spam_checker, hook, None))
for hook in spam_checker_methods
}
api.register_spam_checker_callbacks(**hooks)
class SpamChecker:
def __init__(self, hs: "synapse.server.HomeServer"):
self.spam_checkers = [] # type: List[Any]
api = hs.get_module_api()
def __init__(self):
self._check_event_for_spam_callbacks: List[CHECK_EVENT_FOR_SPAM_CALLBACK] = []
self._user_may_invite_callbacks: List[USER_MAY_INVITE_CALLBACK] = []
self._user_may_create_room_callbacks: List[USER_MAY_CREATE_ROOM_CALLBACK] = []
self._user_may_create_room_alias_callbacks: List[
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
] = []
self._user_may_publish_room_callbacks: List[USER_MAY_PUBLISH_ROOM_CALLBACK] = []
self._check_username_for_spam_callbacks: List[
CHECK_USERNAME_FOR_SPAM_CALLBACK
] = []
self._check_registration_for_spam_callbacks: List[
CHECK_REGISTRATION_FOR_SPAM_CALLBACK
] = []
self._check_media_file_for_spam_callbacks: List[
CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK
] = []
self._user_may_join_room_callbacks: List[USER_MAY_JOIN_ROOM_CALLBACK] = []
for module, config in hs.config.spam_checkers:
# Older spam checkers don't accept the `api` argument, so we
# try and detect support.
spam_args = inspect.getfullargspec(module)
if "api" in spam_args.args:
self.spam_checkers.append(module(config=config, api=api))
else:
self.spam_checkers.append(module(config=config))
def register_callbacks(
self,
check_event_for_spam: Optional[CHECK_EVENT_FOR_SPAM_CALLBACK] = None,
user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
user_may_create_room: Optional[USER_MAY_CREATE_ROOM_CALLBACK] = None,
user_may_create_room_alias: Optional[
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
] = None,
user_may_publish_room: Optional[USER_MAY_PUBLISH_ROOM_CALLBACK] = None,
check_username_for_spam: Optional[CHECK_USERNAME_FOR_SPAM_CALLBACK] = None,
check_registration_for_spam: Optional[
CHECK_REGISTRATION_FOR_SPAM_CALLBACK
] = None,
check_media_file_for_spam: Optional[CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK] = None,
user_may_join_room: Optional[USER_MAY_JOIN_ROOM_CALLBACK] = None,
):
"""Register callbacks from module for each hook."""
if check_event_for_spam is not None:
self._check_event_for_spam_callbacks.append(check_event_for_spam)
if user_may_invite is not None:
self._user_may_invite_callbacks.append(user_may_invite)
if user_may_create_room is not None:
self._user_may_create_room_callbacks.append(user_may_create_room)
if user_may_create_room_alias is not None:
self._user_may_create_room_alias_callbacks.append(
user_may_create_room_alias,
)
if user_may_publish_room is not None:
self._user_may_publish_room_callbacks.append(user_may_publish_room)
if check_username_for_spam is not None:
self._check_username_for_spam_callbacks.append(check_username_for_spam)
if check_registration_for_spam is not None:
self._check_registration_for_spam_callbacks.append(
check_registration_for_spam,
)
if check_media_file_for_spam is not None:
self._check_media_file_for_spam_callbacks.append(check_media_file_for_spam)
if user_may_join_room is not None:
self._user_may_join_room_callbacks.append(user_may_join_room)
async def check_event_for_spam(
self, event: "synapse.events.EventBase"
@ -60,9 +251,10 @@ class SpamChecker:
True or a string if the event is spammy. If a string is returned it
will be used as the error message returned to the user.
"""
for spam_checker in self.spam_checkers:
if await maybe_awaitable(spam_checker.check_event_for_spam(event)):
return True
for callback in self._check_event_for_spam_callbacks:
res = await callback(event) # type: Union[bool, str]
if res:
return res
return False
@ -97,20 +289,15 @@ class SpamChecker:
Returns:
True if the user may send an invite, otherwise False
"""
for spam_checker in self.spam_checkers:
if (
await maybe_awaitable(
spam_checker.user_may_invite(
inviter_userid,
invitee_userid,
third_party_invite,
room_id,
new_room,
published_room,
)
)
is False
):
for callback in self._user_may_invite_callbacks:
if await callback(
inviter_userid,
invitee_userid,
third_party_invite,
room_id,
new_room,
published_room,
) is False:
return False
return True
@ -138,15 +325,10 @@ class SpamChecker:
Returns:
True if the user may create a room, otherwise False
"""
for spam_checker in self.spam_checkers:
if (
await maybe_awaitable(
spam_checker.user_may_create_room(
userid, invite_list, third_party_invite_list, cloning
)
)
is False
):
for callback in self._user_may_create_room_callbacks:
if await callback(
userid, invite_list, third_party_invite_list, cloning
) is False:
return False
return True
@ -165,13 +347,8 @@ class SpamChecker:
Returns:
True if the user may create a room alias, otherwise False
"""
for spam_checker in self.spam_checkers:
if (
await maybe_awaitable(
spam_checker.user_may_create_room_alias(userid, room_alias)
)
is False
):
for callback in self._user_may_create_room_alias_callbacks:
if await callback(userid, room_alias) is False:
return False
return True
@ -188,13 +365,8 @@ class SpamChecker:
Returns:
True if the user may publish the room, otherwise False
"""
for spam_checker in self.spam_checkers:
if (
await maybe_awaitable(
spam_checker.user_may_publish_room(userid, room_id)
)
is False
):
for callback in self._user_may_publish_room_callbacks:
if await callback(userid, room_id) is False:
return False
return True
@ -212,8 +384,8 @@ class SpamChecker:
Returns:
bool: Whether the user may join the room
"""
for spam_checker in self.spam_checkers:
if spam_checker.user_may_join_room(userid, room_id, is_invited) is False:
for callback in self._user_may_join_room_callbacks:
if await callback(userid, room_id, is_invited) is False:
return False
return True
@ -233,15 +405,11 @@ class SpamChecker:
Returns:
True if the user is spammy.
"""
for spam_checker in self.spam_checkers:
# For backwards compatibility, only run if the method exists on the
# spam checker
checker = getattr(spam_checker, "check_username_for_spam", None)
if checker:
# Make a copy of the user profile object to ensure the spam checker
# cannot modify it.
if await maybe_awaitable(checker(user_profile.copy())):
return True
for callback in self._check_username_for_spam_callbacks:
# Make a copy of the user profile object to ensure the spam checker cannot
# modify it.
if await callback(user_profile.copy()):
return True
return False
@ -267,33 +435,13 @@ class SpamChecker:
Enum for how the request should be handled
"""
for spam_checker in self.spam_checkers:
# For backwards compatibility, only run if the method exists on the
# spam checker
checker = getattr(spam_checker, "check_registration_for_spam", None)
if checker:
# Provide auth_provider_id if the function supports it
checker_args = inspect.signature(checker)
if len(checker_args.parameters) == 4:
d = checker(
email_threepid,
username,
request_info,
auth_provider_id,
)
elif len(checker_args.parameters) == 3:
d = checker(email_threepid, username, request_info)
else:
logger.error(
"Invalid signature for %s.check_registration_for_spam. Denying registration",
spam_checker.__module__,
)
return RegistrationBehaviour.DENY
behaviour = await maybe_awaitable(d)
assert isinstance(behaviour, RegistrationBehaviour)
if behaviour != RegistrationBehaviour.ALLOW:
return behaviour
for callback in self._check_registration_for_spam_callbacks:
behaviour = await (
callback(email_threepid, username, request_info, auth_provider_id)
)
assert isinstance(behaviour, RegistrationBehaviour)
if behaviour != RegistrationBehaviour.ALLOW:
return behaviour
return RegistrationBehaviour.ALLOW
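A callback matching the new four-argument `CHECK_REGISTRATION_FOR_SPAM_CALLBACK` signature might look like this (a hedged sketch; the username heuristic is purely illustrative):

```python
from typing import Collection, Optional, Tuple

from synapse.spam_checker_api import RegistrationBehaviour


async def check_registration_for_spam(
    email_threepid: Optional[dict],
    username: Optional[str],
    request_info: Collection[Tuple[str, str]],
    auth_provider_id: Optional[str],  # None unless registering via SSO
) -> RegistrationBehaviour:
    if username is not None and "spam" in username:
        return RegistrationBehaviour.DENY
    return RegistrationBehaviour.ALLOW
```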
@ -331,13 +479,9 @@ class SpamChecker:
allowed.
"""
for spam_checker in self.spam_checkers:
# For backwards compatibility, only run if the method exists on the
# spam checker
checker = getattr(spam_checker, "check_media_file_for_spam", None)
if checker:
spam = await maybe_awaitable(checker(file_wrapper, file_info))
if spam:
return True
for callback in self._check_media_file_for_spam_callbacks:
spam = await callback(file_wrapper, file_info)
if spam:
return True
return False

synapse/federation/federation_client.py

@ -1,6 +1,5 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyrignt 2020 Sorunome
# Copyrignt 2020 The Matrix.org Foundation C.I.C.
# Copyright 2015-2021 The Matrix.org Foundation C.I.C.
# Copyright 2020 Sorunome
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -622,6 +621,7 @@ class FederationClient(FederationBase):
no servers successfully handle the request.
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE, Membership.KNOCK}
if membership not in valid_memberships:
raise RuntimeError(
"make_membership_event called with membership='%s', must be one of %s"
@ -640,6 +640,13 @@ class FederationClient(FederationBase):
if not room_version:
raise UnsupportedRoomVersionError()
if not room_version.msc2403_knocking and membership == Membership.KNOCK:
raise SynapseError(
400,
"This room version does not support knocking",
errcode=Codes.FORBIDDEN,
)
pdu_dict = ret.get("event", None)
if not isinstance(pdu_dict, dict):
raise InvalidResponseError("Bad 'event' field in response")
@ -977,7 +984,7 @@ class FederationClient(FederationBase):
return await self._do_send_knock(destination, pdu)
return await self._try_destination_list(
"xyz.amorgan.knock/send_knock", destinations, send_request
"send_knock", destinations, send_request
)
async def _do_send_knock(self, destination: str, pdu: EventBase) -> JsonDict:
@ -997,7 +1004,7 @@ class FederationClient(FederationBase):
"""
time_now = self._clock.time_msec()
return await self.transport_layer.send_knock_v2(
return await self.transport_layer.send_knock_v1(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,

synapse/federation/federation_server.py

@ -130,7 +130,7 @@ class FederationServer(FederationBase):
# come in waves.
self._state_resp_cache = ResponseCache(
hs.get_clock(), "state_resp", timeout_ms=30000
) # type: ResponseCache[Tuple[str, str]]
) # type: ResponseCache[Tuple[str, Optional[str]]]
self._state_ids_resp_cache = ResponseCache(
hs.get_clock(), "state_ids_resp", timeout_ms=30000
) # type: ResponseCache[Tuple[str, str]]
@ -139,6 +139,8 @@ class FederationServer(FederationBase):
hs.config.federation.federation_metrics_domains
)
self._room_prejoin_state_types = hs.config.api.room_prejoin_state
async def on_backfill_request(
self, origin: str, room_id: str, versions: List[str], limit: int
) -> Tuple[int, Dict[str, Any]]:
@ -407,7 +409,7 @@ class FederationServer(FederationBase):
)
async def on_room_state_request(
self, origin: str, room_id: str, event_id: str
self, origin: str, room_id: str, event_id: Optional[str]
) -> Tuple[int, Dict[str, Any]]:
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
@ -464,7 +466,7 @@ class FederationServer(FederationBase):
return {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
async def _on_context_state_request_compute(
self, room_id: str, event_id: str
self, room_id: str, event_id: Optional[str]
) -> Dict[str, list]:
if event_id:
pdus = await self.handler.get_state_for_pdu(
@ -607,16 +609,29 @@ class FederationServer(FederationBase):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
room_version = await self.store.get_room_version_id(room_id)
if room_version not in supported_versions:
room_version = await self.store.get_room_version(room_id)
# Check that this room version is supported by the remote homeserver
if room_version.identifier not in supported_versions:
logger.warning(
"Room version %s not in %s", room_version, supported_versions
"Room version %s not in %s", room_version.identifier, supported_versions
)
raise IncompatibleRoomVersionError(room_version=room_version.identifier)
# Check that this room supports knocking as defined by its room version
if not room_version.msc2403_knocking:
raise SynapseError(
403,
"This room version does not support knocking",
errcode=Codes.FORBIDDEN,
)
raise IncompatibleRoomVersionError(room_version=room_version)
pdu = await self.handler.on_make_knock_request(origin, room_id, user_id)
time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
return {
"event": pdu.get_pdu_json(time_now),
"room_version": room_version.identifier,
}
async def on_send_knock_request(
self,
@ -640,6 +655,15 @@ class FederationServer(FederationBase):
logger.debug("on_send_knock_request: content: %s", content)
room_version = await self.store.get_room_version(room_id)
# Check that this room supports knocking as defined by its room version
if not room_version.msc2403_knocking:
raise SynapseError(
403,
"This room version does not support knocking",
errcode=Codes.FORBIDDEN,
)
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
@ -657,7 +681,7 @@ class FederationServer(FederationBase):
# related to the room while the knock request is pending.
stripped_room_state = (
await self.store.get_stripped_room_state_from_event_context(
event_context, DEFAULT_ROOM_STATE_TYPES
event_context, self._room_prejoin_state_types
)
)
return {"knock_state_events": stripped_room_state}
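The knock response therefore carries the room's stripped pre-join state, roughly in the following shape (event contents are made up for illustration):

```python
# Illustrative shape of the on_send_knock_request response:
example_response = {
    "knock_state_events": [
        {
            "type": "m.room.name",
            "state_key": "",
            "sender": "@admin:example.org",
            "content": {"name": "Example room"},
        },
        # ...one stripped event per type in the room_prejoin_state config
    ]
}
```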

synapse/federation/transport/client.py

@ -1,5 +1,3 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2020 Sorunome
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
@ -223,6 +221,7 @@ class TransportLayerClient:
is not in our federation whitelist
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE, Membership.KNOCK}
if membership not in valid_memberships:
raise RuntimeError(
"make_membership_event called with membership='%s', must be one of %s"
@ -335,7 +334,7 @@ class TransportLayerClient:
return response
@log_function
async def send_knock_v2(
async def send_knock_v1(
self,
destination: str,
room_id: str,
@ -362,12 +361,7 @@ class TransportLayerClient:
The list of state events may be empty.
"""
path = _create_path(
FEDERATION_UNSTABLE_PREFIX + "/xyz.amorgan.knock",
"/send_knock/%s/%s",
room_id,
event_id,
)
path = _create_v1_path("/send_knock/%s/%s", room_id, event_id)
return await self.client.put_json(
destination=destination, path=path, data=content

synapse/federation/transport/server.py

@ -1,6 +1,4 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2019-2020 The Matrix.org Foundation C.I.C.
# Copyright 2014-2021 The Matrix.org Foundation C.I.C.
# Copyright 2020 Sorunome
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -28,15 +26,17 @@ from synapse.api.urls import (
FEDERATION_V1_PREFIX,
FEDERATION_V2_PREFIX,
)
from synapse.handlers.groups_local import GroupsLocalHandler
from synapse.http.server import HttpServer, JsonResource
from synapse.http.servlet import (
assert_params_in_dict,
parse_boolean_from_args,
parse_integer_from_args,
parse_json_object_from_request,
parse_list_from_args,
parse_string_from_args,
parse_strings_from_args,
)
from synapse.logging import opentracing
from synapse.logging.context import run_in_background
from synapse.logging.opentracing import (
SynapseTags,
@ -277,10 +277,17 @@ class BaseFederationServlet:
RATELIMIT = True # Whether to rate limit requests or not
def __init__(self, handler, authenticator, ratelimiter, server_name):
self.handler = handler
def __init__(
self,
hs: HomeServer,
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
):
self.hs = hs
self.authenticator = authenticator
self.ratelimiter = ratelimiter
self.server_name = server_name
def _wrap(self, func):
authenticator = self.authenticator
@ -340,6 +347,8 @@ class BaseFederationServlet:
)
with scope:
opentracing.inject_response_headers(request.responseHeaders)
if origin and self.RATELIMIT:
with ratelimiter.ratelimit(origin) as d:
await d
@ -377,17 +386,30 @@ class BaseFederationServlet:
)
class FederationSendServlet(BaseFederationServlet):
class BaseFederationServerServlet(BaseFederationServlet):
"""Abstract base class for federation servlet classes which provides a federation server handler.
See BaseFederationServlet for more information.
"""
def __init__(
self,
hs: HomeServer,
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self.handler = hs.get_federation_server()
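Concretely, servlets no longer receive an arbitrary `handler` argument; each base class fetches the handler it needs from the `HomeServer`. A hypothetical subclass (illustrative only, not one of the servlets below):

```python
# Hypothetical servlet built on the new base class (illustrative only)
class ExamplePingServlet(BaseFederationServerServlet):
    PATH = "/ping"

    async def on_GET(self, origin, content, query):
        # self.handler is the FederationServer instance set by the base class.
        return 200, {"pong_for": origin}
```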
class FederationSendServlet(BaseFederationServerServlet):
PATH = "/send/(?P<transaction_id>[^/]*)/?"
# We ratelimit manually in the handler as we queue up the requests and we
# don't want to fill up the ratelimiter with blocked requests.
RATELIMIT = False
def __init__(self, handler, server_name, **kwargs):
super().__init__(handler, server_name=server_name, **kwargs)
self.server_name = server_name
# This is when someone is trying to send us a bunch of data.
async def on_PUT(self, origin, content, query, transaction_id):
"""Called on PUT /send/<transaction_id>/
@ -436,7 +458,7 @@ class FederationSendServlet(BaseFederationServlet):
return code, response
class FederationEventServlet(BaseFederationServlet):
class FederationEventServlet(BaseFederationServerServlet):
PATH = "/event/(?P<event_id>[^/]*)/?"
# This is when someone asks for a data item for a given server data_id pair.
@ -444,7 +466,7 @@ class FederationEventServlet(BaseFederationServlet):
return await self.handler.on_pdu_request(origin, event_id)
class FederationStateV1Servlet(BaseFederationServlet):
class FederationStateV1Servlet(BaseFederationServerServlet):
PATH = "/state/(?P<room_id>[^/]*)/?"
# This is when someone asks for all data for a given room.
@ -456,7 +478,7 @@ class FederationStateV1Servlet(BaseFederationServlet):
)
class FederationStateIdsServlet(BaseFederationServlet):
class FederationStateIdsServlet(BaseFederationServerServlet):
PATH = "/state_ids/(?P<room_id>[^/]*)/?"
async def on_GET(self, origin, content, query, room_id):
@ -467,7 +489,7 @@ class FederationStateIdsServlet(BaseFederationServlet):
)
class FederationBackfillServlet(BaseFederationServlet):
class FederationBackfillServlet(BaseFederationServerServlet):
PATH = "/backfill/(?P<room_id>[^/]*)/?"
async def on_GET(self, origin, content, query, room_id):
@ -480,7 +502,7 @@ class FederationBackfillServlet(BaseFederationServlet):
return await self.handler.on_backfill_request(origin, room_id, versions, limit)
class FederationQueryServlet(BaseFederationServlet):
class FederationQueryServlet(BaseFederationServerServlet):
PATH = "/query/(?P<query_type>[^/]*)"
# This is when we receive a server-server Query
@ -490,7 +512,7 @@ class FederationQueryServlet(BaseFederationServlet):
return await self.handler.on_query_request(query_type, args)
class FederationMakeJoinServlet(BaseFederationServlet):
class FederationMakeJoinServlet(BaseFederationServerServlet):
PATH = "/make_join/(?P<room_id>[^/]*)/(?P<user_id>[^/]*)"
async def on_GET(self, origin, _content, query, room_id, user_id):
@ -520,7 +542,7 @@ class FederationMakeJoinServlet(BaseFederationServlet):
return 200, content
class FederationMakeLeaveServlet(BaseFederationServlet):
class FederationMakeLeaveServlet(BaseFederationServerServlet):
PATH = "/make_leave/(?P<room_id>[^/]*)/(?P<user_id>[^/]*)"
async def on_GET(self, origin, content, query, room_id, user_id):
@ -528,7 +550,7 @@ class FederationMakeLeaveServlet(BaseFederationServlet):
return 200, content
class FederationV1SendLeaveServlet(BaseFederationServlet):
class FederationV1SendLeaveServlet(BaseFederationServerServlet):
PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
async def on_PUT(self, origin, content, query, room_id, event_id):
@ -536,7 +558,7 @@ class FederationV1SendLeaveServlet(BaseFederationServlet):
return 200, (200, content)
class FederationV2SendLeaveServlet(BaseFederationServlet):
class FederationV2SendLeaveServlet(BaseFederationServerServlet):
PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
PREFIX = FEDERATION_V2_PREFIX
@ -546,15 +568,13 @@ class FederationV2SendLeaveServlet(BaseFederationServlet):
return 200, content
class FederationMakeKnockServlet(BaseFederationServlet):
class FederationMakeKnockServlet(BaseFederationServerServlet):
PATH = "/make_knock/(?P<room_id>[^/]*)/(?P<user_id>[^/]*)"
PREFIX = FEDERATION_UNSTABLE_PREFIX + "/xyz.amorgan.knock"
async def on_GET(self, origin, content, query, room_id, user_id):
try:
# Retrieve the room versions the remote homeserver claims to support
supported_versions = parse_list_from_args(query, "ver", encoding="utf-8")
supported_versions = parse_strings_from_args(query, "ver", encoding="utf-8")
except KeyError:
raise SynapseError(400, "Missing required query parameter 'ver'")
@ -564,24 +584,22 @@ class FederationMakeKnockServlet(BaseFederationServlet):
return 200, content
class FederationV2SendKnockServlet(BaseFederationServlet):
class FederationV1SendKnockServlet(BaseFederationServerServlet):
PATH = "/send_knock/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
PREFIX = FEDERATION_UNSTABLE_PREFIX + "/xyz.amorgan.knock"
async def on_PUT(self, origin, content, query, room_id, event_id):
content = await self.handler.on_send_knock_request(origin, content, room_id)
return 200, content
class FederationEventAuthServlet(BaseFederationServlet):
class FederationEventAuthServlet(BaseFederationServerServlet):
PATH = "/event_auth/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
async def on_GET(self, origin, content, query, room_id, event_id):
return await self.handler.on_event_auth(origin, room_id, event_id)
class FederationV1SendJoinServlet(BaseFederationServlet):
class FederationV1SendJoinServlet(BaseFederationServerServlet):
PATH = "/send_join/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
async def on_PUT(self, origin, content, query, room_id, event_id):
@ -591,7 +609,7 @@ class FederationV1SendJoinServlet(BaseFederationServlet):
return 200, (200, content)
class FederationV2SendJoinServlet(BaseFederationServlet):
class FederationV2SendJoinServlet(BaseFederationServerServlet):
PATH = "/send_join/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
PREFIX = FEDERATION_V2_PREFIX
@ -603,7 +621,7 @@ class FederationV2SendJoinServlet(BaseFederationServlet):
return 200, content
class FederationV1InviteServlet(BaseFederationServlet):
class FederationV1InviteServlet(BaseFederationServerServlet):
PATH = "/invite/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
async def on_PUT(self, origin, content, query, room_id, event_id):
@ -620,7 +638,7 @@ class FederationV1InviteServlet(BaseFederationServlet):
return 200, (200, content)
class FederationV2InviteServlet(BaseFederationServlet):
class FederationV2InviteServlet(BaseFederationServerServlet):
PATH = "/invite/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
PREFIX = FEDERATION_V2_PREFIX
@ -644,7 +662,7 @@ class FederationV2InviteServlet(BaseFederationServlet):
return 200, content
class FederationThirdPartyInviteExchangeServlet(BaseFederationServlet):
class FederationThirdPartyInviteExchangeServlet(BaseFederationServerServlet):
PATH = "/exchange_third_party_invite/(?P<room_id>[^/]*)"
async def on_PUT(self, origin, content, query, room_id):
@ -652,21 +670,21 @@ class FederationThirdPartyInviteExchangeServlet(BaseFederationServlet):
return 200, {}
class FederationClientKeysQueryServlet(BaseFederationServlet):
class FederationClientKeysQueryServlet(BaseFederationServerServlet):
PATH = "/user/keys/query"
async def on_POST(self, origin, content, query):
return await self.handler.on_query_client_keys(origin, content)
class FederationUserDevicesQueryServlet(BaseFederationServlet):
class FederationUserDevicesQueryServlet(BaseFederationServerServlet):
PATH = "/user/devices/(?P<user_id>[^/]*)"
async def on_GET(self, origin, content, query, user_id):
return await self.handler.on_query_user_devices(origin, user_id)
class FederationClientKeysClaimServlet(BaseFederationServlet):
class FederationClientKeysClaimServlet(BaseFederationServerServlet):
PATH = "/user/keys/claim"
async def on_POST(self, origin, content, query):
@ -674,7 +692,7 @@ class FederationClientKeysClaimServlet(BaseFederationServlet):
return 200, response
class FederationGetMissingEventsServlet(BaseFederationServlet):
class FederationGetMissingEventsServlet(BaseFederationServerServlet):
# TODO(paul): Why does this path alone end with "/?" optional?
PATH = "/get_missing_events/(?P<room_id>[^/]*)/?"
@ -694,7 +712,7 @@ class FederationGetMissingEventsServlet(BaseFederationServlet):
return 200, content
class On3pidBindServlet(BaseFederationServlet):
class On3pidBindServlet(BaseFederationServerServlet):
PATH = "/3pid/onbind"
REQUIRE_AUTH = False
@ -724,7 +742,7 @@ class On3pidBindServlet(BaseFederationServlet):
return 200, {}
class OpenIdUserInfo(BaseFederationServlet):
class OpenIdUserInfo(BaseFederationServerServlet):
"""
Exchange a bearer token for information about a user.
@ -800,8 +818,16 @@ class PublicRoomList(BaseFederationServlet):
PATH = "/publicRooms"
def __init__(self, handler, authenticator, ratelimiter, server_name, allow_access):
super().__init__(handler, authenticator, ratelimiter, server_name)
def __init__(
self,
hs: HomeServer,
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
allow_access: bool,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self.handler = hs.get_room_list_handler()
self.allow_access = allow_access
async def on_GET(self, origin, content, query):
@ -937,7 +963,24 @@ class FederationVersionServlet(BaseFederationServlet):
)
class FederationGroupsProfileServlet(BaseFederationServlet):
class BaseGroupsServerServlet(BaseFederationServlet):
"""Abstract base class for federation servlet classes which provides a groups server handler.
See BaseFederationServlet for more information.
"""
def __init__(
self,
hs: HomeServer,
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self.handler = hs.get_groups_server_handler()
class FederationGroupsProfileServlet(BaseGroupsServerServlet):
"""Get/set the basic profile of a group on behalf of a user"""
PATH = "/groups/(?P<group_id>[^/]*)/profile"
@ -963,7 +1006,7 @@ class FederationGroupsProfileServlet(BaseFederationServlet):
return 200, new_content
class FederationGroupsSummaryServlet(BaseFederationServlet):
class FederationGroupsSummaryServlet(BaseGroupsServerServlet):
PATH = "/groups/(?P<group_id>[^/]*)/summary"
async def on_GET(self, origin, content, query, group_id):
@ -976,7 +1019,7 @@ class FederationGroupsSummaryServlet(BaseFederationServlet):
return 200, new_content
class FederationGroupsRoomsServlet(BaseFederationServlet):
class FederationGroupsRoomsServlet(BaseGroupsServerServlet):
"""Get the rooms in a group on behalf of a user"""
PATH = "/groups/(?P<group_id>[^/]*)/rooms"
@ -991,7 +1034,7 @@ class FederationGroupsRoomsServlet(BaseFederationServlet):
return 200, new_content
class FederationGroupsAddRoomsServlet(BaseFederationServlet):
class FederationGroupsAddRoomsServlet(BaseGroupsServerServlet):
"""Add/remove room from group"""
PATH = "/groups/(?P<group_id>[^/]*)/room/(?P<room_id>[^/]*)"
@ -1019,7 +1062,7 @@ class FederationGroupsAddRoomsServlet(BaseFederationServlet):
return 200, new_content
class FederationGroupsAddRoomsConfigServlet(BaseFederationServlet):
class FederationGroupsAddRoomsConfigServlet(BaseGroupsServerServlet):
"""Update room config in group"""
PATH = (
@ -1039,7 +1082,7 @@ class FederationGroupsAddRoomsConfigServlet(BaseFederationServlet):
return 200, result
class FederationGroupsUsersServlet(BaseFederationServlet):
class FederationGroupsUsersServlet(BaseGroupsServerServlet):
"""Get the users in a group on behalf of a user"""
PATH = "/groups/(?P<group_id>[^/]*)/users"
@ -1054,7 +1097,7 @@ class FederationGroupsUsersServlet(BaseFederationServlet):
return 200, new_content
class FederationGroupsInvitedUsersServlet(BaseFederationServlet):
class FederationGroupsInvitedUsersServlet(BaseGroupsServerServlet):
"""Get the users that have been invited to a group"""
PATH = "/groups/(?P<group_id>[^/]*)/invited_users"
@ -1071,7 +1114,7 @@ class FederationGroupsInvitedUsersServlet(BaseFederationServlet):
return 200, new_content
class FederationGroupsInviteServlet(BaseFederationServlet):
class FederationGroupsInviteServlet(BaseGroupsServerServlet):
"""Ask a group server to invite someone to the group"""
PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/invite"
@ -1088,7 +1131,7 @@ class FederationGroupsInviteServlet(BaseFederationServlet):
return 200, new_content
class FederationGroupsAcceptInviteServlet(BaseFederationServlet):
class FederationGroupsAcceptInviteServlet(BaseGroupsServerServlet):
"""Accept an invitation from the group server"""
PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/accept_invite"
@ -1102,7 +1145,7 @@ class FederationGroupsAcceptInviteServlet(BaseFederationServlet):
return 200, new_content
class FederationGroupsJoinServlet(BaseFederationServlet):
class FederationGroupsJoinServlet(BaseGroupsServerServlet):
"""Attempt to join a group"""
PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/join"
@ -1116,7 +1159,7 @@ class FederationGroupsJoinServlet(BaseFederationServlet):
return 200, new_content
class FederationGroupsRemoveUserServlet(BaseFederationServlet):
class FederationGroupsRemoveUserServlet(BaseGroupsServerServlet):
"""Leave or kick a user from the group"""
PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/remove"
@ -1133,7 +1176,24 @@ class FederationGroupsRemoveUserServlet(BaseFederationServlet):
return 200, new_content
class FederationGroupsLocalInviteServlet(BaseFederationServlet):
class BaseGroupsLocalServlet(BaseFederationServlet):
"""Abstract base class for federation servlet classes which provides a groups local handler.
See BaseFederationServlet for more information.
"""
def __init__(
self,
hs: HomeServer,
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self.handler = hs.get_groups_local_handler()
class FederationGroupsLocalInviteServlet(BaseGroupsLocalServlet):
"""A group server has invited a local user"""
PATH = "/groups/local/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/invite"
@ -1142,12 +1202,16 @@ class FederationGroupsLocalInviteServlet(BaseFederationServlet):
if get_domain_from_id(group_id) != origin:
raise SynapseError(403, "group_id doesn't match origin")
assert isinstance(
self.handler, GroupsLocalHandler
), "Workers cannot handle group invites."
new_content = await self.handler.on_invite(group_id, user_id, content)
return 200, new_content
class FederationGroupsRemoveLocalUserServlet(BaseFederationServlet):
class FederationGroupsRemoveLocalUserServlet(BaseGroupsLocalServlet):
"""A group server has removed a local user"""
PATH = "/groups/local/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/remove"
@ -1156,6 +1220,10 @@ class FederationGroupsRemoveLocalUserServlet(BaseFederationServlet):
if get_domain_from_id(group_id) != origin:
raise SynapseError(403, "user_id doesn't match origin")
assert isinstance(
self.handler, GroupsLocalHandler
), "Workers cannot handle group removals."
new_content = await self.handler.user_removed_from_group(
group_id, user_id, content
)
@ -1168,6 +1236,16 @@ class FederationGroupsRenewAttestaionServlet(BaseFederationServlet):
PATH = "/groups/(?P<group_id>[^/]*)/renew_attestation/(?P<user_id>[^/]*)"
def __init__(
self,
hs: HomeServer,
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self.handler = hs.get_groups_attestation_renewer()
async def on_POST(self, origin, content, query, group_id, user_id):
# We don't need to check auth here as we check the attestation signatures
@ -1178,7 +1256,7 @@ class FederationGroupsRenewAttestaionServlet(BaseFederationServlet):
return 200, new_content
class FederationGroupsSummaryRoomsServlet(BaseFederationServlet):
class FederationGroupsSummaryRoomsServlet(BaseGroupsServerServlet):
"""Add/remove a room from the group summary, with optional category.
Matches both:
@ -1235,7 +1313,7 @@ class FederationGroupsSummaryRoomsServlet(BaseFederationServlet):
return 200, resp
class FederationGroupsCategoriesServlet(BaseFederationServlet):
class FederationGroupsCategoriesServlet(BaseGroupsServerServlet):
"""Get all categories for a group"""
PATH = "/groups/(?P<group_id>[^/]*)/categories/?"
@ -1250,7 +1328,7 @@ class FederationGroupsCategoriesServlet(BaseFederationServlet):
return 200, resp
class FederationGroupsCategoryServlet(BaseFederationServlet):
class FederationGroupsCategoryServlet(BaseGroupsServerServlet):
"""Add/remove/get a category in a group"""
PATH = "/groups/(?P<group_id>[^/]*)/categories/(?P<category_id>[^/]+)"
@ -1303,7 +1381,7 @@ class FederationGroupsCategoryServlet(BaseFederationServlet):
return 200, resp
class FederationGroupsRolesServlet(BaseFederationServlet):
class FederationGroupsRolesServlet(BaseGroupsServerServlet):
"""Get roles in a group"""
PATH = "/groups/(?P<group_id>[^/]*)/roles/?"
@ -1318,7 +1396,7 @@ class FederationGroupsRolesServlet(BaseFederationServlet):
return 200, resp
class FederationGroupsRoleServlet(BaseFederationServlet):
class FederationGroupsRoleServlet(BaseGroupsServerServlet):
"""Add/remove/get a role in a group"""
PATH = "/groups/(?P<group_id>[^/]*)/roles/(?P<role_id>[^/]+)"
@ -1371,7 +1449,7 @@ class FederationGroupsRoleServlet(BaseFederationServlet):
return 200, resp
class FederationGroupsSummaryUsersServlet(BaseFederationServlet):
class FederationGroupsSummaryUsersServlet(BaseGroupsServerServlet):
"""Add/remove a user from the group summary, with optional role.
Matches both:
@ -1426,7 +1504,7 @@ class FederationGroupsSummaryUsersServlet(BaseFederationServlet):
return 200, resp
class FederationGroupsBulkPublicisedServlet(BaseFederationServlet):
class FederationGroupsBulkPublicisedServlet(BaseGroupsLocalServlet):
"""Get roles in a group"""
PATH = "/get_groups_publicised"
@ -1439,7 +1517,7 @@ class FederationGroupsBulkPublicisedServlet(BaseFederationServlet):
return 200, resp
class FederationGroupsSettingJoinPolicyServlet(BaseFederationServlet):
class FederationGroupsSettingJoinPolicyServlet(BaseGroupsServerServlet):
"""Sets whether a group is joinable without an invite or knock"""
PATH = "/groups/(?P<group_id>[^/]*)/settings/m.join_policy"
@ -1460,6 +1538,16 @@ class FederationSpaceSummaryServlet(BaseFederationServlet):
PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc2946"
PATH = "/spaces/(?P<room_id>[^/]*)"
def __init__(
self,
hs: HomeServer,
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self.handler = hs.get_space_summary_handler()
async def on_GET(
self,
origin: str,
@ -1525,16 +1613,25 @@ class RoomComplexityServlet(BaseFederationServlet):
PATH = "/rooms/(?P<room_id>[^/]*)/complexity"
PREFIX = FEDERATION_UNSTABLE_PREFIX
def __init__(
self,
hs: HomeServer,
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self._store = self.hs.get_datastore()
async def on_GET(self, origin, content, query, room_id):
store = self.handler.hs.get_datastore()
is_public = await store.is_room_world_readable_or_publicly_joinable(room_id)
is_public = await self._store.is_room_world_readable_or_publicly_joinable(
room_id
)
if not is_public:
raise SynapseError(404, "Room not found", errcode=Codes.INVALID_PARAM)
complexity = await store.get_room_complexity(room_id)
complexity = await self._store.get_room_complexity(room_id)
return 200, complexity
@ -1553,7 +1650,6 @@ FEDERATION_SERVLET_CLASSES = (
FederationV2SendJoinServlet,
FederationV1SendLeaveServlet,
FederationV2SendLeaveServlet,
FederationV2SendKnockServlet,
FederationV1InviteServlet,
FederationV2InviteServlet,
FederationGetMissingEventsServlet,
@ -1566,6 +1662,9 @@ FEDERATION_SERVLET_CLASSES = (
FederationVersionServlet,
RoomComplexityServlet,
FederationUserInfoServlet,
FederationSpaceSummaryServlet,
FederationV1SendKnockServlet,
FederationMakeKnockServlet,
) # type: Tuple[Type[BaseFederationServlet], ...]
OPENID_SERVLET_CLASSES = (
@ -1607,6 +1706,7 @@ GROUP_ATTESTATION_SERVLET_CLASSES = (
FederationGroupsRenewAttestaionServlet,
) # type: Tuple[Type[BaseFederationServlet], ...]
DEFAULT_SERVLET_GROUPS = (
"federation",
"room_list",
@ -1643,23 +1743,16 @@ def register_servlets(
if "federation" in servlet_groups:
for servletclass in FEDERATION_SERVLET_CLASSES:
servletclass(
handler=hs.get_federation_server(),
hs=hs,
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
).register(resource)
FederationSpaceSummaryServlet(
handler=hs.get_space_summary_handler(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
).register(resource)
if "openid" in servlet_groups:
for servletclass in OPENID_SERVLET_CLASSES:
servletclass(
handler=hs.get_federation_server(),
hs=hs,
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
@ -1668,7 +1761,7 @@ def register_servlets(
if "room_list" in servlet_groups:
for servletclass in ROOM_LIST_CLASSES:
servletclass(
handler=hs.get_room_list_handler(),
hs=hs,
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
@ -1678,7 +1771,7 @@ def register_servlets(
if "group_server" in servlet_groups:
for servletclass in GROUP_SERVER_SERVLET_CLASSES:
servletclass(
handler=hs.get_groups_server_handler(),
hs=hs,
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
@ -1687,7 +1780,7 @@ def register_servlets(
if "group_local" in servlet_groups:
for servletclass in GROUP_LOCAL_SERVLET_CLASSES:
servletclass(
handler=hs.get_groups_local_handler(),
hs=hs,
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
@ -1696,7 +1789,7 @@ def register_servlets(
if "group_attestation" in servlet_groups:
for servletclass in GROUP_ATTESTATION_SERVLET_CLASSES:
servletclass(
handler=hs.get_groups_attestation_renewer(),
hs=hs,
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,


@ -1,117 +0,0 @@
# Copyright 2019 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING
import twisted
import twisted.internet.error
from twisted.web import server, static
from twisted.web.resource import Resource
from synapse.app import check_bind_error
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
ACME_REGISTER_FAIL_ERROR = """
--------------------------------------------------------------------------------
Failed to register with the ACME provider. This is likely happening because the installation
is new, and ACME v1 has been deprecated by Let's Encrypt and disabled for
new installations since November 2019.
At the moment, Synapse doesn't support ACME v2. For more information and alternative
solutions, please read https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
--------------------------------------------------------------------------------"""
class AcmeHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.reactor = hs.get_reactor()
self._acme_domain = hs.config.acme_domain
async def start_listening(self) -> None:
from synapse.handlers import acme_issuing_service
# Configure logging for txacme, if you need to debug
# from eliot import add_destinations
# from eliot.twisted import TwistedDestination
#
# add_destinations(TwistedDestination())
well_known = Resource()
self._issuer = acme_issuing_service.create_issuing_service(
self.reactor,
acme_url=self.hs.config.acme_url,
account_key_file=self.hs.config.acme_account_key_file,
well_known_resource=well_known,
)
responder_resource = Resource()
responder_resource.putChild(b".well-known", well_known)
responder_resource.putChild(b"check", static.Data(b"OK", b"text/plain"))
srv = server.Site(responder_resource)
bind_addresses = self.hs.config.acme_bind_addresses
for host in bind_addresses:
logger.info(
"Listening for ACME requests on %s:%i", host, self.hs.config.acme_port
)
try:
self.reactor.listenTCP(
self.hs.config.acme_port, srv, backlog=50, interface=host
)
except twisted.internet.error.CannotListenError as e:
check_bind_error(e, host, bind_addresses)
# Make sure we are registered to the ACME server. There's no public API
# for this, it is usually triggered by startService, but since we don't
# want it to control where we save the certificates, we have to reach in
# and trigger the registration machinery ourselves.
self._issuer._registered = False
try:
await self._issuer._ensure_registered()
except Exception:
logger.error(ACME_REGISTER_FAIL_ERROR)
raise
async def provision_certificate(self) -> None:
logger.warning("Reprovisioning %s", self._acme_domain)
try:
await self._issuer.issue_cert(self._acme_domain)
except Exception:
logger.exception("Fail!")
raise
logger.warning("Reprovisioned %s, saving.", self._acme_domain)
cert_chain = self._issuer.cert_store.certs[self._acme_domain]
try:
with open(self.hs.config.tls_private_key_file, "wb") as private_key_file:
for x in cert_chain:
if x.startswith(b"-----BEGIN RSA PRIVATE KEY-----"):
private_key_file.write(x)
with open(self.hs.config.tls_certificate_file, "wb") as certificate_file:
for x in cert_chain:
if x.startswith(b"-----BEGIN CERTIFICATE-----"):
certificate_file.write(x)
except Exception:
logger.exception("Failed saving!")
raise


@ -1,127 +0,0 @@
# Copyright 2019 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility function to create an ACME issuing service.
This file contains the unconditional imports on the acme and cryptography bits that we
only need (and may only have available) if we are doing ACME, so is designed to be
imported conditionally.
"""
import logging
from typing import Dict, Iterable, List
import attr
import pem
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from josepy import JWKRSA
from josepy.jwa import RS256
from txacme.challenges import HTTP01Responder
from txacme.client import Client
from txacme.interfaces import ICertificateStore
from txacme.service import AcmeIssuingService
from txacme.util import generate_private_key
from zope.interface import implementer
from twisted.internet import defer
from twisted.internet.interfaces import IReactorTCP
from twisted.python.filepath import FilePath
from twisted.python.url import URL
from twisted.web.resource import IResource
logger = logging.getLogger(__name__)
def create_issuing_service(
reactor: IReactorTCP,
acme_url: str,
account_key_file: str,
well_known_resource: IResource,
) -> AcmeIssuingService:
"""Create an ACME issuing service, and attach it to a web Resource
Args:
reactor: twisted reactor
acme_url: URL to use to request certificates
account_key_file: where to store the account key
well_known_resource: web resource for .well-known.
we will attach a child resource for "acme-challenge".
Returns:
AcmeIssuingService
"""
responder = HTTP01Responder()
well_known_resource.putChild(b"acme-challenge", responder.resource)
store = ErsatzStore()
return AcmeIssuingService(
cert_store=store,
client_creator=(
lambda: Client.from_url(
reactor=reactor,
url=URL.from_text(acme_url),
key=load_or_create_client_key(account_key_file),
alg=RS256,
)
),
clock=reactor,
responders=[responder],
)
@attr.s(slots=True)
@implementer(ICertificateStore)
class ErsatzStore:
"""
A store that only stores in memory.
"""
certs = attr.ib(type=Dict[bytes, List[bytes]], default=attr.Factory(dict))
def store(
self, server_name: bytes, pem_objects: Iterable[pem.AbstractPEMObject]
) -> defer.Deferred:
self.certs[server_name] = [o.as_bytes() for o in pem_objects]
return defer.succeed(None)
def load_or_create_client_key(key_file: str) -> JWKRSA:
"""Load the ACME account key from a file, creating it if it does not exist.
Args:
key_file: name of the file to use as the account key
"""
# this is based on txacme.endpoint.load_or_create_client_key, but doesn't
# hardcode the 'client.key' filename
acme_key_file = FilePath(key_file)
if acme_key_file.exists():
logger.info("Loading ACME account key from '%s'", acme_key_file)
key = serialization.load_pem_private_key(
acme_key_file.getContent(), password=None, backend=default_backend()
)
else:
logger.info("Saving new ACME account key to '%s'", acme_key_file)
key = generate_private_key("rsa")
acme_key_file.setContent(
key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.TraditionalOpenSSL,
encryption_algorithm=serialization.NoEncryption(),
)
)
return JWKRSA(key=key)
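
Taken together, the two files removed above wired up as follows. A minimal sketch, assuming a running Twisted reactor; the ACME directory URL, account-key path, and domain are placeholders:

```python
# Sketch only: how the removed helpers fitted together before ACME support
# was dropped. acme_url, account_key_file and the domain are placeholders.
from twisted.internet import reactor
from twisted.web.resource import Resource

well_known = Resource()
issuer = create_issuing_service(
    reactor,
    acme_url="https://acme-v01.api.letsencrypt.org/directory",
    account_key_file="/data/acme_account.key",
    well_known_resource=well_known,
)
issuer.startService()  # registers the account with the ACME server
d = issuer.issue_cert("example.com")  # provision a certificate
# on success the PEM objects land in the in-memory ErsatzStore
# (issuer.cert_store.certs), from which AcmeHandler wrote them to disk
```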


@ -302,6 +302,7 @@ class AuthHandler(BaseHandler):
request: SynapseRequest,
request_body: Dict[str, Any],
description: str,
can_skip_ui_auth: bool = False,
) -> Tuple[dict, Optional[str]]:
"""
Checks that the user is who they claim to be, via a UI auth.
@ -320,6 +321,10 @@ class AuthHandler(BaseHandler):
description: A human readable string to be displayed to the user that
describes the operation happening on their account.
can_skip_ui_auth: True if the UI auth session timeout applies to this
action. Should be set to False for any "dangerous"
actions (e.g. deactivating an account).
Returns:
A tuple of (params, session_id).
@ -343,7 +348,7 @@ class AuthHandler(BaseHandler):
"""
if not requester.access_token_id:
raise ValueError("Cannot validate a user without an access token")
if self._ui_auth_session_timeout:
if can_skip_ui_auth and self._ui_auth_session_timeout:
last_validated = await self.store.get_access_token_last_validated(
requester.access_token_id
)
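
For illustration, a pair of hypothetical call sites showing the intended contract; `requester`, `request` and `body` are placeholders, not code from this changeset:

```python
# Benign operation: a recently-validated session may skip the UI-auth flow.
params, session_id = await self.auth_handler.validate_user_via_ui_auth(
    requester,
    request,
    body,
    "add a third-party identifier to your account",
    can_skip_ui_auth=True,
)

# Dangerous operation: always demand fresh authentication.
params, session_id = await self.auth_handler.validate_user_via_ui_auth(
    requester,
    request,
    body,
    "deactivate your account",
    can_skip_ui_auth=False,
)
```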


@ -79,9 +79,15 @@ class E2eKeysHandler:
"client_keys", self.on_federation_query_client_keys
)
# Limit the number of in-flight requests from a single device.
self._query_devices_linearizer = Linearizer(
name="query_devices",
max_count=10,
)
@trace
async def query_devices(
self, query_body: JsonDict, timeout: int, from_user_id: str
self, query_body: JsonDict, timeout: int, from_user_id: str, from_device_id: str
) -> JsonDict:
"""Handle a device key query from a client
@ -105,191 +111,197 @@ class E2eKeysHandler:
from_user_id: the user making the query. This is used when
adding cross-signing signatures to limit what signatures users
can see.
from_device_id: the device making the query. This is used to limit
the number of in-flight queries at a time.
"""
with await self._query_devices_linearizer.queue((from_user_id, from_device_id)):
device_keys_query = query_body.get(
"device_keys", {}
) # type: Dict[str, Iterable[str]]
device_keys_query = query_body.get(
"device_keys", {}
) # type: Dict[str, Iterable[str]]
# separate users by domain.
# make a map from domain to user_id to device_ids
local_query = {}
remote_queries = {}
# separate users by domain.
# make a map from domain to user_id to device_ids
local_query = {}
remote_queries = {}
for user_id, device_ids in device_keys_query.items():
# we use UserID.from_string to catch invalid user ids
if self.is_mine(UserID.from_string(user_id)):
local_query[user_id] = device_ids
else:
remote_queries[user_id] = device_ids
set_tag("local_key_query", local_query)
set_tag("remote_key_query", remote_queries)
# First get local devices.
# A map of destination -> failure response.
failures = {} # type: Dict[str, JsonDict]
results = {}
if local_query:
local_result = await self.query_local_devices(local_query)
for user_id, keys in local_result.items():
if user_id in local_query:
results[user_id] = keys
# Get cached cross-signing keys
cross_signing_keys = await self.get_cross_signing_keys_from_cache(
device_keys_query, from_user_id
)
# Now attempt to get any remote devices from our local cache.
# A map of destination -> user ID -> device IDs.
remote_queries_not_in_cache = {} # type: Dict[str, Dict[str, Iterable[str]]]
if remote_queries:
query_list = [] # type: List[Tuple[str, Optional[str]]]
for user_id, device_ids in remote_queries.items():
if device_ids:
query_list.extend((user_id, device_id) for device_id in device_ids)
for user_id, device_ids in device_keys_query.items():
# we use UserID.from_string to catch invalid user ids
if self.is_mine(UserID.from_string(user_id)):
local_query[user_id] = device_ids
else:
query_list.append((user_id, None))
remote_queries[user_id] = device_ids
(
user_ids_not_in_cache,
remote_results,
) = await self.store.get_user_devices_from_cache(query_list)
for user_id, devices in remote_results.items():
user_devices = results.setdefault(user_id, {})
for device_id, device in devices.items():
keys = device.get("keys", None)
device_display_name = device.get("device_display_name", None)
if keys:
result = dict(keys)
unsigned = result.setdefault("unsigned", {})
if device_display_name:
unsigned["device_display_name"] = device_display_name
user_devices[device_id] = result
set_tag("local_key_query", local_query)
set_tag("remote_key_query", remote_queries)
# check for missing cross-signing keys.
for user_id in remote_queries.keys():
cached_cross_master = user_id in cross_signing_keys["master_keys"]
cached_cross_selfsigning = (
user_id in cross_signing_keys["self_signing_keys"]
)
# check if we are missing only one of cross-signing master or
# self-signing key, but the other one is cached.
# as we need both, this will issue a federation request.
# if we don't have any of the keys, either the user doesn't have
# cross-signing set up, or the cached device list
# is not (yet) updated.
if cached_cross_master ^ cached_cross_selfsigning:
user_ids_not_in_cache.add(user_id)
# add those users to the list to fetch over federation.
for user_id in user_ids_not_in_cache:
domain = get_domain_from_id(user_id)
r = remote_queries_not_in_cache.setdefault(domain, {})
r[user_id] = remote_queries[user_id]
# Now fetch any devices that we don't have in our cache
@trace
async def do_remote_query(destination):
"""This is called when we are querying the device list of a user on
a remote homeserver and their device list is not in the device list
cache. If we share a room with this user and we're not querying for
specific devices, we will update the cache with their device list.
"""
destination_query = remote_queries_not_in_cache[destination]
# We first consider whether we wish to update the device list cache with
# the user's device list. We want to track a user's devices when the
# authenticated user shares a room with the queried user and the query
# has not specified a particular device.
# If we update the cache for the queried user we remove them from further
# queries. We use the more efficient batched query_client_keys for all
# remaining users
user_ids_updated = []
for (user_id, device_list) in destination_query.items():
if user_id in user_ids_updated:
continue
if device_list:
continue
room_ids = await self.store.get_rooms_for_user(user_id)
if not room_ids:
continue
# We've decided we're sharing a room with this user and should
# probably be tracking their device lists. However, we haven't
# done an initial sync on the device list so we do it now.
try:
if self._is_master:
user_devices = await self.device_handler.device_list_updater.user_device_resync(
user_id
)
else:
user_devices = await self._user_device_resync_client(
user_id=user_id
)
user_devices = user_devices["devices"]
user_results = results.setdefault(user_id, {})
for device in user_devices:
user_results[device["device_id"]] = device["keys"]
user_ids_updated.append(user_id)
except Exception as e:
failures[destination] = _exception_to_failure(e)
if len(destination_query) == len(user_ids_updated):
# We've updated all the users in the query and we do not need to
# make any further remote calls.
return
# Remove all the users from the query which we have updated
for user_id in user_ids_updated:
destination_query.pop(user_id)
try:
remote_result = await self.federation.query_client_keys(
destination, {"device_keys": destination_query}, timeout=timeout
)
for user_id, keys in remote_result["device_keys"].items():
if user_id in destination_query:
# First get local devices.
# A map of destination -> failure response.
failures = {} # type: Dict[str, JsonDict]
results = {}
if local_query:
local_result = await self.query_local_devices(local_query)
for user_id, keys in local_result.items():
if user_id in local_query:
results[user_id] = keys
if "master_keys" in remote_result:
for user_id, key in remote_result["master_keys"].items():
# Get cached cross-signing keys
cross_signing_keys = await self.get_cross_signing_keys_from_cache(
device_keys_query, from_user_id
)
# Now attempt to get any remote devices from our local cache.
# A map of destination -> user ID -> device IDs.
remote_queries_not_in_cache = (
{}
) # type: Dict[str, Dict[str, Iterable[str]]]
if remote_queries:
query_list = [] # type: List[Tuple[str, Optional[str]]]
for user_id, device_ids in remote_queries.items():
if device_ids:
query_list.extend(
(user_id, device_id) for device_id in device_ids
)
else:
query_list.append((user_id, None))
(
user_ids_not_in_cache,
remote_results,
) = await self.store.get_user_devices_from_cache(query_list)
for user_id, devices in remote_results.items():
user_devices = results.setdefault(user_id, {})
for device_id, device in devices.items():
keys = device.get("keys", None)
device_display_name = device.get("device_display_name", None)
if keys:
result = dict(keys)
unsigned = result.setdefault("unsigned", {})
if device_display_name:
unsigned["device_display_name"] = device_display_name
user_devices[device_id] = result
# check for missing cross-signing keys.
for user_id in remote_queries.keys():
cached_cross_master = user_id in cross_signing_keys["master_keys"]
cached_cross_selfsigning = (
user_id in cross_signing_keys["self_signing_keys"]
)
# check if we are missing only one of cross-signing master or
# self-signing key, but the other one is cached.
# as we need both, this will issue a federation request.
# if we don't have any of the keys, either the user doesn't have
# cross-signing set up, or the cached device list
# is not (yet) updated.
if cached_cross_master ^ cached_cross_selfsigning:
user_ids_not_in_cache.add(user_id)
# add those users to the list to fetch over federation.
for user_id in user_ids_not_in_cache:
domain = get_domain_from_id(user_id)
r = remote_queries_not_in_cache.setdefault(domain, {})
r[user_id] = remote_queries[user_id]
# Now fetch any devices that we don't have in our cache
@trace
async def do_remote_query(destination):
"""This is called when we are querying the device list of a user on
a remote homeserver and their device list is not in the device list
cache. If we share a room with this user and we're not querying for
specific devices, we will update the cache with their device list.
"""
destination_query = remote_queries_not_in_cache[destination]
# We first consider whether we wish to update the device list cache with
# the user's device list. We want to track a user's devices when the
# authenticated user shares a room with the queried user and the query
# has not specified a particular device.
# If we update the cache for the queried user we remove them from further
# queries. We use the more efficient batched query_client_keys for all
# remaining users
user_ids_updated = []
for (user_id, device_list) in destination_query.items():
if user_id in user_ids_updated:
continue
if device_list:
continue
room_ids = await self.store.get_rooms_for_user(user_id)
if not room_ids:
continue
# We've decided we're sharing a room with this user and should
# probably be tracking their device lists. However, we haven't
# done an initial sync on the device list so we do it now.
try:
if self._is_master:
user_devices = await self.device_handler.device_list_updater.user_device_resync(
user_id
)
else:
user_devices = await self._user_device_resync_client(
user_id=user_id
)
user_devices = user_devices["devices"]
user_results = results.setdefault(user_id, {})
for device in user_devices:
user_results[device["device_id"]] = device["keys"]
user_ids_updated.append(user_id)
except Exception as e:
failures[destination] = _exception_to_failure(e)
if len(destination_query) == len(user_ids_updated):
# We've updated all the users in the query and we do not need to
# make any further remote calls.
return
# Remove all the users from the query which we have updated
for user_id in user_ids_updated:
destination_query.pop(user_id)
try:
remote_result = await self.federation.query_client_keys(
destination, {"device_keys": destination_query}, timeout=timeout
)
for user_id, keys in remote_result["device_keys"].items():
if user_id in destination_query:
cross_signing_keys["master_keys"][user_id] = key
results[user_id] = keys
if "self_signing_keys" in remote_result:
for user_id, key in remote_result["self_signing_keys"].items():
if user_id in destination_query:
cross_signing_keys["self_signing_keys"][user_id] = key
if "master_keys" in remote_result:
for user_id, key in remote_result["master_keys"].items():
if user_id in destination_query:
cross_signing_keys["master_keys"][user_id] = key
except Exception as e:
failure = _exception_to_failure(e)
failures[destination] = failure
set_tag("error", True)
set_tag("reason", failure)
if "self_signing_keys" in remote_result:
for user_id, key in remote_result["self_signing_keys"].items():
if user_id in destination_query:
cross_signing_keys["self_signing_keys"][user_id] = key
await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(do_remote_query, destination)
for destination in remote_queries_not_in_cache
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)
except Exception as e:
failure = _exception_to_failure(e)
failures[destination] = failure
set_tag("error", True)
set_tag("reason", failure)
ret = {"device_keys": results, "failures": failures}
await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(do_remote_query, destination)
for destination in remote_queries_not_in_cache
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)
ret.update(cross_signing_keys)
ret = {"device_keys": results, "failures": failures}
return ret
ret.update(cross_signing_keys)
return ret
async def get_cross_signing_keys_from_cache(
self, query: Iterable[str], from_user_id: Optional[str]
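
The new rate limiting in isolation: a minimal sketch, assuming Synapse's `Linearizer` allows at most `max_count` concurrent holders per key and queues the rest:

```python
from synapse.util.async_helpers import Linearizer

# Keying on (user, device) means one busy device can only have ten of its
# own /keys/query requests in flight; other devices are unaffected.
linearizer = Linearizer(name="query_devices", max_count=10)

async def handle_query(from_user_id: str, from_device_id: str) -> dict:
    with await linearizer.queue((from_user_id, from_device_id)):
        # an eleventh concurrent call for the same key waits here
        return await do_key_lookup(from_user_id)  # placeholder for the real work
```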


@ -13,7 +13,12 @@
# limitations under the License.
from typing import TYPE_CHECKING, Collection, Optional
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.constants import (
EventTypes,
JoinRules,
Membership,
RestrictedJoinRuleTypes,
)
from synapse.api.errors import AuthError
from synapse.api.room_versions import RoomVersion
from synapse.events import EventBase
@ -42,7 +47,7 @@ class EventAuthHandler:
Check whether a user can join a room without an invite due to restricted join rules.
When joining a room with restricted join rules (as defined in MSC3083),
the membership of spaces must be checked during a room join.
the membership of rooms must be checked during a room join.
Args:
state_ids: The state of the room as it currently is.
@ -67,20 +72,20 @@ class EventAuthHandler:
if not await self.has_restricted_join_rules(state_ids, room_version):
return
# Get the spaces which allow access to this room and check if the user is
# Get the rooms which allow access to this room and check if the user is
# in any of them.
allowed_spaces = await self.get_spaces_that_allow_join(state_ids)
if not await self.is_user_in_rooms(allowed_spaces, user_id):
allowed_rooms = await self.get_rooms_that_allow_join(state_ids)
if not await self.is_user_in_rooms(allowed_rooms, user_id):
raise AuthError(
403,
"You do not belong to any of the required spaces to join this room.",
"You do not belong to any of the required rooms to join this room.",
)
async def has_restricted_join_rules(
self, state_ids: StateMap[str], room_version: RoomVersion
) -> bool:
"""
Return if the room has the proper join rules set for access via spaces.
Return if the room has the proper join rules set for access via rooms.
Args:
state_ids: The state of the room as it currently is.
@ -102,17 +107,17 @@ class EventAuthHandler:
join_rules_event = await self._store.get_event(join_rules_event_id)
return join_rules_event.content.get("join_rule") == JoinRules.MSC3083_RESTRICTED
async def get_spaces_that_allow_join(
async def get_rooms_that_allow_join(
self, state_ids: StateMap[str]
) -> Collection[str]:
"""
Generate a list of spaces which allow access to a room.
Generate a list of rooms in which membership allows access to a room.
Args:
state_ids: The state of the room as it currently is.
state_ids: The current state of the room the user wishes to join
Returns:
A collection of spaces which provide membership to the room.
A collection of room IDs. Membership in any of the rooms in the list grants the ability to join the target room.
"""
# If there's no join rule, then it defaults to invite (so this doesn't apply).
join_rules_event_id = state_ids.get((EventTypes.JoinRules, ""), None)
@ -123,21 +128,25 @@ class EventAuthHandler:
join_rules_event = await self._store.get_event(join_rules_event_id)
# If allowed is of the wrong form, then only allow invited users.
allowed_spaces = join_rules_event.content.get("allow", [])
if not isinstance(allowed_spaces, list):
allow_list = join_rules_event.content.get("allow", [])
if not isinstance(allow_list, list):
return ()
# Pull out the other room IDs; invalid data gets filtered.
result = []
for space in allowed_spaces:
if not isinstance(space, dict):
for allow in allow_list:
if not isinstance(allow, dict):
continue
space_id = space.get("space")
if not isinstance(space_id, str):
# If the type is unexpected, skip it.
if allow.get("type") != RestrictedJoinRuleTypes.ROOM_MEMBERSHIP:
continue
result.append(space_id)
room_id = allow.get("room_id")
if not isinstance(room_id, str):
continue
result.append(room_id)
return result
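
The `allow` structure this parses looks roughly as follows, assuming the unstable `restricted` join rule value and `RestrictedJoinRuleTypes.ROOM_MEMBERSHIP == "m.room_membership"`; the room IDs are placeholders:

```python
# Example m.room.join_rules content accepted by get_rooms_that_allow_join:
join_rules_content = {
    "join_rule": "restricted",
    "allow": [
        {"type": "m.room_membership", "room_id": "!users:example.org"},
        {"type": "some.other.type", "room_id": "!other:example.org"},  # skipped
        {"type": "m.room_membership"},  # no room_id: skipped
    ],
}
# -> the function would return ["!users:example.org"]
```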


@ -1,6 +1,4 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017-2018 New Vector Ltd
# Copyright 2019-2020 The Matrix.org Foundation C.I.C.
# Copyright 2014-2021 The Matrix.org Foundation C.I.C.
# Copyright 2020 Sorunome
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -35,6 +33,7 @@ from typing import (
)
import attr
from prometheus_client import Counter
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
from unpaddedbase64 import decode_base64
@ -103,6 +102,11 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
soft_failed_event_counter = Counter(
"synapse_federation_soft_failed_events_total",
"Events received over federation that we marked as soft_failed",
)
@attr.s(slots=True)
class _NewEventInfo:
@ -1965,7 +1969,7 @@ class FederationHandler(BaseHandler):
return event
async def on_send_leave_request(self, origin: str, pdu: EventBase) -> None:
""" We have received a leave event for a room. Fully process it."""
"""We have received a leave event for a room. Fully process it."""
event = pdu
logger.debug(
@ -2013,8 +2017,7 @@ class FederationHandler(BaseHandler):
"""
if get_domain_from_id(user_id) != origin:
logger.info(
"Get /xyz.amorgan.knock/make_knock request for user %r"
"from different origin %s, ignoring",
"Get /make_knock request for user %r from different origin %s, ignoring",
user_id,
origin,
)
@ -2081,8 +2084,7 @@ class FederationHandler(BaseHandler):
if get_domain_from_id(event.sender) != origin:
logger.info(
"Got /xyz.amorgan.knock/send_knock request for user %r "
"from different origin %s",
"Got /send_knock request for user %r from different origin %s",
event.sender,
origin,
)
@ -2090,7 +2092,7 @@ class FederationHandler(BaseHandler):
event.internal_metadata.outlier = False
context = await self._handle_new_event(origin, event)
context = await self.state_handler.compute_event_context(event)
event_allowed = await self.third_party_event_rules.check_event_allowed(
event, context
@ -2101,11 +2103,7 @@ class FederationHandler(BaseHandler):
403, "This event is not allowed in this context", Codes.FORBIDDEN
)
logger.debug(
"on_send_knock_request: After _handle_new_event: %s, sigs: %s",
event.event_id,
event.signatures,
)
await self._auth_and_persist_event(origin, event, context)
return context
@ -2433,7 +2431,11 @@ class FederationHandler(BaseHandler):
)
async def _check_for_soft_fail(
self, event: EventBase, state: Optional[Iterable[EventBase]], backfilled: bool
self,
event: EventBase,
state: Optional[Iterable[EventBase]],
backfilled: bool,
origin: str,
) -> None:
"""Checks if we should soft fail the event; if so, marks the event as
such.
@ -2442,6 +2444,7 @@ class FederationHandler(BaseHandler):
event
state: The state at the event if we don't have all the event's prev events
backfilled: Whether the event is from backfill
origin: The host the event originates from.
"""
# For new (non-backfilled and non-outlier) events we check if the event
# passes auth based on the current state. If it doesn't then we
@ -2511,7 +2514,18 @@ class FederationHandler(BaseHandler):
try:
event_auth.check(room_version_obj, event, auth_events=current_auth_events)
except AuthError as e:
logger.warning("Soft-failing %r because %s", event, e)
logger.warning(
"Soft-failing %r (from %s) because %s",
event,
origin,
e,
extra={
"room_id": event.room_id,
"mxid": event.sender,
"hs": origin,
},
)
soft_failed_event_counter.inc()
event.internal_metadata.soft_failed = True
async def on_get_missing_events(
@ -2623,7 +2637,7 @@ class FederationHandler(BaseHandler):
context.rejected = RejectedReason.AUTH_ERROR
if not context.rejected:
await self._check_for_soft_fail(event, state, backfilled)
await self._check_for_soft_fail(event, state, backfilled, origin=origin)
if event.type == EventTypes.GuestAccess and not context.rejected:
await self.maybe_kick_guest_users(event)


@ -400,13 +400,14 @@ class EventCreationHandler:
self._events_shard_config = self.config.worker.events_shard_config
self._instance_name = hs.get_instance_name()
self.room_invite_state_types = self.hs.config.api.room_prejoin_state
self.room_prejoin_state_types = self.hs.config.api.room_prejoin_state
self.membership_types_to_include_profile_data_in = (
{Membership.JOIN, Membership.INVITE, Membership.KNOCK}
if self.hs.config.include_profile_data_on_invite
else {Membership.JOIN, Membership.KNOCK}
)
self.membership_types_to_include_profile_data_in = {
Membership.JOIN,
Membership.KNOCK,
}
if self.hs.config.include_profile_data_on_invite:
self.membership_types_to_include_profile_data_in.add(Membership.INVITE)
self.send_event = ReplicationSendEventRestServlet.make_client(hs)
@ -482,6 +483,9 @@ class EventCreationHandler:
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
depth: Optional[int] = None,
) -> Tuple[EventBase, EventContext]:
"""
Given a dict from a client, create a new event.
@ -508,6 +512,14 @@ class EventCreationHandler:
require_consent: Whether to check if the requester has
consented to the privacy policy.
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
Raises:
ResourceLimitError if server is blocked to some resource being
exceeded
@ -563,11 +575,36 @@ class EventCreationHandler:
if txn_id is not None:
builder.internal_metadata.txn_id = txn_id
builder.internal_metadata.outlier = outlier
builder.internal_metadata.historical = historical
# Strip down the auth_event_ids to only what we need to auth the event.
# For example, we don't need extra m.room.member events that don't match event.sender
if auth_event_ids is not None:
temp_event = await builder.build(
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
depth=depth,
)
auth_events = await self.store.get_events_as_list(auth_event_ids)
# Create a StateMap[str]
auth_event_state_map = {
(e.type, e.state_key): e.event_id for e in auth_events
}
# Actually strip down and use the necessary auth events
auth_event_ids = self.auth.compute_auth_events(
event=temp_event,
current_state_ids=auth_event_state_map,
for_verification=False,
)
event, context = await self.create_new_client_event(
builder=builder,
requester=requester,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
depth=depth,
)
# In an ideal world we wouldn't need the second part of this condition. However,
@ -724,9 +761,13 @@ class EventCreationHandler:
self,
requester: Requester,
event_dict: dict,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
ratelimit: bool = True,
txn_id: Optional[str] = None,
ignore_shadow_ban: bool = False,
outlier: bool = False,
depth: Optional[int] = None,
) -> Tuple[EventBase, int]:
"""
Creates an event, then sends it.
@ -736,10 +777,24 @@ class EventCreationHandler:
Args:
requester: The requester sending the event.
event_dict: An entire event.
prev_event_ids:
The event IDs to use as the prev events.
Should normally be left as None to automatically request them
from the database.
auth_event_ids:
The event ids to use as the auth_events for the new event.
Should normally be left as None, which will cause them to be calculated
based on the room state at the prev_events.
ratelimit: Whether to rate limit this send.
txn_id: The transaction ID.
ignore_shadow_ban: True if shadow-banned users should be allowed to
send this event.
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
Returns:
The event, and its stream ordering (if deduplication happened,
@ -779,7 +834,13 @@ class EventCreationHandler:
return event, event.internal_metadata.stream_ordering
event, context = await self.create_event(
requester, event_dict, txn_id=txn_id
requester,
event_dict,
txn_id=txn_id,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
outlier=outlier,
depth=depth,
)
assert self.hs.is_mine_id(event.sender), "User must be our own: %s" % (
@ -811,6 +872,7 @@ class EventCreationHandler:
requester: Optional[Requester] = None,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
depth: Optional[int] = None,
) -> Tuple[EventBase, EventContext]:
"""Create a new event for a local client
@ -828,6 +890,10 @@ class EventCreationHandler:
Should normally be left as None, which will cause them to be calculated
based on the room state at the prev_events.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
Returns:
Tuple of created event, context
"""
@ -851,9 +917,24 @@ class EventCreationHandler:
), "Attempting to create an event with no prev_events"
event = await builder.build(
prev_event_ids=prev_event_ids, auth_event_ids=auth_event_ids
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
depth=depth,
)
context = await self.state.compute_event_context(event)
old_state = None
# Pass on the outlier property from the builder to the event
# after it is created
if builder.internal_metadata.outlier:
event.internal_metadata.outlier = builder.internal_metadata.outlier
# Calculate the state for outliers that pass in their own `auth_event_ids`
if auth_event_ids:
old_state = await self.store.get_events_as_list(auth_event_ids)
context = await self.state.compute_event_context(event, old_state=old_state)
if requester:
context.app_service = requester.app_service
@ -1018,7 +1099,13 @@ class EventCreationHandler:
the arguments.
"""
await self.action_generator.handle_push_actions_for_event(event, context)
# Skip push notification actions for historical messages
# because we don't want to notify people about old history back in time.
# The historical messages also do not have the proper `context.current_state_ids`
# and `state_groups` because they have `prev_events` that aren't persisted yet
# (historical messages are persisted in reverse-chronological order).
if not event.internal_metadata.is_historical():
await self.action_generator.handle_push_actions_for_event(event, context)
try:
# If we're a worker we need to hit out to the master.
@ -1241,7 +1328,7 @@ class EventCreationHandler:
"invite_room_state"
] = await self.store.get_stripped_room_state_from_event_context(
context,
self.room_invite_state_types,
self.room_prejoin_state_types,
membership_user_id=event.sender,
)
@ -1264,7 +1351,7 @@ class EventCreationHandler:
"knock_room_state"
] = await self.store.get_stripped_room_state_from_event_context(
context,
DEFAULT_ROOM_STATE_TYPES,
self.room_prejoin_state_types,
)
if event.type == EventTypes.Redaction:
@ -1317,13 +1404,21 @@ class EventCreationHandler:
if prev_state_ids:
raise AuthError(403, "Changing the room create event is forbidden")
# Mark any `m.historical` messages as backfilled so they don't appear
# in `/sync` and have the proper decrementing `stream_ordering` as we import them
backfilled = False
if event.internal_metadata.is_historical():
backfilled = True
# Note that this returns the event that was persisted, which may not be
# the same as we passed in if it was deduplicated due to transaction IDs.
(
event,
event_pos,
max_stream_token,
) = await self.storage.persistence.persist_event(event, context=context)
) = await self.storage.persistence.persist_event(
event, context=context, backfilled=backfilled
)
if self._ephemeral_events_enabled:
# If there's an expiry timestamp on the event, schedule its expiry.


@ -210,7 +210,7 @@ class RegistrationHandler(BaseHandler):
bind_emails: list of emails to bind to this account.
by_admin: True if this registration is being made via the
admin api, otherwise False.
user_agent_ips: Tuples of IP addresses and user-agents used
user_agent_ips: Tuples of user-agents and IP addresses used
during the registration process.
auth_provider_id: The SSO IdP the user used, if any.
Returns:


@ -44,7 +44,7 @@ class RoomListHandler(BaseHandler):
self.enable_room_list_search = hs.config.enable_room_list_search
self.response_cache = ResponseCache(
hs.get_clock(), "room_list"
) # type: ResponseCache[Tuple[Optional[int], Optional[str], ThirdPartyInstanceID]]
) # type: ResponseCache[Tuple[Optional[int], Optional[str], Optional[ThirdPartyInstanceID]]]
self.remote_response_cache = ResponseCache(
hs.get_clock(), "remote_room_list", timeout_ms=30 * 1000
) # type: ResponseCache[Tuple[str, Optional[int], Optional[str], bool, Optional[str]]]
@ -54,7 +54,7 @@ class RoomListHandler(BaseHandler):
limit: Optional[int] = None,
since_token: Optional[str] = None,
search_filter: Optional[dict] = None,
network_tuple: ThirdPartyInstanceID = EMPTY_THIRD_PARTY_ID,
network_tuple: Optional[ThirdPartyInstanceID] = EMPTY_THIRD_PARTY_ID,
from_federation: bool = False,
) -> JsonDict:
"""Generate a local public room list.
@ -111,7 +111,7 @@ class RoomListHandler(BaseHandler):
limit: Optional[int] = None,
since_token: Optional[str] = None,
search_filter: Optional[dict] = None,
network_tuple: ThirdPartyInstanceID = EMPTY_THIRD_PARTY_ID,
network_tuple: Optional[ThirdPartyInstanceID] = EMPTY_THIRD_PARTY_ID,
from_federation: bool = False,
) -> JsonDict:
"""Generate a public room list.


@ -257,11 +257,42 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
room_id: str,
membership: str,
prev_event_ids: List[str],
auth_event_ids: Optional[List[str]] = None,
txn_id: Optional[str] = None,
ratelimit: bool = True,
content: Optional[dict] = None,
require_consent: bool = True,
outlier: bool = False,
) -> Tuple[str, int]:
"""
Internal membership update function to get an existing event or create
and persist a new event for the new membership change.
Args:
requester:
target:
room_id:
membership:
prev_event_ids: The event IDs to use as the prev events
auth_event_ids:
The event ids to use as the auth_events for the new event.
Should normally be left as None, which will cause them to be calculated
based on the room state at the prev_events.
txn_id:
ratelimit:
content:
require_consent:
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
Returns:
Tuple of event ID and stream ordering position
"""
user_id = target.to_string()
if content is None:
@ -298,7 +329,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
},
txn_id=txn_id,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
require_consent=require_consent,
outlier=outlier,
)
prev_state_ids = await context.get_prev_state_ids()
@ -400,6 +433,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
content: Optional[dict] = None,
new_room: bool = False,
require_consent: bool = True,
outlier: bool = False,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
) -> Tuple[str, int]:
"""Update a user's membership in a room.
@ -414,6 +450,14 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
ratelimit: Whether to rate limit the request.
content: The content of the created event.
require_consent: Whether consent is required.
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
prev_event_ids: The event IDs to use as the prev events
auth_event_ids:
The event ids to use as the auth_events for the new event.
Should normally be left as None, which will cause them to be calculated
based on the room state at the prev_events.
Returns:
A tuple of the new event ID and stream ID.
@ -441,6 +485,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
content=content,
new_room=new_room,
require_consent=require_consent,
outlier=outlier,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
)
return result
@ -458,10 +505,36 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
content: Optional[dict] = None,
new_room: bool = False,
require_consent: bool = True,
outlier: bool = False,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
) -> Tuple[str, int]:
"""Helper for update_membership.
Assumes that the membership linearizer is already held for the room.
Args:
requester:
target:
room_id:
action:
txn_id:
remote_room_hosts:
third_party_signed:
ratelimit:
content:
require_consent:
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
prev_event_ids: The event IDs to use as the prev events
auth_event_ids:
The event ids to use as the auth_events for the new event.
Should normally be left as None, which will cause them to be calculated
based on the room state at the prev_events.
Returns:
A tuple of the new event ID and stream ID.
"""
content_specified = bool(content)
if content is None:
@ -553,6 +626,21 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
if block_invite:
raise SynapseError(403, "Invites have been disabled on this server")
if prev_event_ids:
return await self._local_membership_update(
requester=requester,
target=target,
room_id=room_id,
membership=effective_membership_state,
txn_id=txn_id,
ratelimit=ratelimit,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
content=content,
require_consent=require_consent,
outlier=outlier,
)
latest_event_ids = await self.store.get_prev_events_for_room(room_id)
current_state_ids = await self.state_handler.get_current_state_ids(
@ -761,8 +849,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
txn_id=txn_id,
ratelimit=ratelimit,
prev_event_ids=latest_event_ids,
auth_event_ids=auth_event_ids,
content=content,
require_consent=require_consent,
outlier=outlier,
)
async def transfer_room_state_on_room_upgrade(


@ -1,5 +1,4 @@
# Copyright 2018 New Vector Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
# Copyright 2018-2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.


@ -160,14 +160,14 @@ class SpaceSummaryHandler:
# Check if the user is a member of any of the allowed spaces
# from the response.
allowed_spaces = room.get("allowed_spaces")
allowed_rooms = room.get("allowed_spaces")
if (
not include_room
and allowed_spaces
and isinstance(allowed_spaces, list)
and allowed_rooms
and isinstance(allowed_rooms, list)
):
include_room = await self._event_auth_handler.is_user_in_rooms(
allowed_spaces, requester
allowed_rooms, requester
)
# Finally, if this isn't the requested room, check ourselves
@ -402,10 +402,7 @@ class SpaceSummaryHandler:
return (), ()
return res.rooms, tuple(
ev.data
for ev in res.events
if ev.event_type == EventTypes.MSC1772_SPACE_CHILD
or ev.event_type == EventTypes.SpaceChild
ev.data for ev in res.events if ev.event_type == EventTypes.SpaceChild
)
async def _is_room_accessible(
@ -448,21 +445,20 @@ class SpaceSummaryHandler:
member_event_id = state_ids.get((EventTypes.Member, requester), None)
# If they're in the room they can see info on it.
member_event = None
if member_event_id:
member_event = await self._store.get_event(member_event_id)
if member_event.membership in (Membership.JOIN, Membership.INVITE):
return True
# Otherwise, check if they should be allowed access via membership in a space.
if self._event_auth_handler.has_restricted_join_rules(
if await self._event_auth_handler.has_restricted_join_rules(
state_ids, room_version
):
allowed_spaces = (
await self._event_auth_handler.get_spaces_that_allow_join(state_ids)
allowed_rooms = (
await self._event_auth_handler.get_rooms_that_allow_join(state_ids)
)
if await self._event_auth_handler.is_user_in_rooms(
allowed_spaces, requester
allowed_rooms, requester
):
return True
@ -478,10 +474,10 @@ class SpaceSummaryHandler:
if await self._event_auth_handler.has_restricted_join_rules(
state_ids, room_version
):
allowed_spaces = (
await self._event_auth_handler.get_spaces_that_allow_join(state_ids)
allowed_rooms = (
await self._event_auth_handler.get_rooms_that_allow_join(state_ids)
)
for space_id in allowed_spaces:
for space_id in allowed_rooms:
if await self._auth.check_host_in_room(space_id, origin):
return True
@ -514,17 +510,12 @@ class SpaceSummaryHandler:
current_state_ids[(EventTypes.Create, "")]
)
# TODO: update once MSC1772 lands
room_type = create_event.content.get(EventContentFields.ROOM_TYPE)
if not room_type:
room_type = create_event.content.get(EventContentFields.MSC1772_ROOM_TYPE)
room_version = await self._store.get_room_version(room_id)
allowed_spaces = None
allowed_rooms = None
if await self._event_auth_handler.has_restricted_join_rules(
current_state_ids, room_version
):
allowed_spaces = await self._event_auth_handler.get_spaces_that_allow_join(
allowed_rooms = await self._event_auth_handler.get_rooms_that_allow_join(
current_state_ids
)
@ -540,8 +531,8 @@ class SpaceSummaryHandler:
),
"guest_can_join": stats["guest_access"] == "can_join",
"creation_ts": create_event.origin_server_ts,
"room_type": room_type,
"allowed_spaces": allowed_spaces,
"room_type": create_event.content.get(EventContentFields.ROOM_TYPE),
"allowed_spaces": allowed_rooms,
}
# Filter out Nones rather than omit the field altogether
@ -569,9 +560,7 @@ class SpaceSummaryHandler:
[
event_id
for key, event_id in current_state_ids.items()
# TODO: update once MSC1772 has been FCP for a period of time.
if key[0] == EventTypes.MSC1772_SPACE_CHILD
or key[0] == EventTypes.SpaceChild
if key[0] == EventTypes.SpaceChild
]
)


@ -41,7 +41,12 @@ from synapse.handlers.ui_auth import UIAuthSessionDataConstants
from synapse.http import get_request_user_agent
from synapse.http.server import respond_with_html, respond_with_redirect
from synapse.http.site import SynapseRequest
from synapse.types import JsonDict, UserID, contains_invalid_mxid_characters
from synapse.types import (
JsonDict,
UserID,
contains_invalid_mxid_characters,
create_requester,
)
from synapse.util.async_helpers import Linearizer
from synapse.util.stringutils import random_string
@ -185,11 +190,14 @@ class SsoHandler:
self._auth_handler = hs.get_auth_handler()
self._error_template = hs.config.sso_error_template
self._bad_user_template = hs.config.sso_auth_bad_user_template
self._profile_handler = hs.get_profile_handler()
# The following template is shown after a successful user interactive
# authentication session. It tells the user they can close the window.
self._sso_auth_success_template = hs.config.sso_auth_success_template
self._sso_update_profile_information = hs.config.sso_update_profile_information
# a lock on the mappings
self._mapping_lock = Linearizer(name="sso_user_mapping", clock=hs.get_clock())
@ -458,6 +466,21 @@ class SsoHandler:
request.getClientIP(),
)
new_user = True
elif self._sso_update_profile_information:
attributes = await self._call_attribute_mapper(sso_to_matrix_id_mapper)
if attributes.display_name:
user_id_obj = UserID.from_string(user_id)
profile_display_name = await self._profile_handler.get_displayname(
user_id_obj
)
if profile_display_name != attributes.display_name:
requester = create_requester(
user_id,
authenticated_entity=user_id,
)
await self._profile_handler.set_displayname(
user_id_obj, requester, attributes.display_name, True
)
await self._auth_handler.complete_sso_login(
user_id,


@ -1,6 +1,5 @@
# Copyright 2018 New Vector Ltd
# Copyright 2018-2021 The Matrix.org Foundation C.I.C.
# Copyright 2020 Sorunome
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.


@ -49,7 +49,7 @@ from synapse.types import (
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.caches.lrucache import LruCache
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
from synapse.util.metrics import Measure, measure_func
from synapse.visibility import filter_events_for_client
@ -83,12 +83,15 @@ LAZY_LOADED_MEMBERS_CACHE_MAX_AGE = 30 * 60 * 1000
LAZY_LOADED_MEMBERS_CACHE_MAX_SIZE = 100
SyncRequestKey = Tuple[Any, ...]
@attr.s(slots=True, frozen=True)
class SyncConfig:
user = attr.ib(type=UserID)
filter_collection = attr.ib(type=FilterCollection)
is_guest = attr.ib(type=bool)
request_key = attr.ib(type=Tuple[Any, ...])
request_key = attr.ib(type=SyncRequestKey)
device_id = attr.ib(type=Optional[str])
@ -266,9 +269,9 @@ class SyncHandler:
self.presence_handler = hs.get_presence_handler()
self.event_sources = hs.get_event_sources()
self.clock = hs.get_clock()
self.response_cache = ResponseCache(
self.response_cache: ResponseCache[SyncRequestKey] = ResponseCache(
hs.get_clock(), "sync"
) # type: ResponseCache[Tuple[Any, ...]]
)
self.state = hs.get_state_handler()
self.auth = hs.get_auth()
self.storage = hs.get_storage()
@ -307,6 +310,7 @@ class SyncHandler:
since_token,
timeout,
full_state,
cache_context=True,
)
logger.debug("Returning sync response for %s", user_id)
return res
@ -314,9 +318,10 @@ class SyncHandler:
async def _wait_for_sync_for_user(
self,
sync_config: SyncConfig,
since_token: Optional[StreamToken] = None,
timeout: int = 0,
full_state: bool = False,
since_token: Optional[StreamToken],
timeout: int,
full_state: bool,
cache_context: ResponseCacheContext[SyncRequestKey],
) -> SyncResult:
if since_token is None:
sync_type = "initial_sync"
@ -343,13 +348,13 @@ class SyncHandler:
if timeout == 0 or since_token is None or full_state:
# we are going to return immediately, so don't bother calling
# notifier.wait_for_events.
result = await self.current_sync_for_user(
result: SyncResult = await self.current_sync_for_user(
sync_config, since_token, full_state=full_state
)
else:
def current_sync_callback(before_token, after_token):
return self.current_sync_for_user(sync_config, since_token)
async def current_sync_callback(before_token, after_token) -> SyncResult:
return await self.current_sync_for_user(sync_config, since_token)
result = await self.notifier.wait_for_events(
sync_config.user.to_string(),
@ -358,6 +363,17 @@ class SyncHandler:
from_token=since_token,
)
# if nothing has happened in any of the users' rooms since /sync was called,
# the resultant next_batch will be the same as since_token (since the result
# is generated when wait_for_events is first called, and not regenerated
# when wait_for_events times out).
#
# If that happens, we mustn't cache it, so that when the client comes back
# with the same cache token, we don't immediately return the same empty
# result, causing a tightloop. (#8518)
if result.next_batch == since_token:
cache_context.should_cache = False
if result:
if sync_config.filter_collection.lazy_load_members():
lazy_loaded = "true"
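
The caching veto in isolation: a sketch assuming `ResponseCache.wrap(..., cache_context=True)` passes the callback a `ResponseCacheContext` whose `should_cache` flag suppresses caching when cleared. `run_sync` and the key shape are placeholders:

```python
async def compute_sync(
    since_token, cache_context: ResponseCacheContext[SyncRequestKey]
) -> SyncResult:
    result = await run_sync(since_token)  # placeholder for the real work
    if result.next_batch == since_token:
        # Empty response: don't cache it, or the client's retry with the
        # same token would be answered from cache instantly, forever.
        cache_context.should_cache = False
    return result

result = await self.response_cache.wrap(
    request_key, compute_sync, since_token, cache_context=True
)
```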


@ -65,13 +65,9 @@ from synapse.http.client import (
read_body_with_max_size,
)
from synapse.http.federation.matrix_federation_agent import MatrixFederationAgent
from synapse.logging import opentracing
from synapse.logging.context import make_deferred_yieldable
from synapse.logging.opentracing import (
inject_active_span_byte_dict,
set_tag,
start_active_span,
tags,
)
from synapse.logging.opentracing import set_tag, start_active_span, tags
from synapse.types import ISynapseReactor, JsonDict
from synapse.util import json_decoder
from synapse.util.async_helpers import timeout_deferred
@ -322,7 +318,9 @@ class MatrixFederationHttpClient:
# We need to use a DNS resolver which filters out blacklisted IP
# addresses, to prevent DNS rebinding.
self.reactor = BlacklistingReactorWrapper(
hs.get_reactor(), None, hs.config.federation_ip_range_blacklist
hs.get_reactor(),
hs.config.federation_ip_range_whitelist,
hs.config.federation_ip_range_blacklist,
) # type: ISynapseReactor
user_agent = hs.version_string
@ -497,7 +495,7 @@ class MatrixFederationHttpClient:
# Inject the span into the headers
headers_dict = {} # type: Dict[bytes, List[bytes]]
inject_active_span_byte_dict(headers_dict, request.destination)
opentracing.inject_header_dict(headers_dict, request.destination)
headers_dict[b"User-Agent"] = [self.version_string_bytes]


@ -200,36 +200,6 @@ def parse_string(
args, name, default, required, allowed_values, encoding
)
def parse_list_from_args(
args: Dict[bytes, List[bytes]],
name: Union[bytes, str],
encoding: Optional[str] = "ascii",
):
"""Parse and optionally decode a list of values from request query parameters.
Args:
args: A dictionary of query parameters from a request.
name: The name of the query parameter to extract values from. If given as bytes,
will be decoded as "ascii".
encoding: An optional encoding that is used to decode each parameter value with.
Raises:
KeyError: If the given `name` does not exist in `args`.
SynapseError: If an argument was not encoded with the specified `encoding`.
"""
if not isinstance(name, bytes):
name = name.encode("ascii")
args_list = args[name]
if encoding:
# Decode each argument value
try:
args_list = [value.decode(encoding) for value in args_list]
except ValueError:
raise SynapseError(400, "Query parameter %r must be %s" % (name, encoding))
return args_list
def _parse_string_value(
value: bytes,
@ -324,6 +294,30 @@ def parse_strings_from_args(
return default
@overload
def parse_string_from_args(
args: Dict[bytes, List[bytes]],
name: str,
default: Optional[str] = None,
required: Literal[True] = True,
allowed_values: Optional[Iterable[str]] = None,
encoding: str = "ascii",
) -> str:
...
@overload
def parse_string_from_args(
args: Dict[bytes, List[bytes]],
name: str,
default: Optional[str] = None,
required: bool = False,
allowed_values: Optional[Iterable[str]] = None,
encoding: str = "ascii",
) -> Optional[str]:
...
def parse_string_from_args(
args: Dict[bytes, List[bytes]],
name: str,
@ -460,7 +454,7 @@ class RestServlet:
"""
def register(self, http_server):
""" Register this servlet with the given HTTP server. """
"""Register this servlet with the given HTTP server."""
patterns = getattr(self, "PATTERNS", None)
if patterns:
for method in ("GET", "PUT", "POST", "DELETE"):


@ -20,8 +20,9 @@ import logging
_encoder = json.JSONEncoder(ensure_ascii=False, separators=(",", ":"))
# The properties of a standard LogRecord.
_LOG_RECORD_ATTRIBUTES = {
# The properties of a standard LogRecord that should be ignored when generating
# JSON logs.
_IGNORED_LOG_RECORD_ATTRIBUTES = {
"args",
"asctime",
"created",
@ -59,9 +60,9 @@ class JsonFormatter(logging.Formatter):
return self._format(record, event)
def _format(self, record: logging.LogRecord, event: dict) -> str:
# Add any extra attributes to the event.
# Add attributes specified via the extra keyword to the logged event.
for key, value in record.__dict__.items():
if key not in _LOG_RECORD_ATTRIBUTES:
if key not in _IGNORED_LOG_RECORD_ATTRIBUTES:
event[key] = value
return _encoder.encode(event)
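
The practical effect, sketched: anything passed via the standard `extra` kwarg surfaces as a top-level JSON field, which is what the soft-failure logging earlier in this diff relies on. Values here are illustrative:

```python
import logging

logger = logging.getLogger("synapse.handlers.federation")
# with JsonFormatter attached to the handler, the extras below become
# top-level keys in the emitted JSON line
logger.warning(
    "Soft-failing event",
    extra={"room_id": "!abc:example.org", "mxid": "@user:example.org"},
)
```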


@ -168,11 +168,12 @@ import inspect
import logging
import re
from functools import wraps
- from typing import TYPE_CHECKING, Dict, Optional, Pattern, Type
+ from typing import TYPE_CHECKING, Collection, Dict, List, Optional, Pattern, Type
import attr
from twisted.internet import defer
+ from twisted.web.http_headers import Headers
from synapse.config import ConfigError
from synapse.util import json_decoder, json_encoder
@@ -278,6 +279,10 @@ class SynapseTags:
DB_TXN_ID = "db.txn_id"
+ class SynapseBaggage:
+ FORCE_TRACING = "synapse-force-tracing"
- # Block everything by default
+ # A regex which matches the server_names to expose traces for.
+ # None means 'block everything'.
@@ -285,6 +290,8 @@ _homeserver_whitelist = None # type: Optional[Pattern[str]]
# Util methods
+ Sentinel = object()
def only_if_tracing(func):
"""Executes the function only if we're tracing. Otherwise returns None."""
@@ -447,12 +454,28 @@ def start_active_span(
)
- def start_active_span_follows_from(operation_name, contexts):
+ def start_active_span_follows_from(
+ operation_name: str, contexts: Collection, inherit_force_tracing=False
+ ):
+ """Starts an active opentracing span, with additional references to previous spans
+ Args:
+ operation_name: name of the operation represented by the new span
+ contexts: the previous spans to inherit from
+ inherit_force_tracing: if set, and any of the previous contexts have had tracing
+ forced, the new span will also have tracing forced.
+ """
if opentracing is None:
return noop_context_manager()
references = [opentracing.follows_from(context) for context in contexts]
scope = start_active_span(operation_name, references=references)
+ if inherit_force_tracing and any(
+ is_context_forced_tracing(ctx) for ctx in contexts
+ ):
+ force_tracing(scope.span)
return scope
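A rough usage sketch of the new `inherit_force_tracing` flag; the operation name and the `contexts` collection are illustrative:

```python
# `contexts` holds opentracing span contexts, e.g. ones extracted from an
# incoming federation transaction. If any of them had tracing forced, the
# new span (and its children) is force-sampled as well.
with start_active_span_follows_from(
    "process_transaction", contexts, inherit_force_tracing=True
):
    ...  # handle the transaction
```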
@@ -551,6 +574,10 @@ def start_active_span_from_edu(
# Opentracing setters for tags, logs, etc
+ @only_if_tracing
+ def active_span():
+ """Get the currently active span, if any"""
+ return opentracing.tracer.active_span
@ensure_active_span("set a tag")
@@ -571,25 +598,52 @@ def set_operation_name(operation_name):
opentracing.tracer.active_span.set_operation_name(operation_name)
+ @only_if_tracing
+ def force_tracing(span=Sentinel) -> None:
+ """Force sampling for the active/given span and its children.
+ Args:
+ span: span to force tracing for. By default, the active span.
+ """
+ if span is Sentinel:
+ span = opentracing.tracer.active_span
+ if span is None:
+ logger.error("No active span in force_tracing")
+ return
+ span.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1)
+ # also set a bit of baggage, so that we have a way of figuring out if
+ # it is enabled later
+ span.set_baggage_item(SynapseBaggage.FORCE_TRACING, "1")
+ def is_context_forced_tracing(span_context) -> bool:
+ """Check if sampling has been forced for the given span context."""
+ if span_context is None:
+ return False
+ return span_context.baggage.get(SynapseBaggage.FORCE_TRACING) is not None
# Injection and extraction
@ensure_active_span("inject the span into a header")
def inject_active_span_twisted_headers(headers, destination, check_destination=True):
@ensure_active_span("inject the span into a header dict")
def inject_header_dict(
headers: Dict[bytes, List[bytes]],
destination: Optional[str] = None,
check_destination: bool = True,
) -> None:
"""
- Injects a span context into twisted headers in-place
+ Injects a span context into a dict of HTTP headers
Args:
- headers (twisted.web.http_headers.Headers)
- destination (str): address of entity receiving the span context. If check_destination
- is true the context will only be injected if the destination matches the
- opentracing whitelist
+ headers: the dict to inject headers into
+ destination: address of entity receiving the span context. Must be given unless
+ check_destination is False. The context will only be injected if the
+ destination matches the opentracing whitelist
check_destination (bool): If false, destination will be ignored and the context
will always be injected.
span (opentracing.Span)
Returns:
In-place modification of headers
Note:
The headers set by the tracer are custom to the tracer implementation which
@@ -598,45 +652,13 @@ def inject_active_span_twisted_headers(headers, destination, check_destination=True):
here:
https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
"""
if check_destination and not whitelisted_homeserver(destination):
return
span = opentracing.tracer.active_span
carrier = {} # type: Dict[str, str]
opentracing.tracer.inject(span.context, opentracing.Format.HTTP_HEADERS, carrier)
for key, value in carrier.items():
headers.addRawHeaders(key, value)
@ensure_active_span("inject the span into a byte dict")
def inject_active_span_byte_dict(headers, destination, check_destination=True):
"""
Injects a span context into a dict where the headers are encoded as byte
strings
Args:
headers (dict)
destination (str): address of entity receiving the span context. If check_destination
is true the context will only be injected if the destination matches the
opentracing whitelist
check_destination (bool): If false, destination will be ignored and the context
will always be injected.
span (opentracing.Span)
Returns:
In-place modification of headers
Note:
The headers set by the tracer are custom to the tracer implementation which
should be unique enough that they don't interfere with any headers set by
synapse or twisted. If we're still using jaeger these headers would be those
here:
https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
"""
- if check_destination and not whitelisted_homeserver(destination):
- return
+ if check_destination:
+ if destination is None:
+ raise ValueError(
+ "destination must be given unless check_destination is False"
+ )
+ if not whitelisted_homeserver(destination):
+ return
span = opentracing.tracer.active_span
@@ -647,36 +669,23 @@ def inject_active_span_byte_dict(headers, destination, check_destination=True):
headers[key.encode()] = [value.encode()]
@ensure_active_span("inject the span into a text map")
def inject_active_span_text_map(carrier, destination, check_destination=True):
"""
Injects a span context into a dict
Args:
carrier (dict)
destination (str): address of entity receiving the span context. If check_destination
is true the context will only be injected if the destination matches the
opentracing whitelist
check_destination (bool): If false, destination will be ignored and the context
will always be injected.
Returns:
In-place modification of carrier
Note:
The headers set by the tracer are custom to the tracer implementation which
should be unique enough that they don't interfere with any headers set by
synapse or twisted. If we're still using jaeger these headers would be those
here:
https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
"""
if check_destination and not whitelisted_homeserver(destination):
def inject_response_headers(response_headers: Headers) -> None:
"""Inject the current trace id into the HTTP response headers"""
if not opentracing:
return
span = opentracing.tracer.active_span
if not span:
return
opentracing.tracer.inject(
opentracing.tracer.active_span.context, opentracing.Format.TEXT_MAP, carrier
)
# This is a bit implementation-specific.
#
# Jaeger's Spans have a trace_id property; other implementations (including the
# dummy opentracing.span.Span which we use if init_tracer is not called) do not
# expose it
trace_id = getattr(span, "trace_id", None)
if trace_id is not None:
response_headers.addRawHeader("Synapse-Trace-Id", f"{trace_id:x}")
@ensure_active_span("get the active span context as a dict", ret={})
@@ -854,6 +863,7 @@ def trace_servlet(request: "SynapseRequest", extract_context: bool = False):
scope = start_active_span(request_name)
with scope:
+ inject_response_headers(request.responseHeaders)
try:
yield
finally:

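Since `trace_servlet` now calls `inject_response_headers`, traced client responses carry the trace id. A hedged client-side sketch; the URL is illustrative, and the header is only present when the tracer's spans expose a Jaeger-style `trace_id`:

```python
import requests

resp = requests.get("https://homeserver.example/_matrix/client/versions")
# Hex-encoded trace id, handy for finding the request in Jaeger.
trace_id = resp.headers.get("Synapse-Trace-Id")
```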

@@ -16,6 +16,7 @@ import logging
from typing import TYPE_CHECKING, Any, Generator, Iterable, List, Optional, Tuple
from twisted.internet import defer
+ from twisted.web.resource import IResource
from synapse.events import EventBase
from synapse.http.client import SimpleHttpClient
@@ -56,6 +57,33 @@ class ModuleApi:
self._http_client = hs.get_simple_http_client() # type: SimpleHttpClient
self._public_room_list_manager = PublicRoomListManager(hs)
self._spam_checker = hs.get_spam_checker()
#################################################################################
# The following methods should only be called during the module's initialisation.
@property
def register_spam_checker_callbacks(self):
"""Registers callbacks for spam checking capabilities."""
return self._spam_checker.register_callbacks
def register_web_resource(self, path: str, resource: IResource):
"""Registers a web resource to be served at the given path.
This function should be called during initialisation of the module.
If multiple modules register a resource for the same path, the module that
appears the highest in the configuration file takes priority.
Args:
path: The path to register the resource for.
resource: The resource to attach to this path.
"""
self._hs.register_module_web_resource(path, resource)
#########################################################################
# The following methods can be called by the module at any point in time.
@property
def http_client(self):
"""Allows making outbound HTTP requests to remote resources.

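Together with `register_spam_checker_callbacks`, this is the generic module interface the release notes point to. A minimal sketch of a module using both hooks; the class name, path, and callback policy below are illustrative, not part of Synapse:

```python
from twisted.web.resource import Resource

class ExampleModule:
    def __init__(self, config: dict, api):
        # api is the ModuleApi instance handed to the module at startup.
        # The first module (in config order) to claim a path wins.
        api.register_web_resource("/_synapse/client/example", Resource())

        # Spam-checker callbacks are registered by keyword.
        api.register_spam_checker_callbacks(
            user_may_create_room=self.user_may_create_room,
        )

    async def user_may_create_room(self, user_id: str) -> bool:
        # Illustrative policy only.
        return not user_id.startswith("@blocked")
```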

@@ -15,3 +15,4 @@
"""Exception types which are exposed as part of the stable module API"""
from synapse.api.errors import RedirectException, SynapseError # noqa: F401
+ from synapse.config._base import ConfigError # noqa: F401

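Exposing `ConfigError` lets modules reject bad configuration through the stable API surface. A minimal sketch, assuming the usual static `parse_config` convention for modules; the key being validated is illustrative:

```python
from synapse.module_api.errors import ConfigError

class ExampleModule:
    @staticmethod
    def parse_config(config: dict) -> dict:
        if "required_key" not in config:
            raise ConfigError("required_key must be set")
        return config
```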

@@ -75,11 +75,9 @@ REQUIREMENTS = [
"phonenumbers>=8.2.0",
# we use GaugeHistogramMetric, which was added in prom-client 0.4.0.
"prometheus_client>=0.4.0",
- # we use attr.validators.deep_iterable, which arrived in 19.1.0 (Note:
- # Fedora 31 only has 19.1, so if we want to upgrade we should wait until 33
- # is out in November.)
+ # we use `order`, which arrived in attrs 19.2.0.
+ # Note: 21.1.0 broke `/sync`, see #9936
- "attrs>=19.1.0,!=21.1.0",
+ "attrs>=19.2.0,!=21.1.0",
"netaddr>=0.7.18",
"Jinja2>=2.9",
"bleach>=1.4.3",
@@ -98,11 +96,6 @@ CONDITIONAL_REQUIREMENTS = {
"psycopg2cffi>=2.8 ; platform_python_implementation == 'PyPy'",
"psycopg2cffi-compat==1.1 ; platform_python_implementation == 'PyPy'",
],
# ACME support is required to provision TLS certificates from authorities
# that use the protocol, such as Let's Encrypt.
"acme": [
"txacme>=0.9.2",
],
"saml2": [
"pysaml2>=4.5.0",
],


@@ -23,7 +23,8 @@ from prometheus_client import Counter, Gauge
from synapse.api.errors import HttpResponseException, SynapseError
from synapse.http import RequestTimedOutError
- from synapse.logging.opentracing import inject_active_span_byte_dict, trace
+ from synapse.logging import opentracing
+ from synapse.logging.opentracing import trace
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.stringutils import random_string
@@ -235,7 +236,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
# Add an authorization header, if configured.
if replication_secret:
headers[b"Authorization"] = [b"Bearer " + replication_secret]
- inject_active_span_byte_dict(headers, None, check_destination=False)
+ opentracing.inject_header_dict(headers, check_destination=False)
try:
result = await request_func(uri, data, headers=headers)
break
@@ -284,7 +285,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
self.__class__.__name__,
)
- def _check_auth_and_handle(self, request, **kwargs):
+ async def _check_auth_and_handle(self, request, **kwargs):
"""Called on new incoming requests when caching is enabled. Checks
if there is a cached response for the request and returns that,
otherwise calls `_handle_request` and caches its response.
@@ -299,8 +300,8 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
if self.CACHE:
txn_id = kwargs.pop("txn_id")
- return self.response_cache.wrap(
+ return await self.response_cache.wrap(
txn_id, self._handle_request, request, **kwargs
)
- return self._handle_request(request, **kwargs)
+ return await self._handle_request(request, **kwargs)

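Making the cached path a coroutine means the response cache is awaited directly rather than returning a `Deferred`. Roughly, `ResponseCache.wrap` deduplicates concurrent calls that share a key; a sketch from inside the endpoint, with an illustrative transaction id:

```python
# Two concurrent replication requests carrying the same txn_id share a
# single _handle_request invocation; the second await simply receives the
# result recorded by the first.
first = await self.response_cache.wrap("txn-1", self._handle_request, request)
second = await self.response_cache.wrap("txn-1", self._handle_request, request)
# first and second are the same cached result.
```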

@@ -114,7 +114,7 @@ class ReplicationRemoteKnockRestServlet(ReplicationEndpoint):
NAME = "remote_knock"
PATH_ARGS = ("room_id", "user_id")
- def __init__(self, hs):
+ def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.federation_handler = hs.get_federation_handler()
@@ -345,7 +345,7 @@ class ReplicationUserJoinedLeftRoomRestServlet(ReplicationEndpoint):
return {}
- def _handle_request( # type: ignore
+ async def _handle_request( # type: ignore
self, request: Request, room_id: str, user_id: str, change: str
) -> Tuple[int, JsonDict]:
logger.info("user membership change: %s in %s", user_id, room_id)


@@ -571,7 +571,7 @@ class ReplicationCommandHandler:
def on_REMOTE_SERVER_UP(
self, conn: IReplicationConnection, cmd: RemoteServerUpCommand
):
""""Called when get a new REMOTE_SERVER_UP command."""
"""Called when get a new REMOTE_SERVER_UP command."""
self._replication_data_handler.on_remote_server_up(cmd.data)
self._notifier.notify_remote_server_up(cmd.data)


@@ -16,10 +16,10 @@
""" This module contains REST servlets to do with rooms: /rooms/<paths> """
import logging
import re
- from typing import TYPE_CHECKING, Optional, Tuple
+ from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
from urllib import parse as urlparse
- from synapse.api.constants import EventTypes, Membership
+ from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.api.errors import (
AuthError,
Codes,
@@ -36,8 +36,8 @@ from synapse.http.servlet import (
parse_boolean,
parse_integer,
parse_json_object_from_request,
- parse_list_from_args,
parse_string,
+ parse_strings_from_args,
)
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import set_tag
@@ -266,6 +266,288 @@ class RoomSendEventRestServlet(TransactionRestServlet):
)
class RoomBatchSendEventRestServlet(TransactionRestServlet):
"""
API endpoint which can insert a chunk of events historically back in time
next to the given `prev_event`.
`chunk_id` comes from `next_chunk_id` in the response of the batch send
endpoint and is derived from the "insertion" events added to each chunk.
It's not required for the first batch send.
`state_events_at_start` is used to define the historical state events
needed to auth the events like join events. These events will float
outside of the normal DAG as outliers and won't be visible in the chat
history, which also allows us to insert multiple chunks without having a bunch
of `@mxid joined the room` noise between each chunk.
`events` is a chronological chunk/list of events you want to insert.
There is a reverse-chronological constraint on chunks so once you insert
some messages, you can only insert older ones after that.
tldr; Insert chunks from your most recent history -> oldest history.
POST /_matrix/client/unstable/org.matrix.msc2716/rooms/<roomID>/batch_send?prev_event=<eventID>&chunk_id=<chunkID>
{
"events": [ ... ],
"state_events_at_start": [ ... ]
}
"""
PATTERNS = (
re.compile(
"^/_matrix/client/unstable/org.matrix.msc2716"
"/rooms/(?P<room_id>[^/]*)/batch_send$"
),
)
def __init__(self, hs):
super().__init__(hs)
self.hs = hs
self.store = hs.get_datastore()
self.state_store = hs.get_storage().state
self.event_creation_handler = hs.get_event_creation_handler()
self.room_member_handler = hs.get_room_member_handler()
self.auth = hs.get_auth()
async def inherit_depth_from_prev_ids(self, prev_event_ids) -> int:
(
most_recent_prev_event_id,
most_recent_prev_event_depth,
) = await self.store.get_max_depth_of(prev_event_ids)
# We want to insert the historical event after the `prev_event` but before the successor event
#
# We inherit depth from the successor event instead of the `prev_event`
# because events returned from `/messages` are first sorted by `topological_ordering`
# which is just the `depth` and then tie-break with `stream_ordering`.
#
# We mark these inserted historical events as "backfilled" which gives them a
# negative `stream_ordering`. If we use the same depth as the `prev_event`,
# then our historical event will tie-break and be sorted before the `prev_event`
# when it should come after.
#
# We want to use the successor event depth so they appear after `prev_event` because
# it has a larger `depth` but before the successor event because the `stream_ordering`
# is negative before the successor event.
successor_event_ids = await self.store.get_successor_events(
[most_recent_prev_event_id]
)
# If we can't find any successor events, then it's a forward extremity of
# historical messages and we can just inherit from the previous historical
# event which we can already assume has the correct depth where we want
# to insert into.
if not successor_event_ids:
depth = most_recent_prev_event_depth
else:
(
_,
oldest_successor_depth,
) = await self.store.get_min_depth_of(successor_event_ids)
depth = oldest_successor_depth
return depth
async def on_POST(self, request, room_id):
requester = await self.auth.get_user_by_req(request, allow_guest=False)
if not requester.app_service:
raise AuthError(
403,
"Only application services can use the /batchsend endpoint",
)
body = parse_json_object_from_request(request)
assert_params_in_dict(body, ["state_events_at_start", "events"])
prev_events_from_query = parse_strings_from_args(request.args, "prev_event")
chunk_id_from_query = parse_string(request, "chunk_id", default=None)
if prev_events_from_query is None:
raise SynapseError(
400,
"prev_event query parameter is required when inserting historical messages back in time",
errcode=Codes.MISSING_PARAM,
)
# For the event we are inserting next to (`prev_events_from_query`),
# find the most recent auth events (derived from state events) that
# allowed that message to be sent. We will use that as a base
# to auth our historical messages against.
(
most_recent_prev_event_id,
_,
) = await self.store.get_max_depth_of(prev_events_from_query)
# mapping from (type, state_key) -> state_event_id
prev_state_map = await self.state_store.get_state_ids_for_event(
most_recent_prev_event_id
)
# List of state event IDs
prev_state_ids = list(prev_state_map.values())
auth_event_ids = prev_state_ids
for state_event in body["state_events_at_start"]:
assert_params_in_dict(
state_event, ["type", "origin_server_ts", "content", "sender"]
)
logger.debug(
"RoomBatchSendEventRestServlet inserting state_event=%s, auth_event_ids=%s",
state_event,
auth_event_ids,
)
event_dict = {
"type": state_event["type"],
"origin_server_ts": state_event["origin_server_ts"],
"content": state_event["content"],
"room_id": room_id,
"sender": state_event["sender"],
"state_key": state_event["state_key"],
}
# Make the state events float off on their own
fake_prev_event_id = "$" + random_string(43)
# TODO: This is pretty much the same as some other code to handle inserting state in this file
if event_dict["type"] == EventTypes.Member:
membership = event_dict["content"].get("membership", None)
event_id, _ = await self.room_member_handler.update_membership(
requester,
target=UserID.from_string(event_dict["state_key"]),
room_id=room_id,
action=membership,
content=event_dict["content"],
outlier=True,
prev_event_ids=[fake_prev_event_id],
# Make sure to use a copy of this list because we modify it
# later in the loop here. Otherwise it will be the same
# reference and also update in the event when we append later.
auth_event_ids=auth_event_ids.copy(),
)
else:
# TODO: Add some complement tests that adds state that is not member joins
# and will use this code path. Maybe we only want to support join state events
# and can get rid of this `else`?
(
event,
_,
) = await self.event_creation_handler.create_and_send_nonmember_event(
requester,
event_dict,
outlier=True,
prev_event_ids=[fake_prev_event_id],
# Make sure to use a copy of this list because we modify it
# later in the loop here. Otherwise it will be the same
# reference and also update in the event when we append later.
auth_event_ids=auth_event_ids.copy(),
)
event_id = event.event_id
auth_event_ids.append(event_id)
events_to_create = body["events"]
# If provided, connect the chunk to the last insertion point
# The chunk ID passed in comes from the chunk_id in the
# "insertion" event from the previous chunk.
if chunk_id_from_query:
last_event_in_chunk = events_to_create[-1]
last_event_in_chunk["content"][
EventContentFields.MSC2716_CHUNK_ID
] = chunk_id_from_query
# Add an "insertion" event to the start of each chunk (next to the oldest
# event in the chunk) so the next chunk can be connected to this one.
next_chunk_id = random_string(64)
insertion_event = {
"type": EventTypes.MSC2716_INSERTION,
"sender": requester.user.to_string(),
"content": {
EventContentFields.MSC2716_NEXT_CHUNK_ID: next_chunk_id,
EventContentFields.MSC2716_HISTORICAL: True,
},
# Since the insertion event is put at the start of the chunk,
# where the oldest event is, copy the origin_server_ts from
# the first event we're inserting
"origin_server_ts": events_to_create[0]["origin_server_ts"],
}
# Prepend the insertion event to the start of the chunk
events_to_create = [insertion_event] + events_to_create
inherited_depth = await self.inherit_depth_from_prev_ids(prev_events_from_query)
event_ids = []
prev_event_ids = prev_events_from_query
events_to_persist = []
for ev in events_to_create:
assert_params_in_dict(ev, ["type", "origin_server_ts", "content", "sender"])
# Mark all events as historical
# This has important semantics within the Synapse internals to backfill properly
ev["content"][EventContentFields.MSC2716_HISTORICAL] = True
event_dict = {
"type": ev["type"],
"origin_server_ts": ev["origin_server_ts"],
"content": ev["content"],
"room_id": room_id,
"sender": ev["sender"], # requester.user.to_string(),
"prev_events": prev_event_ids.copy(),
}
event, context = await self.event_creation_handler.create_event(
requester,
event_dict,
prev_event_ids=event_dict.get("prev_events"),
auth_event_ids=auth_event_ids,
historical=True,
depth=inherited_depth,
)
logger.debug(
"RoomBatchSendEventRestServlet inserting event=%s, prev_event_ids=%s, auth_event_ids=%s",
event,
prev_event_ids,
auth_event_ids,
)
assert self.hs.is_mine_id(event.sender), "User must be our own: %s" % (
event.sender,
)
events_to_persist.append((event, context))
event_id = event.event_id
event_ids.append(event_id)
prev_event_ids = [event_id]
# Persist events in reverse-chronological order so they have the
# correct stream_ordering as they are backfilled (which decrements).
# Events are sorted by (topological_ordering, stream_ordering)
# where topological_ordering is just depth.
for (event, context) in reversed(events_to_persist):
ev = await self.event_creation_handler.handle_new_client_event(
requester=requester,
event=event,
context=context,
)
return 200, {
"state_events": auth_event_ids,
"events": event_ids,
"next_chunk_id": next_chunk_id,
}
def on_GET(self, request, room_id):
return 501, "Not implemented"
def on_PUT(self, request, room_id):
return self.txns.fetch_or_execute_request(
request, self.on_POST, request, room_id
)
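To make the chunk-chaining described in the class docstring concrete, here is a hypothetical application-service flow; `client.post_json`, the room id, the event ids, and the chunk bodies are all made up for illustration:

```python
path = f"/_matrix/client/unstable/org.matrix.msc2716/rooms/{room_id}/batch_send"

# 1. Insert the most recent chunk of history first; no chunk_id yet.
resp = await client.post_json(f"{path}?prev_event={prev_event_id}", chunk_1)

# 2. Insert the next-older chunk, connecting it via the returned next_chunk_id.
resp = await client.post_json(
    f"{path}?prev_event={prev_event_id}&chunk_id={resp['next_chunk_id']}",
    chunk_2,
)
```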
# TODO: Needs unit testing for room ID + alias joins
class JoinRoomAliasServlet(TransactionRestServlet):
def __init__(self, hs):
@@ -278,7 +560,12 @@ class JoinRoomAliasServlet(TransactionRestServlet):
PATTERNS = "/join/(?P<room_identifier>[^/]*)"
register_txn_path(self, PATTERNS, http_server)
- async def on_POST(self, request, room_identifier, txn_id=None):
+ async def on_POST(
+ self,
+ request: SynapseRequest,
+ room_identifier: str,
+ txn_id: Optional[str] = None,
+ ):
requester = await self.auth.get_user_by_req(request, allow_guest=True)
try:
@@ -290,15 +577,18 @@ class JoinRoomAliasServlet(TransactionRestServlet):
if RoomID.is_valid(room_identifier):
room_id = room_identifier
- try:
- remote_room_hosts = parse_list_from_args(request.args, "server_name")
- except KeyError:
- remote_room_hosts = None
+ # twisted.web.server.Request.args is incorrectly defined as Optional[Any]
+ args: Dict[bytes, List[bytes]] = request.args # type: ignore
+ remote_room_hosts = parse_strings_from_args(
+ args, "server_name", required=False
+ )
elif RoomAlias.is_valid(room_identifier):
handler = self.room_member_handler
room_alias = RoomAlias.from_string(room_identifier)
- room_id, remote_room_hosts = await handler.lookup_room_alias(room_alias)
- room_id = room_id.to_string()
+ room_id_obj, remote_room_hosts = await handler.lookup_room_alias(room_alias)
+ room_id = room_id_obj.to_string()
else:
raise SynapseError(
400, "%s was not legal room ID or room alias" % (room_identifier,)
@@ -1047,6 +1337,8 @@ class RoomSpaceSummaryRestServlet(RestServlet):
def register_servlets(hs: "HomeServer", http_server, is_worker=False):
msc2716_enabled = hs.config.experimental.msc2716_enabled
RoomStateEventRestServlet(hs).register(http_server)
RoomMemberListRestServlet(hs).register(http_server)
JoinedRoomMemberListRestServlet(hs).register(http_server)
@@ -1054,6 +1346,8 @@ def register_servlets(hs: "HomeServer", http_server, is_worker=False):
JoinRoomAliasServlet(hs).register(http_server)
RoomMembershipRestServlet(hs).register(http_server)
RoomSendEventRestServlet(hs).register(http_server)
if msc2716_enabled:
RoomBatchSendEventRestServlet(hs).register(http_server)
PublicRoomListRestServlet(hs).register(http_server)
RoomStateRestServlet(hs).register(http_server)
RoomRedactEventRestServlet(hs).register(http_server)


@@ -86,6 +86,9 @@ class DeleteDevicesRestServlet(RestServlet):
request,
body,
"remove device(s) from your account",
# Users might call this multiple times in a row while cleaning up
# devices, allow a single UI auth session to be re-used.
can_skip_ui_auth=True,
)
await self.device_handler.delete_devices(
@@ -135,6 +138,9 @@ class DeviceRestServlet(RestServlet):
request,
body,
"remove a device from your account",
# Users might call this multiple times in a row while cleaning up
# devices, allow a single UI auth session to be re-used.
can_skip_ui_auth=True,
)
await self.device_handler.delete_device(requester.user.to_string(), device_id)


@@ -160,9 +160,12 @@ class KeyQueryServlet(RestServlet):
async def on_POST(self, request):
requester = await self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
device_id = requester.device_id
timeout = parse_integer(request, "timeout", 10 * 1000)
body = parse_json_object_from_request(request)
- result = await self.e2e_keys_handler.query_devices(body, timeout, user_id)
+ result = await self.e2e_keys_handler.query_devices(
+ body, timeout, user_id, device_id
+ )
return 200, result
@@ -274,6 +277,9 @@ class SigningKeyUploadServlet(RestServlet):
request,
body,
"add a device signing key to your account",
# Allow skipping of UI auth since this is frequently called directly
# after login and it is silly to ask users to re-auth immediately.
can_skip_ui_auth=True,
)
result = await self.e2e_keys_handler.upload_signing_keys_for_user(user_id, body)


@@ -1,4 +1,3 @@
- # -*- coding: utf-8 -*-
# Copyright 2020 Sorunome
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
@@ -14,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
- from typing import TYPE_CHECKING, Optional, Tuple
+ from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
from twisted.web.server import Request
@@ -23,7 +22,7 @@ from synapse.api.errors import SynapseError
from synapse.http.servlet import (
RestServlet,
parse_json_object_from_request,
- parse_list_from_args,
+ parse_strings_from_args,
)
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import set_tag
@@ -40,12 +39,10 @@ logger = logging.getLogger(__name__)
class KnockRoomAliasServlet(RestServlet):
"""
- POST /xyz.amorgan.knock/{roomIdOrAlias}
+ POST /knock/{roomIdOrAlias}
"""
- PATTERNS = client_patterns(
- "/xyz.amorgan.knock/(?P<room_identifier>[^/]*)", releases=()
- )
+ PATTERNS = client_patterns("/knock/(?P<room_identifier>[^/]*)")
def __init__(self, hs: "HomeServer"):
super().__init__()
@@ -68,10 +65,13 @@ class KnockRoomAliasServlet(RestServlet):
if RoomID.is_valid(room_identifier):
room_id = room_identifier
- try:
- remote_room_hosts = parse_list_from_args(request.args, "server_name") # type: ignore
- except KeyError:
- remote_room_hosts = None
+ # twisted.web.server.Request.args is incorrectly defined as Optional[Any]
+ args: Dict[bytes, List[bytes]] = request.args # type: ignore
+ remote_room_hosts = parse_strings_from_args(
+ args, "server_name", required=False
+ )
elif RoomAlias.is_valid(room_identifier):
handler = self.room_member_handler
room_alias = RoomAlias.from_string(room_identifier)

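With the `xyz.amorgan` prefix dropped, clients knock via the standard path. A hedged sketch of the call; the server URL, room alias, and access token are illustrative:

```python
import requests
from urllib.parse import quote

resp = requests.post(
    "https://hs.example/_matrix/client/r0/knock/" + quote("#lobby:hs.example"),
    params={"server_name": "hs.example"},
    headers={"Authorization": "Bearer <access_token>"},
    json={"reason": "Hi, may I join?"},  # optional reason per MSC2403
)
room_id = resp.json().get("room_id")
```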

@@ -85,7 +85,7 @@ class IdTokenServlet(RestServlet):
"access_token": token,
"token_type": "Bearer",
"matrix_server_name": self.server_name,
"expires_in": self.EXPIRES_MS / 1000,
"expires_in": self.EXPIRES_MS // 1000,
},
)

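The fix swaps true division for floor division so `expires_in` serializes as a JSON integer rather than a float, e.g. with a one-hour expiry:

```python
EXPIRES_MS = 3600 * 1000
EXPIRES_MS / 1000   # 3600.0 -> rendered as a float in the JSON response
EXPIRES_MS // 1000  # 3600   -> rendered as an integer
```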

@@ -1,6 +1,4 @@
- # Copyright 2014-2016 OpenMarket Ltd
- # Copyright 2017-2018 New Vector Ltd
- # Copyright 2019 The Matrix.org Foundation C.I.C.
+ # Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -39,6 +37,7 @@ import twisted.internet.tcp
from twisted.internet import defer
from twisted.mail.smtp import sendmail
from twisted.web.iweb import IPolicyForHTTPS
from twisted.web.resource import IResource
from synapse.api.auth import Auth
from synapse.api.filtering import Filtering
@@ -66,7 +65,6 @@ from synapse.groups.attestations import GroupAttestationSigning, GroupAttestionRenewer
from synapse.groups.groups_server import GroupsServerHandler, GroupsServerWorkerHandler
from synapse.handlers.account_data import AccountDataHandler
from synapse.handlers.account_validity import AccountValidityHandler
- from synapse.handlers.acme import AcmeHandler
from synapse.handlers.admin import AdminHandler
from synapse.handlers.appservice import ApplicationServicesHandler
from synapse.handlers.auth import AuthHandler, MacaroonGenerator
@@ -259,6 +257,38 @@ class HomeServer(metaclass=abc.ABCMeta):
self.datastores = None # type: Optional[Databases]
self._module_web_resources: Dict[str, IResource] = {}
self._module_web_resources_consumed = False
def register_module_web_resource(self, path: str, resource: IResource):
"""Allows a module to register a web resource to be served at the given path.
If multiple modules register a resource for the same path, the module that
appears the highest in the configuration file takes priority.
Args:
path: The path to register the resource for.
resource: The resource to attach to this path.
Raises:
SynapseError(500): A module tried to register a web resource after the HTTP
listeners have been started.
"""
if self._module_web_resources_consumed:
raise RuntimeError(
"Tried to register a web resource from a module after startup",
)
# Don't register a resource that's already been registered.
if path not in self._module_web_resources.keys():
self._module_web_resources[path] = resource
else:
logger.warning(
"Module tried to register a web resource for path %s but another module"
" has already registered a resource for this path.",
path,
)
def get_instance_id(self) -> str:
"""A unique ID for this synapse process instance.
@@ -494,10 +524,6 @@ class HomeServer(metaclass=abc.ABCMeta):
def get_e2e_room_keys_handler(self) -> E2eRoomKeysHandler:
return E2eRoomKeysHandler(self)
- @cache_in_self
- def get_acme_handler(self) -> AcmeHandler:
- return AcmeHandler(self)
@cache_in_self
def get_admin_handler(self) -> AdminHandler:
return AdminHandler(self)
@@ -651,7 +677,7 @@ class HomeServer(metaclass=abc.ABCMeta):
@cache_in_self
def get_spam_checker(self) -> SpamChecker:
- return SpamChecker(self)
+ return SpamChecker()
@cache_in_self
def get_third_party_event_rules(self) -> ThirdPartyEventRules:

Some files were not shown because too many files have changed in this diff.