Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes

Olivier Wilkinson (reivilibre) 2021-08-03 10:34:44 +01:00
commit 11dda97e86
169 changed files with 3655 additions and 877 deletions


@ -12,6 +12,10 @@ on:
# we do the full build on tags.
tags: ["v*"]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: write
@ -44,12 +48,43 @@ jobs:
distro: ${{ fromJson(needs.get-distros.outputs.distros) }}
steps:
- uses: actions/checkout@v2
- name: Checkout
uses: actions/checkout@v2
with:
path: src
- uses: actions/setup-python@v2
- run: ./src/scripts-dev/build_debian_packages "${{ matrix.distro }}"
- uses: actions/upload-artifact@v2
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
install: true
- name: Set up docker layer caching
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Set up python
uses: actions/setup-python@v2
- name: Build the packages
# see https://github.com/docker/build-push-action/issues/252
# for the cache magic here
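# (the cache is written to a fresh directory and swapped in afterwards,
# so the layer cache does not grow without bound between runs)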
run: |
./src/scripts-dev/build_debian_packages \
--docker-build-arg=--cache-from=type=local,src=/tmp/.buildx-cache \
--docker-build-arg=--cache-to=type=local,mode=max,dest=/tmp/.buildx-cache-new \
--docker-build-arg=--progress=plain \
--docker-build-arg=--load \
"${{ matrix.distro }}"
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
- name: Upload debs as artifacts
uses: actions/upload-artifact@v2
with:
name: debs
path: debs/*


@ -5,6 +5,10 @@ on:
branches: ["develop", "release-*"]
pull_request:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
lint:
runs-on: ubuntu-latest
@ -340,14 +344,19 @@ jobs:
working-directory: complement/dockerfiles
# Run Complement
- run: go test -v -tags synapse_blacklist,msc2403,msc2946,msc3083 ./tests
- run: go test -v -tags synapse_blacklist,msc2403,msc2946,msc3083 ./tests/...
env:
COMPLEMENT_BASE_IMAGE: complement-synapse:latest
working-directory: complement
# a job which marks all the other jobs as complete, thus allowing PRs to be merged.
tests-done:
if: ${{ always() }}
needs:
- lint
- lint-crlf
- lint-newsfile
- lint-sdist
- trial
- trial-olddeps
- sytest
@ -355,4 +364,19 @@ jobs:
- complement
runs-on: ubuntu-latest
steps:
- run: "true"
- name: Set build result
env:
NEEDS_CONTEXT: ${{ toJSON(needs) }}
# the `jq` incantation dumps out a series of "<job> <result>" lines.
# we set it to an intermediate variable to avoid a pipe, which makes it
# hard to set $rc.
run: |
rc=0
results=$(jq -r 'to_entries[] | [.key,.value.result] | join(" ")' <<< $NEEDS_CONTEXT)
while read job result ; do
if [ "$result" != "success" ]; then
echo "::set-failed ::Job $job returned $result"
rc=1
fi
done <<< $results
exit $rc
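For illustration, here is what that `jq` filter emits for a hypothetical `needs` context (job names invented for the example):

```sh
$ NEEDS_CONTEXT='{"lint": {"result": "success"}, "trial": {"result": "failure"}}'
$ jq -r 'to_entries[] | [.key,.value.result] | join(" ")' <<< "$NEEDS_CONTEXT"
lint success
trial failure
```

The `while read job result` loop above then fails the build for the `trial failure` line.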


@ -1,10 +1,31 @@
Synapse 1.39.0rc2 (2021-07-22)
Synapse 1.39.0 (2021-07-29)
===========================
No significant changes.
Synapse 1.39.0rc3 (2021-07-28)
==============================
Bugfixes
--------
- Always include `device_one_time_keys_count` key in `/sync` response to work around a bug in Element Android that broke encryption for new devices. ([\#10457](https://github.com/matrix-org/synapse/issues/10457))
- Fix a bug introduced in Synapse 1.38 which caused an exception at startup when SAML authentication was enabled. ([\#10477](https://github.com/matrix-org/synapse/issues/10477))
- Fix a long-standing bug where Synapse would not inform clients that a device had exhausted its one-time-key pool, potentially causing problems decrypting events. ([\#10485](https://github.com/matrix-org/synapse/issues/10485))
- Fix reporting old R30 stats as R30v2 stats. Introduced in v1.39.0rc1. ([\#10486](https://github.com/matrix-org/synapse/issues/10486))
Internal Changes
----------------
- Fix an error which prevented the Github Actions workflow that builds the docker images from running. ([\#10461](https://github.com/matrix-org/synapse/issues/10461))
- Fix release script to correctly version debian changelog when doing RCs. ([\#10465](https://github.com/matrix-org/synapse/issues/10465))
Synapse 1.39.0rc2 (2021-07-22)
==============================
This release also includes the changes in v1.38.1.
Internal Changes


@ -155,7 +155,7 @@ source ./env/bin/activate
./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
```
## Run the unit tests.
## Run the unit tests (Twisted trial).
The unit tests run parts of Synapse, including your changes, to see if anything
was broken. They are slower than the linters but will typically catch more errors.
@ -186,7 +186,7 @@ SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests
```
## Run the integration tests.
## Run the integration tests ([Sytest](https://github.com/matrix-org/sytest)).
The integration tests are a more comprehensive suite of tests. They
run a full version of Synapse, including your changes, to check if
@ -203,6 +203,43 @@ $ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v
This configuration should generally cover your needs. For more details about other configurations, see [documentation in the SyTest repo](https://github.com/matrix-org/sytest/blob/develop/docker/README.md).
## Run the integration tests ([Complement](https://github.com/matrix-org/complement)).
[Complement](https://github.com/matrix-org/complement) is a suite of black box tests that can be run on any homeserver implementation. It can also be thought of as end-to-end (e2e) tests.
It's often nice to develop on Synapse and write Complement tests at the same time.
Here is how to run your local Synapse checkout against your local Complement checkout.
(check out [`complement`](https://github.com/matrix-org/complement) alongside your `synapse` checkout)
```sh
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh
```
To run a specific test file, you can pass the test name at the end of the command. The name passed comes from the naming structure in your Complement tests. If you're unsure of the name, you can do a full run and copy it from the test output:
```sh
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh TestBackfillingHistory
```
To run a specific test, you can specify the whole name structure:
```sh
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh TestBackfillingHistory/parallel/Backfilled_historical_events_resolve_with_proper_state_in_correct_order
```
### Access database for homeserver after Complement test runs.
If you're curious what the database looks like after you run some tests, here are some steps to get you going in Synapse (a condensed version of the commands is shown after the list):
1. In your Complement test comment out `defer deployment.Destroy(t)` and replace with `defer time.Sleep(2 * time.Hour)` to keep the homeserver running after the tests complete
1. Start the Complement tests
1. Find the name of the container, `docker ps -f name=complement_` (this will filter for just the Complement related Docker containers)
1. Access the container replacing the name with what you found in the previous step: `docker exec -it complement_1_hs_with_application_service.hs1_2 /bin/bash`
1. Install sqlite (database driver), `apt-get update && apt-get install -y sqlite3`
1. Then run `sqlite3` and open the database `.open /conf/homeserver.db` (this db path comes from the Synapse homeserver.yaml)
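Condensed into one shell session, the steps above look roughly like this (the container name is an example; use whatever `docker ps` printed in step 3):

```sh
docker ps -f name=complement_                    # step 3: find the container
docker exec -it complement_1_hs_with_application_service.hs1_2 /bin/bash
apt-get update && apt-get install -y sqlite3     # step 5: install the client
sqlite3 /conf/homeserver.db                      # step 6: open the database
```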
# 9. Submit your patch.
Once you're happy with your patch, it's time to prepare a Pull Request.
@ -392,7 +429,7 @@ By now, you know the drill!
# Notes for maintainers on merging PRs etc
There are some notes for those with commit access to the project on how we
manage git [here](docs/dev/git.md).
manage git [here](docs/development/git.md).
# Conclusion


@ -0,0 +1 @@
Make historical events discoverable from backfill for servers without any scrollback history (part of MSC2716).


@ -0,0 +1 @@
Update support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083) to consider changes in the MSC around which servers can issue join events.


@ -0,0 +1 @@
Initial support for MSC3244, Room version capabilities over the /capabilities API.

changelog.d/10390.misc

@ -0,0 +1 @@
Prune inbound federation queues for a room if they get too large.


@ -0,0 +1 @@
Add a buffered logging handler which periodically flushes itself.

changelog.d/10408.misc

@ -0,0 +1 @@
Add type hints to `synapse.federation.transport.client` module.

changelog.d/10410.bugfix

@ -0,0 +1 @@
Improve character set detection in URL previews by supporting underscores (in addition to hyphens). Contributed by @srividyut.


@ -0,0 +1 @@
Add support for https connections to a proxy server. Contributed by @Bubu and @dklimpel.


@ -0,0 +1 @@
Support for [MSC2285 (hidden read receipts)](https://github.com/matrix-org/matrix-doc/pull/2285). Contributed by @SimonBrandner.

changelog.d/10415.misc

@ -0,0 +1 @@
Remove shebang line from module files.


@ -0,0 +1 @@
Email notifications now state whether an invitation is to a room or a space.

changelog.d/10429.misc

@ -0,0 +1 @@
Drop backwards-compatibility code that was required to support Ubuntu Xenial.

changelog.d/10431.misc

@ -0,0 +1 @@
Use a docker image cache for the prerequisites for the debian package build.

changelog.d/10432.misc

@ -0,0 +1 @@
Connect historical chunks together with chunk events instead of a content field (MSC2716).

changelog.d/10437.misc

@ -0,0 +1 @@
Improve servlet type hints.

changelog.d/10438.misc

@ -0,0 +1 @@
Improve servlet type hints.

changelog.d/10439.bugfix

@ -0,0 +1 @@
Fix events with floating outlier state being rejected over federation.


@ -0,0 +1 @@
Allow setting transaction limit for database connections.

changelog.d/10442.misc

@ -0,0 +1 @@
Replace usage of `or_ignore` in `simple_insert` with `simple_upsert` usage, to stop spamming postgres logs with spurious ERROR messages.

changelog.d/10444.misc

@ -0,0 +1 @@
Update the `tests-done` Github Actions status.

changelog.d/10445.doc

@ -0,0 +1 @@
Fix hierarchy of providers on the OpenID page.

changelog.d/10446.misc

@ -0,0 +1 @@
Update type annotations to work with forthcoming Twisted 21.7.0 release.


@ -0,0 +1 @@
Update support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083) to consider changes in the MSC around which servers can issue join events.


@ -0,0 +1 @@
Add `creation_ts` to list users admin API.

changelog.d/10450.misc

@ -0,0 +1 @@
Update type annotations to work with forthcoming Twisted 21.7.0 release.

changelog.d/10451.misc

@ -0,0 +1 @@
Cancel redundant GHA workflows when a new commit is pushed.

changelog.d/10453.doc

@ -0,0 +1 @@
Consolidate development documentation to `docs/development/`.

changelog.d/10455.bugfix

@ -0,0 +1 @@
Fix `synapse_federation_server_oldest_inbound_pdu_in_staging` Prometheus metric to not report a max age of 51 years when the queue is empty.


@ -1 +0,0 @@
Fix an error which prevented the Github Actions workflow to build the docker images from running.

changelog.d/10463.misc

@ -0,0 +1 @@
Disable `msc2716` Complement tests until Complement updates are merged.


@ -1 +0,0 @@
Fix release script to correctly version debian changelog when doing RCs.

changelog.d/10468.misc

@ -0,0 +1 @@
Mitigate media repo XSS attacks on IE11 via the non-standard X-Content-Security-Policy header.


@ -1 +0,0 @@
Fix bug introduced in Synapse 1.38 which caused an exception at startup when SAML authentication was enabled.

changelog.d/10482.misc

@ -0,0 +1 @@
Additional type hints in the state handler.

changelog.d/10483.doc

@ -0,0 +1 @@
Document how to use Complement while developing a new Synapse feature.


@ -1 +0,0 @@
Fix a long-standing bug where Synapse would not inform clients that a device had exhausted its one-time-key pool, potentially causing problems decrypting events.


@ -1 +0,0 @@
Fix reporting old R30 stats as R30v2 stats.

changelog.d/10488.misc

@ -0,0 +1 @@
Update syntax used to run complement tests.


@ -0,0 +1 @@
Update support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083) to consider changes in the MSC around which servers can issue join events.

changelog.d/10490.misc

@ -0,0 +1 @@
Fix up type annotations to work with Twisted 21.7.

changelog.d/10491.misc

@ -0,0 +1 @@
Improve type annotations for `ObservableDeferred`.

changelog.d/10499.bugfix

@ -0,0 +1 @@
Fix a bug which caused an explicit assignment of power-level 0 to a user to be misinterpreted in rare circumstances.

changelog.d/10500.misc

@ -0,0 +1 @@
Fix a bug which caused production debian packages to be incorrectly marked as 'prerelease'.


@ -0,0 +1 @@
Allow setting transaction limit for database connections.

changelog.d/10512.misc

@ -0,0 +1 @@
Update the `tests-done` Github Actions status.

changelog.d/9918.feature

@ -0,0 +1 @@
Add support for [MSC2033](https://github.com/matrix-org/matrix-doc/pull/2033): `device_id` on `/account/whoami`.


@ -33,13 +33,11 @@ esac
# Use --builtin-venv to use the better `venv` module from CPython 3.4+ rather
# than the 2/3 compatible `virtualenv`.
# Pin pip to 20.3.4 to fix breakage in 21.0 on py3.5 (xenial)
dh_virtualenv \
--install-suffix "matrix-synapse" \
--builtin-venv \
--python "$SNAKE" \
--upgrade-pip-to="20.3.4" \
--upgrade-pip \
--preinstall="lxml" \
--preinstall="mock" \
--extra-pip-arg="--no-cache-dir" \

debian/changelog

@ -1,3 +1,21 @@
matrix-synapse-py3 (1.39.0ubuntu1) UNRELEASED; urgency=medium
* Drop backwards-compatibility code that was required to support Ubuntu Xenial.
-- Richard van der Hoff <richard@matrix.org> Tue, 20 Jul 2021 00:10:03 +0100
matrix-synapse-py3 (1.39.0) stable; urgency=medium
* New synapse release 1.39.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 29 Jul 2021 09:59:00 +0100
matrix-synapse-py3 (1.39.0~rc3) stable; urgency=medium
* New synapse release 1.39.0~rc3.
-- Synapse Packaging team <packages@matrix.org> Wed, 28 Jul 2021 13:30:58 +0100
matrix-synapse-py3 (1.38.1) stable; urgency=medium
* New synapse release 1.38.1.

debian/compat

@ -1 +1 @@
9
10

debian/control

@ -3,11 +3,8 @@ Section: contrib/python
Priority: extra
Maintainer: Synapse Packaging team <packages@matrix.org>
# keep this list in sync with the build dependencies in docker/Dockerfile-dhvirtualenv.
# TODO: Remove the dependency on dh-systemd after dropping support for Ubuntu xenial
# On all other supported releases, it's merely a transitional package which
# does nothing but depends on debhelper (> 9.20160709)
Build-Depends:
debhelper (>= 9.20160709) | dh-systemd,
debhelper (>= 10),
dh-virtualenv (>= 1.1),
libsystemd-dev,
libpq-dev,

debian/rules

@ -51,7 +51,5 @@ override_dh_shlibdeps:
override_dh_virtualenv:
./debian/build_virtualenv
# We are restricted to compat level 9 (because xenial), so have to
# enable the systemd bits manually.
%:
dh $@ --with python-virtualenv --with systemd
dh $@ --with python-virtualenv


@ -15,6 +15,15 @@ ARG distro=""
###
### Stage 0: build a dh-virtualenv
###
# This is only really needed on bionic and focal, since other distributions we
# care about have a recent version of dh-virtualenv by default. Unfortunately,
# it looks like focal is going to be with us for a while.
#
# (focal doesn't have a dh-virtualenv package at all. There is a PPA at
# https://launchpad.net/~jyrki-pulliainen/+archive/ubuntu/dh-virtualenv, but
# it's not obviously easier to use that than to build our own.)
FROM ${distro} as builder
RUN apt-get update -qq -o Acquire::Languages=none
@ -27,7 +36,7 @@ RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
wget
# fetch and unpack the package
# TODO: Upgrade to 1.2.2 once xenial is dropped
# TODO: Upgrade to 1.2.2 once bionic is dropped (1.2.2 requires debhelper 12; bionic has only 11)
RUN mkdir /dh-virtualenv
RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/spotify/dh-virtualenv/archive/ac6e1b1.tar.gz
RUN tar -xv --strip-components=1 -C /dh-virtualenv -f /dh-virtualenv.tar.gz
@ -59,8 +68,6 @@ ENV LANG C.UTF-8
#
# NB: keep this list in sync with the list of build-deps in debian/control
# TODO: it would be nice to do that automatically.
# TODO: Remove the dh-systemd stanza after dropping support for Ubuntu xenial
# it's a transitional package on all other, more recent releases
RUN apt-get update -qq -o Acquire::Languages=none \
&& env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \
@ -76,10 +83,7 @@ RUN apt-get update -qq -o Acquire::Languages=none \
python3-venv \
sqlite3 \
libpq-dev \
xmlsec1 \
&& ( env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \
dh-systemd || true )
xmlsec1
COPY --from=builder /dh-virtualenv_1.2~dev-1_all.deb /


@ -11,10 +11,6 @@ DIST=`cut -d ':' -f2 <<< $distro`
cp -aT /synapse/source /synapse/build
cd /synapse/build
# add an entry to the changelog for this distribution
dch -M -l "+$DIST" "build for $DIST"
dch -M -r "" --force-distribution --distribution "$DIST"
# if this is a prerelease, set the Section accordingly.
#
# When the package is later added to the package repo, reprepro will use the
@ -23,11 +19,14 @@ dch -M -r "" --force-distribution --distribution "$DIST"
DEB_VERSION=`dpkg-parsechangelog -SVersion`
case $DEB_VERSION in
*rc*|*a*|*b*|*c*)
*~rc*|*~a*|*~b*|*~c*)
sed -ie '/^Section:/c\Section: prerelease' debian/control
;;
esac
# add an entry to the changelog for this distribution
dch -M -l "+$DIST" "build for $DIST"
dch -M -r "" --force-distribution --distribution "$DIST"
dpkg-buildpackage -us -uc
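To see why the `~`-anchored patterns matter (and why the check now runs before `dch -l` appends the distribution suffix), consider a hypothetical stable version string under the old ordering; this is the "production debian packages incorrectly marked as 'prerelease'" bug from the changelog:

```sh
DEB_VERSION="1.39.0+buster1"                  # stable build, suffix added by dch -l
case $DEB_VERSION in
    *rc*|*a*|*b*|*c*) echo "prerelease" ;;    # old patterns: "*b*" matches "buster"
    *) echo "stable" ;;
esac
case $DEB_VERSION in
    *~rc*|*~a*|*~b*|*~c*) echo "prerelease" ;;  # new patterns: only ~rc3 etc. match
    *) echo "stable" ;;
esac
```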


@ -67,7 +67,7 @@
# Development
- [Contributing Guide](development/contributing_guide.md)
- [Code Style](code_style.md)
- [Git Usage](dev/git.md)
- [Git Usage](development/git.md)
- [Testing]()
- [OpenTracing](opentracing.md)
- [Database Schemas](development/database_schema.md)
@ -77,8 +77,8 @@
- [TCP Replication](tcp_replication.md)
- [Internal Documentation](development/internal_documentation/README.md)
- [Single Sign-On]()
- [SAML](dev/saml.md)
- [CAS](dev/cas.md)
- [SAML](development/saml.md)
- [CAS](development/cas.md)
- [State Resolution]()
- [The Auth Chain Difference Algorithm](auth_chain_difference_algorithm.md)
- [Media Repository](media_repository.md)


@ -144,7 +144,8 @@ A response body like the following is returned:
"deactivated": 0,
"shadow_banned": 0,
"displayname": "<User One>",
"avatar_url": null
"avatar_url": null,
"creation_ts": 1560432668000
}, {
"name": "<user_id2>",
"is_guest": 0,
@ -153,7 +154,8 @@ A response body like the following is returned:
"deactivated": 0,
"shadow_banned": 0,
"displayname": "<User Two>",
"avatar_url": "<avatar_url>"
"avatar_url": "<avatar_url>",
"creation_ts": 1561550621000
}
],
"next_token": "100",
@ -197,11 +199,12 @@ The following parameters should be set in the URL:
- `shadow_banned` - Users are ordered by `shadow_banned` status.
- `displayname` - Users are ordered alphabetically by `displayname`.
- `avatar_url` - Users are ordered alphabetically by avatar URL.
- `creation_ts` - Users are ordered by when the user was created, in ms.
- `dir` - Direction of user order. Either `f` for forwards or `b` for backwards.
Setting this value to `b` will reverse the above sort order. Defaults to `f`.
Caution. The database only has indexes on the columns `name` and `created_ts`.
Caution. The database only has indexes on the columns `name` and `creation_ts`.
This means that if a different sort order is used (`is_guest`, `admin`,
`user_type`, `deactivated`, `shadow_banned`, `avatar_url` or `displayname`),
this can cause a large load on the database, especially for large environments.
@ -222,6 +225,7 @@ The following fields are returned in the JSON response body:
- `shadow_banned` - bool - Status if that user has been marked as shadow banned.
- `displayname` - string - The user's display name if they have set one.
- `avatar_url` - string - The user's avatar URL if they have set one.
- `creation_ts` - integer - The user's creation timestamp in ms.
- `next_token`: string representing a positive integer - Indication for pagination. See above.
- `total` - integer - Total number of users.
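For example, a request listing the most recently created users first via the new column might look like this (illustrative; the path is the standard list-users admin API):

```
GET /_synapse/admin/v2/users?from=0&limit=10&order_by=creation_ts&dir=b
```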


@ -9,7 +9,7 @@ commits each of which contains a single change building on what came
before. Here, by way of an arbitrary example, is the top of `git log --graph
b2dba0607`:
<img src="git/clean.png" alt="clean git graph" width="500px">
<img src="img/git/clean.png" alt="clean git graph" width="500px">
Note how the commit comment explains clearly what is changing and why. Also
note the *absence* of merge commits, as well as the absence of commits called
@ -61,7 +61,7 @@ Ok, so that's what we'd like to achieve. How do we achieve it?
The TL;DR is: when you come to merge a pull request, you *probably* want to
“squash and merge”:
![squash and merge](git/squash.png).
![squash and merge](img/git/squash.png).
(This applies whether you are merging your own PR, or that of another
contributor.)
@ -105,7 +105,7 @@ complicated. Here's how we do it.
Let's start with a picture:
![branching model](git/branches.jpg)
![branching model](img/git/branches.jpg)
It looks complicated, but it's really not. There's one basic rule: *anyone* is
free to merge from *any* more-stable branch to *any* less-stable branch at

(Image diffs not shown: three image files, 70 KiB, 108 KiB, and 29 KiB, identical in size before and after.)


@ -410,7 +410,7 @@ oidc_providers:
display_name_template: "{{ user.name }}"
```
## Apple
### Apple
Configuring "Sign in with Apple" (SiWA) requires an Apple Developer account.


@ -720,6 +720,9 @@ caches:
# 'name' gives the database engine to use: either 'sqlite3' (for SQLite) or
# 'psycopg2' (for PostgreSQL).
#
# 'txn_limit' gives the maximum number of transactions to run per connection
# before reconnecting. Defaults to 0, which means no limit.
#
# 'args' gives options which are passed through to the database engine,
# except for options starting 'cp_', which are used to configure the Twisted
# connection pool. For a reference to valid arguments, see:
@ -740,6 +743,7 @@ caches:
#
#database:
# name: psycopg2
# txn_limit: 10000
# args:
# user: synapse_user
# password: secretpassword


@ -28,7 +28,7 @@ handlers:
# will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
# logs will still be flushed immediately.
buffer:
class: logging.handlers.MemoryHandler
class: synapse.logging.handlers.PeriodicallyFlushingMemoryHandler
target: file
# The capacity is the number of log lines that are buffered before
# being written to disk. Increasing this will lead to better
@ -36,6 +36,9 @@ handlers:
# be written to disk.
capacity: 10
flushLevel: 30 # Flush for WARNING logs as well
# The period of time, in seconds, between forced flushes.
# Messages will not be delayed for longer than this time.
period: 5
# A handler that writes logs to stderr. Unused by default, but can be used
# instead of "buffer" and "file" in the logger handlers.


@ -17,6 +17,7 @@ import subprocess
import sys
import threading
from concurrent.futures import ThreadPoolExecutor
from typing import Optional, Sequence
DISTS = (
"debian:buster",
@ -39,8 +40,11 @@ projdir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
class Builder(object):
def __init__(self, redirect_stdout=False):
def __init__(
self, redirect_stdout=False, docker_build_args: Optional[Sequence[str]] = None
):
self.redirect_stdout = redirect_stdout
self._docker_build_args = tuple(docker_build_args or ())
self.active_containers = set()
self._lock = threading.Lock()
self._failed = False
@ -79,8 +83,8 @@ class Builder(object):
stdout = None
# first build a docker image for the build environment
subprocess.check_call(
[
build_args = (
(
"docker",
"build",
"--tag",
@ -89,8 +93,13 @@ class Builder(object):
"distro=" + dist,
"-f",
"docker/Dockerfile-dhvirtualenv",
"docker",
],
)
+ self._docker_build_args
+ ("docker",)
)
subprocess.check_call(
build_args,
stdout=stdout,
stderr=subprocess.STDOUT,
cwd=projdir,
@ -147,9 +156,7 @@ class Builder(object):
self.active_containers.remove(c)
def run_builds(dists, jobs=1, skip_tests=False):
builder = Builder(redirect_stdout=(jobs > 1))
def run_builds(builder, dists, jobs=1, skip_tests=False):
def sig(signum, _frame):
print("Caught SIGINT")
builder.kill_containers()
@ -180,6 +187,11 @@ if __name__ == "__main__":
action="store_true",
help="skip running tests after building",
)
parser.add_argument(
"--docker-build-arg",
action="append",
help="specify an argument to pass to docker build",
)
parser.add_argument(
"--show-dists-json",
action="store_true",
@ -195,4 +207,12 @@ if __name__ == "__main__":
if args.show_dists_json:
print(json.dumps(DISTS))
else:
run_builds(dists=args.dist, jobs=args.jobs, skip_tests=args.no_check)
builder = Builder(
redirect_stdout=(args.jobs > 1), docker_build_args=args.docker_build_arg
)
run_builds(
builder,
dists=args.dist,
jobs=args.jobs,
skip_tests=args.no_check,
)


@ -65,4 +65,4 @@ if [[ -n "$1" ]]; then
fi
# Run the tests!
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2716,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...


@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.39.0rc2"
__version__ = "1.39.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -120,6 +120,7 @@ class EventTypes:
SpaceParent = "m.space.parent"
MSC2716_INSERTION = "org.matrix.msc2716.insertion"
MSC2716_CHUNK = "org.matrix.msc2716.chunk"
MSC2716_MARKER = "org.matrix.msc2716.marker"
@ -198,15 +199,13 @@ class EventContentFields:
# Used on normal messages to indicate they were historically imported after the fact
MSC2716_HISTORICAL = "org.matrix.msc2716.historical"
# For "insertion" events
# For "insertion" events to indicate what the next chunk ID should be in
# order to connect to it
MSC2716_NEXT_CHUNK_ID = "org.matrix.msc2716.next_chunk_id"
# Used on normal message events to indicate where the chunk connects to
# Used on "chunk" events to indicate which insertion event it connects to
MSC2716_CHUNK_ID = "org.matrix.msc2716.chunk_id"
# For "marker" events
MSC2716_MARKER_INSERTION = "org.matrix.msc2716.marker.insertion"
MSC2716_MARKER_INSERTION_PREV_EVENTS = (
"org.matrix.msc2716.marker.insertion_prev_events"
)
class RoomTypes:
@ -230,3 +229,7 @@ class HistoryVisibility:
JOINED = "joined"
SHARED = "shared"
WORLD_READABLE = "world_readable"
class ReadReceiptEventFields:
MSC2285_HIDDEN = "org.matrix.msc2285.hidden"


@ -75,6 +75,9 @@ class Codes:
INVALID_SIGNATURE = "M_INVALID_SIGNATURE"
USER_DEACTIVATED = "M_USER_DEACTIVATED"
BAD_ALIAS = "M_BAD_ALIAS"
# For restricted join rules.
UNABLE_AUTHORISE_JOIN = "M_UNABLE_TO_AUTHORISE_JOIN"
UNABLE_TO_GRANT_JOIN = "M_UNABLE_TO_GRANT_JOIN"
class CodeMessageException(RuntimeError):


@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Dict
from typing import Callable, Dict, Optional
import attr
@ -73,6 +73,9 @@ class RoomVersion:
# MSC2403: Allows join_rules to be set to 'knock', changes auth rules to allow sending
# m.room.membership event with membership 'knock'.
msc2403_knocking = attr.ib(type=bool)
# MSC2716: Adds m.room.power_levels -> content.historical field to control
# whether "insertion", "chunk", "marker" events can be sent
msc2716_historical = attr.ib(type=bool)
class RoomVersions:
@ -88,6 +91,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
V2 = RoomVersion(
"2",
@ -101,6 +105,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
V3 = RoomVersion(
"3",
@ -114,6 +119,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
V4 = RoomVersion(
"4",
@ -127,6 +133,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
V5 = RoomVersion(
"5",
@ -140,6 +147,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
V6 = RoomVersion(
"6",
@ -153,6 +161,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
MSC2176 = RoomVersion(
"org.matrix.msc2176",
@ -166,9 +175,10 @@ class RoomVersions:
msc2176_redaction_rules=True,
msc3083_join_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
)
MSC3083 = RoomVersion(
"org.matrix.msc3083",
"org.matrix.msc3083.v2",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
@ -179,6 +189,7 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc2403_knocking=False,
msc2716_historical=False,
)
V7 = RoomVersion(
"7",
@ -192,6 +203,21 @@ class RoomVersions:
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=True,
msc2716_historical=False,
)
MSC2716 = RoomVersion(
"org.matrix.msc2716",
RoomDisposition.STABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc2403_knocking=True,
msc2716_historical=True,
)
@ -207,6 +233,41 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
RoomVersions.MSC2176,
RoomVersions.MSC3083,
RoomVersions.V7,
RoomVersions.MSC2716,
)
}
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomVersionCapability:
"""An object which describes the unique attributes of a room version."""
identifier: str # the identifier for this capability
preferred_version: Optional[RoomVersion]
support_check_lambda: Callable[[RoomVersion], bool]
MSC3244_CAPABILITIES = {
cap.identifier: {
"preferred": cap.preferred_version.identifier
if cap.preferred_version is not None
else None,
"support": [
v.identifier
for v in KNOWN_ROOM_VERSIONS.values()
if cap.support_check_lambda(v)
],
}
for cap in (
RoomVersionCapability(
"knock",
RoomVersions.V7,
lambda room_version: room_version.msc2403_knocking,
),
RoomVersionCapability(
"restricted",
None,
lambda room_version: room_version.msc3083_join_rules,
),
)
# Note that we do not include MSC2043 here unless it is enabled in the config.
}
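Given the room versions defined above, the comprehension yields a mapping shaped like the following (illustrative: "7" and "org.matrix.msc2716" are the versions with `msc2403_knocking=True`, and only "org.matrix.msc3083.v2" sets `msc3083_join_rules=True`):

```python
{
    "knock": {
        "preferred": "7",
        "support": ["7", "org.matrix.msc2716"],
    },
    "restricted": {
        "preferred": None,
        "support": ["org.matrix.msc3083.v2"],
    },
}
```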


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2019 New Vector Ltd
#


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright 2017 Vector Creations Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@ -33,6 +33,9 @@ DEFAULT_CONFIG = """\
# 'name' gives the database engine to use: either 'sqlite3' (for SQLite) or
# 'psycopg2' (for PostgreSQL).
#
# 'txn_limit' gives the maximum number of transactions to run per connection
# before reconnecting. Defaults to 0, which means no limit.
#
# 'args' gives options which are passed through to the database engine,
# except for options starting 'cp_', which are used to configure the Twisted
# connection pool. For a reference to valid arguments, see:
@ -53,6 +56,7 @@ DEFAULT_CONFIG = """\
#
#database:
# name: psycopg2
# txn_limit: 10000
# args:
# user: synapse_user
# password: secretpassword


@ -39,12 +39,13 @@ DEFAULT_SUBJECTS = {
"messages_from_person_and_others": "[%(app)s] You have messages on %(app)s from %(person)s and others...",
"invite_from_person": "[%(app)s] %(person)s has invited you to chat on %(app)s...",
"invite_from_person_to_room": "[%(app)s] %(person)s has invited you to join the %(room)s room on %(app)s...",
"invite_from_person_to_space": "[%(app)s] %(person)s has invited you to join the %(space)s space on %(app)s...",
"password_reset": "[%(server_name)s] Password reset",
"email_validation": "[%(server_name)s] Validate your email",
}
@attr.s
@attr.s(slots=True, frozen=True)
class EmailSubjectConfig:
message_from_person_in_room = attr.ib(type=str)
message_from_person = attr.ib(type=str)
@ -54,6 +55,7 @@ class EmailSubjectConfig:
messages_from_person_and_others = attr.ib(type=str)
invite_from_person = attr.ib(type=str)
invite_from_person_to_room = attr.ib(type=str)
invite_from_person_to_space = attr.ib(type=str)
password_reset = attr.ib(type=str)
email_validation = attr.ib(type=str)


@ -32,3 +32,9 @@ class ExperimentalConfig(Config):
# MSC2716 (backfill existing history)
self.msc2716_enabled: bool = experimental.get("msc2716_enabled", False)
# MSC2285 (hidden read receipts)
self.msc2285_enabled: bool = experimental.get("msc2285_enabled", False)
# MSC3244 (room version capabilities)
self.msc3244_enabled: bool = experimental.get("msc3244_enabled", False)


@ -71,7 +71,7 @@ handlers:
# will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
# logs will still be flushed immediately.
buffer:
class: logging.handlers.MemoryHandler
class: synapse.logging.handlers.PeriodicallyFlushingMemoryHandler
target: file
# The capacity is the number of log lines that are buffered before
# being written to disk. Increasing this will lead to better
@ -79,6 +79,9 @@ handlers:
# be written to disk.
capacity: 10
flushLevel: 30 # Flush for WARNING logs as well
# The period of time, in seconds, between forced flushes.
# Messages will not be delayed for longer than this time.
period: 5
# A handler that writes logs to stderr. Unused by default, but can be used
# instead of "buffer" and "file" in the logger handlers.


@ -106,6 +106,18 @@ def check(
if not event.signatures.get(event_id_domain):
raise AuthError(403, "Event not signed by sending server")
is_invite_via_allow_rule = (
event.type == EventTypes.Member
and event.membership == Membership.JOIN
and "join_authorised_via_users_server" in event.content
)
if is_invite_via_allow_rule:
authoriser_domain = get_domain_from_id(
event.content["join_authorised_via_users_server"]
)
if not event.signatures.get(authoriser_domain):
raise AuthError(403, "Event not signed by authorising server")
# Implementation of https://matrix.org/docs/spec/rooms/v1#authorization-rules
#
# 1. If type is m.room.create:
@ -177,7 +189,7 @@ def check(
# https://github.com/vector-im/vector-web/issues/1208 hopefully
if event.type == EventTypes.ThirdPartyInvite:
user_level = get_user_power_level(event.user_id, auth_events)
invite_level = _get_named_level(auth_events, "invite", 0)
invite_level = get_named_level(auth_events, "invite", 0)
if user_level < invite_level:
raise AuthError(403, "You don't have permission to invite users")
@ -193,6 +205,13 @@ def check(
if event.type == EventTypes.Redaction:
check_redaction(room_version_obj, event, auth_events)
if (
event.type == EventTypes.MSC2716_INSERTION
or event.type == EventTypes.MSC2716_CHUNK
or event.type == EventTypes.MSC2716_MARKER
):
check_historical(room_version_obj, event, auth_events)
logger.debug("Allowing! %s", event)
@ -285,8 +304,8 @@ def _is_membership_change_allowed(
user_level = get_user_power_level(event.user_id, auth_events)
target_level = get_user_power_level(target_user_id, auth_events)
# FIXME (erikj): What should we do here as the default?
ban_level = _get_named_level(auth_events, "ban", 50)
invite_level = get_named_level(auth_events, "invite", 0)
ban_level = get_named_level(auth_events, "ban", 50)
logger.debug(
"_is_membership_change_allowed: %s",
@ -336,8 +355,6 @@ def _is_membership_change_allowed(
elif target_in_room: # the target is already in the room.
raise AuthError(403, "%s is already in the room." % target_user_id)
else:
invite_level = _get_named_level(auth_events, "invite", 0)
if user_level < invite_level:
raise AuthError(403, "You don't have permission to invite users")
elif Membership.JOIN == membership:
@ -345,16 +362,41 @@ def _is_membership_change_allowed(
# * They are not banned.
# * They are accepting a previously sent invitation.
# * They are already joined (it's a NOOP).
# * The room is public or restricted.
# * The room is public.
# * The room is restricted and the user meets the allow rules.
if event.user_id != target_user_id:
raise AuthError(403, "Cannot force another user to join.")
elif target_banned:
raise AuthError(403, "You are banned from this room")
elif join_rule == JoinRules.PUBLIC or (
elif join_rule == JoinRules.PUBLIC:
pass
elif (
room_version.msc3083_join_rules
and join_rule == JoinRules.MSC3083_RESTRICTED
):
pass
# This is the same as public, but the event must contain a reference
# to the server who authorised the join. If the event does not contain
# the proper content it is rejected.
#
# Note that if the caller is in the room or invited, then they do
# not need to meet the allow rules.
if not caller_in_room and not caller_invited:
authorising_user = event.content.get("join_authorised_via_users_server")
if authorising_user is None:
raise AuthError(403, "Join event is missing authorising user.")
# The authorising user must be in the room.
key = (EventTypes.Member, authorising_user)
member_event = auth_events.get(key)
_check_joined_room(member_event, authorising_user, event.room_id)
authorising_user_level = get_user_power_level(
authorising_user, auth_events
)
if authorising_user_level < invite_level:
raise AuthError(403, "Join event authorised by invalid server.")
elif join_rule == JoinRules.INVITE or (
room_version.msc2403_knocking and join_rule == JoinRules.KNOCK
):
@ -369,7 +411,7 @@ def _is_membership_change_allowed(
if target_banned and user_level < ban_level:
raise AuthError(403, "You cannot unban user %s." % (target_user_id,))
elif target_user_id != event.user_id:
kick_level = _get_named_level(auth_events, "kick", 50)
kick_level = get_named_level(auth_events, "kick", 50)
if user_level < kick_level or user_level <= target_level:
raise AuthError(403, "You cannot kick user %s." % target_user_id)
@ -445,7 +487,7 @@ def get_send_level(
def _can_send_event(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
power_levels_event = _get_power_level_event(auth_events)
power_levels_event = get_power_level_event(auth_events)
send_level = get_send_level(event.type, event.get("state_key"), power_levels_event)
user_level = get_user_power_level(event.user_id, auth_events)
@ -485,7 +527,7 @@ def check_redaction(
"""
user_level = get_user_power_level(event.user_id, auth_events)
redact_level = _get_named_level(auth_events, "redact", 50)
redact_level = get_named_level(auth_events, "redact", 50)
if user_level >= redact_level:
return False
@ -504,6 +546,37 @@ def check_redaction(
raise AuthError(403, "You don't have permission to redact events")
def check_historical(
room_version_obj: RoomVersion,
event: EventBase,
auth_events: StateMap[EventBase],
) -> None:
"""Check whether the event sender is allowed to send historical related
events like "insertion", "chunk", and "marker".
Returns:
None
Raises:
AuthError if the event sender is not allowed to send historical related events
("insertion", "chunk", and "marker").
"""
# Ignore the auth checks in room versions that do not support historical
# events
if not room_version_obj.msc2716_historical:
return
user_level = get_user_power_level(event.user_id, auth_events)
historical_level = get_named_level(auth_events, "historical", 100)
if user_level < historical_level:
raise AuthError(
403,
'You don\'t have permission to send historical related events ("insertion", "chunk", and "marker")',
)
def _check_power_levels(
room_version_obj: RoomVersion,
event: EventBase,
@ -600,7 +673,7 @@ def _check_power_levels(
)
def _get_power_level_event(auth_events: StateMap[EventBase]) -> Optional[EventBase]:
def get_power_level_event(auth_events: StateMap[EventBase]) -> Optional[EventBase]:
return auth_events.get((EventTypes.PowerLevels, ""))
@ -616,10 +689,10 @@ def get_user_power_level(user_id: str, auth_events: StateMap[EventBase]) -> int:
Returns:
the user's power level in this room.
"""
power_level_event = _get_power_level_event(auth_events)
power_level_event = get_power_level_event(auth_events)
if power_level_event:
level = power_level_event.content.get("users", {}).get(user_id)
if not level:
if level is None:
level = power_level_event.content.get("users_default", 0)
if level is None:
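The `is None` change here is the fix for the changelog's "explicit assignment of power-level 0 ... misinterpreted" bug: with the old truthiness check, an explicit 0 fell through to `users_default`. A minimal illustration:

```python
content = {"users": {"@user:example.com": 0}, "users_default": 50}

level = content.get("users", {}).get("@user:example.com")  # 0
if not level:    # old check: 0 is falsy, so this wrongly fires
    level = content.get("users_default", 0)                # -> 50, not 0
```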
@ -640,8 +713,8 @@ def get_user_power_level(user_id: str, auth_events: StateMap[EventBase]) -> int:
return 0
def _get_named_level(auth_events: StateMap[EventBase], name: str, default: int) -> int:
power_level_event = _get_power_level_event(auth_events)
def get_named_level(auth_events: StateMap[EventBase], name: str, default: int) -> int:
power_level_event = get_power_level_event(auth_events)
if not power_level_event:
return default
@ -728,7 +801,9 @@ def get_public_keys(invite_event: EventBase) -> List[Dict[str, Any]]:
return public_keys
def auth_types_for_event(event: Union[EventBase, EventBuilder]) -> Set[Tuple[str, str]]:
def auth_types_for_event(
room_version: RoomVersion, event: Union[EventBase, EventBuilder]
) -> Set[Tuple[str, str]]:
"""Given an event, return a list of (EventType, StateKey) that may be
needed to auth the event. The returned list may be a superset of what
would actually be required depending on the full state of the room.
@ -760,4 +835,12 @@ def auth_types_for_event(event: Union[EventBase, EventBuilder]) -> Set[Tuple[str
)
auth_types.add(key)
if room_version.msc3083_join_rules and membership == Membership.JOIN:
if "join_authorised_via_users_server" in event.content:
key = (
EventTypes.Member,
event.content["join_authorised_via_users_server"],
)
auth_types.add(key)
return auth_types


@ -109,6 +109,8 @@ def prune_event_dict(room_version: RoomVersion, event_dict: dict) -> dict:
add_fields("creator")
elif event_type == EventTypes.JoinRules:
add_fields("join_rule")
if room_version.msc3083_join_rules:
add_fields("allow")
elif event_type == EventTypes.PowerLevels:
add_fields(
"users",
@ -124,6 +126,9 @@ def prune_event_dict(room_version: RoomVersion, event_dict: dict) -> dict:
if room_version.msc2176_redaction_rules:
add_fields("invite")
if room_version.msc2716_historical:
add_fields("historical")
elif event_type == EventTypes.Aliases and room_version.special_case_aliases_auth:
add_fields("aliases")
elif event_type == EventTypes.RoomHistoryVisibility:


@ -178,6 +178,34 @@ async def _check_sigs_on_pdu(
)
raise SynapseError(403, errmsg, Codes.FORBIDDEN)
# If this is a join event for a restricted room it may have been authorised
# via a different server from the sending server. Check those signatures.
if (
room_version.msc3083_join_rules
and pdu.type == EventTypes.Member
and pdu.membership == Membership.JOIN
and "join_authorised_via_users_server" in pdu.content
):
authorising_server = get_domain_from_id(
pdu.content["join_authorised_via_users_server"]
)
try:
await keyring.verify_event_for_server(
authorising_server,
pdu,
pdu.origin_server_ts if room_version.enforce_key_validity else 0,
)
except Exception as e:
errmsg = (
"event id %s: unable to verify signature for authorising server %s: %s"
% (
pdu.event_id,
authorising_server,
e,
)
)
raise SynapseError(403, errmsg, Codes.FORBIDDEN)
def _is_invite_via_3pid(event: EventBase) -> bool:
return (


@ -19,10 +19,10 @@ import itertools
import logging
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Callable,
Collection,
Container,
Dict,
Iterable,
List,
@ -79,7 +79,15 @@ class InvalidResponseError(RuntimeError):
we couldn't parse
"""
pass
@attr.s(slots=True, frozen=True, auto_attribs=True)
class SendJoinResult:
# The event to persist.
event: EventBase
# A string giving the server the event was sent to.
origin: str
state: List[EventBase]
auth_chain: List[EventBase]
class FederationClient(FederationBase):
@ -506,6 +514,7 @@ class FederationClient(FederationBase):
description: str,
destinations: Iterable[str],
callback: Callable[[str], Awaitable[T]],
failover_errcodes: Optional[Container[str]] = None,
failover_on_unknown_endpoint: bool = False,
) -> T:
"""Try an operation on a series of servers, until it succeeds
@ -526,6 +535,9 @@ class FederationClient(FederationBase):
next server tried. Normally the stacktrace is logged but this is
suppressed if the exception is an InvalidResponseError.
failover_errcodes: Error codes (specific to this endpoint) which should
cause a failover when received as part of an HTTP 400 error.
failover_on_unknown_endpoint: if True, we will try other servers if it looks
like a server doesn't support the endpoint. This is typically useful
if the endpoint in question is new or experimental.
@ -537,6 +549,9 @@ class FederationClient(FederationBase):
SynapseError if the chosen remote server returns a 300/400 code, or
no servers were reachable.
"""
if failover_errcodes is None:
failover_errcodes = ()
for destination in destinations:
if destination == self.server_name:
continue
@ -551,11 +566,17 @@ class FederationClient(FederationBase):
synapse_error = e.to_synapse_error()
failover = False
# Failover on an internal server error, or if the destination
# doesn't implemented the endpoint for some reason.
# Failover should occur:
#
# * On internal server errors.
# * If the destination responds that it cannot complete the request.
# * If the destination doesn't implement the endpoint for some reason.
if 500 <= e.code < 600:
failover = True
elif e.code == 400 and synapse_error.errcode in failover_errcodes:
failover = True
elif failover_on_unknown_endpoint and self._is_unknown_endpoint(
e, synapse_error
):
@ -671,13 +692,25 @@ class FederationClient(FederationBase):
return destination, ev, room_version
# MSC3083 defines additional error codes for room joins. Unfortunately
# we do not yet know the room version, assume these will only be returned
# by valid room versions.
failover_errcodes = (
(Codes.UNABLE_AUTHORISE_JOIN, Codes.UNABLE_TO_GRANT_JOIN)
if membership == Membership.JOIN
else None
)
return await self._try_destination_list(
"make_" + membership, destinations, send_request
"make_" + membership,
destinations,
send_request,
failover_errcodes=failover_errcodes,
)
async def send_join(
self, destinations: Iterable[str], pdu: EventBase, room_version: RoomVersion
) -> Dict[str, Any]:
) -> SendJoinResult:
"""Sends a join event to one of a list of homeservers.
Doing so will cause the remote server to add the event to the graph,
@ -691,18 +724,38 @@ class FederationClient(FederationBase):
did the make_join)
Returns:
a dict with members ``origin`` (a string
giving the server the event was sent to, ``state`` (?) and
``auth_chain``.
The result of the send join request.
Raises:
SynapseError: if the chosen remote server returns a 300/400 code, or
no servers successfully handle the request.
"""
async def send_request(destination) -> Dict[str, Any]:
async def send_request(destination) -> SendJoinResult:
response = await self._do_send_join(room_version, destination, pdu)
# If an event was returned (and expected to be returned):
#
# * Ensure it has the same event ID (note that the event ID is a hash
# of the event fields for versions which support MSC3083).
# * Ensure the signatures are good.
#
# Otherwise, fallback to the provided event.
if room_version.msc3083_join_rules and response.event:
event = response.event
valid_pdu = await self._check_sigs_and_hash_and_fetch_one(
pdu=event,
origin=destination,
outlier=True,
room_version=room_version,
)
if valid_pdu is None or event.event_id != pdu.event_id:
raise InvalidResponseError("Returned an invalid join event")
else:
event = pdu
state = response.state
auth_chain = response.auth_events
@ -784,13 +837,32 @@ class FederationClient(FederationBase):
% (auth_chain_create_events,)
)
return {
"state": signed_state,
"auth_chain": signed_auth,
"origin": destination,
}
return SendJoinResult(
event=event,
state=signed_state,
auth_chain=signed_auth,
origin=destination,
)
return await self._try_destination_list("send_join", destinations, send_request)
# MSC3083 defines additional error codes for room joins.
failover_errcodes = None
if room_version.msc3083_join_rules:
failover_errcodes = (
Codes.UNABLE_AUTHORISE_JOIN,
Codes.UNABLE_TO_GRANT_JOIN,
)
# If the join is being authorised via allow rules, we need to send
# the /send_join back to the same server that was originally used
# with /make_join.
if "join_authorised_via_users_server" in pdu.content:
destinations = [
get_domain_from_id(pdu.content["join_authorised_via_users_server"])
]
return await self._try_destination_list(
"send_join", destinations, send_request, failover_errcodes=failover_errcodes
)
async def _do_send_join(
self, room_version: RoomVersion, destination: str, pdu: EventBase


@ -45,6 +45,7 @@ from synapse.api.errors import (
UnsupportedRoomVersionError,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
from synapse.crypto.event_signing import compute_event_signature
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
@ -64,7 +65,7 @@ from synapse.replication.http.federation import (
ReplicationGetQueryRestServlet,
)
from synapse.storage.databases.main.lock import Lock
from synapse.types import JsonDict
from synapse.types import JsonDict, get_domain_from_id
from synapse.util import glob_to_regex, json_decoder, unwrapFirstError
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
@ -586,7 +587,7 @@ class FederationServer(FederationBase):
async def on_send_join_request(
self, origin: str, content: JsonDict, room_id: str
) -> Dict[str, Any]:
context = await self._on_send_membership_event(
event, context = await self._on_send_membership_event(
origin, content, Membership.JOIN, room_id
)
@ -597,6 +598,7 @@ class FederationServer(FederationBase):
time_now = self._clock.time_msec()
return {
"org.matrix.msc3083.v2.event": event.get_pdu_json(),
"state": [p.get_pdu_json(time_now) for p in state.values()],
"auth_chain": [p.get_pdu_json(time_now) for p in auth_chain],
}
@ -681,7 +683,7 @@ class FederationServer(FederationBase):
Returns:
The stripped room state.
"""
event_context = await self._on_send_membership_event(
_, context = await self._on_send_membership_event(
origin, content, Membership.KNOCK, room_id
)
@ -690,14 +692,14 @@ class FederationServer(FederationBase):
# related to the room while the knock request is pending.
stripped_room_state = (
await self.store.get_stripped_room_state_from_event_context(
event_context, self._room_prejoin_state_types
context, self._room_prejoin_state_types
)
)
return {"knock_state_events": stripped_room_state}
async def _on_send_membership_event(
self, origin: str, content: JsonDict, membership_type: str, room_id: str
) -> EventContext:
) -> Tuple[EventBase, EventContext]:
"""Handle an on_send_{join,leave,knock} request
Does some preliminary validation before passing the request on to the
@ -712,7 +714,7 @@ class FederationServer(FederationBase):
in the event
Returns:
The context of the event after inserting it into the room graph.
The event and context of the event after inserting it into the room graph.
Raises:
SynapseError if there is a problem with the request, including things like
@ -748,6 +750,33 @@ class FederationServer(FederationBase):
logger.debug("_on_send_membership_event: pdu sigs: %s", event.signatures)
# Sign the event since we're vouching on behalf of the remote server that
# the event is valid to be sent into the room. Currently this is only done
# if the user is being joined via restricted join rules.
if (
room_version.msc3083_join_rules
and event.membership == Membership.JOIN
and "join_authorised_via_users_server" in event.content
):
# We can only authorise our own users.
authorising_server = get_domain_from_id(
event.content["join_authorised_via_users_server"]
)
if authorising_server != self.server_name:
raise SynapseError(
400,
f"Cannot authorise request from resident server: {authorising_server}",
)
event.signatures.update(
compute_event_signature(
room_version,
event.get_pdu_json(),
self.hs.hostname,
self.hs.signing_key,
)
)
event = await self._check_sigs_and_hash(room_version, event)
return await self.handler.on_send_membership_event(origin, event)
@ -995,6 +1024,23 @@ class FederationServer(FederationBase):
origin, event = next
# Prune the event queue if it's getting large.
#
# We do this *after* handling the first event as the common case is
# that the queue is empty (/has the single event in), and so there's
# no need to do this check.
pruned = await self.store.prune_staged_events_in_room(room_id, room_version)
if pruned:
# If we have pruned the queue check we need to refetch the next
# event to handle.
next = await self.store.get_next_staged_event_for_room(
room_id, room_version
)
if not next:
break
origin, event = next
lock = await self.store.try_acquire_lock(
_INBOUND_EVENT_HANDLING_LOCK_NAME, room_id
)

File diff suppressed because it is too large.


@ -984,7 +984,7 @@ class PublicRoomList(BaseFederationServlet):
limit = parse_integer_from_args(query, "limit", 0)
since_token = parse_string_from_args(query, "since", None)
include_all_networks = parse_boolean_from_args(
query, "include_all_networks", False
query, "include_all_networks", default=False
)
third_party_instance_id = parse_string_from_args(
query, "third_party_instance_id", None
@ -1908,16 +1908,7 @@ class FederationSpaceSummaryServlet(BaseFederationServlet):
suggested_only = parse_boolean_from_args(query, "suggested_only", default=False)
max_rooms_per_space = parse_integer_from_args(query, "max_rooms_per_space")
exclude_rooms = []
if b"exclude_rooms" in query:
try:
exclude_rooms = [
room_id.decode("ascii") for room_id in query[b"exclude_rooms"]
]
except Exception:
raise SynapseError(
400, "Bad query parameter for exclude_rooms", Codes.INVALID_PARAM
)
exclude_rooms = parse_strings_from_args(query, "exclude_rooms", default=[])
return 200, await self.handler.federation_space_summary(
origin, room_id, suggested_only, max_rooms_per_space, exclude_rooms


@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Collection, List, Optional, Union
from synapse import event_auth
@ -20,16 +21,18 @@ from synapse.api.constants import (
Membership,
RestrictedJoinRuleTypes,
)
from synapse.api.errors import AuthError
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.types import StateMap
from synapse.types import StateMap, get_domain_from_id
from synapse.util.metrics import Measure
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class EventAuthHandler:
"""
@ -39,6 +42,7 @@ class EventAuthHandler:
def __init__(self, hs: "HomeServer"):
self._clock = hs.get_clock()
self._store = hs.get_datastore()
self._server_name = hs.hostname
async def check_from_context(
self, room_version: str, event, context, do_sig_check=True
@ -81,15 +85,76 @@ class EventAuthHandler:
# introduce undesirable "state reset" behaviour.
#
# All of which sounds a bit tricky so we don't bother for now.
auth_ids = []
for etype, state_key in event_auth.auth_types_for_event(event):
for etype, state_key in event_auth.auth_types_for_event(
event.room_version, event
):
auth_ev_id = current_state_ids.get((etype, state_key))
if auth_ev_id:
auth_ids.append(auth_ev_id)
return auth_ids
async def get_user_which_could_invite(
self, room_id: str, current_state_ids: StateMap[str]
) -> str:
"""
Searches the room state for a local user who has the power level necessary
to invite other users.
Args:
room_id: The room ID under search.
current_state_ids: The current state of the room.
Returns:
The MXID of the user which could issue an invite.
Raises:
SynapseError if no appropriate user is found.
"""
power_level_event_id = current_state_ids.get((EventTypes.PowerLevels, ""))
invite_level = 0
users_default_level = 0
if power_level_event_id:
power_level_event = await self._store.get_event(power_level_event_id)
invite_level = power_level_event.content.get("invite", invite_level)
users_default_level = power_level_event.content.get(
"users_default", users_default_level
)
users = power_level_event.content.get("users", {})
else:
users = {}
# Find the user with the highest power level.
users_in_room = await self._store.get_users_in_room(room_id)
# Only interested in local users.
local_users_in_room = [
u for u in users_in_room if get_domain_from_id(u) == self._server_name
]
chosen_user = max(
local_users_in_room,
key=lambda user: users.get(user, users_default_level),
default=None,
)
# Return the chosen user if they can issue invites.
user_power_level = users.get(chosen_user, users_default_level)
if chosen_user and user_power_level >= invite_level:
logger.debug(
"Found a user who can issue invites %s with power level %d >= invite level %d",
chosen_user,
user_power_level,
invite_level,
)
return chosen_user
# No user was found.
raise SynapseError(
400,
"Unable to find a user which could issue an invite",
Codes.UNABLE_TO_GRANT_JOIN,
)
async def check_host_in_room(self, room_id: str, host: str) -> bool:
with Measure(self._clock, "check_host_in_room"):
return await self._store.is_host_joined(room_id, host)
@ -134,6 +199,18 @@ class EventAuthHandler:
# in any of them.
allowed_rooms = await self.get_rooms_that_allow_join(state_ids)
if not await self.is_user_in_rooms(allowed_rooms, user_id):
# If this is a remote request, the user might be in an allowed room
# that we do not know about.
if get_domain_from_id(user_id) != self._server_name:
for room_id in allowed_rooms:
if not await self._store.is_host_joined(room_id, self._server_name):
raise SynapseError(
400,
f"Unable to check if {user_id} is in allowed rooms.",
Codes.UNABLE_AUTHORISE_JOIN,
)
raise AuthError(
403,
"You do not belong to any of the required rooms to join this room.",

Some files were not shown because too many files have changed in this diff.