Mirror of https://mau.dev/maunium/synapse.git, synced 2024-12-14 10:23:57 +01:00

Merge branch 'develop' into babolivier/context_filters

Commit: 9dc84b7989
189 changed files with 4485 additions and 1443 deletions
@@ -1,6 +1,6 @@
 #!/usr/bin/env bash

-set -ex
+set -e

 if [[ "$BUILDKITE_BRANCH" =~ ^(develop|master|dinsic|shhs|release-.*)$ ]]; then
     echo "Not merging forward, as this is a release branch"
@@ -18,6 +18,8 @@ else
     GITBASE=$BUILDKITE_PULL_REQUEST_BASE_BRANCH
 fi

+echo "--- merge_base_branch $GITBASE"
+
 # Show what we are before
 git --no-pager show -s
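The branch filter in the script above decides which branches are never merged forward. As a sketch, here is a hypothetical Python re-implementation of the same regex check (the real script uses a bash `[[ =~ ]]` test, not Python):

```python
import re

# Same pattern as the bash check above: matches the branches that the CI
# script treats as release/long-lived branches and refuses to merge forward.
RELEASE_BRANCHES = re.compile(r"^(develop|master|dinsic|shhs|release-.*)$")

def is_release_branch(branch: str) -> bool:
    """Return True if the branch should not be merged forward."""
    return RELEASE_BRANCHES.match(branch) is not None

print(is_release_branch("release-1.6"))                 # True
print(is_release_branch("babolivier/context_filters"))  # False
```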
@@ -28,3 +28,39 @@ User sees updates to presence from other users in the incremental sync.
 Gapped incremental syncs include all state changes

 Old members are included in gappy incr LL sync if they start speaking
+
+# new failures as of https://github.com/matrix-org/sytest/pull/732
+Device list doesn't change if remote server is down
+Remote servers cannot set power levels in rooms without existing powerlevels
+Remote servers should reject attempts by non-creators to set the power levels
+
+# new failures as of https://github.com/matrix-org/sytest/pull/753
+GET /rooms/:room_id/messages returns a message
+GET /rooms/:room_id/messages lazy loads members correctly
+Read receipts are sent as events
+Only original members of the room can see messages from erased users
+Device deletion propagates over federation
+If user leaves room, remote user changes device and rejoins we see update in /sync and /keys/changes
+Changing user-signing key notifies local users
+Newly updated tags appear in an incremental v2 /sync
+Server correctly handles incoming m.device_list_update
+Local device key changes get to remote servers with correct prev_id
+AS-ghosted users can use rooms via AS
+Ghost user must register before joining room
+Test that a message is pushed
+Invites are pushed
+Rooms with aliases are correctly named in pushed
+Rooms with names are correctly named in pushed
+Rooms with canonical alias are correctly named in pushed
+Rooms with many users are correctly pushed
+Don't get pushed for rooms you've muted
+Rejected events are not pushed
+Test that rejected pushers are removed.
+Events come down the correct room
+
+# https://buildkite.com/matrix-dot-org/sytest/builds/326#cca62404-a88a-4fcb-ad41-175fd3377603
+Presence changes to UNAVAILABLE are reported to remote room members
+If remote user leaves room, changes device and rejoins we see update in sync
+uploading self-signing key notifies over federation
+Inbound federation can receive redacted events
+Outbound federation can request missing events
CHANGES.md (116 changes)
@@ -1,3 +1,119 @@
+Synapse 1.6.1 (2019-11-28)
+==========================
+
+Security updates
+----------------
+
+This release includes a security fix ([\#6426](https://github.com/matrix-org/synapse/issues/6426), below). Administrators are encouraged to upgrade as soon as possible.
+
+Bugfixes
+--------
+
+- Clean up local threepids from user on account deactivation. ([\#6426](https://github.com/matrix-org/synapse/issues/6426))
+- Fix startup error when http proxy is defined. ([\#6421](https://github.com/matrix-org/synapse/issues/6421))
+
+
+Synapse 1.6.0 (2019-11-26)
+==========================
+
+Bugfixes
+--------
+
+- Fix phone home stats reporting. ([\#6418](https://github.com/matrix-org/synapse/issues/6418))
+
+
+Synapse 1.6.0rc2 (2019-11-25)
+=============================
+
+Bugfixes
+--------
+
+- Fix a bug which could cause the background database update handler for event labels to get stuck in a loop raising exceptions. ([\#6407](https://github.com/matrix-org/synapse/issues/6407))
+
+
+Synapse 1.6.0rc1 (2019-11-20)
+=============================
+
+Features
+--------
+
+- Add federation support for cross-signing. ([\#5727](https://github.com/matrix-org/synapse/issues/5727))
+- Increase default room version from 4 to 5, thereby enforcing server key validity period checks. ([\#6220](https://github.com/matrix-org/synapse/issues/6220))
+- Add support for outbound http proxying via http_proxy/HTTPS_PROXY env vars. ([\#6238](https://github.com/matrix-org/synapse/issues/6238))
+- Implement label-based filtering on `/sync` and `/messages` ([MSC2326](https://github.com/matrix-org/matrix-doc/pull/2326)). ([\#6301](https://github.com/matrix-org/synapse/issues/6301), [\#6310](https://github.com/matrix-org/synapse/issues/6310), [\#6340](https://github.com/matrix-org/synapse/issues/6340))
+
+
+Bugfixes
+--------
+
+- Fix LruCache callback deduplication for Python 3.8. Contributed by @V02460. ([\#6213](https://github.com/matrix-org/synapse/issues/6213))
+- Remove a room from a server's public rooms list on room upgrade. ([\#6232](https://github.com/matrix-org/synapse/issues/6232), [\#6235](https://github.com/matrix-org/synapse/issues/6235))
+- Delete keys from key backup when deleting backup versions. ([\#6253](https://github.com/matrix-org/synapse/issues/6253))
+- Make notification of cross-signing signatures work with workers. ([\#6254](https://github.com/matrix-org/synapse/issues/6254))
+- Fix exception when remote servers attempt to join a room that they're not allowed to join. ([\#6278](https://github.com/matrix-org/synapse/issues/6278))
+- Prevent errors from appearing on Synapse startup if `git` is not installed. ([\#6284](https://github.com/matrix-org/synapse/issues/6284))
+- Appservice requests will no longer contain a double slash prefix when the appservice url provided ends in a slash. ([\#6306](https://github.com/matrix-org/synapse/issues/6306))
+- Fix `/purge_room` admin API. ([\#6307](https://github.com/matrix-org/synapse/issues/6307))
+- Fix the `hidden` field in the `devices` table for SQLite versions prior to 3.23.0. ([\#6313](https://github.com/matrix-org/synapse/issues/6313))
+- Fix bug which caused rejected events to be persisted with the wrong room state. ([\#6320](https://github.com/matrix-org/synapse/issues/6320))
+- Fix bug where `rc_login` ratelimiting would prematurely kick in. ([\#6335](https://github.com/matrix-org/synapse/issues/6335))
+- Prevent the server taking a long time to start up when guest registration is enabled. ([\#6338](https://github.com/matrix-org/synapse/issues/6338))
+- Fix bug where upgrading a guest account to a full user would fail when account validity is enabled. ([\#6359](https://github.com/matrix-org/synapse/issues/6359))
+- Fix `to_device` stream ID getting reset every time Synapse restarts, which had the potential to cause unable to decrypt errors. ([\#6363](https://github.com/matrix-org/synapse/issues/6363))
+- Fix permission denied error when trying to generate a config file with the docker image. ([\#6389](https://github.com/matrix-org/synapse/issues/6389))
+
+
+Improved Documentation
+----------------------
+
+- Contributor documentation now mentions script to run linters. ([\#6164](https://github.com/matrix-org/synapse/issues/6164))
+- Modify CAPTCHA_SETUP.md to update the terms `private key` and `public key` to `secret key` and `site key` respectively. Contributed by Yash Jipkate. ([\#6257](https://github.com/matrix-org/synapse/issues/6257))
+- Update `INSTALL.md` Email section to talk about `account_threepid_delegates`. ([\#6272](https://github.com/matrix-org/synapse/issues/6272))
+- Fix a small typo in `account_threepid_delegates` configuration option. ([\#6273](https://github.com/matrix-org/synapse/issues/6273))
+
+
+Internal Changes
+----------------
+
+- Add a CI job to test the `synapse_port_db` script. ([\#6140](https://github.com/matrix-org/synapse/issues/6140), [\#6276](https://github.com/matrix-org/synapse/issues/6276))
+- Convert EventContext to an attrs. ([\#6218](https://github.com/matrix-org/synapse/issues/6218))
+- Move `persist_events` out from main data store. ([\#6240](https://github.com/matrix-org/synapse/issues/6240), [\#6300](https://github.com/matrix-org/synapse/issues/6300))
+- Reduce verbosity of user/room stats. ([\#6250](https://github.com/matrix-org/synapse/issues/6250))
+- Reduce impact of debug logging. ([\#6251](https://github.com/matrix-org/synapse/issues/6251))
+- Expose some homeserver functionality to spam checkers. ([\#6259](https://github.com/matrix-org/synapse/issues/6259))
+- Change cache descriptors to always return deferreds. ([\#6263](https://github.com/matrix-org/synapse/issues/6263), [\#6291](https://github.com/matrix-org/synapse/issues/6291))
+- Fix incorrect comment regarding the functionality of an `if` statement. ([\#6269](https://github.com/matrix-org/synapse/issues/6269))
+- Update CI to run `isort` over the `scripts` and `scripts-dev` directories. ([\#6270](https://github.com/matrix-org/synapse/issues/6270))
+- Replace every instance of `logger.warn` method with `logger.warning` as the former is deprecated. ([\#6271](https://github.com/matrix-org/synapse/issues/6271), [\#6314](https://github.com/matrix-org/synapse/issues/6314))
+- Port replication http server endpoints to async/await. ([\#6274](https://github.com/matrix-org/synapse/issues/6274))
+- Port room rest handlers to async/await. ([\#6275](https://github.com/matrix-org/synapse/issues/6275))
+- Remove redundant CLI parameters on CI's `flake8` step. ([\#6277](https://github.com/matrix-org/synapse/issues/6277))
+- Port `federation_server.py` to async/await. ([\#6279](https://github.com/matrix-org/synapse/issues/6279))
+- Port receipt and read markers to async/await. ([\#6280](https://github.com/matrix-org/synapse/issues/6280))
+- Split out state storage into separate data store. ([\#6294](https://github.com/matrix-org/synapse/issues/6294), [\#6295](https://github.com/matrix-org/synapse/issues/6295))
+- Refactor EventContext for clarity. ([\#6298](https://github.com/matrix-org/synapse/issues/6298))
+- Update the version of black used to 19.10b0. ([\#6304](https://github.com/matrix-org/synapse/issues/6304))
+- Add some documentation about worker replication. ([\#6305](https://github.com/matrix-org/synapse/issues/6305))
+- Move admin endpoints into separate files. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#6308](https://github.com/matrix-org/synapse/issues/6308))
+- Document the use of `lint.sh` for code style enforcement & extend it to run on specified paths only. ([\#6312](https://github.com/matrix-org/synapse/issues/6312))
+- Add optional python dependencies and dependent binary libraries to snapcraft packaging. ([\#6317](https://github.com/matrix-org/synapse/issues/6317))
+- Remove the dependency on psutil and replace functionality with the stdlib `resource` module. ([\#6318](https://github.com/matrix-org/synapse/issues/6318), [\#6336](https://github.com/matrix-org/synapse/issues/6336))
+- Improve documentation for EventContext fields. ([\#6319](https://github.com/matrix-org/synapse/issues/6319))
+- Add some checks that we aren't using state from rejected events. ([\#6330](https://github.com/matrix-org/synapse/issues/6330))
+- Add continuous integration for python 3.8. ([\#6341](https://github.com/matrix-org/synapse/issues/6341))
+- Correct spacing/case of various instances of the word "homeserver". ([\#6357](https://github.com/matrix-org/synapse/issues/6357))
+- Temporarily blacklist the failing unit test PurgeRoomTestCase.test_purge_room. ([\#6361](https://github.com/matrix-org/synapse/issues/6361))
+
+
+Synapse 1.5.1 (2019-11-06)
+==========================
+
+Features
+--------
+
+- Limit the length of data returned by url previews, to prevent DoS attacks. ([\#6331](https://github.com/matrix-org/synapse/issues/6331), [\#6334](https://github.com/matrix-org/synapse/issues/6334))
+
+
 Synapse 1.5.0 (2019-10-29)
 ==========================
INSTALL.md (18 changes)
@@ -36,7 +36,7 @@ that your email address is probably `user@example.com` rather than
 System requirements:

 - POSIX-compliant system (tested on Linux & OS X)
-- Python 3.5, 3.6, or 3.7
+- Python 3.5, 3.6, 3.7 or 3.8.
 - At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org

 Synapse is written in Python but some of the libraries it uses are written in
@@ -109,8 +109,8 @@ Installing prerequisites on Ubuntu or Debian:

 ```
 sudo apt-get install build-essential python3-dev libffi-dev \
-                     python-pip python-setuptools sqlite3 \
-                     libssl-dev python-virtualenv libjpeg-dev libxslt1-dev
+                     python3-pip python3-setuptools sqlite3 \
+                     libssl-dev python3-virtualenv libjpeg-dev libxslt1-dev
 ```

 #### ArchLinux
@@ -133,9 +133,9 @@ sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
 sudo yum groupinstall "Development Tools"
 ```

-#### Mac OS X
+#### macOS

-Installing prerequisites on Mac OS X:
+Installing prerequisites on macOS:

 ```
 xcode-select --install
@@ -144,6 +144,14 @@ sudo pip install virtualenv
 brew install pkg-config libffi
 ```

+On macOS Catalina (10.15) you may need to explicitly install OpenSSL
+via brew and inform `pip` about it so that `psycopg2` builds:
+
+```
+brew install openssl@1.1
+export LDFLAGS=-L/usr/local/Cellar/openssl\@1.1/1.1.1d/lib/
+```
+
 #### OpenSUSE

 Installing prerequisites on openSUSE:
changelog.d (changelog fragments)

Removed fragments (each already folded into the CHANGES.md entries above):
- Add federation support for cross-signing.
- Add a CI job to test the `synapse_port_db` script.
- Contributor documentation now mentions script to run linters.
- Convert EventContext to an attrs.
- Remove a room from a server's public rooms list on room upgrade.
- Add support for outbound http proxying via http_proxy/HTTPS_PROXY env vars.
- Move `persist_events` out from main data store.
- Reduce verbosity of user/room stats.
- Reduce impact of debug logging.
- Delete keys from key backup when deleting backup versions.
- Make notification of cross-signing signatures work with workers.
- Modify CAPTCHA_SETUP.md to update the terms `private key` and `public key` to `secret key` and `site key` respectively. Contributed by Yash Jipkate.
- Expose some homeserver functionality to spam checkers.
- Change cache descriptors to always return deferreds.
- Fix incorrect comment regarding the functionality of an `if` statement.
- Update CI to run `isort` over the `scripts` and `scripts-dev` directories.
- Replace every instance of `logger.warn` method with `logger.warning` as the former is deprecated.
- Update `INSTALL.md` Email section to talk about `account_threepid_delegates`.
- Fix a small typo in `account_threepid_delegates` configuration option.
- Port replication http server endpoints to async/await.
- Port room rest handlers to async/await.
- Add a CI job to test the `synapse_port_db` script. (second fragment)
- Remove redundant CLI parameters on CI's `flake8` step.
- Fix exception when remote servers attempt to join a room that they're not allowed to join.
- Port `federation_server.py` to async/await.
- Port receipt and read markers to async/await.
- Prevent errors from appearing on Synapse startup if `git` is not installed.
- Change cache descriptors to always return deferreds. (second fragment)
- Split out state storage into separate data store.
- Refactor EventContext for clarity.
- Move `persist_events` out from main data store. (second fragment)
- Implement label-based filtering on `/sync` and `/messages` ([MSC2326](https://github.com/matrix-org/matrix-doc/pull/2326)).
- Update the version of black used to 19.10b0.
- Add some documentation about worker replication.
- Appservice requests will no longer contain a double slash prefix when the appservice url provided ends in a slash.
- Fix `/purge_room` admin API.
- Document the use of `lint.sh` for code style enforcement & extend it to run on specified paths only.
- Fix the `hidden` field in the `devices` table for SQLite versions prior to 3.23.0.
- Replace every instance of `logger.warn` method with `logger.warning` as the former is deprecated. (second fragment)
- Remove the dependency on psutil and replace functionality with the stdlib `resource` module.

New fragments (one line each):
- changelog.d/5815.feature: Implement per-room message retention policies.
- changelog.d/5858.feature: Add etag and count fields to key backup endpoints to help clients guess if there are new keys.
- changelog.d/6119.feature: Require User-Interactive Authentication for `/account/3pid/add`, meaning the user's password will be required to add a third-party ID to their account.
- changelog.d/6176.feature: Implement the `/_matrix/federation/unstable/net.atleastfornow/state/<context>` API as drafted in MSC2314.
- changelog.d/6237.bugfix: Transfer non-standard power levels on room upgrade.
- changelog.d/6241.bugfix: Fix error from the Pillow library when uploading RGBA images.
- changelog.d/6266.misc: Add benchmarks for structured logging and improve output performance.
- changelog.d/6322.misc: Improve the performance of outputting structured logging.
- changelog.d/6332.bugfix: Fix caching devices for remote users when using workers, so that we don't attempt to refetch (and potentially fail) each time a user requests devices.
- changelog.d/6333.bugfix: Prevent account data syncs getting lost across TCP replication.
- changelog.d/6343.misc: Refactor some code in the event authentication path for clarity.
- changelog.d/6362.misc: Clean up some unnecessary quotation marks around the codebase.
- changelog.d/6379.misc: Complain on startup instead of 500'ing during runtime when `public_baseurl` isn't set when necessary.
- changelog.d/6388.doc: Fix link in the user directory documentation.
- changelog.d/6390.doc: Add build instructions to the docker readme.
- changelog.d/6392.misc: Add a test scenario to make sure room history purges don't break `/messages` in the future.
- changelog.d/6408.bugfix: Fix an intermittent exception when handling read-receipts.
- changelog.d/6420.bugfix: Fix broken guest registration when there are existing blocks of numeric user IDs.
- changelog.d/6421.bugfix: Fix startup error when http proxy is defined.
- changelog.d/6423.misc: Clarifications for the email configuration settings.
- changelog.d/6426.bugfix: Clean up local threepids from user on account deactivation.
- changelog.d/6429.misc: Add more tests to the blacklist when running in worker mode.
- changelog.d/6434.feature: Add support for MSC 2367, which allows specifying a reason on all membership events.
- changelog.d/6436.bugfix: Fix a bug where a room could become unusable with a low retention policy and a low activity.
- changelog.d/6443.doc: Switch Ubuntu package install recommendation to use python3 packages in INSTALL.md.
debian/changelog (18 changes, vendored)
@@ -1,3 +1,21 @@
+matrix-synapse-py3 (1.6.1) stable; urgency=medium
+
+  * New synapse release 1.6.1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Thu, 28 Nov 2019 11:10:40 +0000
+
+matrix-synapse-py3 (1.6.0) stable; urgency=medium
+
+  * New synapse release 1.6.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 26 Nov 2019 12:15:40 +0000
+
+matrix-synapse-py3 (1.5.1) stable; urgency=medium
+
+  * New synapse release 1.5.1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 06 Nov 2019 10:02:14 +0000
+
 matrix-synapse-py3 (1.5.0) stable; urgency=medium

   * New synapse release 1.5.0.
@@ -130,3 +130,15 @@ docker run -it --rm \
 This will generate the same configuration file as the legacy mode used, but
 will store it in `/data/homeserver.yaml` instead of a temporary location. You
 can then use it as shown above at [Running synapse](#running-synapse).
+
+## Building the image
+
+If you need to build the image from a Synapse checkout, use the following `docker
+build` command from the repo's root:
+
+```
+docker build -t matrixdotorg/synapse -f docker/Dockerfile .
+```
+
+You can choose to build a different docker image by changing the value of the `-f` flag to
+point to another Dockerfile.
@@ -169,11 +169,11 @@ def run_generate_config(environ, ownership):
     # log("running %s" % (args, ))

     if ownership is not None:
-        args = ["su-exec", ownership] + args
-        os.execv("/sbin/su-exec", args)
-
         # make sure that synapse has perms to write to the data dir.
         subprocess.check_output(["chown", ownership, data_dir])
+
+        args = ["su-exec", ownership] + args
+        os.execv("/sbin/su-exec", args)
     else:
         os.execv("/usr/local/bin/python", args)
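The hunk above fixes an ordering bug: `os.execv` replaces the current process and never returns, so in the old code the `chown` of the data directory was dead code and the generated config could not be written (the docker permission-denied bug, \#6389). A minimal sketch of the corrected ordering, with the exec and subprocess calls injectable so the flow can be exercised without `su-exec` present (the injection is for illustration only; the real script calls `os.execv` and `subprocess.check_output` directly):

```python
import os
import subprocess

def launch(args, ownership, data_dir,
           execv=os.execv, check_output=subprocess.check_output):
    # chown must happen *before* exec: execv replaces the current process
    # and never returns, so any statement placed after it is unreachable.
    if ownership is not None:
        check_output(["chown", ownership, data_dir])
        args = ["su-exec", ownership] + args
        execv("/sbin/su-exec", args)
    else:
        execv("/usr/local/bin/python", args)
```

With fakes injected, the call order can be checked: the chown is recorded strictly before the exec.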
@@ -72,7 +72,7 @@ pid_file: DATADIR/homeserver.pid
 # For example, for room version 1, default_room_version should be set
 # to "1".
 #
-#default_room_version: "4"
+#default_room_version: "5"

 # The GC threshold parameters to pass to `gc.set_threshold`, if defined
 #
@@ -287,7 +287,7 @@ listeners:
 # Used by phonehome stats to group together related servers.
 #server_context: context

-# Resource-constrained Homeserver Settings
+# Resource-constrained homeserver Settings
 #
 # If limit_remote_rooms.enabled is True, the room complexity will be
 # checked before a user joins a new remote room. If it is above
@@ -328,6 +328,69 @@ listeners:
 #
 #user_ips_max_age: 14d

+# Message retention policy at the server level.
+#
+# Room admins and mods can define a retention period for their rooms using the
+# 'm.room.retention' state event, and server admins can cap this period by setting
+# the 'allowed_lifetime_min' and 'allowed_lifetime_max' config options.
+#
+# If this feature is enabled, Synapse will regularly look for and purge events
+# which are older than the room's maximum retention period. Synapse will also
+# filter events received over federation so that events that should have been
+# purged are ignored and not stored again.
+#
+retention:
+  # The message retention policies feature is disabled by default. Uncomment the
+  # following line to enable it.
+  #
+  #enabled: true
+
+  # Default retention policy. If set, Synapse will apply it to rooms that lack the
+  # 'm.room.retention' state event. Currently, the value of 'min_lifetime' doesn't
+  # matter much because Synapse doesn't take it into account yet.
+  #
+  #default_policy:
+  #  min_lifetime: 1d
+  #  max_lifetime: 1y
+
+  # Retention policy limits. If set, a user won't be able to send a
+  # 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
+  # that's not within this range. This is especially useful in closed federations,
+  # in which server admins can make sure every federating server applies the same
+  # rules.
+  #
+  #allowed_lifetime_min: 1d
+  #allowed_lifetime_max: 1y
+
+  # Server admins can define the settings of the background jobs purging the
+  # events whose lifetime has expired under the 'purge_jobs' section.
+  #
+  # If no configuration is provided, a single job will be set up to delete expired
+  # events in every room daily.
+  #
+  # Each job's configuration defines which range of message lifetimes the job
+  # takes care of. For example, if 'shortest_max_lifetime' is '2d' and
+  # 'longest_max_lifetime' is '3d', the job will handle purging expired events in
+  # rooms whose state defines a 'max_lifetime' that's both higher than 2 days, and
+  # lower than or equal to 3 days. Both the minimum and the maximum value of a
+  # range are optional, e.g. a job with no 'shortest_max_lifetime' and a
+  # 'longest_max_lifetime' of '3d' will handle every room with a retention policy
+  # whose 'max_lifetime' is lower than or equal to three days.
+  #
+  # The rationale for this per-job configuration is that some rooms might have a
+  # retention policy with a low 'max_lifetime', where history needs to be purged
+  # of outdated messages on a very frequent basis (e.g. every 5min), but not want
+  # that purge to be performed by a job that's iterating over every room it knows,
+  # which would be quite heavy on the server.
+  #
+  #purge_jobs:
+  #  - shortest_max_lifetime: 1d
+  #    longest_max_lifetime: 3d
+  #    interval: 5m
+  #  - shortest_max_lifetime: 3d
+  #    longest_max_lifetime: 1y
+  #    interval: 24h
+
+
 ## TLS ##
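The purge-job range semantics in the retention comments above (a job handles rooms whose 'max_lifetime' is strictly greater than its 'shortest_max_lifetime' and at most its 'longest_max_lifetime', with a missing bound leaving that side open) can be illustrated with a small helper. The names here are illustrative only, not Synapse's internal API:

```python
DAY_MS = 24 * 60 * 60 * 1000  # lifetimes expressed in milliseconds

def job_covers(job: dict, max_lifetime: int) -> bool:
    # Strictly-greater-than lower bound, inclusive upper bound; a missing
    # bound leaves that side of the range open, as described in the config.
    shortest = job.get("shortest_max_lifetime")
    longest = job.get("longest_max_lifetime")
    if shortest is not None and max_lifetime <= shortest:
        return False
    if longest is not None and max_lifetime > longest:
        return False
    return True

frequent = {"shortest_max_lifetime": 1 * DAY_MS, "longest_max_lifetime": 3 * DAY_MS}
daily = {"shortest_max_lifetime": 3 * DAY_MS, "longest_max_lifetime": 365 * DAY_MS}

print(job_covers(frequent, 2 * DAY_MS))  # True: 1d < 2d <= 3d
print(job_covers(daily, 2 * DAY_MS))     # False: handled by the frequent job
```

This matches the worked example in the comments: with bounds of 2d and 3d, a room with a 2-day lifetime would not be covered, while one with exactly 3 days would be.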
@@ -743,11 +806,11 @@ uploads_path: "DATADIR/uploads"
 ## Captcha ##
 # See docs/CAPTCHA_SETUP for full details of configuring this.

-# This Home Server's ReCAPTCHA public key.
+# This homeserver's ReCAPTCHA public key.
 #
 #recaptcha_public_key: "YOUR_PUBLIC_KEY"

-# This Home Server's ReCAPTCHA private key.
+# This homeserver's ReCAPTCHA private key.
 #
 #recaptcha_private_key: "YOUR_PRIVATE_KEY"
@@ -1270,8 +1333,23 @@ password_config:
 #   smtp_user: "exampleusername"
 #   smtp_pass: "examplepassword"
 #   require_transport_security: false
-#   notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
-#   app_name: Matrix
+#
+#   # notif_from defines the "From" address to use when sending emails.
+#   # It must be set if email sending is enabled.
+#   #
+#   # The placeholder '%(app)s' will be replaced by the application name,
+#   # which is normally 'app_name' (below), but may be overridden by the
+#   # Matrix client application.
+#   #
+#   # Note that the placeholder must be written '%(app)s', including the
+#   # trailing 's'.
+#   #
+#   notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
+#
+#   # app_name defines the default value for '%(app)s' in notif_from. It
+#   # defaults to 'Matrix'.
+#   #
+#   #app_name: my_branded_matrix_server
 #
 #   # Enable email notifications by default
 #   #

@@ -7,7 +7,6 @@ who are present in a publicly viewable room present on the server.

 The directory info is stored in various tables, which can (typically after
 DB corruption) get stale or out of sync. If this happens, for now the
-solution to fix it is to execute the SQL here
-https://github.com/matrix-org/synapse/blob/master/synapse/storage/schema/delta/53/user_dir_populate.sql
+solution to fix it is to execute the SQL [here](../synapse/storage/data_stores/main/schema/delta/53/user_dir_populate.sql)
 and then restart synapse. This should then start a background task to
 flush the current tables and regenerate the directory.

@@ -20,11 +20,13 @@ from concurrent.futures import ThreadPoolExecutor
 DISTS = (
     "debian:stretch",
     "debian:buster",
+    "debian:bullseye",
     "debian:sid",
     "ubuntu:xenial",
     "ubuntu:bionic",
     "ubuntu:cosmic",
     "ubuntu:disco",
+    "ubuntu:eoan",
 )

 DESC = '''\

@@ -20,3 +20,23 @@ parts:
     source: .
     plugin: python
    python-version: python3
+    python-packages:
+      - '.[all]'
+    build-packages:
+      - libffi-dev
+      - libturbojpeg0-dev
+      - libssl-dev
+      - libxslt1-dev
+      - libpq-dev
+      - zlib1g-dev
+    stage-packages:
+      - libasn1-8-heimdal
+      - libgssapi3-heimdal
+      - libhcrypto4-heimdal
+      - libheimbase1-heimdal
+      - libheimntlm0-heimdal
+      - libhx509-5-heimdal
+      - libkrb5-26-heimdal
+      - libldap-2.4-2
+      - libpq5
+      - libsasl2-2

@@ -36,7 +36,7 @@ try:
 except ImportError:
     pass

-__version__ = "1.5.0"
+__version__ = "1.6.1"

 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when

@@ -95,6 +95,8 @@ class EventTypes(object):
    ServerACL = "m.room.server_acl"
    Pinned = "m.room.pinned_events"

+    Retention = "m.room.retention"
+

 class RejectedReason(object):
     AUTH_ERROR = "auth_error"

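The new ``Retention`` constant names the ``m.room.retention`` state event, whose content carries the per-room lifetime bounds that the rest of this changeset parses and enforces. A sketch of what such an event might look like — the field names follow the retention proposal, and the concrete values are purely illustrative:

```python
# An m.room.retention state event (empty state_key) declares the room's
# retention policy; lifetimes are expressed in milliseconds.
DAY_MS = 24 * 60 * 60 * 1000

retention_event = {
    "type": "m.room.retention",
    "state_key": "",
    "content": {
        "min_lifetime": 1 * DAY_MS,
        "max_lifetime": 365 * DAY_MS,
    },
}

# A sane policy keeps min_lifetime below max_lifetime, mirroring the
# ConfigError checks added to ServerConfig later in this commit.
assert retention_event["content"]["min_lifetime"] <= retention_event["content"]["max_lifetime"]
```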
@@ -69,7 +69,7 @@ class FederationSenderSlaveStore(
        self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)

    def _get_federation_out_pos(self, db_conn):
-        sql = "SELECT stream_id FROM federation_stream_position" " WHERE type = ?"
+        sql = "SELECT stream_id FROM federation_stream_position WHERE type = ?"
        sql = self.database_engine.convert_param_style(sql)

        txn = db_conn.cursor()

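This change (and the several similar string joins below) relies on Python's implicit concatenation of adjacent string literals: the split form and the joined form produce byte-identical strings, so joining them is a pure readability cleanup, typically left behind by an automatic formatter. A quick sketch:

```python
# Python concatenates adjacent string literals at compile time, so these
# two spellings yield exactly the same SQL text; the joined form is simply
# easier to read and to grep for.
split_form = "SELECT stream_id FROM federation_stream_position" " WHERE type = ?"
joined_form = "SELECT stream_id FROM federation_stream_position WHERE type = ?"

assert split_form == joined_form
```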
@@ -585,7 +585,7 @@ def run(hs):
     def performance_stats_init():
         _stats_process.clear()
         _stats_process.append(
-            (int(hs.get_clock().time(), resource.getrusage(resource.RUSAGE_SELF)))
+            (int(hs.get_clock().time()), resource.getrusage(resource.RUSAGE_SELF))
         )

     def start_phone_stats_home():
@@ -636,7 +636,7 @@ def run(hs):

     if hs.config.report_stats:
         logger.info("Scheduling stats reporting for 3 hour intervals")
-        clock.looping_call(start_phone_stats_home, 3 * 60 * 60 * 1000, hs, stats)
+        clock.looping_call(start_phone_stats_home, 3 * 60 * 60 * 1000)

     # We need to defer this init for the cases that we daemonize
     # otherwise the process ID we get is that of the non-daemon process
@@ -644,7 +644,7 @@ def run(hs):

     # We wait 5 minutes to send the first set of stats as the server can
     # be quite busy the first few minutes
-    clock.call_later(5 * 60, start_phone_stats_home, hs, stats)
+    clock.call_later(5 * 60, start_phone_stats_home)

     _base.start_reactor(
         "synapse-homeserver",

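The first hunk above fixes a misplaced closing parenthesis: the old code passed the `getrusage` result as the *second* argument of `int()` — its `base` parameter — instead of building a `(timestamp, rusage)` tuple. A minimal standalone reproduction (using a stand-in object rather than the platform-specific `resource` module):

```python
import time

class FakeUsage:
    """Stand-in for the resource.getrusage(...) result."""

usage = FakeUsage()

# Old form: the closing parenthesis of int() is in the wrong place, so
# `usage` becomes int()'s `base` argument and the call raises TypeError.
raised = False
try:
    (int(time.time(), usage))
except TypeError:
    raised = True

# Fixed form: int() wraps only the timestamp, producing the intended
# two-element (timestamp, usage) tuple.
fixed = (int(time.time()), usage)

assert raised
assert isinstance(fixed, tuple) and fixed[1] is usage
```

The other two hunks drop stale extra arguments (`hs`, `stats`) that `start_phone_stats_home` evidently no longer accepts, so the scheduled calls match its current signature.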
@@ -185,7 +185,7 @@ class ApplicationServiceApi(SimpleHttpClient):

        if not _is_valid_3pe_metadata(info):
            logger.warning(
-                "query_3pe_protocol to %s did not return a" " valid result", uri
+                "query_3pe_protocol to %s did not return a valid result", uri
            )
            return None

@@ -134,7 +134,7 @@ def _load_appservice(hostname, as_info, config_filename):
     for regex_obj in as_info["namespaces"][ns]:
         if not isinstance(regex_obj, dict):
             raise ValueError(
-                "Expected namespace entry in %s to be an object," " but got %s",
+                "Expected namespace entry in %s to be an object, but got %s",
                 ns,
                 regex_obj,
             )

@@ -35,11 +35,11 @@ class CaptchaConfig(Config):
 ## Captcha ##
 # See docs/CAPTCHA_SETUP for full details of configuring this.

-# This Home Server's ReCAPTCHA public key.
+# This homeserver's ReCAPTCHA public key.
 #
 #recaptcha_public_key: "YOUR_PUBLIC_KEY"

-# This Home Server's ReCAPTCHA private key.
+# This homeserver's ReCAPTCHA private key.
 #
 #recaptcha_private_key: "YOUR_PRIVATE_KEY"

@@ -146,6 +146,8 @@ class EmailConfig(Config):
            if k not in email_config:
                missing.append("email." + k)

+        # public_baseurl is required to build password reset and validation links that
+        # will be emailed to users
        if config.get("public_baseurl") is None:
            missing.append("public_baseurl")

@@ -305,8 +307,23 @@ class EmailConfig(Config):
 #   smtp_user: "exampleusername"
 #   smtp_pass: "examplepassword"
 #   require_transport_security: false
-#   notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
-#   app_name: Matrix
+#
+#   # notif_from defines the "From" address to use when sending emails.
+#   # It must be set if email sending is enabled.
+#   #
+#   # The placeholder '%(app)s' will be replaced by the application name,
+#   # which is normally 'app_name' (below), but may be overridden by the
+#   # Matrix client application.
+#   #
+#   # Note that the placeholder must be written '%(app)s', including the
+#   # trailing 's'.
+#   #
+#   notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
+#
+#   # app_name defines the default value for '%(app)s' in notif_from. It
+#   # defaults to 'Matrix'.
+#   #
+#   #app_name: my_branded_matrix_server
 #
 #   # Enable email notifications by default
 #   #

@@ -106,6 +106,13 @@ class RegistrationConfig(Config):
        account_threepid_delegates = config.get("account_threepid_delegates") or {}
        self.account_threepid_delegate_email = account_threepid_delegates.get("email")
        self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn")
+        if self.account_threepid_delegate_msisdn and not self.public_baseurl:
+            raise ConfigError(
+                "The configuration option `public_baseurl` is required if "
+                "`account_threepid_delegate.msisdn` is set, such that "
+                "clients know where to submit validation tokens to. Please "
+                "configure `public_baseurl`."
+            )

        self.default_identity_server = config.get("default_identity_server")
        self.allow_guest_access = config.get("allow_guest_access", False)

@@ -170,7 +170,7 @@ class _RoomDirectoryRule(object):
            self.action = action
        else:
            raise ConfigError(
-                "%s rules can only have action of 'allow'" " or 'deny'" % (option_name,)
+                "%s rules can only have action of 'allow' or 'deny'" % (option_name,)
            )

        self._alias_matches_all = alias == "*"

@@ -19,7 +19,7 @@ import logging
 import os.path
 import re
 from textwrap import indent
-from typing import List
+from typing import Dict, List, Optional

 import attr
 import yaml
@@ -41,7 +41,7 @@ logger = logging.Logger(__name__)
 # in the list.
 DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]

-DEFAULT_ROOM_VERSION = "4"
+DEFAULT_ROOM_VERSION = "5"

 ROOM_COMPLEXITY_TOO_GREAT = (
     "Your homeserver is unable to join rooms this large or complex. "
@@ -223,7 +223,7 @@ class ServerConfig(Config):
            self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
        except Exception as e:
            raise ConfigError(
-                "Invalid range(s) provided in " "federation_ip_range_blacklist: %s" % e
+                "Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
            )

        if self.public_baseurl is not None:

@@ -246,6 +246,124 @@ class ServerConfig(Config):
        # events with profile information that differ from the target's global profile.
        self.allow_per_room_profiles = config.get("allow_per_room_profiles", True)

+        retention_config = config.get("retention")
+        if retention_config is None:
+            retention_config = {}
+
+        self.retention_enabled = retention_config.get("enabled", False)
+
+        retention_default_policy = retention_config.get("default_policy")
+
+        if retention_default_policy is not None:
+            self.retention_default_min_lifetime = retention_default_policy.get(
+                "min_lifetime"
+            )
+            if self.retention_default_min_lifetime is not None:
+                self.retention_default_min_lifetime = self.parse_duration(
+                    self.retention_default_min_lifetime
+                )
+
+            self.retention_default_max_lifetime = retention_default_policy.get(
+                "max_lifetime"
+            )
+            if self.retention_default_max_lifetime is not None:
+                self.retention_default_max_lifetime = self.parse_duration(
+                    self.retention_default_max_lifetime
+                )
+
+            if (
+                self.retention_default_min_lifetime is not None
+                and self.retention_default_max_lifetime is not None
+                and (
+                    self.retention_default_min_lifetime
+                    > self.retention_default_max_lifetime
+                )
+            ):
+                raise ConfigError(
+                    "The default retention policy's 'min_lifetime' can not be greater"
+                    " than its 'max_lifetime'"
+                )
+        else:
+            self.retention_default_min_lifetime = None
+            self.retention_default_max_lifetime = None
+
+        self.retention_allowed_lifetime_min = retention_config.get(
+            "allowed_lifetime_min"
+        )
+        if self.retention_allowed_lifetime_min is not None:
+            self.retention_allowed_lifetime_min = self.parse_duration(
+                self.retention_allowed_lifetime_min
+            )
+
+        self.retention_allowed_lifetime_max = retention_config.get(
+            "allowed_lifetime_max"
+        )
+        if self.retention_allowed_lifetime_max is not None:
+            self.retention_allowed_lifetime_max = self.parse_duration(
+                self.retention_allowed_lifetime_max
+            )
+
+        if (
+            self.retention_allowed_lifetime_min is not None
+            and self.retention_allowed_lifetime_max is not None
+            and self.retention_allowed_lifetime_min
+            > self.retention_allowed_lifetime_max
+        ):
+            raise ConfigError(
+                "Invalid retention policy limits: 'allowed_lifetime_min' can not be"
+                " greater than 'allowed_lifetime_max'"
+            )
+
+        self.retention_purge_jobs = []  # type: List[Dict[str, Optional[int]]]
+        for purge_job_config in retention_config.get("purge_jobs", []):
+            interval_config = purge_job_config.get("interval")
+
+            if interval_config is None:
+                raise ConfigError(
+                    "A retention policy's purge jobs configuration must have the"
+                    " 'interval' key set."
+                )
+
+            interval = self.parse_duration(interval_config)
+
+            shortest_max_lifetime = purge_job_config.get("shortest_max_lifetime")
+
+            if shortest_max_lifetime is not None:
+                shortest_max_lifetime = self.parse_duration(shortest_max_lifetime)
+
+            longest_max_lifetime = purge_job_config.get("longest_max_lifetime")
+
+            if longest_max_lifetime is not None:
+                longest_max_lifetime = self.parse_duration(longest_max_lifetime)
+
+            if (
+                shortest_max_lifetime is not None
+                and longest_max_lifetime is not None
+                and shortest_max_lifetime > longest_max_lifetime
+            ):
+                raise ConfigError(
+                    "A retention policy's purge jobs configuration's"
+                    " 'shortest_max_lifetime' value can not be greater than its"
+                    " 'longest_max_lifetime' value."
+                )
+
+            self.retention_purge_jobs.append(
+                {
+                    "interval": interval,
+                    "shortest_max_lifetime": shortest_max_lifetime,
+                    "longest_max_lifetime": longest_max_lifetime,
+                }
+            )
+
+        if not self.retention_purge_jobs:
+            self.retention_purge_jobs = [
+                {
+                    "interval": self.parse_duration("1d"),
+                    "shortest_max_lifetime": None,
+                    "longest_max_lifetime": None,
+                }
+            ]
+
        self.listeners = []  # type: List[dict]
        for listener in config.get("listeners", []):
            if not isinstance(listener.get("port", None), int):

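The validation above leans on `Config.parse_duration`, which normalises values like '5m' or '1y' into milliseconds before any comparison. A simplified, standalone sketch of that behaviour (an illustrative reimplementation, not Synapse's actual code):

```python
SECOND_MS = 1000
UNITS = {
    "s": SECOND_MS,
    "m": 60 * SECOND_MS,
    "h": 60 * 60 * SECOND_MS,
    "d": 24 * 60 * 60 * SECOND_MS,
    "w": 7 * 24 * 60 * 60 * SECOND_MS,
    "y": 365 * 24 * 60 * 60 * SECOND_MS,
}

def parse_duration(value):
    """Interpret a bare int as milliseconds, or a string like '5m'/'1d'
    as a count of the given unit, returning milliseconds."""
    if isinstance(value, int):
        return value
    suffix = value[-1]
    if suffix in UNITS:
        return int(value[:-1]) * UNITS[suffix]
    return int(value)

assert parse_duration("5m") == 5 * 60 * 1000
assert parse_duration("1d") == 24 * 60 * 60 * 1000
```

Normalising everything to one unit up front is what makes the simple `min > max` comparisons in the ConfigError checks above safe, regardless of how the operator mixed units in the YAML.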
@@ -721,7 +839,7 @@ class ServerConfig(Config):
 # Used by phonehome stats to group together related servers.
 #server_context: context

-# Resource-constrained Homeserver Settings
+# Resource-constrained homeserver Settings
 #
 # If limit_remote_rooms.enabled is True, the room complexity will be
 # checked before a user joins a new remote room. If it is above
@@ -761,6 +879,69 @@ class ServerConfig(Config):
 # Defaults to `28d`. Set to `null` to disable clearing out of old rows.
 #
 #user_ips_max_age: 14d
+
+# Message retention policy at the server level.
+#
+# Room admins and mods can define a retention period for their rooms using the
+# 'm.room.retention' state event, and server admins can cap this period by setting
+# the 'allowed_lifetime_min' and 'allowed_lifetime_max' config options.
+#
+# If this feature is enabled, Synapse will regularly look for and purge events
+# which are older than the room's maximum retention period. Synapse will also
+# filter events received over federation so that events that should have been
+# purged are ignored and not stored again.
+#
+retention:
+  # The message retention policies feature is disabled by default. Uncomment the
+  # following line to enable it.
+  #
+  #enabled: true
+
+  # Default retention policy. If set, Synapse will apply it to rooms that lack the
+  # 'm.room.retention' state event. Currently, the value of 'min_lifetime' doesn't
+  # matter much because Synapse doesn't take it into account yet.
+  #
+  #default_policy:
+  #  min_lifetime: 1d
+  #  max_lifetime: 1y
+
+  # Retention policy limits. If set, a user won't be able to send a
+  # 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
+  # that's not within this range. This is especially useful in closed federations,
+  # in which server admins can make sure every federating server applies the same
+  # rules.
+  #
+  #allowed_lifetime_min: 1d
+  #allowed_lifetime_max: 1y
+
+  # Server admins can define the settings of the background jobs purging the
+  # events whose lifetime has expired under the 'purge_jobs' section.
+  #
+  # If no configuration is provided, a single job will be set up to delete expired
+  # events in every room daily.
+  #
+  # Each job's configuration defines which range of message lifetimes the job
+  # takes care of. For example, if 'shortest_max_lifetime' is '2d' and
+  # 'longest_max_lifetime' is '3d', the job will handle purging expired events in
+  # rooms whose state defines a 'max_lifetime' that's both higher than 2 days, and
+  # lower than or equal to 3 days. Both the minimum and the maximum value of a
+  # range are optional, e.g. a job with no 'shortest_max_lifetime' and a
+  # 'longest_max_lifetime' of '3d' will handle every room with a retention policy
+  # whose 'max_lifetime' is lower than or equal to three days.
+  #
+  # The rationale for this per-job configuration is that some rooms might have a
+  # retention policy with a low 'max_lifetime', where history needs to be purged
+  # of outdated messages on a very frequent basis (e.g. every 5min), but we don't
+  # want that purge to be performed by a job that's iterating over every room it
+  # knows, which would be quite heavy on the server.
+  #
+  #purge_jobs:
+  #  - shortest_max_lifetime: 1d
+  #    longest_max_lifetime: 3d
+  #    interval: 5m
+  #  - shortest_max_lifetime: 3d
+  #    longest_max_lifetime: 1y
+  #    interval: 24h
 """
 % locals()
 )

@@ -787,14 +968,14 @@ class ServerConfig(Config):
            "--print-pidfile",
            action="store_true",
            default=None,
-            help="Print the path to the pidfile just" " before daemonizing",
+            help="Print the path to the pidfile just before daemonizing",
        )
        server_group.add_argument(
            "--manhole",
            metavar="PORT",
            dest="manhole",
            type=int,
-            help="Turn on the twisted telnet manhole" " service on the given port.",
+            help="Turn on the twisted telnet manhole service on the given port.",
        )

@ -12,6 +12,8 @@
|
||||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
# See the License for the specific language governing permissions and
|
# See the License for the specific language governing permissions and
|
||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
|
from typing import Dict, Optional, Tuple, Union
|
||||||
|
|
||||||
from six import iteritems
|
from six import iteritems
|
||||||
|
|
||||||
import attr
|
import attr
|
||||||
|
@ -19,54 +21,113 @@ from frozendict import frozendict
|
||||||
|
|
||||||
from twisted.internet import defer
|
from twisted.internet import defer
|
||||||
|
|
||||||
|
from synapse.appservice import ApplicationService
|
||||||
from synapse.logging.context import make_deferred_yieldable, run_in_background
|
from synapse.logging.context import make_deferred_yieldable, run_in_background
|
||||||
|
|
||||||
|
|
||||||
@attr.s(slots=True)
|
@attr.s(slots=True)
|
||||||
class EventContext:
|
class EventContext:
|
||||||
"""
|
"""
|
||||||
|
Holds information relevant to persisting an event
|
||||||
|
|
||||||
Attributes:
|
Attributes:
|
||||||
state_group (int|None): state group id, if the state has been stored
|
rejected: A rejection reason if the event was rejected, else False
|
||||||
as a state group. This is usually only None if e.g. the event is
|
|
||||||
an outlier.
|
|
||||||
rejected (bool|str): A rejection reason if the event was rejected, else
|
|
||||||
False
|
|
||||||
|
|
||||||
prev_group (int): Previously persisted state group. ``None`` for an
|
_state_group: The ID of the state group for this event. Note that state events
|
||||||
outlier.
|
are persisted with a state group which includes the new event, so this is
|
||||||
delta_ids (dict[(str, str), str]): Delta from ``prev_group``.
|
effectively the state *after* the event in question.
|
||||||
(type, state_key) -> event_id. ``None`` for an outlier.
|
|
||||||
|
|
||||||
app_service: FIXME
|
For a *rejected* state event, where the state of the rejected event is
|
||||||
|
ignored, this state_group should never make it into the
|
||||||
|
event_to_state_groups table. Indeed, inspecting this value for a rejected
|
||||||
|
state event is almost certainly incorrect.
|
||||||
|
|
||||||
|
For an outlier, where we don't have the state at the event, this will be
|
||||||
|
None.
|
||||||
|
|
||||||
|
Note that this is a private attribute: it should be accessed via
|
||||||
|
the ``state_group`` property.
|
||||||
|
|
||||||
|
state_group_before_event: The ID of the state group representing the state
|
||||||
|
of the room before this event.
|
||||||
|
|
||||||
|
If this is a non-state event, this will be the same as ``state_group``. If
|
||||||
|
it's a state event, it will be the same as ``prev_group``.
|
||||||
|
|
||||||
|
If ``state_group`` is None (ie, the event is an outlier),
|
||||||
|
``state_group_before_event`` will always also be ``None``.
|
||||||
|
|
||||||
|
prev_group: If it is known, ``state_group``'s prev_group. Note that this being
|
||||||
|
None does not necessarily mean that ``state_group`` does not have
|
||||||
|
a prev_group!
|
||||||
|
|
||||||
|
If the event is a state event, this is normally the same as ``prev_group``.
|
||||||
|
|
||||||
|
If ``state_group`` is None (ie, the event is an outlier), ``prev_group``
|
||||||
|
will always also be ``None``.
|
||||||
|
|
||||||
|
Note that this *not* (necessarily) the state group associated with
|
||||||
|
``_prev_state_ids``.
|
||||||
|
|
||||||
|
delta_ids: If ``prev_group`` is not None, the state delta between ``prev_group``
|
||||||
|
and ``state_group``.
|
||||||
|
|
||||||
|
app_service: If this event is being sent by a (local) application service, that
|
||||||
|
app service.
|
||||||
|
|
||||||
|
_current_state_ids: The room state map, including this event - ie, the state
|
||||||
|
in ``state_group``.
|
||||||
|
|
||||||
_current_state_ids (dict[(str, str), str]|None):
|
|
||||||
The current state map including the current event. None if outlier
|
|
||||||
or we haven't fetched the state from DB yet.
|
|
||||||
(type, state_key) -> event_id
|
(type, state_key) -> event_id
|
||||||
|
|
||||||
_prev_state_ids (dict[(str, str), str]|None):
|
FIXME: what is this for an outlier? it seems ill-defined. It seems like
|
||||||
The current state map excluding the current event. None if outlier
|
it could be either {}, or the state we were given by the remote
|
||||||
or we haven't fetched the state from DB yet.
|
server, depending on $THINGS
|
||||||
|
|
||||||
|
Note that this is a private attribute: it should be accessed via
|
||||||
|
``get_current_state_ids``. _AsyncEventContext impl calculates this
|
||||||
|
on-demand: it will be None until that happens.
|
||||||
|
|
||||||
|
_prev_state_ids: The room state map, excluding this event - ie, the state
|
||||||
|
in ``state_group_before_event``. For a non-state
|
||||||
|
event, this will be the same as _current_state_events.
|
||||||
|
|
||||||
|
Note that it is a completely different thing to prev_group!
|
||||||
|
|
||||||
(type, state_key) -> event_id
|
(type, state_key) -> event_id
|
||||||
|
|
||||||
|
FIXME: again, what is this for an outlier?
|
||||||
|
|
||||||
|
As with _current_state_ids, this is a private attribute. It should be
|
||||||
|
accessed via get_prev_state_ids.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
state_group = attr.ib(default=None)
|
rejected = attr.ib(default=False, type=Union[bool, str])
|
||||||
rejected = attr.ib(default=False)
|
_state_group = attr.ib(default=None, type=Optional[int])
|
||||||
prev_group = attr.ib(default=None)
|
state_group_before_event = attr.ib(default=None, type=Optional[int])
|
||||||
delta_ids = attr.ib(default=None)
|
prev_group = attr.ib(default=None, type=Optional[int])
|
||||||
app_service = attr.ib(default=None)
|
delta_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])
|
||||||
|
app_service = attr.ib(default=None, type=Optional[ApplicationService])
|
||||||
|
|
||||||
_prev_state_ids = attr.ib(default=None)
|
_current_state_ids = attr.ib(
|
||||||
_current_state_ids = attr.ib(default=None)
|
default=None, type=Optional[Dict[Tuple[str, str], str]]
|
||||||
|
)
|
||||||
|
_prev_state_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def with_state(
|
def with_state(
|
||||||
state_group, current_state_ids, prev_state_ids, prev_group=None, delta_ids=None
|
state_group,
|
||||||
|
state_group_before_event,
|
||||||
|
current_state_ids,
|
||||||
|
prev_state_ids,
|
||||||
|
prev_group=None,
|
||||||
|
delta_ids=None,
|
||||||
):
|
):
|
||||||
return EventContext(
|
return EventContext(
|
||||||
current_state_ids=current_state_ids,
|
current_state_ids=current_state_ids,
|
||||||
prev_state_ids=prev_state_ids,
|
prev_state_ids=prev_state_ids,
|
||||||
state_group=state_group,
|
state_group=state_group,
|
||||||
|
state_group_before_event=state_group_before_event,
|
||||||
prev_group=prev_group,
|
prev_group=prev_group,
|
||||||
delta_ids=delta_ids,
|
delta_ids=delta_ids,
|
||||||
)
|
)
|
||||||
@@ -97,7 +158,8 @@ class EventContext:
             "prev_state_id": prev_state_id,
             "event_type": event.type,
             "event_state_key": event.state_key if event.is_state() else None,
-            "state_group": self.state_group,
+            "state_group": self._state_group,
+            "state_group_before_event": self.state_group_before_event,
             "rejected": self.rejected,
             "prev_group": self.prev_group,
             "delta_ids": _encode_state_dict(self.delta_ids),
@@ -123,6 +185,7 @@ class EventContext:
             event_type=input["event_type"],
             event_state_key=input["event_state_key"],
             state_group=input["state_group"],
+            state_group_before_event=input["state_group_before_event"],
             prev_group=input["prev_group"],
             delta_ids=_decode_state_dict(input["delta_ids"]),
             rejected=input["rejected"],
@@ -134,22 +197,52 @@ class EventContext:
 
         return context
 
+    @property
+    def state_group(self) -> Optional[int]:
+        """The ID of the state group for this event.
+
+        Note that state events are persisted with a state group which includes the new
+        event, so this is effectively the state *after* the event in question.
+
+        For an outlier, where we don't have the state at the event, this will be None.
+
+        It is an error to access this for a rejected event, since rejected state should
+        not make it into the room state. Accessing this property will raise an exception
+        if ``rejected`` is set.
+        """
+        if self.rejected:
+            raise RuntimeError("Attempt to access state_group of rejected event")
+
+        return self._state_group
+
     @defer.inlineCallbacks
     def get_current_state_ids(self, store):
-        """Gets the current state IDs
+        """
+        Gets the room state map, including this event - ie, the state in ``state_group``
+
+        It is an error to access this for a rejected event, since rejected state should
+        not make it into the room state. This method will raise an exception if
+        ``rejected`` is set.
+
         Returns:
             Deferred[dict[(str, str), str]|None]: Returns None if state_group
                 is None, which happens when the associated event is an outlier.
+
                 Maps a (type, state_key) to the event ID of the state event matching
                 this tuple.
         """
+        if self.rejected:
+            raise RuntimeError("Attempt to access state_ids of rejected event")
+
         yield self._ensure_fetched(store)
         return self._current_state_ids
 
     @defer.inlineCallbacks
     def get_prev_state_ids(self, store):
-        """Gets the prev state IDs
+        """
+        Gets the room state map, excluding this event.
+
+        For a non-state event, this will be the same as get_current_state_ids().
+
         Returns:
             Deferred[dict[(str, str), str]|None]: Returns None if state_group
@@ -163,11 +256,17 @@ class EventContext:
     def get_cached_current_state_ids(self):
         """Gets the current state IDs if we have them already cached.
 
+        It is an error to access this for a rejected event, since rejected state should
+        not make it into the room state. This method will raise an exception if
+        ``rejected`` is set.
+
         Returns:
             dict[(str, str), str]|None: Returns None if we haven't cached the
                 state or if state_group is None, which happens when the associated
                 event is an outlier.
         """
+        if self.rejected:
+            raise RuntimeError("Attempt to access state_ids of rejected event")
+
         return self._current_state_ids
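The new `state_group` property above turns a silent misuse (reading state for a rejected event) into a loud failure. A minimal sketch of that guard pattern, using a hypothetical `GuardedContext` class rather than the real `EventContext`:

```python
class GuardedContext(object):
    """Illustrative version of the rejected-event guard from the diff above."""

    def __init__(self, state_group, rejected=False):
        # The underscore-prefixed attribute holds the raw value; reads go
        # through the property so the rejection check always runs.
        self._state_group = state_group
        self.rejected = rejected

    @property
    def state_group(self):
        # Rejected events must not contribute to room state, so accessing
        # their state group is a programming error.
        if self.rejected:
            raise RuntimeError("Attempt to access state_group of rejected event")
        return self._state_group
```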
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from six import string_types
+from six import integer_types, string_types
 
 from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes, Membership
 from synapse.api.errors import Codes, SynapseError
@@ -22,11 +22,12 @@ from synapse.types import EventID, RoomID, UserID
 
 
 class EventValidator(object):
-    def validate_new(self, event):
+    def validate_new(self, event, config):
         """Validates the event has roughly the right format
 
         Args:
-            event (FrozenEvent)
+            event (FrozenEvent): The event to validate.
+            config (Config): The homeserver's configuration.
         """
         self.validate_builder(event)
 
@@ -67,6 +68,99 @@ class EventValidator(object):
                     Codes.INVALID_PARAM,
                 )
 
+        if event.type == EventTypes.Retention:
+            self._validate_retention(event, config)
+
+    def _validate_retention(self, event, config):
+        """Checks that an event that defines the retention policy for a room respects the
+        boundaries imposed by the server's administrator.
+
+        Args:
+            event (FrozenEvent): The event to validate.
+            config (Config): The homeserver's configuration.
+        """
+        min_lifetime = event.content.get("min_lifetime")
+        max_lifetime = event.content.get("max_lifetime")
+
+        if min_lifetime is not None:
+            if not isinstance(min_lifetime, integer_types):
+                raise SynapseError(
+                    code=400,
+                    msg="'min_lifetime' must be an integer",
+                    errcode=Codes.BAD_JSON,
+                )
+
+            if (
+                config.retention_allowed_lifetime_min is not None
+                and min_lifetime < config.retention_allowed_lifetime_min
+            ):
+                raise SynapseError(
+                    code=400,
+                    msg=(
+                        "'min_lifetime' can't be lower than the minimum allowed"
+                        " value enforced by the server's administrator"
+                    ),
+                    errcode=Codes.BAD_JSON,
+                )
+
+            if (
+                config.retention_allowed_lifetime_max is not None
+                and min_lifetime > config.retention_allowed_lifetime_max
+            ):
+                raise SynapseError(
+                    code=400,
+                    msg=(
+                        "'min_lifetime' can't be greater than the maximum allowed"
+                        " value enforced by the server's administrator"
+                    ),
+                    errcode=Codes.BAD_JSON,
+                )
+
+        if max_lifetime is not None:
+            if not isinstance(max_lifetime, integer_types):
+                raise SynapseError(
+                    code=400,
+                    msg="'max_lifetime' must be an integer",
+                    errcode=Codes.BAD_JSON,
+                )
+
+            if (
+                config.retention_allowed_lifetime_min is not None
+                and max_lifetime < config.retention_allowed_lifetime_min
+            ):
+                raise SynapseError(
+                    code=400,
+                    msg=(
+                        "'max_lifetime' can't be lower than the minimum allowed value"
+                        " enforced by the server's administrator"
+                    ),
+                    errcode=Codes.BAD_JSON,
+                )
+
+            if (
+                config.retention_allowed_lifetime_max is not None
+                and max_lifetime > config.retention_allowed_lifetime_max
+            ):
+                raise SynapseError(
+                    code=400,
+                    msg=(
+                        "'max_lifetime' can't be greater than the maximum allowed"
+                        " value enforced by the server's administrator"
+                    ),
+                    errcode=Codes.BAD_JSON,
+                )
+
+        if (
+            min_lifetime is not None
+            and max_lifetime is not None
+            and min_lifetime > max_lifetime
+        ):
+            raise SynapseError(
+                code=400,
+                msg="'min_lifetime' can't be greater than 'max_lifetime'",
+                errcode=Codes.BAD_JSON,
+            )
+
     def validate_builder(self, event):
         """Validates that the builder/event has roughly the right format. Only
         checks values that we expect a proto event to have, rather than all the
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2018 New Vector Ltd
+# Copyright 2019 Matrix.org Federation C.I.C
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -73,6 +74,7 @@ class FederationServer(FederationBase):
 
         self.auth = hs.get_auth()
         self.handler = hs.get_handlers().federation_handler
+        self.state = hs.get_state_handler()
 
         self._server_linearizer = Linearizer("fed_server")
         self._transaction_linearizer = Linearizer("fed_txn_handler")
@@ -264,9 +266,6 @@ class FederationServer(FederationBase):
         await self.registry.on_edu(edu_type, origin, content)
 
     async def on_context_state_request(self, origin, room_id, event_id):
-        if not event_id:
-            raise NotImplementedError("Specify an event")
-
         origin_host, _ = parse_server_name(origin)
         await self.check_server_matches_acl(origin_host, room_id)
 
@@ -280,12 +279,17 @@ class FederationServer(FederationBase):
         # - but that's non-trivial to get right, and anyway somewhat defeats
         # the point of the linearizer.
         with (await self._server_linearizer.queue((origin, room_id))):
-            resp = await self._state_resp_cache.wrap(
-                (room_id, event_id),
-                self._on_context_state_request_compute,
-                room_id,
-                event_id,
+            resp = dict(
+                await self._state_resp_cache.wrap(
+                    (room_id, event_id),
+                    self._on_context_state_request_compute,
+                    room_id,
+                    event_id,
+                )
             )
 
+        room_version = await self.store.get_room_version(room_id)
+        resp["room_version"] = room_version
+
         return 200, resp
 
@@ -306,7 +310,11 @@ class FederationServer(FederationBase):
         return 200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
 
     async def _on_context_state_request_compute(self, room_id, event_id):
-        pdus = await self.handler.get_state_for_pdu(room_id, event_id)
+        if event_id:
+            pdus = await self.handler.get_state_for_pdu(room_id, event_id)
+        else:
+            pdus = (await self.state.get_current_state(room_id)).values()
 
         auth_chain = await self.store.get_auth_chain([pdu.event_id for pdu in pdus])
 
         return {
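The `resp = dict(...)` wrapper in the hunk above matters because the response comes from a shared cache: the handler then adds a per-request `room_version` key, and without the copy that mutation would leak into the cached value served to other requests. A small synchronous sketch of the pattern (names here are illustrative, not the real `ResponseCache` API):

```python
_cache = {}


def cached_response(key, compute):
    """Return a cached computation, copying it so callers can safely add
    per-request fields without mutating the shared cache entry."""
    if key not in _cache:
        _cache[key] = compute()
    # dict(...) makes a shallow copy; the cached dict itself stays pristine.
    return dict(_cache[key])


resp = cached_response(("!room:hs", "$event"), lambda: {"pdus": [], "auth_chain": []})
resp["room_version"] = "5"  # mutates only this caller's copy
```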
@@ -44,7 +44,7 @@ class TransactionActions(object):
             response code and response body.
         """
         if not transaction.transaction_id:
-            raise RuntimeError("Cannot persist a transaction with no " "transaction_id")
+            raise RuntimeError("Cannot persist a transaction with no transaction_id")
 
         return self.store.get_received_txn_response(transaction.transaction_id, origin)
 
@@ -56,7 +56,7 @@ class TransactionActions(object):
             Deferred
         """
         if not transaction.transaction_id:
-            raise RuntimeError("Cannot persist a transaction with no " "transaction_id")
+            raise RuntimeError("Cannot persist a transaction with no transaction_id")
 
         return self.store.set_received_txn_response(
             transaction.transaction_id, origin, code, response
@@ -49,7 +49,7 @@ sent_pdus_destination_dist_count = Counter(
 
 sent_pdus_destination_dist_total = Counter(
     "synapse_federation_client_sent_pdu_destinations:total",
-    "" "Total number of PDUs queued for sending across all destinations",
+    "Total number of PDUs queued for sending across all destinations",
 )
 
@@ -84,7 +84,7 @@ class TransactionManager(object):
         txn_id = str(self._next_txn_id)
 
         logger.debug(
-            "TX [%s] {%s} Attempting new transaction" " (pdus: %d, edus: %d)",
+            "TX [%s] {%s} Attempting new transaction (pdus: %d, edus: %d)",
            destination,
            txn_id,
            len(pdus),
@@ -103,7 +103,7 @@ class TransactionManager(object):
         self._next_txn_id += 1
 
         logger.info(
-            "TX [%s] {%s} Sending transaction [%s]," " (PDUs: %d, EDUs: %d)",
+            "TX [%s] {%s} Sending transaction [%s], (PDUs: %d, EDUs: %d)",
             destination,
             txn_id,
             transaction.transaction_id,
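The hunks above all clean up the same Python quirk: adjacent string literals are concatenated at compile time, so `"no " "transaction_id"` is one string, typically a leftover of mechanical line-splitting by a formatter. The joins are easy to get wrong (a missing space, or a stray empty literal like `"" "Total..."`), which is why the diff collapses them:

```python
# Adjacent literals concatenate at compile time; the parts below join into a
# single string exactly as the pre-cleanup code did.
joined = "Cannot persist a transaction with no " "transaction_id"

# An empty leading literal is a no-op, as in the Counter description above.
described = "" "Total number of PDUs queued for sending across all destinations"
```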
@@ -421,7 +421,7 @@ class FederationEventServlet(BaseFederationServlet):
         return await self.handler.on_pdu_request(origin, event_id)
 
 
-class FederationStateServlet(BaseFederationServlet):
+class FederationStateV1Servlet(BaseFederationServlet):
     PATH = "/state/(?P<context>[^/]*)/?"
 
     # This is when someone asks for all data for a given context.
@@ -429,7 +429,7 @@ class FederationStateServlet(BaseFederationServlet):
         return await self.handler.on_context_state_request(
             origin,
             context,
-            parse_string_from_args(query, "event_id", None, required=True),
+            parse_string_from_args(query, "event_id", None, required=False),
         )
 
 
@@ -1360,7 +1360,7 @@ class RoomComplexityServlet(BaseFederationServlet):
 FEDERATION_SERVLET_CLASSES = (
     FederationSendServlet,
     FederationEventServlet,
    FederationStateServlet,
-    FederationStateServlet,
+    FederationStateV1Servlet,
     FederationStateIdsServlet,
     FederationBackfillServlet,
     FederationQueryServlet,
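Flipping `required=True` to `required=False` above is what lets a `/state/` request omit `event_id` entirely (the server then falls back to current room state, per the federation_server change). A sketch of that optional-parameter behaviour with a hypothetical helper (`get_query_param` merely imitates how `parse_string_from_args` is used here; it is not the real function):

```python
from urllib.parse import parse_qs


def get_query_param(query, name, default=None, required=False):
    """With required=False, a missing parameter yields the default instead of
    an error - the behaviour the servlet above now relies on."""
    values = query.get(name)
    if not values:
        if required:
            raise ValueError("Missing required query parameter: %s" % name)
        return default
    return values[0]


empty = parse_qs("")                      # request with no ?event_id=...
event_id = get_query_param(empty, "event_id", required=False)  # -> None
```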
@@ -102,8 +102,9 @@ class AuthHandler(BaseHandler):
                 login_types.append(t)
         self._supported_login_types = login_types
 
-        self._account_ratelimiter = Ratelimiter()
-        self._failed_attempts_ratelimiter = Ratelimiter()
+        # Ratelimiter for failed auth during UIA. Uses same ratelimit config
+        # as per `rc_login.failed_attempts`.
+        self._failed_uia_attempts_ratelimiter = Ratelimiter()
 
         self._clock = self.hs.get_clock()
 
@@ -133,12 +134,38 @@ class AuthHandler(BaseHandler):
 
             AuthError if the client has completed a login flow, and it gives
                 a different user to `requester`
+
+            LimitExceededError if the ratelimiter's failed request count for this
+                user is too high to proceed
+
         """
+
+        user_id = requester.user.to_string()
+
+        # Check if we should be ratelimited due to too many previous failed attempts
+        self._failed_uia_attempts_ratelimiter.ratelimit(
+            user_id,
+            time_now_s=self._clock.time(),
+            rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
+            burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
+            update=False,
+        )
+
         # build a list of supported flows
         flows = [[login_type] for login_type in self._supported_login_types]
 
-        result, params, _ = yield self.check_auth(flows, request_body, clientip)
+        try:
+            result, params, _ = yield self.check_auth(flows, request_body, clientip)
+        except LoginError:
+            # Update the ratelimiter to say we failed (`can_do_action` doesn't raise).
+            self._failed_uia_attempts_ratelimiter.can_do_action(
+                user_id,
+                time_now_s=self._clock.time(),
+                rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
+                burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
+                update=True,
+            )
+            raise
 
         # find the completed login type
         for login_type in self._supported_login_types:
@@ -501,11 +528,8 @@ class AuthHandler(BaseHandler):
                 multiple matches
 
         Raises:
-            LimitExceededError if the ratelimiter's login requests count for this
-                user is too high too proceed.
             UserDeactivatedError if a user is found but is deactivated.
         """
-        self.ratelimit_login_per_account(user_id)
         res = yield self._find_user_id_and_pwd_hash(user_id)
         if res is not None:
             return res[0]
@@ -572,8 +596,6 @@ class AuthHandler(BaseHandler):
             StoreError if there was a problem accessing the database
             SynapseError if there was a problem with the request
             LoginError if there was an authentication problem.
-            LimitExceededError if the ratelimiter's login requests count for this
-                user is too high too proceed.
         """
 
         if username.startswith("@"):
@@ -581,8 +603,6 @@ class AuthHandler(BaseHandler):
         else:
             qualified_user_id = UserID(username, self.hs.hostname).to_string()
 
-        self.ratelimit_login_per_account(qualified_user_id)
-
         login_type = login_submission.get("type")
         known_login_type = False
 
@@ -650,15 +670,6 @@ class AuthHandler(BaseHandler):
         if not known_login_type:
             raise SynapseError(400, "Unknown login type %s" % login_type)
 
-        # unknown username or invalid password.
-        self._failed_attempts_ratelimiter.ratelimit(
-            qualified_user_id.lower(),
-            time_now_s=self._clock.time(),
-            rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
-            burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
-            update=True,
-        )
-
         # We raise a 403 here, but note that if we're doing user-interactive
         # login, it turns all LoginErrors into a 401 anyway.
         raise LoginError(403, "Invalid password", errcode=Codes.FORBIDDEN)
@@ -710,10 +721,6 @@ class AuthHandler(BaseHandler):
         Returns:
             Deferred[unicode] the canonical_user_id, or Deferred[None] if
                 unknown user/bad password
-
-        Raises:
-            LimitExceededError if the ratelimiter's login requests count for this
-                user is too high too proceed.
         """
         lookupres = yield self._find_user_id_and_pwd_hash(user_id)
         if not lookupres:
@@ -742,7 +749,7 @@ class AuthHandler(BaseHandler):
             auth_api.validate_macaroon(macaroon, "login", user_id)
         except Exception:
             raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
-        self.ratelimit_login_per_account(user_id)
+
         yield self.auth.check_auth_blocking(user_id)
         return user_id
 
@@ -810,7 +817,7 @@ class AuthHandler(BaseHandler):
     @defer.inlineCallbacks
     def add_threepid(self, user_id, medium, address, validated_at):
         # 'Canonicalise' email addresses down to lower case.
-        # We've now moving towards the Home Server being the entity that
+        # We've now moving towards the homeserver being the entity that
         # is responsible for validating threepids used for resetting passwords
         # on accounts, so in future Synapse will gain knowledge of specific
         # types (mediums) of threepid. For now, we still use the existing
@@ -912,35 +919,6 @@ class AuthHandler(BaseHandler):
         else:
             return defer.succeed(False)
 
-    def ratelimit_login_per_account(self, user_id):
-        """Checks whether the process must be stopped because of ratelimiting.
-
-        Checks against two ratelimiters: the generic one for login attempts per
-        account and the one specific to failed attempts.
-
-        Args:
-            user_id (unicode): complete @user:id
-
-        Raises:
-            LimitExceededError if one of the ratelimiters' login requests count
-                for this user is too high too proceed.
-        """
-        self._failed_attempts_ratelimiter.ratelimit(
-            user_id.lower(),
-            time_now_s=self._clock.time(),
-            rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
-            burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
-            update=False,
-        )
-
-        self._account_ratelimiter.ratelimit(
-            user_id.lower(),
-            time_now_s=self._clock.time(),
-            rate_hz=self.hs.config.rc_login_account.per_second,
-            burst_count=self.hs.config.rc_login_account.burst_count,
-            update=True,
-        )
-
 
 @attr.s
 class MacaroonGenerator(object):
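The UIA change above uses the ratelimiter in two phases: a read-only check (`update=False`) before attempting auth, then a recording call (`update=True`) only when the attempt actually fails, so successful requests never count against the limit. A self-contained sketch of that pattern, assuming a toy sliding-window limiter (`FailedAttemptsRatelimiter` is illustrative, not Synapse's `Ratelimiter`):

```python
class FailedAttemptsRatelimiter(object):
    """Toy sliding-window limiter showing the check-then-record pattern above."""

    def __init__(self, burst_count, window_s):
        self.burst_count = burst_count
        self.window_s = window_s
        self._attempts = {}  # key -> list of failure timestamps

    def can_do_action(self, key, time_now_s, update=True):
        recent = [
            t for t in self._attempts.get(key, ()) if time_now_s - t < self.window_s
        ]
        allowed = len(recent) < self.burst_count
        if update:
            # Record this failure; with update=False the check is read-only.
            recent.append(time_now_s)
            self._attempts[key] = recent
        return allowed


limiter = FailedAttemptsRatelimiter(burst_count=2, window_s=60.0)
limiter.can_do_action("@user:hs", 0.0)  # failure 1 recorded
limiter.can_do_action("@user:hs", 1.0)  # failure 2 recorded
# Read-only pre-check: two recent failures exhaust the burst, so block.
blocked = not limiter.can_do_action("@user:hs", 2.0, update=False)
```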
@@ -95,6 +95,9 @@ class DeactivateAccountHandler(BaseHandler):
                 user_id, threepid["medium"], threepid["address"]
             )
 
+        # Remove all 3PIDs this user has bound to the homeserver
+        yield self.store.user_delete_threepids(user_id)
+
         # delete any devices belonging to the user, which will also
         # delete corresponding access tokens.
         yield self._device_handler.delete_all_devices_for_user(user_id)
@@ -119,7 +119,7 @@ class DirectoryHandler(BaseHandler):
             if not service.is_interested_in_alias(room_alias.to_string()):
                 raise SynapseError(
                     400,
-                    "This application service has not reserved" " this kind of alias.",
+                    "This application service has not reserved this kind of alias.",
                     errcode=Codes.EXCLUSIVE,
                 )
         else:
@@ -283,7 +283,7 @@ class DirectoryHandler(BaseHandler):
     def on_directory_query(self, args):
         room_alias = RoomAlias.from_string(args["room_alias"])
         if not self.hs.is_mine(room_alias):
-            raise SynapseError(400, "Room Alias is not hosted on this Home Server")
+            raise SynapseError(400, "Room Alias is not hosted on this homeserver")
 
         result = yield self.get_association_from_room_alias(room_alias)
@@ -30,6 +30,7 @@ from twisted.internet import defer
 from synapse.api.errors import CodeMessageException, Codes, NotFoundError, SynapseError
 from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.logging.opentracing import log_kv, set_tag, tag_args, trace
+from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
 from synapse.types import (
     UserID,
     get_domain_from_id,
@@ -53,6 +54,12 @@ class E2eKeysHandler(object):
 
         self._edu_updater = SigningKeyEduUpdater(hs, self)
 
+        self._is_master = hs.config.worker_app is None
+        if not self._is_master:
+            self._user_device_resync_client = ReplicationUserDevicesResyncRestServlet.make_client(
+                hs
+            )
+
         federation_registry = hs.get_federation_registry()
 
         # FIXME: switch to m.signing_key_update when MSC1756 is merged into the spec
@@ -191,9 +198,15 @@ class E2eKeysHandler(object):
                 # probably be tracking their device lists. However, we haven't
                 # done an initial sync on the device list so we do it now.
                 try:
-                    user_devices = yield self.device_handler.device_list_updater.user_device_resync(
-                        user_id
-                    )
+                    if self._is_master:
+                        user_devices = yield self.device_handler.device_list_updater.user_device_resync(
+                            user_id
+                        )
+                    else:
+                        user_devices = yield self._user_device_resync_client(
+                            user_id=user_id
+                        )
+
                     user_devices = user_devices["devices"]
                     for device in user_devices:
                         results[user_id] = {device["device_id"]: device["keys"]}
@ -1,5 +1,6 @@
|
||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
# Copyright 2017, 2018 New Vector Ltd
|
# Copyright 2017, 2018 New Vector Ltd
|
||||||
|
# Copyright 2019 Matrix.org Foundation C.I.C.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
|
@@ -103,14 +104,35 @@ class E2eRoomKeysHandler(object):
                 rooms
             session_id(string): session ID to delete keys for, for None to delete keys
                 for all sessions
+        Raises:
+            NotFoundError: if the backup version does not exist
         Returns:
-            A deferred of the deletion transaction
+            A dict containing the count and etag for the backup version
         """

         # lock for consistency with uploading
         with (yield self._upload_linearizer.queue(user_id)):
+            # make sure the backup version exists
+            try:
+                version_info = yield self.store.get_e2e_room_keys_version_info(
+                    user_id, version
+                )
+            except StoreError as e:
+                if e.code == 404:
+                    raise NotFoundError("Unknown backup version")
+                else:
+                    raise
+
             yield self.store.delete_e2e_room_keys(user_id, version, room_id, session_id)

+            version_etag = version_info["etag"] + 1
+            yield self.store.update_e2e_room_keys_version(
+                user_id, version, None, version_etag
+            )
+
+            count = yield self.store.count_e2e_room_keys(user_id, version)
+            return {"etag": str(version_etag), "count": count}

     @trace
     @defer.inlineCallbacks
     def upload_room_keys(self, user_id, version, room_keys):
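The delete path above now verifies the backup version, bumps its etag after the mutation, and reports the remaining key count. A toy sketch of that bookkeeping against an in-memory store (the dict shapes and function name are hypothetical, not the real storage API):

```python
def delete_room_keys(store, version):
    # Look up the backup version first, so deleting from an unknown
    # version fails loudly rather than silently succeeding.
    if version not in store["versions"]:
        raise KeyError("Unknown backup version")
    store["keys"][version] = {}  # drop all keys for this version
    store["versions"][version]["etag"] += 1  # any mutation bumps the etag
    return {
        "etag": str(store["versions"][version]["etag"]),
        "count": len(store["keys"][version]),
    }


store = {"versions": {"1": {"etag": 0}}, "keys": {"1": {"sess": "key"}}}
result = delete_room_keys(store, "1")
```

Returning the new etag and count lets clients detect concurrent modifications to the backup without re-fetching every key.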
@@ -138,6 +160,9 @@ class E2eRoomKeysHandler(object):
                 }
             }

+        Returns:
+            A dict containing the count and etag for the backup version
+
         Raises:
             NotFoundError: if there are no versions defined
             RoomKeysVersionError: if the uploaded version is not the current version
@@ -171,26 +196,17 @@ class E2eRoomKeysHandler(object):
                 else:
                     raise

-            # go through the room_keys.
-            # XXX: this should/could be done concurrently, given we're in a lock.
-            for room_id, room in iteritems(room_keys["rooms"]):
-                for session_id, session in iteritems(room["sessions"]):
-                    yield self._upload_room_key(
-                        user_id, version, room_id, session_id, session
+            # Fetch any existing room keys for the sessions that have been
+            # submitted.  Then compare them with the submitted keys.  If the
+            # key is new, insert it; if the key should be updated, then update
+            # it; otherwise, drop it.
+            existing_keys = yield self.store.get_e2e_room_keys_multi(
+                user_id, version, room_keys["rooms"]
             )
-
-    @defer.inlineCallbacks
-    def _upload_room_key(self, user_id, version, room_id, session_id, room_key):
-        """Upload a given room_key for a given room and session into a given
-        version of the backup. Merges the key with any which might already exist.
-
-        Args:
-            user_id(str): the user whose backup we're setting
-            version(str): the version ID of the backup we're updating
-            room_id(str): the ID of the room whose keys we're setting
-            session_id(str): the session whose room_key we're setting
-            room_key(dict): the room_key being set
-        """
-        log_kv(
-            {
-                "message": "Trying to upload room key",
+            to_insert = []  # batch the inserts together
+            changed = False  # if anything has changed, we need to update the etag
+            for room_id, room in iteritems(room_keys["rooms"]):
+                for session_id, room_key in iteritems(room["sessions"]):
+                    log_kv(
+                        {
+                            "message": "Trying to upload room key",
@@ -199,14 +215,20 @@ class E2eRoomKeysHandler(object):
                             "user_id": user_id,
                         }
                     )
-        # get the room_key for this particular row
-        current_room_key = None
-        try:
-            current_room_key = yield self.store.get_e2e_room_key(
-                user_id, version, room_id, session_id
-            )
-        except StoreError as e:
-            if e.code == 404:
+                    current_room_key = existing_keys.get(room_id, {}).get(session_id)
+                    if current_room_key:
+                        if self._should_replace_room_key(current_room_key, room_key):
+                            log_kv({"message": "Replacing room key."})
+                            # updates are done one at a time in the DB, so send
+                            # updates right away rather than batching them up,
+                            # like we do with the inserts
+                            yield self.store.update_e2e_room_key(
+                                user_id, version, room_id, session_id, room_key
+                            )
+                            changed = True
+                        else:
+                            log_kv({"message": "Not replacing room_key."})
+                    else:
                         log_kv(
                             {
                                 "message": "Room key not found.",
@@ -214,16 +236,22 @@ class E2eRoomKeysHandler(object):
                                 "user_id": user_id,
                             }
                         )
-        else:
-            raise
-
-        if self._should_replace_room_key(current_room_key, room_key):
-            log_kv({"message": "Replacing room key."})
-            yield self.store.set_e2e_room_key(
-                user_id, version, room_id, session_id, room_key
-            )
-        else:
-            log_kv({"message": "Not replacing room_key."})
+                        log_kv({"message": "Replacing room key."})
+                        to_insert.append((room_id, session_id, room_key))
+                        changed = True
+
+            if len(to_insert):
+                yield self.store.add_e2e_room_keys(user_id, version, to_insert)
+
+            version_etag = version_info["etag"]
+            if changed:
+                version_etag = version_etag + 1
+                yield self.store.update_e2e_room_keys_version(
+                    user_id, version, None, version_etag
+                )
+
+            count = yield self.store.count_e2e_room_keys(user_id, version)
+            return {"etag": str(version_etag), "count": count}

     @staticmethod
     def _should_replace_room_key(current_room_key, room_key):
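Taken together, the upload hunks replace per-key round trips with one bulk read, immediate updates for keys that should be replaced, a single batched insert for new keys, and an etag bump only when something actually changed. A condensed, dict-backed sketch of that strategy (function and parameter names are illustrative):

```python
def upload_room_keys(existing, submitted, should_replace, etag):
    to_insert = []   # batch brand-new keys into one insert
    changed = False  # only bump the etag if anything actually changed
    for session_id, room_key in submitted.items():
        current = existing.get(session_id)
        if current is not None:
            if should_replace(current, room_key):
                existing[session_id] = room_key  # updates go out right away
                changed = True
            # otherwise: drop the submitted key, ours is at least as good
        else:
            to_insert.append((session_id, room_key))
            changed = True
    for session_id, room_key in to_insert:  # the single batched insert
        existing[session_id] = room_key
    if changed:
        etag += 1
    return {"etag": str(etag), "count": len(existing)}


existing = {"s1": {"first_message_index": 5}}
submitted = {"s1": {"first_message_index": 3}, "s2": {"first_message_index": 0}}
# Replace when the submitted key covers earlier messages (a simplified
# stand-in for _should_replace_room_key's comparison).
better = lambda cur, new: new["first_message_index"] < cur["first_message_index"]
result = upload_room_keys(existing, submitted, better, etag=0)
```

Batching the inserts keeps the common case (a client uploading many new sessions at once) to a constant number of database round trips.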
@@ -314,6 +342,8 @@ class E2eRoomKeysHandler(object):
                     raise NotFoundError("Unknown backup version")
                 else:
                     raise

+            res["count"] = yield self.store.count_e2e_room_keys(user_id, res["version"])
             return res

     @trace
@@ -1428,9 +1428,9 @@ class FederationHandler(BaseHandler):
         return event

     @defer.inlineCallbacks
-    def do_remotely_reject_invite(self, target_hosts, room_id, user_id):
+    def do_remotely_reject_invite(self, target_hosts, room_id, user_id, content):
         origin, event, event_format_version = yield self._make_and_verify_event(
-            target_hosts, room_id, user_id, "leave"
+            target_hosts, room_id, user_id, "leave", content=content,
         )
         # Mark as outlier as we don't have any state for this event; we're not
         # even in the room.
@@ -1688,7 +1688,11 @@ class FederationHandler(BaseHandler):
            # hack around with a try/finally instead.
            success = False
            try:
-               if not event.internal_metadata.is_outlier() and not backfilled:
+               if (
+                   not event.internal_metadata.is_outlier()
+                   and not backfilled
+                   and not context.rejected
+               ):
                    yield self.action_generator.handle_push_actions_for_event(
                        event, context
                    )
@@ -2036,8 +2040,10 @@ class FederationHandler(BaseHandler):
             auth_events (dict[(str, str)->synapse.events.EventBase]):
                 Map from (event_type, state_key) to event

-                What we expect the event's auth_events to be, based on the event's
-                position in the dag. I think? maybe??
+                Normally, our calculated auth_events based on the state of the room
+                at the event's position in the DAG, though occasionally (eg if the
+                event is an outlier), may be the auth events claimed by the remote
+                server.
+
                 Also NB that this function adds entries to it.
         Returns:
@@ -2087,30 +2093,35 @@ class FederationHandler(BaseHandler):
             origin (str):
             event (synapse.events.EventBase):
             context (synapse.events.snapshot.EventContext):

             auth_events (dict[(str, str)->synapse.events.EventBase]):
+                Map from (event_type, state_key) to event
+
+                Normally, our calculated auth_events based on the state of the room
+                at the event's position in the DAG, though occasionally (eg if the
+                event is an outlier), may be the auth events claimed by the remote
+                server.
+
+                Also NB that this function adds entries to it.
+
         Returns:
             defer.Deferred[EventContext]: updated context
         """
         event_auth_events = set(event.auth_event_ids())

-        if event.is_state():
-            event_key = (event.type, event.state_key)
-        else:
-            event_key = None
-
-        # if the event's auth_events refers to events which are not in our
-        # calculated auth_events, we need to fetch those events from somewhere.
-        #
-        # we start by fetching them from the store, and then try calling /event_auth/.
+        # missing_auth is the set of the event's auth_events which we don't yet have
+        # in auth_events.
         missing_auth = event_auth_events.difference(
             e.event_id for e in auth_events.values()
         )

+        # if we have missing events, we need to fetch those events from somewhere.
+        #
+        # we start by checking if they are in the store, and then try calling /event_auth/.
         if missing_auth:
             # TODO: can we use store.have_seen_events here instead?
             have_events = yield self.store.get_seen_events_with_rejections(missing_auth)
-            logger.debug("Got events %s from store", have_events)
+            logger.debug("Found events %s in the store", have_events)
             missing_auth.difference_update(have_events.keys())
         else:
             have_events = {}
@@ -2165,15 +2176,17 @@ class FederationHandler(BaseHandler):
                     event.auth_event_ids()
                 )
             except Exception:
-                # FIXME:
                 logger.exception("Failed to get auth chain")

         if event.internal_metadata.is_outlier():
+            # XXX: given that, for an outlier, we'll be working with the
+            # event's *claimed* auth events rather than those we calculated:
+            # (a) is there any point in this test, since different_auth below will
+            # obviously be empty
+            # (b) alternatively, why don't we do it earlier?
             logger.info("Skipping auth_event fetch for outlier")
             return context

-        # FIXME: Assumes we have and stored all the state for all the
-        # prev_events
         different_auth = event_auth_events.difference(
             e.event_id for e in auth_events.values()
         )
@@ -2187,27 +2200,22 @@ class FederationHandler(BaseHandler):
             different_auth,
         )

+        # now we state-resolve between our own idea of the auth events, and the remote's
+        # idea of them.
+
         room_version = yield self.store.get_room_version(event.room_id)
+        different_event_ids = [
+            d for d in different_auth if d in have_events and not have_events[d]
+        ]

-        different_events = yield make_deferred_yieldable(
-            defer.gatherResults(
-                [
-                    run_in_background(
-                        self.store.get_event, d, allow_none=True, allow_rejected=False
-                    )
-                    for d in different_auth
-                    if d in have_events and not have_events[d]
-                ],
-                consumeErrors=True,
-            )
-        ).addErrback(unwrapFirstError)
-
-        if different_events:
+        if different_event_ids:
+            # XXX: currently this checks for redactions but I'm not convinced that is
+            # necessary?
+            different_events = yield self.store.get_events_as_list(different_event_ids)

             local_view = dict(auth_events)
             remote_view = dict(auth_events)
-            remote_view.update(
-                {(d.type, d.state_key): d for d in different_events if d}
-            )
+            remote_view.update({(d.type, d.state_key): d for d in different_events})

             new_state = yield self.state_handler.resolve_events(
                 room_version,
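The rewritten block above first narrows `different_auth` down to events we have actually seen and that were not rejected, then fetches them in one batched store call rather than one per-event lookup. The filter itself reduces to the following, where `have_events` maps event ID to its rejection reason, `None` when accepted (roughly the shape the store call returns):

```python
def pick_fetchable(different_auth, have_events):
    # Keep only events we have seen (present in have_events) and that
    # were accepted (rejection reason is falsy).
    return [d for d in different_auth if d in have_events and not have_events[d]]


different_auth = ["$a", "$b", "$c"]
have_events = {"$a": None, "$b": "rejected_due_to_auth"}
fetchable = pick_fetchable(different_auth, have_events)
```

`$b` is dropped because it was rejected and `$c` because we never saw it; only `$a` is worth passing to the batched fetch.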
@@ -2227,13 +2235,13 @@ class FederationHandler(BaseHandler):
             auth_events.update(new_state)

             context = yield self._update_context_for_auth_events(
-                event, context, auth_events, event_key
+                event, context, auth_events
             )

         return context

     @defer.inlineCallbacks
-    def _update_context_for_auth_events(self, event, context, auth_events, event_key):
+    def _update_context_for_auth_events(self, event, context, auth_events):
         """Update the state_ids in an event context after auth event resolution,
         storing the changes as a new state group.

@@ -2242,18 +2250,21 @@ class FederationHandler(BaseHandler):

             context (synapse.events.snapshot.EventContext): initial event context

-            auth_events (dict[(str, str)->str]): Events to update in the event
+            auth_events (dict[(str, str)->EventBase]): Events to update in the event
                 context.

-            event_key ((str, str)): (type, state_key) for the current event.
-                this will not be included in the current_state in the context.
-
         Returns:
             Deferred[EventContext]: new event context
         """
+        # exclude the state key of the new event from the current_state in the context.
+        if event.is_state():
+            event_key = (event.type, event.state_key)
+        else:
+            event_key = None
         state_updates = {
             k: a.event_id for k, a in iteritems(auth_events) if k != event_key
         }

         current_state_ids = yield context.get_current_state_ids(self.store)
         current_state_ids = dict(current_state_ids)

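With `event_key` removed from the signature, `_update_context_for_auth_events` now derives it internally: a state event's own `(type, state_key)` pair is excluded from the state updates. In isolation, the computation looks like this (simplified, with plain strings standing in for event objects):

```python
def compute_state_updates(event_type, state_key, is_state, auth_events):
    # A state event must not list itself in the current_state of its
    # own context, so its (type, state_key) pair is filtered out.
    event_key = (event_type, state_key) if is_state else None
    return {k: e for k, e in auth_events.items() if k != event_key}


auth_events = {
    ("m.room.create", ""): "$create",
    ("m.room.member", "@alice:example.org"): "$join",
}
updates = compute_state_updates("m.room.member", "@alice:example.org", True, auth_events)
```

For a non-state event, `event_key` is `None` and nothing is filtered, so all of `auth_events` flows into the updates.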
@@ -2276,6 +2287,7 @@ class FederationHandler(BaseHandler):

         return EventContext.with_state(
             state_group=state_group,
+            state_group_before_event=context.state_group_before_event,
             current_state_ids=current_state_ids,
             prev_state_ids=prev_state_ids,
             prev_group=prev_group,
@@ -2454,7 +2466,7 @@ class FederationHandler(BaseHandler):
             room_version, event_dict, event, context
         )

-        EventValidator().validate_new(event)
+        EventValidator().validate_new(event, self.config)

         # We need to tell the transaction queue to send this out, even
         # though the sender isn't a local user.
@@ -2569,7 +2581,7 @@ class FederationHandler(BaseHandler):
         event, context = yield self.event_creation_handler.create_new_client_event(
             builder=builder
         )
-        EventValidator().validate_new(event)
+        EventValidator().validate_new(event, self.config)
         return (event, context)

     @defer.inlineCallbacks