forked from MirrorHub/synapse

Merge branch 'develop' into babolivier/message_retention

Commit 9e937c28ee: 138 changed files with 2594 additions and 1268 deletions

CHANGES.md (101 changes)
@@ -1,3 +1,104 @@
+Synapse 1.6.0 (2019-11-26)
+==========================
+
+Bugfixes
+--------
+
+- Fix phone home stats reporting. ([\#6418](https://github.com/matrix-org/synapse/issues/6418))
+
+
+Synapse 1.6.0rc2 (2019-11-25)
+=============================
+
+Bugfixes
+--------
+
+- Fix a bug which could cause the background database update handler for event labels to get stuck in a loop raising exceptions. ([\#6407](https://github.com/matrix-org/synapse/issues/6407))
+
+
+Synapse 1.6.0rc1 (2019-11-20)
+=============================
+
+Features
+--------
+
+- Add federation support for cross-signing. ([\#5727](https://github.com/matrix-org/synapse/issues/5727))
+- Increase default room version from 4 to 5, thereby enforcing server key validity period checks. ([\#6220](https://github.com/matrix-org/synapse/issues/6220))
+- Add support for outbound http proxying via http_proxy/HTTPS_PROXY env vars. ([\#6238](https://github.com/matrix-org/synapse/issues/6238))
+- Implement label-based filtering on `/sync` and `/messages` ([MSC2326](https://github.com/matrix-org/matrix-doc/pull/2326)). ([\#6301](https://github.com/matrix-org/synapse/issues/6301), [\#6310](https://github.com/matrix-org/synapse/issues/6310), [\#6340](https://github.com/matrix-org/synapse/issues/6340))
+
+
+Bugfixes
+--------
+
+- Fix LruCache callback deduplication for Python 3.8. Contributed by @V02460. ([\#6213](https://github.com/matrix-org/synapse/issues/6213))
+- Remove a room from a server's public rooms list on room upgrade. ([\#6232](https://github.com/matrix-org/synapse/issues/6232), [\#6235](https://github.com/matrix-org/synapse/issues/6235))
+- Delete keys from key backup when deleting backup versions. ([\#6253](https://github.com/matrix-org/synapse/issues/6253))
+- Make notification of cross-signing signatures work with workers. ([\#6254](https://github.com/matrix-org/synapse/issues/6254))
+- Fix exception when remote servers attempt to join a room that they're not allowed to join. ([\#6278](https://github.com/matrix-org/synapse/issues/6278))
+- Prevent errors from appearing on Synapse startup if `git` is not installed. ([\#6284](https://github.com/matrix-org/synapse/issues/6284))
+- Appservice requests will no longer contain a double slash prefix when the appservice url provided ends in a slash. ([\#6306](https://github.com/matrix-org/synapse/issues/6306))
+- Fix `/purge_room` admin API. ([\#6307](https://github.com/matrix-org/synapse/issues/6307))
+- Fix the `hidden` field in the `devices` table for SQLite versions prior to 3.23.0. ([\#6313](https://github.com/matrix-org/synapse/issues/6313))
+- Fix bug which caused rejected events to be persisted with the wrong room state. ([\#6320](https://github.com/matrix-org/synapse/issues/6320))
+- Fix bug where `rc_login` ratelimiting would prematurely kick in. ([\#6335](https://github.com/matrix-org/synapse/issues/6335))
+- Prevent the server taking a long time to start up when guest registration is enabled. ([\#6338](https://github.com/matrix-org/synapse/issues/6338))
+- Fix bug where upgrading a guest account to a full user would fail when account validity is enabled. ([\#6359](https://github.com/matrix-org/synapse/issues/6359))
+- Fix `to_device` stream ID getting reset every time Synapse restarts, which had the potential to cause "unable to decrypt" errors. ([\#6363](https://github.com/matrix-org/synapse/issues/6363))
+- Fix permission denied error when trying to generate a config file with the docker image. ([\#6389](https://github.com/matrix-org/synapse/issues/6389))
+
+
+Improved Documentation
+----------------------
+
+- Contributor documentation now mentions the script to run linters. ([\#6164](https://github.com/matrix-org/synapse/issues/6164))
+- Modify CAPTCHA_SETUP.md to update the terms `private key` and `public key` to `secret key` and `site key` respectively. Contributed by Yash Jipkate. ([\#6257](https://github.com/matrix-org/synapse/issues/6257))
+- Update `INSTALL.md` Email section to talk about `account_threepid_delegates`. ([\#6272](https://github.com/matrix-org/synapse/issues/6272))
+- Fix a small typo in `account_threepid_delegates` configuration option. ([\#6273](https://github.com/matrix-org/synapse/issues/6273))
+
+
+Internal Changes
+----------------
+
+- Add a CI job to test the `synapse_port_db` script. ([\#6140](https://github.com/matrix-org/synapse/issues/6140), [\#6276](https://github.com/matrix-org/synapse/issues/6276))
+- Convert EventContext to an attrs. ([\#6218](https://github.com/matrix-org/synapse/issues/6218))
+- Move `persist_events` out from main data store. ([\#6240](https://github.com/matrix-org/synapse/issues/6240), [\#6300](https://github.com/matrix-org/synapse/issues/6300))
+- Reduce verbosity of user/room stats. ([\#6250](https://github.com/matrix-org/synapse/issues/6250))
+- Reduce impact of debug logging. ([\#6251](https://github.com/matrix-org/synapse/issues/6251))
+- Expose some homeserver functionality to spam checkers. ([\#6259](https://github.com/matrix-org/synapse/issues/6259))
+- Change cache descriptors to always return deferreds. ([\#6263](https://github.com/matrix-org/synapse/issues/6263), [\#6291](https://github.com/matrix-org/synapse/issues/6291))
+- Fix incorrect comment regarding the functionality of an `if` statement. ([\#6269](https://github.com/matrix-org/synapse/issues/6269))
+- Update CI to run `isort` over the `scripts` and `scripts-dev` directories. ([\#6270](https://github.com/matrix-org/synapse/issues/6270))
+- Replace every instance of `logger.warn` method with `logger.warning` as the former is deprecated. ([\#6271](https://github.com/matrix-org/synapse/issues/6271), [\#6314](https://github.com/matrix-org/synapse/issues/6314))
+- Port replication http server endpoints to async/await. ([\#6274](https://github.com/matrix-org/synapse/issues/6274))
+- Port room rest handlers to async/await. ([\#6275](https://github.com/matrix-org/synapse/issues/6275))
+- Remove redundant CLI parameters on CI's `flake8` step. ([\#6277](https://github.com/matrix-org/synapse/issues/6277))
+- Port `federation_server.py` to async/await. ([\#6279](https://github.com/matrix-org/synapse/issues/6279))
+- Port receipt and read markers to async/await. ([\#6280](https://github.com/matrix-org/synapse/issues/6280))
+- Split out state storage into separate data store. ([\#6294](https://github.com/matrix-org/synapse/issues/6294), [\#6295](https://github.com/matrix-org/synapse/issues/6295))
+- Refactor EventContext for clarity. ([\#6298](https://github.com/matrix-org/synapse/issues/6298))
+- Update the version of black used to 19.10b0. ([\#6304](https://github.com/matrix-org/synapse/issues/6304))
+- Add some documentation about worker replication. ([\#6305](https://github.com/matrix-org/synapse/issues/6305))
+- Move admin endpoints into separate files. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#6308](https://github.com/matrix-org/synapse/issues/6308))
+- Document the use of `lint.sh` for code style enforcement & extend it to run on specified paths only. ([\#6312](https://github.com/matrix-org/synapse/issues/6312))
+- Add optional python dependencies and dependent binary libraries to snapcraft packaging. ([\#6317](https://github.com/matrix-org/synapse/issues/6317))
+- Remove the dependency on psutil and replace functionality with the stdlib `resource` module. ([\#6318](https://github.com/matrix-org/synapse/issues/6318), [\#6336](https://github.com/matrix-org/synapse/issues/6336))
+- Improve documentation for EventContext fields. ([\#6319](https://github.com/matrix-org/synapse/issues/6319))
+- Add some checks that we aren't using state from rejected events. ([\#6330](https://github.com/matrix-org/synapse/issues/6330))
+- Add continuous integration for python 3.8. ([\#6341](https://github.com/matrix-org/synapse/issues/6341))
+- Correct spacing/case of various instances of the word "homeserver". ([\#6357](https://github.com/matrix-org/synapse/issues/6357))
+- Temporarily blacklist the failing unit test PurgeRoomTestCase.test_purge_room. ([\#6361](https://github.com/matrix-org/synapse/issues/6361))
+
+
+Synapse 1.5.1 (2019-11-06)
+==========================
+
+Features
+--------
+
+- Limit the length of data returned by url previews, to prevent DoS attacks. ([\#6331](https://github.com/matrix-org/synapse/issues/6331), [\#6334](https://github.com/matrix-org/synapse/issues/6334))
+
+
 Synapse 1.5.0 (2019-10-29)
 ==========================
INSTALL.md (14 changes)

@@ -36,7 +36,7 @@ that your email address is probably `user@example.com` rather than
 System requirements:

 - POSIX-compliant system (tested on Linux & OS X)
-- Python 3.5, 3.6, or 3.7
+- Python 3.5, 3.6, 3.7 or 3.8.
 - At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org

 Synapse is written in Python but some of the libraries it uses are written in

@@ -133,9 +133,9 @@ sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
 sudo yum groupinstall "Development Tools"
 ```

-#### Mac OS X
+#### macOS

-Installing prerequisites on Mac OS X:
+Installing prerequisites on macOS:

 ```
 xcode-select --install

@@ -144,6 +144,14 @@ sudo pip install virtualenv
 brew install pkg-config libffi
 ```

+On macOS Catalina (10.15) you may need to explicitly install OpenSSL
+via brew and inform `pip` about it so that `psycopg2` builds:
+
+```
+brew install openssl@1.1
+export LDFLAGS=-L/usr/local/Cellar/openssl\@1.1/1.1.1d/lib/
+```
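
With `LDFLAGS` exported as above, installing the PostgreSQL driver into the virtualenv should then build against the brew-installed OpenSSL; as a quick check (a sketch, assuming the virtualenv from the earlier steps is active):

```
pip install psycopg2
```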

 #### OpenSUSE

 Installing prerequisites on openSUSE:
@@ -1 +0,0 @@
-Add federation support for cross-signing.

@@ -1 +0,0 @@
-Add a CI job to test the `synapse_port_db` script.

@@ -1 +0,0 @@
-Contributor documentation now mentions script to run linters.

@@ -1 +0,0 @@
-Convert EventContext to an attrs.

@@ -1 +0,0 @@
-Remove a room from a server's public rooms list on room upgrade.

@@ -1 +0,0 @@
-Add support for outbound http proxying via http_proxy/HTTPS_PROXY env vars.

@@ -1 +0,0 @@
-Move `persist_events` out from main data store.

@@ -1 +0,0 @@
-Reduce verbosity of user/room stats.

@@ -1 +0,0 @@
-Reduce impact of debug logging.

@@ -1 +0,0 @@
-Delete keys from key backup when deleting backup versions.

@@ -1 +0,0 @@
-Make notification of cross-signing signatures work with workers.

@@ -1 +0,0 @@
-Modify CAPTCHA_SETUP.md to update the terms `private key` and `public key` to `secret key` and `site key` respectively. Contributed by Yash Jipkate.

@@ -1 +0,0 @@
-Expose some homeserver functionality to spam checkers.

@@ -1 +0,0 @@
-Change cache descriptors to always return deferreds.

@@ -1 +0,0 @@
-Fix incorrect comment regarding the functionality of an `if` statement.

@@ -1 +0,0 @@
-Update CI to run `isort` over the `scripts` and `scripts-dev` directories.

@@ -1 +0,0 @@
-Replace every instance of `logger.warn` method with `logger.warning` as the former is deprecated.

@@ -1 +0,0 @@
-Update `INSTALL.md` Email section to talk about `account_threepid_delegates`.

@@ -1 +0,0 @@
-Fix a small typo in `account_threepid_delegates` configuration option.

@@ -1 +0,0 @@
-Port replication http server endpoints to async/await.

@@ -1 +0,0 @@
-Port room rest handlers to async/await.

@@ -1 +0,0 @@
-Add a CI job to test the `synapse_port_db` script.

@@ -1 +0,0 @@
-Remove redundant CLI parameters on CI's `flake8` step.

@@ -1 +0,0 @@
-Fix exception when remote servers attempt to join a room that they're not allowed to join.

@@ -1 +0,0 @@
-Port `federation_server.py` to async/await.

@@ -1 +0,0 @@
-Port receipt and read markers to async/wait.

@@ -1 +0,0 @@
-Prevent errors from appearing on Synapse startup if `git` is not installed.

@@ -1 +0,0 @@
-Change cache descriptors to always return deferreds.

@@ -1 +0,0 @@
-Split out state storage into separate data store.

@@ -1 +0,0 @@
-Refactor EventContext for clarity.

@@ -1 +0,0 @@
-Move `persist_events` out from main data store.

@@ -1 +0,0 @@
-Implement label-based filtering on `/sync` and `/messages` ([MSC2326](https://github.com/matrix-org/matrix-doc/pull/2326)).

@@ -1 +0,0 @@
-Update the version of black used to 19.10b0.

@@ -1 +0,0 @@
-Appservice requests will no longer contain a double slash prefix when the appservice url provided ends in a slash.

@@ -1 +0,0 @@
-Fix `/purge_room` admin API.

@@ -1 +0,0 @@
-Document the use of `lint.sh` for code style enforcement & extend it to run on specified paths only.

@@ -1 +0,0 @@
-Fix the `hidden` field in the `devices` table for SQLite versions prior to 3.23.0.

@@ -1 +0,0 @@
-Replace every instance of `logger.warn` method with `logger.warning` as the former is deprecated.
changelog.d/6322.misc (new file)

@@ -0,0 +1 @@
+Improve the performance of outputting structured logging.

changelog.d/6332.bugfix (new file)

@@ -0,0 +1 @@
+Fix caching devices for remote users when using workers, so that we don't attempt to refetch (and potentially fail) each time a user requests devices.

changelog.d/6333.bugfix (new file)

@@ -0,0 +1 @@
+Prevent account data syncs getting lost across TCP replication.

changelog.d/6362.misc (new file)

@@ -0,0 +1 @@
+Clean up some unnecessary quotation marks around the codebase.

changelog.d/6388.doc (new file)

@@ -0,0 +1 @@
+Fix link in the user directory documentation.

changelog.d/6390.doc (new file)

@@ -0,0 +1 @@
+Add build instructions to the docker readme.

changelog.d/6392.misc (new file)

@@ -0,0 +1 @@
+Add a test scenario to make sure room history purges don't break `/messages` in the future.

changelog.d/6408.bugfix (new file)

@@ -0,0 +1 @@
+Fix an intermittent exception when handling read-receipts.

changelog.d/6420.bugfix (new file)

@@ -0,0 +1 @@
+Fix broken guest registration when there are existing blocks of numeric user IDs.
debian/changelog (12 changes, vendored)

@@ -1,3 +1,15 @@
+matrix-synapse-py3 (1.6.0) stable; urgency=medium
+
+  * New synapse release 1.6.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 26 Nov 2019 12:15:40 +0000
+
+matrix-synapse-py3 (1.5.1) stable; urgency=medium
+
+  * New synapse release 1.5.1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 06 Nov 2019 10:02:14 +0000
+
 matrix-synapse-py3 (1.5.0) stable; urgency=medium

   * New synapse release 1.5.0.
@@ -130,3 +130,15 @@ docker run -it --rm \
 This will generate the same configuration file as the legacy mode used, but
 will store it in `/data/homeserver.yaml` instead of a temporary location. You
 can then use it as shown above at [Running synapse](#running-synapse).
+
+## Building the image
+
+If you need to build the image from a Synapse checkout, use the following `docker
+build` command from the repo's root:
+
+```
+docker build -t matrixdotorg/synapse -f docker/Dockerfile .
+```
+
+You can choose to build a different docker image by changing the value of the `-f` flag to
+point to another Dockerfile.
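
As an illustration of that last point, a build using a hypothetical alternative Dockerfile and image tag (both names below are placeholders, not files shipped in the repo) might look like:

```
docker build -t myorg/synapse-custom -f docker/Dockerfile.custom .
```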
@@ -169,11 +169,11 @@ def run_generate_config(environ, ownership):
     # log("running %s" % (args, ))

     if ownership is not None:
-        args = ["su-exec", ownership] + args
-        os.execv("/sbin/su-exec", args)
+        # make sure that synapse has perms to write to the data dir.
+        subprocess.check_output(["chown", ownership, data_dir])
+
+        args = ["su-exec", ownership] + args
+        os.execv("/sbin/su-exec", args)
     else:
         os.execv("/usr/local/bin/python", args)
@@ -72,7 +72,7 @@ pid_file: DATADIR/homeserver.pid
 # For example, for room version 1, default_room_version should be set
 # to "1".
 #
-#default_room_version: "4"
+#default_room_version: "5"

 # The GC threshold parameters to pass to `gc.set_threshold`, if defined
 #
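
For instance, an admin who wants newly-created rooms to stay on the previous default after upgrading could uncomment the option and pin it explicitly (a sketch using the values named above; "5" is the new default when the option is left commented out):

```
default_room_version: "4"
```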
@@ -287,7 +287,7 @@ listeners:
 # Used by phonehome stats to group together related servers.
 #server_context: context

-# Resource-constrained Homeserver Settings
+# Resource-constrained homeserver Settings
 #
 # If limit_remote_rooms.enabled is True, the room complexity will be
 # checked before a user joins a new remote room. If it is above

@@ -806,11 +806,11 @@ uploads_path: "DATADIR/uploads"
 ## Captcha ##
 # See docs/CAPTCHA_SETUP for full details of configuring this.

-# This Home Server's ReCAPTCHA public key.
+# This homeserver's ReCAPTCHA public key.
 #
 #recaptcha_public_key: "YOUR_PUBLIC_KEY"

-# This Home Server's ReCAPTCHA private key.
+# This homeserver's ReCAPTCHA private key.
 #
 #recaptcha_private_key: "YOUR_PRIVATE_KEY"

@@ -1333,7 +1333,7 @@ password_config:
 #   smtp_user: "exampleusername"
 #   smtp_pass: "examplepassword"
 #   require_transport_security: false
-#   notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
+#   notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
 #   app_name: Matrix
 #
 #   # Enable email notifications by default
@@ -199,7 +199,20 @@ client (C):

 #### REPLICATE (C)

-Asks the server to replicate a given stream
+Asks the server to replicate a given stream. The syntax is:
+
+```
+REPLICATE <stream_name> <token>
+```
+
+Where `<token>` may be either:
+ * a numeric stream_id to stream updates since (exclusive)
+ * `NOW` to stream all subsequent updates.
+
+The `<stream_name>` is the name of a replication stream to subscribe
+to (see [here](../synapse/replication/tcp/streams/_base.py) for a list
+of streams). It can also be `ALL` to subscribe to all known streams,
+in which case the `<token>` must be set to `NOW`.
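
As a concrete illustration of the syntax above, a client that wants only new updates for a single stream would send something like the following (the stream name here is an example; the authoritative list is in the `_base.py` file linked above):

```
REPLICATE events NOW
```

while `REPLICATE ALL NOW` subscribes to every known stream, as described above.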

 #### USER_SYNC (C)
@@ -7,7 +7,6 @@ who are present in a publicly viewable room present on the server.

 The directory info is stored in various tables, which can (typically after
 DB corruption) get stale or out of sync. If this happens, for now the
-solution to fix it is to execute the SQL here
-https://github.com/matrix-org/synapse/blob/master/synapse/storage/schema/delta/53/user_dir_populate.sql
+solution to fix it is to execute the SQL [here](../synapse/storage/data_stores/main/schema/delta/53/user_dir_populate.sql)
 and then restart synapse. This should then start a background task to
 flush the current tables and regenerate the directory.
@@ -20,11 +20,13 @@ from concurrent.futures import ThreadPoolExecutor
 DISTS = (
     "debian:stretch",
     "debian:buster",
+    "debian:bullseye",
     "debian:sid",
     "ubuntu:xenial",
     "ubuntu:bionic",
     "ubuntu:cosmic",
     "ubuntu:disco",
+    "ubuntu:eoan",
 )

 DESC = '''\
@@ -20,3 +20,23 @@ parts:
     source: .
     plugin: python
     python-version: python3
+    python-packages:
+      - '.[all]'
+    build-packages:
+      - libffi-dev
+      - libturbojpeg0-dev
+      - libssl-dev
+      - libxslt1-dev
+      - libpq-dev
+      - zlib1g-dev
+    stage-packages:
+      - libasn1-8-heimdal
+      - libgssapi3-heimdal
+      - libhcrypto4-heimdal
+      - libheimbase1-heimdal
+      - libheimntlm0-heimdal
+      - libhx509-5-heimdal
+      - libkrb5-26-heimdal
+      - libldap-2.4-2
+      - libpq5
+      - libsasl2-2
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-""" This is a reference implementation of a Matrix home server.
+""" This is a reference implementation of a Matrix homeserver.
 """

 import os

@@ -36,7 +36,7 @@ try:
 except ImportError:
     pass

-__version__ = "1.5.0"
+__version__ = "1.6.0"

 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when
@@ -144,8 +144,8 @@ def main():
     logging.captureWarnings(True)

     parser = argparse.ArgumentParser(
-        description="Used to register new users with a given home server when"
-        " registration has been disabled. The home server must be"
+        description="Used to register new users with a given homeserver when"
+        " registration has been disabled. The homeserver must be"
         " configured with the 'registration_shared_secret' option"
         " set."
     )

@@ -202,7 +202,7 @@ def main():
         "server_url",
         default="https://localhost:8448",
         nargs="?",
-        help="URL to use to talk to the home server. Defaults to "
+        help="URL to use to talk to the homeserver. Defaults to "
         " 'https://localhost:8448'.",
     )
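
Putting those arguments together, a typical invocation of this script might look like the following (a sketch, assuming the registration shared secret is read from the given config file via `-c`):

```
register_new_matrix_user -c homeserver.yaml https://localhost:8448
```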
@@ -457,7 +457,7 @@ def cs_error(msg, code=Codes.UNKNOWN, **kwargs):


 class FederationError(RuntimeError):
-    """ This class is used to inform remote home servers about erroneous
+    """ This class is used to inform remote homeservers about erroneous
     PDUs they sent us.

     FATAL: The remote server could not interpret the source event.
@@ -69,7 +69,7 @@ class FederationSenderSlaveStore(
         self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)

     def _get_federation_out_pos(self, db_conn):
-        sql = "SELECT stream_id FROM federation_stream_position" " WHERE type = ?"
+        sql = "SELECT stream_id FROM federation_stream_position WHERE type = ?"
         sql = self.database_engine.convert_param_style(sql)

         txn = db_conn.cursor()
@@ -19,12 +19,13 @@ from __future__ import print_function

 import gc
 import logging
+import math
 import os
+import resource
 import sys

 from six import iteritems

-import psutil
 from prometheus_client import Gauge

 from twisted.application import service
|
|||
return self._port.stopListening()
|
||||
|
||||
|
||||
# Contains the list of processes we will be monitoring
|
||||
# currently either 0 or 1
|
||||
_stats_process = []
|
||||
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def phone_stats_home(hs, stats, stats_process=_stats_process):
|
||||
logger.info("Gathering stats for reporting")
|
||||
now = int(hs.get_clock().time())
|
||||
uptime = int(now - hs.start_time)
|
||||
if uptime < 0:
|
||||
uptime = 0
|
||||
|
||||
stats["homeserver"] = hs.config.server_name
|
||||
stats["server_context"] = hs.config.server_context
|
||||
stats["timestamp"] = now
|
||||
stats["uptime_seconds"] = uptime
|
||||
version = sys.version_info
|
||||
stats["python_version"] = "{}.{}.{}".format(
|
||||
version.major, version.minor, version.micro
|
||||
)
|
||||
stats["total_users"] = yield hs.get_datastore().count_all_users()
|
||||
|
||||
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
|
||||
stats["total_nonbridged_users"] = total_nonbridged_users
|
||||
|
||||
daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
|
||||
for name, count in iteritems(daily_user_type_results):
|
||||
stats["daily_user_type_" + name] = count
|
||||
|
||||
room_count = yield hs.get_datastore().get_room_count()
|
||||
stats["total_room_count"] = room_count
|
||||
|
||||
stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
|
||||
stats["monthly_active_users"] = yield hs.get_datastore().count_monthly_users()
|
||||
stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
|
||||
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
|
||||
|
||||
r30_results = yield hs.get_datastore().count_r30_users()
|
||||
for name, count in iteritems(r30_results):
|
||||
stats["r30_users_" + name] = count
|
||||
|
||||
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
|
||||
stats["daily_sent_messages"] = daily_sent_messages
|
||||
stats["cache_factor"] = CACHE_SIZE_FACTOR
|
||||
stats["event_cache_size"] = hs.config.event_cache_size
|
||||
|
||||
#
|
||||
# Performance statistics
|
||||
#
|
||||
old = stats_process[0]
|
||||
new = (now, resource.getrusage(resource.RUSAGE_SELF))
|
||||
stats_process[0] = new
|
||||
|
||||
# Get RSS in bytes
|
||||
stats["memory_rss"] = new[1].ru_maxrss
|
||||
|
||||
# Get CPU time in % of a single core, not % of all cores
|
||||
used_cpu_time = (new[1].ru_utime + new[1].ru_stime) - (
|
||||
old[1].ru_utime + old[1].ru_stime
|
||||
)
|
||||
if used_cpu_time == 0 or new[0] == old[0]:
|
||||
stats["cpu_average"] = 0
|
||||
else:
|
||||
stats["cpu_average"] = math.floor(used_cpu_time / (new[0] - old[0]) * 100)
|
||||
|
||||
#
|
||||
# Database version
|
||||
#
|
||||
|
||||
stats["database_engine"] = hs.get_datastore().database_engine_name
|
||||
stats["database_server_version"] = hs.get_datastore().get_server_version()
|
||||
logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
|
||||
try:
|
||||
yield hs.get_proxied_http_client().put_json(
|
||||
hs.config.report_stats_endpoint, stats
|
||||
)
|
||||
except Exception as e:
|
||||
logger.warning("Error reporting stats: %s", e)
|
||||
|
||||
|
||||
def run(hs):
|
||||
PROFILE_SYNAPSE = False
|
||||
if PROFILE_SYNAPSE:
|
||||
|
@@ -497,91 +579,19 @@ def run(hs):
         reactor.run = profile(reactor.run)

     clock = hs.get_clock()
-    start_time = clock.time()

     stats = {}

-    # Contains the list of processes we will be monitoring
-    # currently either 0 or 1
-    stats_process = []
+    def performance_stats_init():
+        _stats_process.clear()
+        _stats_process.append(
+            (int(hs.get_clock().time()), resource.getrusage(resource.RUSAGE_SELF))
+        )

     def start_phone_stats_home():
-        return run_as_background_process("phone_stats_home", phone_stats_home)
-
-    @defer.inlineCallbacks
-    def phone_stats_home():
-        logger.info("Gathering stats for reporting")
-        now = int(hs.get_clock().time())
-        uptime = int(now - start_time)
-        if uptime < 0:
-            uptime = 0
-
-        stats["homeserver"] = hs.config.server_name
-        stats["server_context"] = hs.config.server_context
-        stats["timestamp"] = now
-        stats["uptime_seconds"] = uptime
-        version = sys.version_info
-        stats["python_version"] = "{}.{}.{}".format(
-            version.major, version.minor, version.micro
+        return run_as_background_process(
+            "phone_stats_home", phone_stats_home, hs, stats
         )
-        stats["total_users"] = yield hs.get_datastore().count_all_users()
-
-        total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
-        stats["total_nonbridged_users"] = total_nonbridged_users
-
-        daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
-        for name, count in iteritems(daily_user_type_results):
-            stats["daily_user_type_" + name] = count
-
-        room_count = yield hs.get_datastore().get_room_count()
-        stats["total_room_count"] = room_count
-
-        stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
-        stats["monthly_active_users"] = yield hs.get_datastore().count_monthly_users()
-        stats[
-            "daily_active_rooms"
-        ] = yield hs.get_datastore().count_daily_active_rooms()
-        stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
-
-        r30_results = yield hs.get_datastore().count_r30_users()
-        for name, count in iteritems(r30_results):
-            stats["r30_users_" + name] = count
-
-        daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
-        stats["daily_sent_messages"] = daily_sent_messages
-        stats["cache_factor"] = CACHE_SIZE_FACTOR
-        stats["event_cache_size"] = hs.config.event_cache_size
-
-        if len(stats_process) > 0:
-            stats["memory_rss"] = 0
-            stats["cpu_average"] = 0
-            for process in stats_process:
-                stats["memory_rss"] += process.memory_info().rss
-                stats["cpu_average"] += int(process.cpu_percent(interval=None))
-
-        stats["database_engine"] = hs.get_datastore().database_engine_name
-        stats["database_server_version"] = hs.get_datastore().get_server_version()
-        logger.info(
-            "Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats)
-        )
-        try:
-            yield hs.get_proxied_http_client().put_json(
-                hs.config.report_stats_endpoint, stats
-            )
-        except Exception as e:
-            logger.warning("Error reporting stats: %s", e)
-
-    def performance_stats_init():
-        try:
-            process = psutil.Process()
-            # Ensure we can fetch both, and make the initial request for cpu_percent
-            # so the next request will use this as the initial point.
-            process.memory_info().rss
-            process.cpu_percent(interval=None)
-            logger.info("report_stats can use psutil")
-            stats_process.append(process)
-        except (AttributeError):
-            logger.warning("Unable to read memory/cpu stats. Disabling reporting.")

     def generate_user_daily_visit_stats():
         return run_as_background_process(
@@ -185,7 +185,7 @@ class ApplicationServiceApi(SimpleHttpClient):

             if not _is_valid_3pe_metadata(info):
                 logger.warning(
-                    "query_3pe_protocol to %s did not return a" " valid result", uri
+                    "query_3pe_protocol to %s did not return a valid result", uri
                 )
                 return None

@@ -134,7 +134,7 @@ def _load_appservice(hostname, as_info, config_filename):
         for regex_obj in as_info["namespaces"][ns]:
             if not isinstance(regex_obj, dict):
                 raise ValueError(
-                    "Expected namespace entry in %s to be an object," " but got %s",
+                    "Expected namespace entry in %s to be an object, but got %s",
                     ns,
                     regex_obj,
                 )
@@ -35,11 +35,11 @@ class CaptchaConfig(Config):
         ## Captcha ##
         # See docs/CAPTCHA_SETUP for full details of configuring this.

-        # This Home Server's ReCAPTCHA public key.
+        # This homeserver's ReCAPTCHA public key.
         #
         #recaptcha_public_key: "YOUR_PUBLIC_KEY"

-        # This Home Server's ReCAPTCHA private key.
+        # This homeserver's ReCAPTCHA private key.
         #
         #recaptcha_private_key: "YOUR_PRIVATE_KEY"

@@ -305,7 +305,7 @@ class EmailConfig(Config):
        #   smtp_user: "exampleusername"
        #   smtp_pass: "examplepassword"
        #   require_transport_security: false
-       #   notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
+       #   notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
        #   app_name: Matrix
        #
        #   # Enable email notifications by default
@@ -170,7 +170,7 @@ class _RoomDirectoryRule(object):
             self.action = action
         else:
             raise ConfigError(
-                "%s rules can only have action of 'allow'" " or 'deny'" % (option_name,)
+                "%s rules can only have action of 'allow' or 'deny'" % (option_name,)
             )

         self._alias_matches_all = alias == "*"
@@ -41,7 +41,7 @@ logger = logging.Logger(__name__)
 # in the list.
 DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]

-DEFAULT_ROOM_VERSION = "4"
+DEFAULT_ROOM_VERSION = "5"

 ROOM_COMPLEXITY_TOO_GREAT = (
     "Your homeserver is unable to join rooms this large or complex. "

@@ -223,7 +223,7 @@ class ServerConfig(Config):
             self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
         except Exception as e:
             raise ConfigError(
-                "Invalid range(s) provided in " "federation_ip_range_blacklist: %s" % e
+                "Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
             )

         if self.public_baseurl is not None:
@@ -839,7 +839,7 @@ class ServerConfig(Config):
        # Used by phonehome stats to group together related servers.
        #server_context: context

-       # Resource-constrained Homeserver Settings
+       # Resource-constrained homeserver Settings
        #
        # If limit_remote_rooms.enabled is True, the room complexity will be
        # checked before a user joins a new remote room. If it is above

@@ -962,20 +962,20 @@ class ServerConfig(Config):
            "--daemonize",
            action="store_true",
            default=None,
-            help="Daemonize the home server",
+            help="Daemonize the homeserver",
        )
        server_group.add_argument(
            "--print-pidfile",
            action="store_true",
            default=None,
-            help="Print the path to the pidfile just" " before daemonizing",
+            help="Print the path to the pidfile just before daemonizing",
        )
        server_group.add_argument(
            "--manhole",
            metavar="PORT",
            dest="manhole",
            type=int,
-            help="Turn on the twisted telnet manhole" " service on the given port.",
+            help="Turn on the twisted telnet manhole service on the given port.",
        )
@@ -12,6 +12,8 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+from typing import Dict, Optional, Tuple, Union
+
 from six import iteritems

 import attr

@@ -19,54 +21,113 @@ from frozendict import frozendict

 from twisted.internet import defer

+from synapse.appservice import ApplicationService
 from synapse.logging.context import make_deferred_yieldable, run_in_background


 @attr.s(slots=True)
 class EventContext:
     """
     Holds information relevant to persisting an event

     Attributes:
-        state_group (int|None): state group id, if the state has been stored
-            as a state group. This is usually only None if e.g. the event is
-            an outlier.
-        rejected (bool|str): A rejection reason if the event was rejected, else
-            False
+        rejected: A rejection reason if the event was rejected, else False

-        prev_group (int): Previously persisted state group. ``None`` for an
-            outlier.
-        delta_ids (dict[(str, str), str]): Delta from ``prev_group``.
-            (type, state_key) -> event_id. ``None`` for an outlier.
-
-        app_service: FIXME
+        _state_group: The ID of the state group for this event. Note that state events
+            are persisted with a state group which includes the new event, so this is
+            effectively the state *after* the event in question.
+
+            For a *rejected* state event, where the state of the rejected event is
+            ignored, this state_group should never make it into the
+            event_to_state_groups table. Indeed, inspecting this value for a rejected
+            state event is almost certainly incorrect.
+
+            For an outlier, where we don't have the state at the event, this will be
+            None.
+
+            Note that this is a private attribute: it should be accessed via
+            the ``state_group`` property.
+
+        state_group_before_event: The ID of the state group representing the state
+            of the room before this event.
+
+            If this is a non-state event, this will be the same as ``state_group``. If
+            it's a state event, it will be the same as ``prev_group``.
+
+            If ``state_group`` is None (ie, the event is an outlier),
+            ``state_group_before_event`` will always also be ``None``.
+
+        prev_group: If it is known, ``state_group``'s prev_group. Note that this being
+            None does not necessarily mean that ``state_group`` does not have
+            a prev_group!
+
+            If the event is a state event, this is normally the same as ``prev_group``.
+
+            If ``state_group`` is None (ie, the event is an outlier), ``prev_group``
+            will always also be ``None``.
+
+            Note that this *not* (necessarily) the state group associated with
+            ``_prev_state_ids``.
+
+        delta_ids: If ``prev_group`` is not None, the state delta between ``prev_group``
+            and ``state_group``.
+
+        app_service: If this event is being sent by a (local) application service, that
+            app service.

-        _current_state_ids (dict[(str, str), str]|None):
-            The current state map including the current event. None if outlier
-            or we haven't fetched the state from DB yet.
+        _current_state_ids: The room state map, including this event - ie, the state
+            in ``state_group``.
+
             (type, state_key) -> event_id

-        _prev_state_ids (dict[(str, str), str]|None):
-            The current state map excluding the current event. None if outlier
-            or we haven't fetched the state from DB yet.
+            FIXME: what is this for an outlier? it seems ill-defined. It seems like
+            it could be either {}, or the state we were given by the remote
+            server, depending on $THINGS
+
+            Note that this is a private attribute: it should be accessed via
+            ``get_current_state_ids``. _AsyncEventContext impl calculates this
+            on-demand: it will be None until that happens.
+
+        _prev_state_ids: The room state map, excluding this event - ie, the state
+            in ``state_group_before_event``. For a non-state
+            event, this will be the same as _current_state_events.
+
+            Note that it is a completely different thing to prev_group!
+
             (type, state_key) -> event_id
+
+            FIXME: again, what is this for an outlier?
+
+            As with _current_state_ids, this is a private attribute. It should be
+            accessed via get_prev_state_ids.
     """

-    state_group = attr.ib(default=None)
-    rejected = attr.ib(default=False)
-    prev_group = attr.ib(default=None)
-    delta_ids = attr.ib(default=None)
-    app_service = attr.ib(default=None)
-
-    _prev_state_ids = attr.ib(default=None)
-    _current_state_ids = attr.ib(default=None)
+    rejected = attr.ib(default=False, type=Union[bool, str])
+    _state_group = attr.ib(default=None, type=Optional[int])
+    state_group_before_event = attr.ib(default=None, type=Optional[int])
+    prev_group = attr.ib(default=None, type=Optional[int])
+    delta_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])
+    app_service = attr.ib(default=None, type=Optional[ApplicationService])
+
+    _current_state_ids = attr.ib(
+        default=None, type=Optional[Dict[Tuple[str, str], str]]
+    )
+    _prev_state_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])

     @staticmethod
     def with_state(
-        state_group, current_state_ids, prev_state_ids, prev_group=None, delta_ids=None
+        state_group,
+        state_group_before_event,
+        current_state_ids,
+        prev_state_ids,
+        prev_group=None,
+        delta_ids=None,
     ):
         return EventContext(
             current_state_ids=current_state_ids,
             prev_state_ids=prev_state_ids,
             state_group=state_group,
+            state_group_before_event=state_group_before_event,
             prev_group=prev_group,
             delta_ids=delta_ids,
         )

@@ -97,7 +158,8 @@ class EventContext:
             "prev_state_id": prev_state_id,
             "event_type": event.type,
             "event_state_key": event.state_key if event.is_state() else None,
-            "state_group": self.state_group,
+            "state_group": self._state_group,
+            "state_group_before_event": self.state_group_before_event,
             "rejected": self.rejected,
             "prev_group": self.prev_group,
             "delta_ids": _encode_state_dict(self.delta_ids),

@@ -123,6 +185,7 @@ class EventContext:
             event_type=input["event_type"],
             event_state_key=input["event_state_key"],
             state_group=input["state_group"],
+            state_group_before_event=input["state_group_before_event"],
             prev_group=input["prev_group"],
             delta_ids=_decode_state_dict(input["delta_ids"]),
             rejected=input["rejected"],

@@ -134,22 +197,52 @@ class EventContext:

         return context

+    @property
+    def state_group(self) -> Optional[int]:
+        """The ID of the state group for this event.
+
+        Note that state events are persisted with a state group which includes the new
+        event, so this is effectively the state *after* the event in question.
+
+        For an outlier, where we don't have the state at the event, this will be None.
+
+        It is an error to access this for a rejected event, since rejected state should
+        not make it into the room state. Accessing this property will raise an exception
+        if ``rejected`` is set.
+        """
+        if self.rejected:
+            raise RuntimeError("Attempt to access state_group of rejected event")
+
+        return self._state_group
+
     @defer.inlineCallbacks
     def get_current_state_ids(self, store):
-        """Gets the current state IDs
+        """
+        Gets the room state map, including this event - ie, the state in ``state_group``
+
+        It is an error to access this for a rejected event, since rejected state should
+        not make it into the room state. This method will raise an exception if
+        ``rejected`` is set.

         Returns:
             Deferred[dict[(str, str), str]|None]: Returns None if state_group
             is None, which happens when the associated event is an outlier.

             Maps a (type, state_key) to the event ID of the state event matching
             this tuple.
         """
+        if self.rejected:
+            raise RuntimeError("Attempt to access state_ids of rejected event")
+
         yield self._ensure_fetched(store)
         return self._current_state_ids

     @defer.inlineCallbacks
     def get_prev_state_ids(self, store):
-        """Gets the prev state IDs
+        """
+        Gets the room state map, excluding this event.
+
+        For a non-state event, this will be the same as get_current_state_ids().

         Returns:
             Deferred[dict[(str, str), str]|None]: Returns None if state_group

@@ -163,11 +256,17 @@ class EventContext:
     def get_cached_current_state_ids(self):
         """Gets the current state IDs if we have them already cached.

+        It is an error to access this for a rejected event, since rejected state should
+        not make it into the room state. This method will raise an exception if
+        ``rejected`` is set.
+
         Returns:
             dict[(str, str), str]|None: Returns None if we haven't cached the
             state or if state_group is None, which happens when the associated
             event is an outlier.
         """
+        if self.rejected:
+            raise RuntimeError("Attempt to access state_ids of rejected event")
+
         return self._current_state_ids
@@ -177,7 +177,7 @@ class FederationClient(FederationBase):
         given destination server.

         Args:
-            dest (str): The remote home server to ask.
+            dest (str): The remote homeserver to ask.
             room_id (str): The room_id to backfill.
             limit (int): The maximum number of PDUs to return.
             extremities (list): List of PDU id and origins of the first pdus

@@ -227,7 +227,7 @@ class FederationClient(FederationBase):
         one succeeds.

         Args:
-            destinations (list): Which home servers to query
+            destinations (list): Which homeservers to query
             event_id (str): event to fetch
             room_version (str): version of the room
             outlier (bool): Indicates whether the PDU is an `outlier`, i.e. if

@@ -312,7 +312,7 @@ class FederationClient(FederationBase):
     @defer.inlineCallbacks
     @log_function
     def get_state_for_room(self, destination, room_id, event_id):
-        """Requests all of the room state at a given event from a remote home server.
+        """Requests all of the room state at a given event from a remote homeserver.

         Args:
             destination (str): The remote homeserver to query for the state.
@@ -44,7 +44,7 @@ class TransactionActions(object):
             response code and response body.
         """
         if not transaction.transaction_id:
-            raise RuntimeError("Cannot persist a transaction with no " "transaction_id")
+            raise RuntimeError("Cannot persist a transaction with no transaction_id")

         return self.store.get_received_txn_response(transaction.transaction_id, origin)

@@ -56,7 +56,7 @@ class TransactionActions(object):
             Deferred
         """
         if not transaction.transaction_id:
-            raise RuntimeError("Cannot persist a transaction with no " "transaction_id")
+            raise RuntimeError("Cannot persist a transaction with no transaction_id")

         return self.store.set_received_txn_response(
             transaction.transaction_id, origin, code, response
@@ -49,7 +49,7 @@ sent_pdus_destination_dist_count = Counter(

 sent_pdus_destination_dist_total = Counter(
     "synapse_federation_client_sent_pdu_destinations:total",
-    "" "Total number of PDUs queued for sending across all destinations",
+    "Total number of PDUs queued for sending across all destinations",
 )
@@ -84,7 +84,7 @@ class TransactionManager(object):
         txn_id = str(self._next_txn_id)

         logger.debug(
-            "TX [%s] {%s} Attempting new transaction" " (pdus: %d, edus: %d)",
+            "TX [%s] {%s} Attempting new transaction (pdus: %d, edus: %d)",
             destination,
             txn_id,
             len(pdus),

@@ -103,7 +103,7 @@ class TransactionManager(object):
         self._next_txn_id += 1

         logger.info(
-            "TX [%s] {%s} Sending transaction [%s]," " (PDUs: %d, EDUs: %d)",
+            "TX [%s] {%s} Sending transaction [%s], (PDUs: %d, EDUs: %d)",
             destination,
             txn_id,
             transaction.transaction_id,
@@ -14,9 +14,9 @@
 # limitations under the License.

 """The transport layer is responsible for both sending transactions to remote
-home servers and receiving a variety of requests from other home servers.
+homeservers and receiving a variety of requests from other homeservers.

-By default this is done over HTTPS (and all home servers are required to
+By default this is done over HTTPS (and all homeservers are required to
 support HTTPS), however individual pairings of servers may decide to
 communicate over a different (albeit still reliable) protocol.
 """
@@ -44,7 +44,7 @@ class TransportLayerClient(object):
         given event.

         Args:
-            destination (str): The host name of the remote home server we want
+            destination (str): The host name of the remote homeserver we want
                 to get the state from.
             context (str): The name of the context we want the state of
             event_id (str): The event we want the context at.

@@ -68,7 +68,7 @@ class TransportLayerClient(object):
         given event. Returns the state's event_id's

         Args:
-            destination (str): The host name of the remote home server we want
+            destination (str): The host name of the remote homeserver we want
                 to get the state from.
             context (str): The name of the context we want the state of
             event_id (str): The event we want the context at.

@@ -91,7 +91,7 @@ class TransportLayerClient(object):
         """ Requests the pdu with give id and origin from the given server.

         Args:
-            destination (str): The host name of the remote home server we want
+            destination (str): The host name of the remote homeserver we want
                 to get the state from.
             event_id (str): The id of the event being requested.
             timeout (int): How long to try (in ms) the destination for before
@@ -714,7 +714,7 @@ class PublicRoomList(BaseFederationServlet):

     This API returns information in the same format as /publicRooms on the
     client API, but will only ever include local public rooms and hence is
-    intended for consumption by other home servers.
+    intended for consumption by other homeservers.

     GET /publicRooms HTTP/1.1
@ -102,8 +102,9 @@ class AuthHandler(BaseHandler):
|
|||
login_types.append(t)
|
||||
self._supported_login_types = login_types
|
||||
|
||||
self._account_ratelimiter = Ratelimiter()
|
||||
self._failed_attempts_ratelimiter = Ratelimiter()
|
||||
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
|
||||
# as per `rc_login.failed_attempts`.
|
||||
self._failed_uia_attempts_ratelimiter = Ratelimiter()
|
||||
|
||||
self._clock = self.hs.get_clock()
|
||||
|
||||
|
@ -133,12 +134,38 @@ class AuthHandler(BaseHandler):
|
|||
|
||||
AuthError if the client has completed a login flow, and it gives
|
||||
a different user to `requester`
|
||||
|
||||
LimitExceededError if the ratelimiter's failed request count for this
|
||||
user is too high to proceed
|
||||
|
||||
"""
|
||||
|
||||
user_id = requester.user.to_string()
|
||||
|
||||
# Check if we should be ratelimited due to too many previous failed attempts
|
||||
self._failed_uia_attempts_ratelimiter.ratelimit(
|
||||
user_id,
|
||||
time_now_s=self._clock.time(),
|
||||
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
|
||||
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
|
||||
update=False,
|
||||
)
|
||||
|
||||
# build a list of supported flows
|
||||
flows = [[login_type] for login_type in self._supported_login_types]
|
||||
|
||||
result, params, _ = yield self.check_auth(flows, request_body, clientip)
|
||||
try:
|
||||
result, params, _ = yield self.check_auth(flows, request_body, clientip)
|
||||
except LoginError:
|
||||
# Update the ratelimite to say we failed (`can_do_action` doesn't raise).
|
||||
self._failed_uia_attempts_ratelimiter.can_do_action(
|
||||
user_id,
|
||||
time_now_s=self._clock.time(),
|
||||
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
|
||||
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
|
||||
update=True,
|
||||
)
|
||||
raise
|
||||
|
||||
# find the completed login type
|
||||
for login_type in self._supported_login_types:
|
||||
|
@ -223,7 +250,7 @@ class AuthHandler(BaseHandler):
|
|||
# could continue registration from your phone having clicked the
|
||||
# email auth link on there). It's probably too open to abuse
|
||||
# because it lets unauthenticated clients store arbitrary objects
|
||||
# on a home server.
|
||||
# on a homeserver.
|
||||
# Revisit: Assumimg the REST APIs do sensible validation, the data
|
||||
# isn't arbintrary.
|
||||
session["clientdict"] = clientdict
|
||||
|
@ -501,11 +528,8 @@ class AuthHandler(BaseHandler):
|
|||
multiple matches
|
||||
|
||||
Raises:
|
||||
LimitExceededError if the ratelimiter's login requests count for this
|
||||
user is too high too proceed.
|
||||
UserDeactivatedError if a user is found but is deactivated.
|
||||
"""
|
||||
self.ratelimit_login_per_account(user_id)
|
||||
res = yield self._find_user_id_and_pwd_hash(user_id)
|
||||
        if res is not None:
            return res[0]

@@ -572,8 +596,6 @@ class AuthHandler(BaseHandler):
            StoreError if there was a problem accessing the database
            SynapseError if there was a problem with the request
            LoginError if there was an authentication problem.
-            LimitExceededError if the ratelimiter's login requests count for this
-                user is too high to proceed.
        """

        if username.startswith("@"):

@@ -581,8 +603,6 @@ class AuthHandler(BaseHandler):
        else:
            qualified_user_id = UserID(username, self.hs.hostname).to_string()

-        self.ratelimit_login_per_account(qualified_user_id)
-
        login_type = login_submission.get("type")
        known_login_type = False

@@ -650,15 +670,6 @@ class AuthHandler(BaseHandler):
        if not known_login_type:
            raise SynapseError(400, "Unknown login type %s" % login_type)

        # unknown username or invalid password.
        self._failed_attempts_ratelimiter.ratelimit(
            qualified_user_id.lower(),
            time_now_s=self._clock.time(),
            rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
            burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
            update=True,
        )

        # We raise a 403 here, but note that if we're doing user-interactive
        # login, it turns all LoginErrors into a 401 anyway.
        raise LoginError(403, "Invalid password", errcode=Codes.FORBIDDEN)

@@ -710,10 +721,6 @@ class AuthHandler(BaseHandler):
        Returns:
            Deferred[unicode] the canonical_user_id, or Deferred[None] if
                unknown user/bad password
-
-        Raises:
-            LimitExceededError if the ratelimiter's login requests count for this
-                user is too high to proceed.
        """
        lookupres = yield self._find_user_id_and_pwd_hash(user_id)
        if not lookupres:

@@ -742,7 +749,7 @@ class AuthHandler(BaseHandler):
            auth_api.validate_macaroon(macaroon, "login", user_id)
        except Exception:
            raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
        self.ratelimit_login_per_account(user_id)

        yield self.auth.check_auth_blocking(user_id)
        return user_id

@@ -810,7 +817,7 @@ class AuthHandler(BaseHandler):
    @defer.inlineCallbacks
    def add_threepid(self, user_id, medium, address, validated_at):
        # 'Canonicalise' email addresses down to lower case.
-        # We've now moving towards the Home Server being the entity that
+        # We're now moving towards the homeserver being the entity that
        # is responsible for validating threepids used for resetting passwords
        # on accounts, so in future Synapse will gain knowledge of specific
        # types (mediums) of threepid. For now, we still use the existing

@@ -912,35 +919,6 @@ class AuthHandler(BaseHandler):
        else:
            return defer.succeed(False)

    def ratelimit_login_per_account(self, user_id):
        """Checks whether the process must be stopped because of ratelimiting.

        Checks against two ratelimiters: the generic one for login attempts per
        account and the one specific to failed attempts.

        Args:
            user_id (unicode): complete @user:id

        Raises:
            LimitExceededError if one of the ratelimiters' login requests count
                for this user is too high to proceed.
        """
        self._failed_attempts_ratelimiter.ratelimit(
            user_id.lower(),
            time_now_s=self._clock.time(),
            rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
            burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
            update=False,
        )

        self._account_ratelimiter.ratelimit(
            user_id.lower(),
            time_now_s=self._clock.time(),
            rate_hz=self.hs.config.rc_login_account.per_second,
            burst_count=self.hs.config.rc_login_account.burst_count,
            update=True,
        )


@attr.s
class MacaroonGenerator(object):

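The two ratelimiter calls above differ only in their update flag, and the distinction matters: update=False is a pure peek that raises if the key is already over budget without recording anything, while update=True also counts the current attempt. A minimal sketch of that peek-then-commit pattern, using a hypothetical token-bucket Ratelimiter rather than Synapse's actual class:

    import time

    class Ratelimiter(object):
        """A hypothetical leaky-bucket limiter mirroring the update semantics
        used above (not Synapse's actual implementation)."""

        def __init__(self):
            self.actions = {}  # key -> (action_count, window_start_time)

        def can_do_action(self, key, time_now_s, rate_hz, burst_count, update=True):
            action_count, time_start = self.actions.get(key, (0.0, time_now_s))
            # Leak: actions drain out of the bucket at rate_hz per second.
            performed = action_count - (time_now_s - time_start) * rate_hz
            if performed < 0:
                performed, time_start = 0.0, time_now_s
            allowed = performed < burst_count
            if update and allowed:
                self.actions[key] = (performed + 1, time_start)
            return allowed

    limiter = Ratelimiter()
    user = "@alice:example.org"

    # Peek (update=False): bail out early if the failure budget is spent,
    # without counting this check as another attempt.
    if not limiter.can_do_action(user, time.time(), rate_hz=0.17, burst_count=3, update=False):
        raise RuntimeError("429: too many failed logins")

    # Commit (update=True): this attempt now counts against the account.
    if not limiter.can_do_action(user, time.time(), rate_hz=0.003, burst_count=5, update=True):
        raise RuntimeError("429: too many logins")
    print("attempt allowed")
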
@@ -119,7 +119,7 @@ class DirectoryHandler(BaseHandler):
        if not service.is_interested_in_alias(room_alias.to_string()):
            raise SynapseError(
                400,
-                "This application service has not reserved" " this kind of alias.",
+                "This application service has not reserved this kind of alias.",
                errcode=Codes.EXCLUSIVE,
            )
        else:

@@ -283,7 +283,7 @@ class DirectoryHandler(BaseHandler):
    def on_directory_query(self, args):
        room_alias = RoomAlias.from_string(args["room_alias"])
        if not self.hs.is_mine(room_alias):
-            raise SynapseError(400, "Room Alias is not hosted on this Home Server")
+            raise SynapseError(400, "Room Alias is not hosted on this homeserver")

        result = yield self.get_association_from_room_alias(room_alias)

@@ -30,6 +30,7 @@ from twisted.internet import defer
from synapse.api.errors import CodeMessageException, Codes, NotFoundError, SynapseError
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import log_kv, set_tag, tag_args, trace
+from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
from synapse.types import (
    UserID,
    get_domain_from_id,

@@ -53,6 +54,12 @@ class E2eKeysHandler(object):

        self._edu_updater = SigningKeyEduUpdater(hs, self)

+        self._is_master = hs.config.worker_app is None
+        if not self._is_master:
+            self._user_device_resync_client = ReplicationUserDevicesResyncRestServlet.make_client(
+                hs
+            )
+
        federation_registry = hs.get_federation_registry()

        # FIXME: switch to m.signing_key_update when MSC1756 is merged into the spec

@@ -191,9 +198,15 @@ class E2eKeysHandler(object):
            # probably be tracking their device lists. However, we haven't
            # done an initial sync on the device list so we do it now.
            try:
-                user_devices = yield self.device_handler.device_list_updater.user_device_resync(
-                    user_id
-                )
+                if self._is_master:
+                    user_devices = yield self.device_handler.device_list_updater.user_device_resync(
+                        user_id
+                    )
+                else:
+                    user_devices = yield self._user_device_resync_client(
+                        user_id=user_id
+                    )
+
                user_devices = user_devices["devices"]
                for device in user_devices:
                    results[user_id] = {device["device_id"]: device["keys"]}

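The `_is_master` branch above is the usual Synapse shape for work that must run on the main process: workers invoke the same logic over a replication HTTP endpoint instead of doing it locally. A rough sketch of that dispatch, with hypothetical resync functions standing in for the real device-list updater and replication client:

    import asyncio

    async def resync_locally(user_id):
        # Stand-in for device_list_updater.user_device_resync(user_id).
        return {"user_id": user_id, "devices": []}

    async def resync_via_replication(user_id):
        # Stand-in for the client made from ReplicationUserDevicesResyncRestServlet.
        return {"user_id": user_id, "devices": []}

    class DeviceKeyQuerier:
        def __init__(self, is_master: bool):
            self.is_master = is_master

        async def resync_and_extract(self, user_id):
            if self.is_master:
                user_devices = await resync_locally(user_id)
            else:
                user_devices = await resync_via_replication(user_id)
            # Either path returns the federation-format payload; callers
            # only ever look at the "devices" list.
            return user_devices["devices"]

    print(asyncio.run(DeviceKeyQuerier(is_master=False).resync_and_extract("@alice:example.org")))
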
@@ -97,9 +97,9 @@ class FederationHandler(BaseHandler):
    """Handles events that originated from federation.
        Responsible for:
        a) handling received Pdus before handing them on as Events to the rest
-           of the home server (including auth and state conflict resoultion)
+           of the homeserver (including auth and state conflict resolution)
        b) converting events that were produced by local clients that may need
-           to be sent to remote home servers.
+           to be sent to remote homeservers.
        c) doing the necessary dances to invite remote users and join remote
           rooms.
    """

@@ -1688,7 +1688,11 @@ class FederationHandler(BaseHandler):
        # hack around with a try/finally instead.
        success = False
        try:
-            if not event.internal_metadata.is_outlier() and not backfilled:
+            if (
+                not event.internal_metadata.is_outlier()
+                and not backfilled
+                and not context.rejected
+            ):
                yield self.action_generator.handle_push_actions_for_event(
                    event, context
                )

@@ -2276,6 +2280,7 @@ class FederationHandler(BaseHandler):

        return EventContext.with_state(
            state_group=state_group,
+            state_group_before_event=context.state_group_before_event,
            current_state_ids=current_state_ids,
            prev_state_ids=prev_state_ids,
            prev_group=prev_group,

@@ -233,7 +233,9 @@ class PaginationHandler(object):
        self._purges_in_progress_by_room.add(room_id)
        try:
            with (yield self.pagination_lock.write(room_id)):
-                yield self.store.purge_history(room_id, token, delete_local_events)
+                yield self.storage.purge_events.purge_history(
+                    room_id, token, delete_local_events
+                )
            logger.info("[purge] complete")
            self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
        except Exception:

@@ -276,7 +278,7 @@ class PaginationHandler(object):
        if joined:
            raise SynapseError(400, "Users are still joined to this room")

-        await self.store.purge_room(room_id)
+        await self.storage.purge_events.purge_room(room_id)

    @defer.inlineCallbacks
    def get_messages(

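purge_history takes the room's pagination lock for writing so that an in-flight purge and a concurrent /messages pagination cannot interleave. A toy sketch of that discipline (Synapse uses a Twisted ReadWriteLock keyed by room ID; this simplifies it to one asyncio mutex per room):

    import asyncio
    from collections import defaultdict

    # One lock per room: a purge takes it exclusively. In the real code,
    # readers share a ReadWriteLock, which a plain Lock approximates here.
    room_locks = defaultdict(asyncio.Lock)

    async def purge_history(room_id: str) -> None:
        async with room_locks[room_id]:
            # Delete events up to the token; nothing else may touch this
            # room's timeline storage while the purge runs.
            await asyncio.sleep(0.01)
            print("[purge] complete for", room_id)

    async def get_messages(room_id: str) -> list:
        async with room_locks[room_id]:
            # Paginate only when no purge is mid-flight for this room.
            return ["event-1", "event-2"]

    async def main():
        await asyncio.gather(
            purge_history("!room:example.org"),
            get_messages("!room:example.org"),
        )

    asyncio.run(main())
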
@@ -152,7 +152,7 @@ class BaseProfileHandler(BaseHandler):
            by_admin (bool): Whether this change was made by an administrator.
        """
        if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "User is not hosted on this Home Server")
+            raise SynapseError(400, "User is not hosted on this homeserver")

        if not by_admin and target_user != requester.user:
            raise AuthError(400, "Cannot set another user's displayname")

@@ -207,7 +207,7 @@ class BaseProfileHandler(BaseHandler):
        """target_user is the user whose avatar_url is to be changed;
        auth_user is the user attempting to make this change."""
        if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "User is not hosted on this Home Server")
+            raise SynapseError(400, "User is not hosted on this homeserver")

        if not by_admin and target_user != requester.user:
            raise AuthError(400, "Cannot set another user's avatar_url")

@@ -231,7 +231,7 @@ class BaseProfileHandler(BaseHandler):
    def on_profile_query(self, args):
        user = UserID.from_string(args["user_id"])
        if not self.hs.is_mine(user):
-            raise SynapseError(400, "User is not hosted on this Home Server")
+            raise SynapseError(400, "User is not hosted on this homeserver")

        just_field = args.get("field", None)

@@ -24,7 +24,6 @@ from synapse.api.errors import (
    AuthError,
    Codes,
    ConsentNotGivenError,
-    LimitExceededError,
    RegistrationError,
    SynapseError,
)

@@ -168,6 +167,7 @@ class RegistrationHandler(BaseHandler):
        Raises:
            RegistrationError if there was a problem registering.
        """
+        yield self.check_registration_ratelimit(address)

        yield self.auth.check_auth_blocking(threepid=threepid)
        password_hash = None

@@ -217,8 +217,13 @@ class RegistrationHandler(BaseHandler):
        else:
            # autogen a sequential user ID
            fail_count = 0
            user = None
            while not user:
                # Fail after being unable to find a suitable ID a few times
                if fail_count > 10:
                    raise SynapseError(500, "Unable to find a suitable guest user ID")

                localpart = yield self._generate_user_id()
                user = UserID(localpart, self.hs.hostname)
                user_id = user.to_string()

@@ -233,10 +238,14 @@ class RegistrationHandler(BaseHandler):
                    create_profile_with_displayname=default_display_name,
                    address=address,
                )

                # Successfully registered
                break
            except SynapseError:
                # if user id is taken, just generate another
                user = None
                user_id = None
                fail_count += 1

        if not self.hs.config.user_consent_at_registration:
            yield self._auto_join_rooms(user_id)

@@ -414,6 +423,29 @@ class RegistrationHandler(BaseHandler):
            ratelimit=False,
        )

+    def check_registration_ratelimit(self, address):
+        """A simple helper method to check whether the registration rate limit has been hit
+        for a given IP address.
+
+        Args:
+            address (str|None): the IP address used to perform the registration. If this is
+                None, no ratelimiting will be performed.
+
+        Raises:
+            LimitExceededError: If the rate limit has been exceeded.
+        """
+        if not address:
+            return
+
+        time_now = self.clock.time()
+
+        self.ratelimiter.ratelimit(
+            address,
+            time_now_s=time_now,
+            rate_hz=self.hs.config.rc_registration.per_second,
+            burst_count=self.hs.config.rc_registration.burst_count,
+        )
+
    def register_with_store(
        self,
        user_id,

@@ -446,22 +478,6 @@ class RegistrationHandler(BaseHandler):
        Returns:
            Deferred
        """
-        # Don't rate limit for app services
-        if appservice_id is None and address is not None:
-            time_now = self.clock.time()
-
-            allowed, time_allowed = self.ratelimiter.can_do_action(
-                address,
-                time_now_s=time_now,
-                rate_hz=self.hs.config.rc_registration.per_second,
-                burst_count=self.hs.config.rc_registration.burst_count,
-            )
-
-            if not allowed:
-                raise LimitExceededError(
-                    retry_after_ms=int(1000 * (time_allowed - time_now))
-                )
-
        if self.hs.config.worker_app:
            return self._register_client(
                user_id=user_id,

@@ -614,7 +630,7 @@ class RegistrationHandler(BaseHandler):
            # And we add an email pusher for them by default, but only
            # if email notifications are enabled (so people don't start
            # getting mail spam where they weren't before if email
-            # notifs are set up on a home server)
+            # notifs are set up on a homeserver)
            if (
                self.hs.config.email_enable_notifs
                and self.hs.config.email_notif_for_new_users

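The registration change above bounds what was previously an open-ended "generate an ID and hope it is free" loop. The shape of that bounded-retry pattern, sketched independently of the Synapse store (try_register stands in for the register_with_store call that raises on a collision):

    import itertools

    class IdTaken(Exception):
        pass

    _counter = itertools.count()
    _taken = {"guest_0", "guest_1"}  # pretend these users already exist

    def try_register(localpart: str) -> str:
        # Stand-in for register_with_store(), which raises SynapseError
        # when the generated ID collides with an existing user.
        if localpart in _taken:
            raise IdTaken(localpart)
        return localpart

    def register_guest(max_failures: int = 10) -> str:
        fail_count = 0
        while True:
            # Fail after being unable to find a suitable ID a few times,
            # instead of spinning forever against a pathological store.
            if fail_count > max_failures:
                raise RuntimeError("Unable to find a suitable guest user ID")
            candidate = "guest_%d" % next(_counter)
            try:
                return try_register(candidate)
            except IdTaken:
                fail_count += 1

    print(register_guest())  # -> guest_2
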
@@ -515,6 +515,15 @@ class RoomMemberHandler(object):
        yield self.store.set_room_is_public(old_room_id, False)
        yield self.store.set_room_is_public(room_id, True)

+        # Check if any groups we own contain the predecessor room
+        local_group_ids = yield self.store.get_local_groups_for_room(old_room_id)
+        for group_id in local_group_ids:
+            # Add the new room to those groups
+            yield self.store.add_room_to_group(group_id, room_id, old_room["is_public"])
+
+            # Remove the old room from those groups
+            yield self.store.remove_room_from_group(group_id, old_room_id)
+
    @defer.inlineCallbacks
    def copy_user_state_on_room_upgrade(self, old_room_id, new_room_id, user_ids):
        """Copy user-specific information when they join a new room when that new room is the

@@ -120,7 +120,7 @@ class TypingHandler(object):
        auth_user_id = auth_user.to_string()

        if not self.is_mine_id(target_user_id):
-            raise SynapseError(400, "User is not hosted on this Home Server")
+            raise SynapseError(400, "User is not hosted on this homeserver")

        if target_user_id != auth_user_id:
            raise AuthError(400, "Cannot set another user's typing state")

@@ -150,7 +150,7 @@ class TypingHandler(object):
        auth_user_id = auth_user.to_string()

        if not self.is_mine_id(target_user_id):
-            raise SynapseError(400, "User is not hosted on this Home Server")
+            raise SynapseError(400, "User is not hosted on this homeserver")

        if target_user_id != auth_user_id:
            raise AuthError(400, "Cannot set another user's typing state")

@@ -530,7 +530,7 @@ class MatrixFederationHttpClient(object):
        """
        Builds the Authorization headers for a federation request
        Args:
-            destination (bytes|None): The desination home server of the request.
+            destination (bytes|None): The destination homeserver of the request.
                May be None if the destination is an identity server, in which case
                destination_is must be non-None.
            method (bytes): The HTTP method of the request

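For context on what build_auth_headers produces: federation requests carry an Authorization header in the X-Matrix scheme, whose signature covers a JSON object describing the method, URI, origin, destination and (if present) content. A rough sketch of that header's shape; the ed25519 signing step is elided and "fake_sig" is a placeholder for the base64 signature the real client computes:

    import json

    def build_auth_header(origin, destination, method, uri, content=None):
        request_json = {
            "method": method,
            "uri": uri,
            "origin": origin,
            "destination": destination,
        }
        if content is not None:
            request_json["content"] = content

        # The homeserver signs the canonical JSON of request_json with its
        # ed25519 signing key here; we fake the result for illustration.
        _to_sign = json.dumps(request_json, sort_keys=True)
        key_id, sig = "ed25519:a_key", "fake_sig"

        return 'X-Matrix origin=%s,key="%s",sig="%s"' % (origin, key_id, sig)

    print(build_auth_header(
        "hs1.example.org", "hs2.example.org", "GET",
        "/_matrix/federation/v1/user/devices/%40alice%3Ahs2.example.org",
    ))
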
@@ -96,7 +96,7 @@ def parse_boolean_from_args(args, name, default=None, required=False):
            return {b"true": True, b"false": False}[args[name][0]]
        except Exception:
            message = (
-                "Boolean query parameter %r must be one of" " ['true', 'false']"
+                "Boolean query parameter %r must be one of ['true', 'false']"
            ) % (name,)
            raise SynapseError(400, message)
    else:

@@ -261,6 +261,18 @@ def parse_drain_configs(
        )


+class StoppableLogPublisher(LogPublisher):
+    """
+    A log publisher that can tell its observers to shut down any external
+    communications.
+    """
+
+    def stop(self):
+        for obs in self._observers:
+            if hasattr(obs, "stop"):
+                obs.stop()
+
+
def setup_structured_logging(
    hs,
    config,

@@ -336,7 +348,7 @@ def setup_structured_logging(
    # We should never get here, but, just in case, throw an error.
    raise ConfigError("%s drain type cannot be configured" % (observer.type,))

-    publisher = LogPublisher(*observers)
+    publisher = StoppableLogPublisher(*observers)
    log_filter = LogLevelFilterPredicate()

    for namespace, namespace_config in log_config.get(

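StoppableLogPublisher relies on plain duck typing: on shutdown it calls stop() on whichever observers happen to define one, so a TCP observer can flush and close while file observers are left alone. A standalone sketch of that fan-out:

    class Publisher:
        """Minimal stand-in for LogPublisher: fans events out to observers."""

        def __init__(self, *observers):
            self._observers = list(observers)

        def stop(self):
            # Only observers holding external connections define stop();
            # hasattr() lets the rest opt out silently.
            for obs in self._observers:
                if hasattr(obs, "stop"):
                    obs.stop()

    class FileObserver:
        def __call__(self, event):
            pass  # nothing to shut down

    class TcpObserver:
        def __call__(self, event):
            pass

        def stop(self):
            print("closing TCP log connection")

    Publisher(FileObserver(), TcpObserver()).stop()  # -> closing TCP log connection
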
@@ -17,25 +17,29 @@
Log formatters that output terse JSON.
"""

import json
import sys
import traceback
from collections import deque
from ipaddress import IPv4Address, IPv6Address, ip_address
from math import floor
-from typing import IO
+from typing import IO, Optional

import attr
from simplejson import dumps
from zope.interface import implementer

from twisted.application.internet import ClientService
from twisted.internet.defer import Deferred
from twisted.internet.endpoints import (
    HostnameEndpoint,
    TCP4ClientEndpoint,
    TCP6ClientEndpoint,
)
from twisted.internet.interfaces import IPushProducer, ITransport
from twisted.internet.protocol import Factory, Protocol
from twisted.logger import FileLogObserver, ILogObserver, Logger
from twisted.python.failure import Failure

_encoder = json.JSONEncoder(ensure_ascii=False, separators=(",", ":"))


def flatten_event(event: dict, metadata: dict, include_time: bool = False):

@@ -141,11 +145,49 @@ def TerseJSONToConsoleLogObserver(outFile: IO[str], metadata: dict) -> FileLogObserver:

    def formatEvent(_event: dict) -> str:
        flattened = flatten_event(_event, metadata)
-        return dumps(flattened, ensure_ascii=False, separators=(",", ":")) + "\n"
+        return _encoder.encode(flattened) + "\n"

    return FileLogObserver(outFile, formatEvent)


+@attr.s
+@implementer(IPushProducer)
+class LogProducer(object):
+    """
+    An IPushProducer that writes logs from its buffer to its transport when it
+    is resumed.
+
+    Args:
+        buffer: Log buffer to read logs from.
+        transport: Transport to write to.
+    """
+
+    transport = attr.ib(type=ITransport)
+    _buffer = attr.ib(type=deque)
+    _paused = attr.ib(default=False, type=bool, init=False)
+
+    def pauseProducing(self):
+        self._paused = True
+
+    def stopProducing(self):
+        self._paused = True
+        self._buffer = None
+
+    def resumeProducing(self):
+        self._paused = False
+
+        while self._paused is False and (self._buffer and self.transport.connected):
+            try:
+                event = self._buffer.popleft()
+                self.transport.write(_encoder.encode(event).encode("utf8"))
+                self.transport.write(b"\n")
+            except Exception:
+                # Something has gone wrong writing to the transport -- log it
+                # and break out of the while.
+                traceback.print_exc(file=sys.__stderr__)
+                break
+
+
@attr.s
@implementer(ILogObserver)
class TerseJSONToTCPLogObserver(object):

@@ -153,7 +195,7 @@ class TerseJSONToTCPLogObserver(object):
    An IObserver that writes JSON logs to a TCP target.

    Args:
-        hs (HomeServer): The Homeserver that is being logged for.
+        hs (HomeServer): The homeserver that is being logged for.
        host: The host of the logging target.
        port: The logging target's port.
        metadata: Metadata to be added to each log entry.

@@ -165,8 +207,9 @@ class TerseJSONToTCPLogObserver(object):
    metadata = attr.ib(type=dict)
    maximum_buffer = attr.ib(type=int)
    _buffer = attr.ib(default=attr.Factory(deque), type=deque)
-    _writer = attr.ib(default=None)
+    _connection_waiter = attr.ib(default=None, type=Optional[Deferred])
    _logger = attr.ib(default=attr.Factory(Logger))
+    _producer = attr.ib(default=None, type=Optional[LogProducer])

    def start(self) -> None:

@@ -187,38 +230,43 @@ class TerseJSONToTCPLogObserver(object):
        factory = Factory.forProtocol(Protocol)
        self._service = ClientService(endpoint, factory, clock=self.hs.get_reactor())
        self._service.startService()
+        self._connect()

-    def _write_loop(self) -> None:
+    def stop(self):
+        self._service.stopService()
+
+    def _connect(self) -> None:
        """
-        Implement the write loop.
+        Triggers an attempt to connect then write to the remote if not already writing.
        """
-        if self._writer:
+        if self._connection_waiter:
            return

-        self._writer = self._service.whenConnected()
+        self._connection_waiter = self._service.whenConnected(failAfterFailures=1)

-        @self._writer.addBoth
+        @self._connection_waiter.addErrback
+        def fail(r):
+            r.printTraceback(file=sys.__stderr__)
+            self._connection_waiter = None
+            self._connect()
+
+        @self._connection_waiter.addCallback
        def writer(r):
-            if isinstance(r, Failure):
-                r.printTraceback(file=sys.__stderr__)
-                self._writer = None
-                self.hs.get_reactor().callLater(1, self._write_loop)
+            # We have a connection. If we already have a producer, and its
+            # transport is the same, just trigger a resumeProducing.
+            if self._producer and r.transport is self._producer.transport:
+                self._producer.resumeProducing()
                return

-            try:
-                for event in self._buffer:
-                    r.transport.write(
-                        dumps(event, ensure_ascii=False, separators=(",", ":")).encode(
-                            "utf8"
-                        )
-                    )
-                    r.transport.write(b"\n")
-                self._buffer.clear()
-            except Exception as e:
-                sys.__stderr__.write("Failed writing out logs with %s\n" % (str(e),))
+            # If the producer is still producing, stop it.
+            if self._producer:
+                self._producer.stopProducing()

-            self._writer = False
-            self.hs.get_reactor().callLater(1, self._write_loop)
+            # Make a new producer and start it.
+            self._producer = LogProducer(buffer=self._buffer, transport=r.transport)
+            r.transport.registerProducer(self._producer, True)
+            self._producer.resumeProducing()
+            self._connection_waiter = None

    def _handle_pressure(self) -> None:
        """

@@ -277,4 +325,4 @@ class TerseJSONToTCPLogObserver(object):
        self._logger.failure("Failed clearing backpressure")

        # Try and write immediately.
-        self._write_loop()
+        self._connect()

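The LogProducer/registerProducer dance above is Twisted's push-producer protocol: the transport calls pauseProducing() when its write buffer fills and resumeProducing() when it drains, so log events queue in the deque instead of overwhelming the socket. A reactor-free sketch of that contract with a fake transport standing in for Twisted's:

    from collections import deque

    class FakeTransport:
        """Stand-in for a Twisted transport that applies backpressure."""
        connected = True

        def __init__(self, producer, high_water=2):
            self.producer, self.high_water, self.written = producer, high_water, []

        def write(self, data):
            self.written.append(data)
            if len(self.written) >= self.high_water:
                self.producer.pauseProducing()  # buffer full: stop the producer

    class LogProducer:
        def __init__(self, buffer, transport):
            self._buffer, self.transport, self._paused = buffer, transport, False

        def pauseProducing(self):
            self._paused = True

        def stopProducing(self):
            self._paused, self._buffer = True, None

        def resumeProducing(self):
            self._paused = False
            while not self._paused and self._buffer and self.transport.connected:
                self.transport.write(self._buffer.popleft() + b"\n")

    buf = deque([b'{"log":1}', b'{"log":2}', b'{"log":3}'])
    producer = LogProducer(buf, None)
    producer.transport = FakeTransport(producer)
    producer.resumeProducing()
    print(len(buf), "event(s) still queued")  # backpressure left 1 queued
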
@@ -246,7 +246,7 @@ class HttpPusher(object):
                # fixed, we don't suddenly deliver a load
                # of old notifications.
                logger.warning(
-                    "Giving up on a notification to user %s, " "pushkey %s",
+                    "Giving up on a notification to user %s, pushkey %s",
                    self.user_id,
                    self.pushkey,
                )

@@ -299,8 +299,7 @@ class HttpPusher(object):
                    # for sanity, we only remove the pushkey if it
                    # was the one we actually sent...
                    logger.warning(
-                        ("Ignoring rejected pushkey %s because we" " didn't send it"),
-                        pk,
+                        ("Ignoring rejected pushkey %s because we didn't send it"), pk,
                    )
                else:
                    logger.info("Pushkey %s was rejected: removing", pk)

@@ -43,7 +43,7 @@ logger = logging.getLogger(__name__)


MESSAGE_FROM_PERSON_IN_ROOM = (
-    "You have a message on %(app)s from %(person)s " "in the %(room)s room..."
+    "You have a message on %(app)s from %(person)s in the %(room)s room..."
)
MESSAGE_FROM_PERSON = "You have a message on %(app)s from %(person)s..."
MESSAGES_FROM_PERSON = "You have messages on %(app)s from %(person)s..."

@@ -55,7 +55,7 @@ MESSAGES_FROM_PERSON_AND_OTHERS = (
    "You have messages on %(app)s from %(person)s and others..."
)
INVITE_FROM_PERSON_TO_ROOM = (
-    "%(person)s has invited you to join the " "%(room)s room on %(app)s..."
+    "%(person)s has invited you to join the %(room)s room on %(app)s..."
)
INVITE_FROM_PERSON = "%(person)s has invited you to chat on %(app)s..."

@@ -61,7 +61,6 @@ REQUIREMENTS = [
    "bcrypt>=3.1.0",
    "pillow>=4.3.0",
    "sortedcontainers>=1.4.4",
-    "psutil>=2.0.0",
    "pymacaroons>=0.13.0",
    "msgpack>=0.5.2",
    "phonenumbers>=8.2.0",

@@ -14,7 +14,14 @@
# limitations under the License.

from synapse.http.server import JsonResource
-from synapse.replication.http import federation, login, membership, register, send_event
+from synapse.replication.http import (
+    devices,
+    federation,
+    login,
+    membership,
+    register,
+    send_event,
+)

REPLICATION_PREFIX = "/_synapse/replication"

@@ -30,3 +37,4 @@ class ReplicationRestResource(JsonResource):
        federation.register_servlets(hs, self)
        login.register_servlets(hs, self)
        register.register_servlets(hs, self)
+        devices.register_servlets(hs, self)

synapse/replication/http/devices.py (new file, 73 lines)

@@ -0,0 +1,73 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from synapse.replication.http._base import ReplicationEndpoint

logger = logging.getLogger(__name__)


class ReplicationUserDevicesResyncRestServlet(ReplicationEndpoint):
    """Ask master to resync the device list for a user by contacting their
    server.

    This must happen on master so that the results can be correctly cached in
    the database and streamed to workers.

    Request format:

        POST /_synapse/replication/user_device_resync/:user_id

        {}

    Response is equivalent to the `/_matrix/federation/v1/user/devices/:user_id`
    response, e.g.:

        {
            "user_id": "@alice:example.org",
            "devices": [
                {
                    "device_id": "JLAFKJWSCS",
                    "keys": { ... },
                    "device_display_name": "Alice's Mobile Phone"
                }
            ]
        }
    """

    NAME = "user_device_resync"
    PATH_ARGS = ("user_id",)
    CACHE = False

    def __init__(self, hs):
        super(ReplicationUserDevicesResyncRestServlet, self).__init__(hs)

        self.device_list_updater = hs.get_device_handler().device_list_updater
        self.store = hs.get_datastore()
        self.clock = hs.get_clock()

    @staticmethod
    def _serialize_payload(user_id):
        return {}

    async def _handle_request(self, request, user_id):
        user_devices = await self.device_list_updater.user_device_resync(user_id)

        return 200, user_devices


def register_servlets(hs, http_server):
    ReplicationUserDevicesResyncRestServlet(hs).register(http_server)

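On the worker side, ReplicationEndpoint.make_client turns a servlet definition like this into a callable that issues the POST above; the e2e-keys change earlier in this diff uses exactly that. A hedged sketch of the round trip, with a much-simplified stand-in for the real client helper (which also signs, retries, and uses the worker's HTTP client):

    import asyncio
    import json
    import urllib.parse

    REPLICATION_PREFIX = "/_synapse/replication"

    def make_client(name, path_args):
        # Simplified stand-in for ReplicationEndpoint.make_client: shows only
        # the URL/payload contract described in the docstring above.
        async def send_request(**kwargs):
            path = "/".join(
                urllib.parse.quote(str(kwargs.pop(a)), safe="") for a in path_args
            )
            url = "%s/%s/%s" % (REPLICATION_PREFIX, name, path)
            payload = json.dumps(kwargs)  # _serialize_payload returned {} here
            print("POST", url, payload)
            return {"user_id": "@alice:example.org", "devices": []}
        return send_request

    user_device_resync = make_client("user_device_resync", ("user_id",))
    result = asyncio.run(user_device_resync(user_id="@alice:example.org"))
    print(result["devices"])
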
@@ -75,6 +75,8 @@ class ReplicationRegisterServlet(ReplicationEndpoint):
    async def _handle_request(self, request, user_id):
        content = parse_json_object_from_request(request)

+        self.registration_handler.check_registration_ratelimit(content["address"])
+
        await self.registration_handler.register_with_store(
            user_id=user_id,
            password_hash=content["password_hash"],

@@ -14,6 +14,7 @@
# limitations under the License.

import logging
+from typing import Dict

import six

@@ -44,7 +45,14 @@ class BaseSlavedStore(SQLBaseStore):

        self.hs = hs

-    def stream_positions(self):
+    def stream_positions(self) -> Dict[str, int]:
+        """
+        Get the current positions of all the streams this store wants to subscribe to
+
+        Returns:
+            map from stream name to the most recent update we have for
+            that stream (ie, the point we want to start replicating from)
+        """
        pos = {}
        if self._cache_id_gen:
            pos["caches"] = self._cache_id_gen.get_current_token()

@@ -16,10 +16,17 @@
"""

import logging
+from typing import Dict

from twisted.internet import defer
from twisted.internet.protocol import ReconnectingClientFactory

+from synapse.replication.slave.storage._base import BaseSlavedStore
+from synapse.replication.tcp.protocol import (
+    AbstractReplicationClientHandler,
+    ClientReplicationStreamProtocol,
+)
+
from .commands import (
    FederationAckCommand,
    InvalidateCacheCommand,

@@ -27,7 +34,6 @@ from .commands import (
    UserIpCommand,
    UserSyncCommand,
)
-from .protocol import ClientReplicationStreamProtocol

logger = logging.getLogger(__name__)

@@ -42,7 +48,7 @@ class ReplicationClientFactory(ReconnectingClientFactory):

    maxDelay = 30  # Try at least once every N seconds

-    def __init__(self, hs, client_name, handler):
+    def __init__(self, hs, client_name, handler: AbstractReplicationClientHandler):
        self.client_name = client_name
        self.handler = handler
        self.server_name = hs.config.server_name

@@ -68,13 +74,13 @@ class ReplicationClientFactory(ReconnectingClientFactory):
        ReconnectingClientFactory.clientConnectionFailed(self, connector, reason)


-class ReplicationClientHandler(object):
+class ReplicationClientHandler(AbstractReplicationClientHandler):
    """A base handler that can be passed to the ReplicationClientFactory.

    By default proxies incoming replication data to the SlaveStore.
    """

-    def __init__(self, store):
+    def __init__(self, store: BaseSlavedStore):
        self.store = store

        # The current connection. None if we are currently (re)connecting

@@ -138,11 +144,13 @@ class ReplicationClientHandler(object):
        if d:
            d.callback(data)

-    def get_streams_to_replicate(self):
+    def get_streams_to_replicate(self) -> Dict[str, int]:
        """Called when a new connection has been established and we need to
        subscribe to streams.

-        Returns a dictionary of stream name to token.
+        Returns:
+            map from stream name to the most recent update we have for
+            that stream (ie, the point we want to start replicating from)
        """
        args = self.store.stream_positions()
        user_account_data = args.pop("user_account_data", None)

@@ -48,7 +48,7 @@ indicate which side is sending, these are *not* included on the wire::
    > ERROR server stopping
    * connection closed by server *
"""

import abc
import fcntl
import logging
import struct

@@ -65,6 +65,7 @@ from twisted.python.failure import Failure
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.util import Clock
from synapse.util.stringutils import random_string

from .commands import (

@@ -558,11 +559,80 @@ class ServerReplicationStreamProtocol(BaseReplicationStreamProtocol):
        self.streamer.lost_connection(self)


+class AbstractReplicationClientHandler(metaclass=abc.ABCMeta):
+    """
+    The interface for the handler that should be passed to
+    ClientReplicationStreamProtocol
+    """
+
+    @abc.abstractmethod
+    def on_rdata(self, stream_name, token, rows):
+        """Called to handle a batch of replication data with a given stream token.
+
+        Args:
+            stream_name (str): name of the replication stream for this batch of rows
+            token (int): stream token for this batch of rows
+            rows (list): a list of Stream.ROW_TYPE objects as returned by
+                Stream.parse_row.
+
+        Returns:
+            Deferred|None
+        """
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def on_position(self, stream_name, token):
+        """Called when we get new position data."""
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def on_sync(self, data):
+        """Called when we get a new SYNC command."""
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def get_streams_to_replicate(self):
+        """Called when a new connection has been established and we need to
+        subscribe to streams.
+
+        Returns:
+            map from stream name to the most recent update we have for
+            that stream (ie, the point we want to start replicating from)
+        """
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def get_currently_syncing_users(self):
+        """Get the list of currently syncing users (if any). This is called
+        when a connection has been established and we need to send the
+        currently syncing users."""
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def update_connection(self, connection):
+        """Called when a connection has been established (or lost with None)."""
+        raise NotImplementedError()
+
+    @abc.abstractmethod
+    def finished_connecting(self):
+        """Called when we have successfully subscribed and caught up to all
+        streams we're interested in.
+        """
+        raise NotImplementedError()
+
+
class ClientReplicationStreamProtocol(BaseReplicationStreamProtocol):
    VALID_INBOUND_COMMANDS = VALID_SERVER_COMMANDS
    VALID_OUTBOUND_COMMANDS = VALID_CLIENT_COMMANDS

-    def __init__(self, client_name, server_name, clock, handler):
+    def __init__(
+        self,
+        client_name: str,
+        server_name: str,
+        clock: Clock,
+        handler: AbstractReplicationClientHandler,
+    ):
        BaseReplicationStreamProtocol.__init__(self, clock)

        self.client_name = client_name

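Extracting AbstractReplicationClientHandler gives the protocol a checkable interface: any handler missing one of the abstract methods now fails at instantiation time rather than mid-replication. A compact illustration of that abc behaviour:

    import abc

    class AbstractHandler(metaclass=abc.ABCMeta):
        @abc.abstractmethod
        def on_rdata(self, stream_name, token, rows):
            raise NotImplementedError()

        @abc.abstractmethod
        def on_position(self, stream_name, token):
            raise NotImplementedError()

    class GoodHandler(AbstractHandler):
        def on_rdata(self, stream_name, token, rows):
            print("rows for", stream_name, "up to", token)

        def on_position(self, stream_name, token):
            print(stream_name, "now at", token)

    class BadHandler(AbstractHandler):
        def on_rdata(self, stream_name, token, rows):
            pass  # forgot on_position

    GoodHandler().on_rdata("account_data", 42, [])
    try:
        BadHandler()  # abc rejects the incomplete subclass up front
    except TypeError as e:
        print("rejected:", e)
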
@@ -88,8 +88,7 @@ TagAccountDataStreamRow = namedtuple(
    "TagAccountDataStreamRow", ("user_id", "room_id", "data")  # str  # str  # dict
)
AccountDataStreamRow = namedtuple(
-    "AccountDataStream",
-    ("user_id", "room_id", "data_type", "data"),  # str  # str  # str  # dict
+    "AccountDataStream", ("user_id", "room_id", "data_type")  # str  # str  # str
)
GroupsStreamRow = namedtuple(
    "GroupsStreamRow",

@@ -421,8 +420,8 @@ class AccountDataStream(Stream):

        results = list(room_results)
        results.extend(
-            (stream_id, user_id, None, account_data_type, content)
-            for stream_id, user_id, account_data_type, content in global_results
+            (stream_id, user_id, None, account_data_type)
+            for stream_id, user_id, account_data_type in global_results
        )

        return results

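The row change above shrinks what goes over the replication wire: account-data rows now carry only (user_id, room_id, data_type), and consumers fetch the content itself from the database rather than trusting a possibly stale copy embedded in the stream. A sketch of the before/after row shapes:

    from collections import namedtuple

    # Old wire format: the content rode along with the notification.
    OldAccountDataRow = namedtuple(
        "AccountDataStream", ("user_id", "room_id", "data_type", "data")
    )
    # New wire format: the row is only a pointer; readers hit the DB for data.
    AccountDataStreamRow = namedtuple(
        "AccountDataStream", ("user_id", "room_id", "data_type")
    )

    row = AccountDataStreamRow("@alice:example.org", None, "m.push_rules")
    print(row.user_id, row.data_type)
    # A consumer would now look the content up in its store (illustrative
    # call, not a specific Synapse API) instead of reading row.data:
    #     data = store.get_account_data_for_user_by_type(row.user_id, row.data_type)
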
@@ -14,62 +14,39 @@
-import hashlib
-import hmac
import logging
import platform
import re

-from six import text_type
-from six.moves import http_client

import synapse
-from synapse.api.constants import Membership, UserTypes
from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.http.server import JsonResource
-from synapse.http.servlet import (
-    RestServlet,
-    assert_params_in_dict,
-    parse_integer,
-    parse_json_object_from_request,
-    parse_string,
-)
+from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.rest.admin._base import (
    assert_requester_is_admin,
    assert_user_is_admin,
    historical_admin_path_patterns,
)
+from synapse.rest.admin.groups import DeleteGroupAdminRestServlet
from synapse.rest.admin.media import ListMediaInRoom, register_servlets_for_media_repo
from synapse.rest.admin.purge_room_servlet import PurgeRoomServlet
+from synapse.rest.admin.rooms import ShutdownRoomRestServlet
from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet
-from synapse.rest.admin.users import UserAdminServlet
-from synapse.types import UserID, create_requester
-from synapse.util.async_helpers import maybe_awaitable
+from synapse.rest.admin.users import (
+    AccountValidityRenewServlet,
+    DeactivateAccountRestServlet,
+    GetUsersPaginatedRestServlet,
+    ResetPasswordRestServlet,
+    SearchUsersRestServlet,
+    UserAdminServlet,
+    UserRegisterServlet,
+    UsersRestServlet,
+    WhoisRestServlet,
+)
from synapse.util.versionstring import get_version_string

logger = logging.getLogger(__name__)


-class UsersRestServlet(RestServlet):
-    PATTERNS = historical_admin_path_patterns("/users/(?P<user_id>[^/]*)$")
-
-    def __init__(self, hs):
-        self.hs = hs
-        self.auth = hs.get_auth()
-        self.handlers = hs.get_handlers()
-
-    async def on_GET(self, request, user_id):
-        target_user = UserID.from_string(user_id)
-        await assert_requester_is_admin(self.auth, request)
-
-        if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only look up local users")
-
-        ret = await self.handlers.admin_handler.get_users()
-
-        return 200, ret
-
-
class VersionServlet(RestServlet):
    PATTERNS = (re.compile("^/_synapse/admin/v1/server_version$"),)

@@ -83,159 +60,6 @@ class VersionServlet(RestServlet):
        return 200, self.res


-class UserRegisterServlet(RestServlet):
-    """
-    Attributes:
-        NONCE_TIMEOUT (int): Seconds until a generated nonce won't be accepted
-        nonces (dict[str, int]): The nonces that we will accept. A dict of
-            nonce to the time it was generated, in int seconds.
-    """
-
-    PATTERNS = historical_admin_path_patterns("/register")
-    NONCE_TIMEOUT = 60
-
-    def __init__(self, hs):
-        self.handlers = hs.get_handlers()
-        self.reactor = hs.get_reactor()
-        self.nonces = {}
-        self.hs = hs
-
-    def _clear_old_nonces(self):
-        """
-        Clear out old nonces that are older than NONCE_TIMEOUT.
-        """
-        now = int(self.reactor.seconds())
-
-        for k, v in list(self.nonces.items()):
-            if now - v > self.NONCE_TIMEOUT:
-                del self.nonces[k]
-
-    def on_GET(self, request):
-        """
-        Generate a new nonce.
-        """
-        self._clear_old_nonces()
-
-        nonce = self.hs.get_secrets().token_hex(64)
-        self.nonces[nonce] = int(self.reactor.seconds())
-        return 200, {"nonce": nonce}
-
-    async def on_POST(self, request):
-        self._clear_old_nonces()
-
-        if not self.hs.config.registration_shared_secret:
-            raise SynapseError(400, "Shared secret registration is not enabled")
-
-        body = parse_json_object_from_request(request)
-
-        if "nonce" not in body:
-            raise SynapseError(400, "nonce must be specified", errcode=Codes.BAD_JSON)
-
-        nonce = body["nonce"]
-
-        if nonce not in self.nonces:
-            raise SynapseError(400, "unrecognised nonce")
-
-        # Delete the nonce, so it can't be reused, even if it's invalid
-        del self.nonces[nonce]
-
-        if "username" not in body:
-            raise SynapseError(
-                400, "username must be specified", errcode=Codes.BAD_JSON
-            )
-        else:
-            if (
-                not isinstance(body["username"], text_type)
-                or len(body["username"]) > 512
-            ):
-                raise SynapseError(400, "Invalid username")
-
-            username = body["username"].encode("utf-8")
-            if b"\x00" in username:
-                raise SynapseError(400, "Invalid username")
-
-        if "password" not in body:
-            raise SynapseError(
-                400, "password must be specified", errcode=Codes.BAD_JSON
-            )
-        else:
-            if (
-                not isinstance(body["password"], text_type)
-                or len(body["password"]) > 512
-            ):
-                raise SynapseError(400, "Invalid password")
-
-            password = body["password"].encode("utf-8")
-            if b"\x00" in password:
-                raise SynapseError(400, "Invalid password")
-
-        admin = body.get("admin", None)
-        user_type = body.get("user_type", None)
-
-        if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES:
-            raise SynapseError(400, "Invalid user type")
-
-        got_mac = body["mac"]
-
-        want_mac = hmac.new(
-            key=self.hs.config.registration_shared_secret.encode(),
-            digestmod=hashlib.sha1,
-        )
-        want_mac.update(nonce.encode("utf8"))
-        want_mac.update(b"\x00")
-        want_mac.update(username)
-        want_mac.update(b"\x00")
-        want_mac.update(password)
-        want_mac.update(b"\x00")
-        want_mac.update(b"admin" if admin else b"notadmin")
-        if user_type:
-            want_mac.update(b"\x00")
-            want_mac.update(user_type.encode("utf8"))
-        want_mac = want_mac.hexdigest()
-
-        if not hmac.compare_digest(want_mac.encode("ascii"), got_mac.encode("ascii")):
-            raise SynapseError(403, "HMAC incorrect")
-
-        # Reuse the parts of RegisterRestServlet to reduce code duplication
-        from synapse.rest.client.v2_alpha.register import RegisterRestServlet
-
-        register = RegisterRestServlet(self.hs)
-
-        user_id = await register.registration_handler.register_user(
-            localpart=body["username"].lower(),
-            password=body["password"],
-            admin=bool(admin),
-            user_type=user_type,
-        )
-
-        result = await register._create_registration_details(user_id, body)
-        return 200, result
-
-
-class WhoisRestServlet(RestServlet):
-    PATTERNS = historical_admin_path_patterns("/whois/(?P<user_id>[^/]*)")
-
-    def __init__(self, hs):
-        self.hs = hs
-        self.auth = hs.get_auth()
-        self.handlers = hs.get_handlers()
-
-    async def on_GET(self, request, user_id):
-        target_user = UserID.from_string(user_id)
-        requester = await self.auth.get_user_by_req(request)
-        auth_user = requester.user
-
-        if target_user != auth_user:
-            await assert_user_is_admin(self.auth, auth_user)
-
-        if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only whois a local user")
-
-        ret = await self.handlers.admin_handler.get_whois(target_user)
-
-        return 200, ret
-
-
class PurgeHistoryRestServlet(RestServlet):
    PATTERNS = historical_admin_path_patterns(
        "/purge_history/(?P<room_id>[^/]*)(/(?P<event_id>[^/]+))?"

@@ -342,369 +166,6 @@ class PurgeHistoryStatusRestServlet(RestServlet):
        return 200, purge_status.asdict()


-class DeactivateAccountRestServlet(RestServlet):
-    PATTERNS = historical_admin_path_patterns("/deactivate/(?P<target_user_id>[^/]*)")
-
-    def __init__(self, hs):
-        self._deactivate_account_handler = hs.get_deactivate_account_handler()
-        self.auth = hs.get_auth()
-
-    async def on_POST(self, request, target_user_id):
-        await assert_requester_is_admin(self.auth, request)
-        body = parse_json_object_from_request(request, allow_empty_body=True)
-        erase = body.get("erase", False)
-        if not isinstance(erase, bool):
-            raise SynapseError(
-                http_client.BAD_REQUEST,
-                "Param 'erase' must be a boolean, if given",
-                Codes.BAD_JSON,
-            )
-
-        UserID.from_string(target_user_id)
-
-        result = await self._deactivate_account_handler.deactivate_account(
-            target_user_id, erase
-        )
-        if result:
-            id_server_unbind_result = "success"
-        else:
-            id_server_unbind_result = "no-support"
-
-        return 200, {"id_server_unbind_result": id_server_unbind_result}
-
-
-class ShutdownRoomRestServlet(RestServlet):
-    """Shuts down a room by removing all local users from the room and blocking
-    all future invites and joins to the room. Any local aliases will be repointed
-    to a new room created by `new_room_user_id`, and kicked users will be
-    automatically joined to the new room.
-    """
-
-    PATTERNS = historical_admin_path_patterns("/shutdown_room/(?P<room_id>[^/]+)")
-
-    DEFAULT_MESSAGE = (
-        "Sharing illegal content on this server is not permitted and rooms in"
-        " violation will be blocked."
-    )
-
-    def __init__(self, hs):
-        self.hs = hs
-        self.store = hs.get_datastore()
-        self.state = hs.get_state_handler()
-        self._room_creation_handler = hs.get_room_creation_handler()
-        self.event_creation_handler = hs.get_event_creation_handler()
-        self.room_member_handler = hs.get_room_member_handler()
-        self.auth = hs.get_auth()
-
-    async def on_POST(self, request, room_id):
-        requester = await self.auth.get_user_by_req(request)
-        await assert_user_is_admin(self.auth, requester.user)
-
-        content = parse_json_object_from_request(request)
-        assert_params_in_dict(content, ["new_room_user_id"])
-        new_room_user_id = content["new_room_user_id"]
-
-        room_creator_requester = create_requester(new_room_user_id)
-
-        message = content.get("message", self.DEFAULT_MESSAGE)
-        room_name = content.get("room_name", "Content Violation Notification")
-
-        info = await self._room_creation_handler.create_room(
-            room_creator_requester,
-            config={
-                "preset": "public_chat",
-                "name": room_name,
-                "power_level_content_override": {"users_default": -10},
-            },
-            ratelimit=False,
-        )
-        new_room_id = info["room_id"]
-
-        requester_user_id = requester.user.to_string()
-
-        logger.info(
-            "Shutting down room %r, joining to new room: %r", room_id, new_room_id
-        )
-
-        # This will work even if the room is already blocked, but that is
-        # desirable in case the first attempt at blocking the room failed below.
-        await self.store.block_room(room_id, requester_user_id)
-
-        users = await self.state.get_current_users_in_room(room_id)
-        kicked_users = []
-        failed_to_kick_users = []
-        for user_id in users:
-            if not self.hs.is_mine_id(user_id):
-                continue
-
-            logger.info("Kicking %r from %r...", user_id, room_id)
-
-            try:
-                target_requester = create_requester(user_id)
-                await self.room_member_handler.update_membership(
-                    requester=target_requester,
-                    target=target_requester.user,
-                    room_id=room_id,
-                    action=Membership.LEAVE,
-                    content={},
-                    ratelimit=False,
-                    require_consent=False,
-                )
-
-                await self.room_member_handler.forget(target_requester.user, room_id)
-
-                await self.room_member_handler.update_membership(
-                    requester=target_requester,
-                    target=target_requester.user,
-                    room_id=new_room_id,
-                    action=Membership.JOIN,
-                    content={},
-                    ratelimit=False,
-                    require_consent=False,
-                )
-
-                kicked_users.append(user_id)
-            except Exception:
-                logger.exception(
-                    "Failed to leave old room and join new room for %r", user_id
-                )
-                failed_to_kick_users.append(user_id)
-
-        await self.event_creation_handler.create_and_send_nonmember_event(
-            room_creator_requester,
-            {
-                "type": "m.room.message",
-                "content": {"body": message, "msgtype": "m.text"},
-                "room_id": new_room_id,
-                "sender": new_room_user_id,
-            },
-            ratelimit=False,
-        )
-
-        aliases_for_room = await maybe_awaitable(
-            self.store.get_aliases_for_room(room_id)
-        )
-
-        await self.store.update_aliases_for_room(
-            room_id, new_room_id, requester_user_id
-        )
-
-        return (
-            200,
-            {
-                "kicked_users": kicked_users,
-                "failed_to_kick_users": failed_to_kick_users,
-                "local_aliases": aliases_for_room,
-                "new_room_id": new_room_id,
-            },
-        )
-
-
-class ResetPasswordRestServlet(RestServlet):
-    """Post request to allow an administrator to reset the password for a user.
-    This needs the requesting user to have administrator access in Synapse.
-    Example:
-        http://localhost:8008/_synapse/admin/v1/reset_password/
-        @user:to_reset_password?access_token=admin_access_token
-    JsonBodyToSend:
-        {
-            "new_password": "secret"
-        }
-    Returns:
-        200 OK with empty object if success otherwise an error.
-    """
-
-    PATTERNS = historical_admin_path_patterns(
-        "/reset_password/(?P<target_user_id>[^/]*)"
-    )
-
-    def __init__(self, hs):
-        self.store = hs.get_datastore()
-        self.hs = hs
-        self.auth = hs.get_auth()
-        self._set_password_handler = hs.get_set_password_handler()
-
-    async def on_POST(self, request, target_user_id):
-        """Post request to allow an administrator to reset the password for a user.
-        This needs the requesting user to have administrator access in Synapse.
-        """
-        requester = await self.auth.get_user_by_req(request)
-        await assert_user_is_admin(self.auth, requester.user)
-
-        UserID.from_string(target_user_id)
-
-        params = parse_json_object_from_request(request)
-        assert_params_in_dict(params, ["new_password"])
-        new_password = params["new_password"]
-
-        await self._set_password_handler.set_password(
-            target_user_id, new_password, requester
-        )
-        return 200, {}
-
-
-class GetUsersPaginatedRestServlet(RestServlet):
-    """Get request to fetch a specific number of users from Synapse.
-    This needs the requesting user to have administrator access in Synapse.
-    Example:
-        http://localhost:8008/_synapse/admin/v1/users_paginate/
-        @admin:user?access_token=admin_access_token&start=0&limit=10
-    Returns:
-        200 OK with json object {list[dict[str, Any]], count} or empty object.
-    """
-
-    PATTERNS = historical_admin_path_patterns(
-        "/users_paginate/(?P<target_user_id>[^/]*)"
-    )
-
-    def __init__(self, hs):
-        self.store = hs.get_datastore()
-        self.hs = hs
-        self.auth = hs.get_auth()
-        self.handlers = hs.get_handlers()
-
-    async def on_GET(self, request, target_user_id):
-        """Get request to fetch a specific number of users from Synapse.
-        This needs the requesting user to have administrator access in Synapse.
-        """
-        await assert_requester_is_admin(self.auth, request)
-
-        target_user = UserID.from_string(target_user_id)
-
-        if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only look up local users")
-
-        order = "name"  # order by name in user table
-        start = parse_integer(request, "start", required=True)
-        limit = parse_integer(request, "limit", required=True)
-
-        logger.info("limit: %s, start: %s", limit, start)
-
-        ret = await self.handlers.admin_handler.get_users_paginate(order, start, limit)
-        return 200, ret
-
-    async def on_POST(self, request, target_user_id):
-        """Post request to fetch a specific number of users from Synapse.
-        This needs the requesting user to have administrator access in Synapse.
-        Example:
-            http://localhost:8008/_synapse/admin/v1/users_paginate/
-            @admin:user?access_token=admin_access_token
-        JsonBodyToSend:
-            {
-                "start": "0",
-                "limit": "10"
-            }
-        Returns:
-            200 OK with json object {list[dict[str, Any]], count} or empty object.
-        """
-        await assert_requester_is_admin(self.auth, request)
-        UserID.from_string(target_user_id)
-
-        order = "name"  # order by name in user table
-        params = parse_json_object_from_request(request)
-        assert_params_in_dict(params, ["limit", "start"])
-        limit = params["limit"]
-        start = params["start"]
-        logger.info("limit: %s, start: %s", limit, start)
-
-        ret = await self.handlers.admin_handler.get_users_paginate(order, start, limit)
-        return 200, ret
-
-
-class SearchUsersRestServlet(RestServlet):
-    """Get request to search the user table for specific users according to
-    a search term.
-    This needs the requesting user to have administrator access in Synapse.
-    Example:
-        http://localhost:8008/_synapse/admin/v1/search_users/
-        @admin:user?access_token=admin_access_token&term=alice
-    Returns:
-        200 OK with json object {list[dict[str, Any]], count} or empty object.
-    """
-
-    PATTERNS = historical_admin_path_patterns("/search_users/(?P<target_user_id>[^/]*)")
-
-    def __init__(self, hs):
-        self.store = hs.get_datastore()
-        self.hs = hs
-        self.auth = hs.get_auth()
-        self.handlers = hs.get_handlers()
-
-    async def on_GET(self, request, target_user_id):
-        """Get request to search the user table for specific users according to
-        a search term.
-        This needs the requesting user to have administrator access in Synapse.
-        """
-        await assert_requester_is_admin(self.auth, request)
-
-        target_user = UserID.from_string(target_user_id)
-
-        # To allow all users to get the users list
-        # if not is_admin and target_user != auth_user:
-        #     raise AuthError(403, "You are not a server admin")
-
-        if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only look up local users")
-
-        term = parse_string(request, "term", required=True)
-        logger.info("term: %s ", term)
-
-        ret = await self.handlers.admin_handler.search_users(term)
-        return 200, ret
-
-
-class DeleteGroupAdminRestServlet(RestServlet):
-    """Allows deleting of local groups
-    """
-
-    PATTERNS = historical_admin_path_patterns("/delete_group/(?P<group_id>[^/]*)")
-
-    def __init__(self, hs):
-        self.group_server = hs.get_groups_server_handler()
-        self.is_mine_id = hs.is_mine_id
-        self.auth = hs.get_auth()
-
-    async def on_POST(self, request, group_id):
-        requester = await self.auth.get_user_by_req(request)
-        await assert_user_is_admin(self.auth, requester.user)
-
-        if not self.is_mine_id(group_id):
-            raise SynapseError(400, "Can only delete local groups")
-
-        await self.group_server.delete_group(group_id, requester.user.to_string())
-        return 200, {}
-
-
-class AccountValidityRenewServlet(RestServlet):
-    PATTERNS = historical_admin_path_patterns("/account_validity/validity$")
-
-    def __init__(self, hs):
-        """
-        Args:
-            hs (synapse.server.HomeServer): server
-        """
-        self.hs = hs
-        self.account_activity_handler = hs.get_account_validity_handler()
-        self.auth = hs.get_auth()
-
-    async def on_POST(self, request):
-        await assert_requester_is_admin(self.auth, request)
-
-        body = parse_json_object_from_request(request)
-
-        if "user_id" not in body:
-            raise SynapseError(400, "Missing property 'user_id' in the request body")
-
-        expiration_ts = await self.account_activity_handler.renew_account_for_user(
-            body["user_id"],
-            body.get("expiration_ts"),
-            not body.get("enable_renewal_emails", True),
-        )
-
-        res = {"expiration_ts": expiration_ts}
-        return 200, res
-
-
########################################################################################
#
# please don't add more servlets here: this file is already long and unwieldy. Put

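UserRegisterServlet (now living in synapse/rest/admin/users.py) implements shared-secret registration: fetch a nonce with GET, then POST username and password plus an HMAC-SHA1 over nonce\0username\0password\0admin-flag keyed with the server's registration_shared_secret. A client-side sketch of computing that MAC, mirroring the want_mac construction in the servlet above (all values are placeholders):

    import hashlib
    import hmac

    def registration_mac(shared_secret: str, nonce: str, username: str,
                         password: str, admin: bool = False,
                         user_type: str = None) -> str:
        """Compute the MAC expected by the shared-secret /register endpoint."""
        mac = hmac.new(key=shared_secret.encode("utf8"), digestmod=hashlib.sha1)
        mac.update(nonce.encode("utf8"))
        mac.update(b"\x00")
        mac.update(username.encode("utf8"))
        mac.update(b"\x00")
        mac.update(password.encode("utf8"))
        mac.update(b"\x00")
        mac.update(b"admin" if admin else b"notadmin")
        if user_type:
            mac.update(b"\x00")
            mac.update(user_type.encode("utf8"))
        return mac.hexdigest()

    # Placeholder values; the real nonce comes from GET .../register.
    print(registration_mac("a-shared-secret", "a-nonce", "alice", "wonderland"))
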
Some files were not shown because too many files have changed in this diff.