v0.99.4rc1

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEdVkXOgzrGzds0jtrHgFcFF8ZFs0FAlzZihYACgkQHgFcFF8Z
 Fs325w/8DOsFbrvITIYNpHKB8fZ4udrzwL/R+PRV+G5e/piJBumvnCtGqWFIKzwO
 FiF7M+7xPfATxI8sKHcFE7HAbG7/5zCFTp4vgVul6vzt2lhGR6uY0ZmBn7LizIiR
 ++eXAqfsqO4p6PepS5X3Mv17EiIQl+PFfN81va7/t4pk3YXtaucVAzYmlNWmHPiD
 KwyH9OsXdgu00/9QIBh+h2gCeB19e++6b+Ry2ZcMJAOgv8bgRisnjy35d0bN8uGR
 XSGFz9VEH4B8yvCOI9l9L4S+BvRmM+uL8qD5BSq5NIRqKt+YgdE9ioVscy461Xag
 lFjDqjkZxLRDHtLP2gGCM6iLaMIt1wZ3czC2P8YObgtVeskHaqK6rxKs1tP/jz+M
 fd7vXQpqA9zSmNJZ2p/nDFpcP1FRw6/gnYxqemcFOhSCmUeZcznaAkMBOCqW7XFF
 w9EOC5WIWmjHROsOdU59XgWai4igc2kTpflvM8jGWDYTdH4XOnGrde2MKCY+hYc4
 J/dII0sOKlMJzS9cqXkoWhARt+E+OeCbgDjnPnYvLX3AHZJcySGdQMzl+o2TKkYG
 MBGm6DDYsuKMx0Uv18b8WM1dWPbAyOXzxgBYFNuNOZLCZI81LE1jZf86rUnLvDqQ
 JTWBIQJhFiX6YxHMr5Enbtc1qoWp0rhlmCxnXpATSJQMc8pXVPY=
 =9pbU
 -----END PGP SIGNATURE-----

Merge tag 'v0.99.4rc1' into matrix-org-hotfixes

v0.99.4rc1
This commit is contained in:
Richard van der Hoff 2019-05-14 11:12:22 +01:00
commit 9feee29d76
153 changed files with 1620 additions and 1347 deletions

View file

@ -1,3 +1,105 @@
Synapse 0.99.4rc1 (2019-05-13)
==============================
Features
--------
- Add systemd-python to the optional dependencies to enable logging to the systemd journal. Install with `pip install matrix-synapse[systemd]`. ([\#4339](https://github.com/matrix-org/synapse/issues/4339))
- Add a default .m.rule.tombstone push rule. ([\#4867](https://github.com/matrix-org/synapse/issues/4867))
- Add ability for password provider modules to bind email addresses to users upon registration. ([\#4947](https://github.com/matrix-org/synapse/issues/4947))
- Implementation of [MSC1711](https://github.com/matrix-org/matrix-doc/pull/1711) including config options for requiring valid TLS certificates for federation traffic, the ability to disable TLS validation for specific domains, and the ability to specify your own list of CA certificates. ([\#4967](https://github.com/matrix-org/synapse/issues/4967))
- Remove presence list support as per MSC 1819. ([\#4989](https://github.com/matrix-org/synapse/issues/4989))
- Reduce CPU usage starting pushers during start up. ([\#4991](https://github.com/matrix-org/synapse/issues/4991))
- Add a delete group admin API. ([\#5002](https://github.com/matrix-org/synapse/issues/5002))
- Add config option to block users from looking up 3PIDs. ([\#5010](https://github.com/matrix-org/synapse/issues/5010))
- Add context to phonehome stats. ([\#5020](https://github.com/matrix-org/synapse/issues/5020))
- Configure the example systemd units to have a log identifier of `matrix-synapse`
instead of the executable name, `python`.
Contributed by Christoph Müller. ([\#5023](https://github.com/matrix-org/synapse/issues/5023))
- Add time-based account expiration. ([\#5027](https://github.com/matrix-org/synapse/issues/5027), [\#5047](https://github.com/matrix-org/synapse/issues/5047), [\#5073](https://github.com/matrix-org/synapse/issues/5073), [\#5116](https://github.com/matrix-org/synapse/issues/5116))
- Add support for handling /versions, /voip and /push_rules client endpoints to the client_reader worker. ([\#5063](https://github.com/matrix-org/synapse/issues/5063), [\#5065](https://github.com/matrix-org/synapse/issues/5065), [\#5070](https://github.com/matrix-org/synapse/issues/5070))
- Add a configuration option to require authentication on /publicRooms and /profile endpoints. ([\#5083](https://github.com/matrix-org/synapse/issues/5083))
- Move admin APIs to `/_synapse/admin/v1`. (The old paths are retained for backwards-compatibility, for now). ([\#5119](https://github.com/matrix-org/synapse/issues/5119))
- Implement an admin API for sending server notices. Many thanks to @krombel who provided a foundation for this work. ([\#5121](https://github.com/matrix-org/synapse/issues/5121), [\#5142](https://github.com/matrix-org/synapse/issues/5142))
Bugfixes
--------
- Avoid redundant URL encoding of redirect URL for SSO login in the fallback login page. Fixes a regression introduced in [#4220](https://github.com/matrix-org/synapse/pull/4220). Contributed by Marcel Fabian Krüger ("[zaugin](https://github.com/zauguin)"). ([\#4555](https://github.com/matrix-org/synapse/issues/4555))
- Fix bug where presence updates were sent to all servers in a room when a new server joined, rather than to just the new server. ([\#4942](https://github.com/matrix-org/synapse/issues/4942), [\#5103](https://github.com/matrix-org/synapse/issues/5103))
- Fix sync bug which made accepting invites unreliable in worker-mode synapses. ([\#4955](https://github.com/matrix-org/synapse/issues/4955), [\#4956](https://github.com/matrix-org/synapse/issues/4956))
- start.sh: Fix the --no-rate-limit option for messages and make it bypass the rate limit on registration and login too. ([\#4981](https://github.com/matrix-org/synapse/issues/4981))
- Transfer related groups on room upgrade. ([\#4990](https://github.com/matrix-org/synapse/issues/4990))
- Prevent the ability to kick users from a room they aren't in. ([\#4999](https://github.com/matrix-org/synapse/issues/4999))
- Fix issue #4596 so synapse_port_db script works with --curses option on Python 3. Contributed by Anders Jensen-Waud <anders@jensenwaud.com>. ([\#5003](https://github.com/matrix-org/synapse/issues/5003))
- Clients timing out/disappearing while downloading from the media repository will no longer log a spurious "Producer was not unregistered" message. ([\#5009](https://github.com/matrix-org/synapse/issues/5009))
- Fix "cannot import name execute_batch" error with postgres. ([\#5032](https://github.com/matrix-org/synapse/issues/5032))
- Fix disappearing exceptions in manhole. ([\#5035](https://github.com/matrix-org/synapse/issues/5035))
- Work around a bug in Twisted where attempting too many concurrent DNS requests could cause it to hang due to running out of file descriptors. ([\#5037](https://github.com/matrix-org/synapse/issues/5037))
- Make sure we're not registering the same 3pid twice on registration. ([\#5071](https://github.com/matrix-org/synapse/issues/5071))
- Don't crash on lack of expiry templates. ([\#5077](https://github.com/matrix-org/synapse/issues/5077))
- Fix the ratelimiting on third party invites. ([\#5104](https://github.com/matrix-org/synapse/issues/5104))
- Add some missing limitations to room alias creation. ([\#5124](https://github.com/matrix-org/synapse/issues/5124), [\#5128](https://github.com/matrix-org/synapse/issues/5128))
- Limit the number of EDUs in transactions to 100 as expected by synapse. Thanks to @superboum for this work! ([\#5138](https://github.com/matrix-org/synapse/issues/5138))
- Fix bogus imports in unit tests. ([\#5154](https://github.com/matrix-org/synapse/issues/5154))
Internal Changes
----------------
- Add test to verify threepid auth check added in #4435. ([\#4474](https://github.com/matrix-org/synapse/issues/4474))
- Fix/improve some docstrings in the replication code. ([\#4949](https://github.com/matrix-org/synapse/issues/4949))
- Split synapse.replication.tcp.streams into smaller files. ([\#4953](https://github.com/matrix-org/synapse/issues/4953))
- Refactor replication row generation/parsing. ([\#4954](https://github.com/matrix-org/synapse/issues/4954))
- Run `black` to clean up formatting on `synapse/storage/roommember.py` and `synapse/storage/events.py`. ([\#4959](https://github.com/matrix-org/synapse/issues/4959))
- Remove log line for password via the admin API. ([\#4965](https://github.com/matrix-org/synapse/issues/4965))
- Fix typo in TLS filenames in docker/README.md. Also add the '-p' commandline option to the 'docker run' example. Contributed by Jurrie Overgoor. ([\#4968](https://github.com/matrix-org/synapse/issues/4968))
- Refactor room version definitions. ([\#4969](https://github.com/matrix-org/synapse/issues/4969))
- Reduce log level of .well-known/matrix/client responses. ([\#4972](https://github.com/matrix-org/synapse/issues/4972))
- Add `config.signing_key_path` that can be read by `synapse.config` utility. ([\#4974](https://github.com/matrix-org/synapse/issues/4974))
- Track which identity server is used when binding a threepid and use that for unbinding, as per MSC1915. ([\#4982](https://github.com/matrix-org/synapse/issues/4982))
- Rewrite KeyringTestCase as a HomeserverTestCase. ([\#4985](https://github.com/matrix-org/synapse/issues/4985))
- README updates: Corrected the default POSTGRES_USER. Added port forwarding hint in TLS section. ([\#4987](https://github.com/matrix-org/synapse/issues/4987))
- Remove a number of unused tables from the database schema. ([\#4992](https://github.com/matrix-org/synapse/issues/4992), [\#5028](https://github.com/matrix-org/synapse/issues/5028), [\#5033](https://github.com/matrix-org/synapse/issues/5033))
- Run `black` on the remainder of `synapse/storage/`. ([\#4996](https://github.com/matrix-org/synapse/issues/4996))
- Fix grammar in get_current_users_in_room and give it a docstring. ([\#4998](https://github.com/matrix-org/synapse/issues/4998))
- Clean up some code in the server-key Keyring. ([\#5001](https://github.com/matrix-org/synapse/issues/5001))
- Convert SYNAPSE_NO_TLS Docker variable to boolean for user friendliness. Contributed by Gabriel Eckerson. ([\#5005](https://github.com/matrix-org/synapse/issues/5005))
- Refactor synapse.storage._base._simple_select_list_paginate. ([\#5007](https://github.com/matrix-org/synapse/issues/5007))
- Store the notary server name correctly in server_keys_json. ([\#5024](https://github.com/matrix-org/synapse/issues/5024))
- Rewrite Datastore.get_server_verify_keys to reduce the number of database transactions. ([\#5030](https://github.com/matrix-org/synapse/issues/5030))
- Remove extraneous period from copyright headers. ([\#5046](https://github.com/matrix-org/synapse/issues/5046))
- Update documentation for where to get Synapse packages. ([\#5067](https://github.com/matrix-org/synapse/issues/5067))
- Add workarounds for pep-517 install errors. ([\#5098](https://github.com/matrix-org/synapse/issues/5098))
- Improve logging when event-signature checks fail. ([\#5100](https://github.com/matrix-org/synapse/issues/5100))
- Factor out an "assert_requester_is_admin" function. ([\#5120](https://github.com/matrix-org/synapse/issues/5120))
- Remove the requirement to authenticate for /admin/server_version. ([\#5122](https://github.com/matrix-org/synapse/issues/5122))
- Prevent an exception from being raised in an IResolutionReceiver and use a more generic error message for blacklisted URL previews. ([\#5155](https://github.com/matrix-org/synapse/issues/5155))
- Run `black` on the tests directory. ([\#5170](https://github.com/matrix-org/synapse/issues/5170))
- Fix CI after new release of isort. ([\#5179](https://github.com/matrix-org/synapse/issues/5179))
Synapse 0.99.3.2 (2019-05-03)
=============================
Internal Changes
----------------
- Ensure that we have `urllib3` <1.25, to resolve incompatibility with `requests`. ([\#5135](https://github.com/matrix-org/synapse/issues/5135))
Synapse 0.99.3.1 (2019-05-03)
=============================
Security update
---------------
This release includes two security fixes:
- Switch to using a cryptographically-secure random number generator for token strings, ensuring they cannot be predicted by an attacker. Thanks to @opnsec for identifying and responsibly disclosing this issue! ([\#5133](https://github.com/matrix-org/synapse/issues/5133))
- Blacklist 0.0.0.0 and :: by default for URL previews. Thanks to @opnsec for identifying and responsibly disclosing this issue too! ([\#5134](https://github.com/matrix-org/synapse/issues/5134))
Synapse 0.99.3 (2019-04-01)
===========================

View file

@ -257,9 +257,8 @@ https://github.com/spantaleev/matrix-docker-ansible-deploy
#### Matrix.org packages
Matrix.org provides Debian/Ubuntu packages of the latest stable version of
Synapse via https://packages.matrix.org/debian/. To use them:
For Debian 9 (Stretch), Ubuntu 16.04 (Xenial), and later:
Synapse via https://packages.matrix.org/debian/. They are available for Debian
9 (Stretch), Ubuntu 16.04 (Xenial), and later. To use them:
```
sudo apt install -y lsb-release wget apt-transport-https
@ -270,19 +269,6 @@ sudo apt update
sudo apt install matrix-synapse-py3
```
For Debian 8 (Jessie):
```
sudo apt install -y lsb-release wget apt-transport-https
sudo wget -O /etc/apt/trusted.gpg.d/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
echo "deb [signed-by=5586CCC0CBBBEFC7A25811ADF473DD4473365DE1] https://packages.matrix.org/debian/ $(lsb_release -cs) main" |
sudo tee /etc/apt/sources.list.d/matrix-org.list
sudo apt update
sudo apt install matrix-synapse-py3
```
The fingerprint of the repository signing key is AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058.
**Note**: if you followed a previous version of these instructions which
recommended using `apt-key add` to add an old key from
`https://matrix.org/packages/debian/`, you should note that this key has been
@ -290,6 +276,9 @@
revoked. You should remove the old key with `sudo apt-key remove
C35EB17E1EAE708E6603A9B3AD0592FE47F0DF61`, and follow the above instructions to
update your configuration.
The fingerprint of the repository signing key (as shown by `gpg
/usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
`AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.
#### Downstream Debian/Ubuntu packages

View file

@ -1 +0,0 @@
Add systemd-python to the optional dependencies to enable logging to the systemd journal. Install with `pip install matrix-synapse[systemd]`.

View file

@ -1 +0,0 @@
Add test to verify threepid auth check added in #4435.

View file

@ -1 +0,0 @@
Avoid redundant URL encoding of redirect URL for SSO login in the fallback login page. Fixes a regression introduced in [#4220](https://github.com/matrix-org/synapse/pull/4220). Contributed by Marcel Fabian Krüger ("[zaugin](https://github.com/zauguin)").

View file

@ -1 +0,0 @@
Add a default .m.rule.tombstone push rule.

View file

@ -1 +0,0 @@
Fix bug where presence updates were sent to all servers in a room when a new server joined, rather than to just the new server.

View file

@ -1 +0,0 @@
Add ability for password provider modules to bind email addresses to users upon registration.

View file

@ -1 +0,0 @@
Fix/improve some docstrings in the replication code.

View file

@ -1,2 +0,0 @@
Split synapse.replication.tcp.streams into smaller files.

View file

@ -1 +0,0 @@
Refactor replication row generation/parsing.

View file

@ -1 +0,0 @@
Fix sync bug which made accepting invites unreliable in worker-mode synapses.

View file

@ -1 +0,0 @@
Fix sync bug which made accepting invites unreliable in worker-mode synapses.

View file

@ -1 +0,0 @@
Run `black` to clean up formatting on `synapse/storage/roommember.py` and `synapse/storage/events.py`.

View file

@ -1 +0,0 @@
Remove log line for password via the admin API.

View file

@ -1 +0,0 @@
Implementation of [MSC1711](https://github.com/matrix-org/matrix-doc/pull/1711) including config options for requiring valid TLS certificates for federation traffic, the ability to disable TLS validation for specific domains, and the ability to specify your own list of CA certificates.

View file

@ -1 +0,0 @@
Fix typo in TLS filenames in docker/README.md. Also add the '-p' commandline option to the 'docker run' example. Contributed by Jurrie Overgoor.

View file

@ -1,2 +0,0 @@
Refactor room version definitions.

View file

@ -1 +0,0 @@
Reduce log level of .well-known/matrix/client responses.

View file

@ -1 +0,0 @@
Add `config.signing_key_path` that can be read by `synapse.config` utility.

View file

@ -1 +0,0 @@
start.sh: Fix the --no-rate-limit option for messages and make it bypass rate limit on registration and login too.

View file

@ -1 +0,0 @@
Track which identity server is used when binding a threepid and use that for unbinding, as per MSC1915.

View file

@ -1 +0,0 @@
Rewrite KeyringTestCase as a HomeserverTestCase.

View file

@ -1 +0,0 @@
README updates: Corrected the default POSTGRES_USER. Added port forwarding hint in TLS section.

View file

@ -1 +0,0 @@
Remove presence list support as per MSC 1819.

View file

@ -1 +0,0 @@
Transfer related groups on room upgrade.

View file

@ -1 +0,0 @@
Reduce CPU usage starting pushers during start up.

View file

@ -1 +0,0 @@
Remove a number of unused tables from the database schema.

View file

@ -1 +0,0 @@
Run `black` on the remainder of `synapse/storage/`.

View file

@ -1 +0,0 @@
Fix grammar in get_current_users_in_room and give it a docstring.

View file

@ -1 +0,0 @@
Prevent the ability to kick users from a room they aren't in.

View file

@ -1 +0,0 @@
Clean up some code in the server-key Keyring.

View file

@ -1 +0,0 @@
Add a delete group admin API.

View file

@ -1 +0,0 @@
Fix issue #4596 so synapse_port_db script works with --curses option on Python 3. Contributed by Anders Jensen-Waud <anders@jensenwaud.com>.

View file

@ -1 +0,0 @@
Convert SYNAPSE_NO_TLS Docker variable to boolean for user friendliness. Contributed by Gabriel Eckerson.

View file

@ -1 +0,0 @@
Refactor synapse.storage._base._simple_select_list_paginate.

View file

@ -1 +0,0 @@
Clients timing out/disappearing while downloading from the media repository will now no longer log a spurious "Producer was not unregistered" message.

View file

@ -1 +0,0 @@
Add config option to block users from looking up 3PIDs.

View file

@ -1 +0,0 @@
Add context to phonehome stats.

View file

@ -1 +0,0 @@
Store the notary server name correctly in server_keys_json.

View file

@ -1 +0,0 @@
Add time-based account expiration.

View file

@ -1 +0,0 @@
Remove a number of unused tables from the database schema.

View file

@ -1 +0,0 @@
Rewrite Datastore.get_server_verify_keys to reduce the number of database transactions.

View file

@ -1 +0,0 @@
Fix "cannot import name execute_batch" error with postgres.

View file

@ -1 +0,0 @@
Remove a number of unused tables from the database schema.

View file

@ -1 +0,0 @@
Fix disappearing exceptions in manhole.

View file

@ -1 +0,0 @@
Remove extraneous period from copyright headers.

View file

@ -1 +0,0 @@
Add time-based account expiration.

View file

@ -1 +0,0 @@
Add support for handling /versions, /voip and /push_rules client endpoints to the client_reader worker.

View file

@ -1 +0,0 @@
Add support for handling /versions, /voip and /push_rules client endpoints to the client_reader worker.

View file

@ -1 +0,0 @@
Update documentation for where to get Synapse packages.

View file

@ -1 +0,0 @@
Add support for handling /versions, /voip and /push_rules client endpoints to the client_reader worker.

View file

@ -1 +0,0 @@
Make sure we're not registering the same 3pid twice on registration.

View file

@ -1 +0,0 @@
Add time-based account expiration.

View file

@ -1 +0,0 @@
Don't crash on lack of expiry templates.

View file

@ -1 +0,0 @@
Add workarounds for pep-517 install errors.

View file

@ -1 +0,0 @@
Improve logging when event-signature checks fail.

View file

@ -1 +0,0 @@
Fix bug where presence updates were sent to all servers in a room when a new server joined, rather than to just the new server.

View file

@ -1 +0,0 @@
Fix the ratelimiting on third party invites.

View file

@ -1 +0,0 @@
Add time-based account expiration.

View file

@ -1 +0,0 @@
Move admin APIs to `/_synapse/admin/v1`. (The old paths are retained for backwards-compatibility, for now).

View file

@ -1 +0,0 @@
Factor out an "assert_requester_is_admin" function.

View file

@ -1 +0,0 @@
Implement an admin API for sending server notices. Many thanks to @krombel who provided a foundation for this work.

View file

@ -1 +0,0 @@
Add some missing limitations to room alias creation.

View file

@ -1 +0,0 @@
Switch to using a cryptographically-secure random number generator for token strings, ensuring they cannot be predicted by an attacker. Thanks to @opnsec for identifying and responsibly disclosing this issue!

View file

@ -1 +0,0 @@
Blacklist 0.0.0.0 and :: by default for URL previews. Thanks to @opnsec for identifying and responsibly disclosing this issue too!

View file

@ -12,6 +12,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.%i --config-path=/
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
SyslogIdentifier=matrix-synapse-%i
[Install]
WantedBy=matrix-synapse.service

View file

@ -11,6 +11,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --confi
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
SyslogIdentifier=matrix-synapse
[Install]
WantedBy=matrix.target

View file

@ -22,10 +22,10 @@ Group=nogroup
WorkingDirectory=/opt/synapse
ExecStart=/opt/synapse/env/bin/python -m synapse.app.homeserver --config-path=/opt/synapse/homeserver.yaml
SyslogIdentifier=matrix-synapse
# adjust the cache factor if necessary
# Environment=SYNAPSE_CACHE_FACTOR=2.0
[Install]
WantedBy=multi-user.target

debian/changelog
View file

@ -1,3 +1,22 @@
matrix-synapse-py3 (0.99.3.2+nmu1) UNRELEASED; urgency=medium
[ Christoph Müller ]
* Configure the systemd units to have a log identifier of `matrix-synapse`
-- Christoph Müller <iblzm@hotmail.de> Wed, 17 Apr 2019 16:17:32 +0200
matrix-synapse-py3 (0.99.3.2) stable; urgency=medium
* New synapse release 0.99.3.2.
-- Synapse Packaging team <packages@matrix.org> Fri, 03 May 2019 18:56:20 +0100
matrix-synapse-py3 (0.99.3.1) stable; urgency=medium
* New synapse release 0.99.3.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 03 May 2019 16:02:43 +0100
matrix-synapse-py3 (0.99.3) stable; urgency=medium
[ Richard van der Hoff ]

View file

@ -11,6 +11,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --confi
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
SyslogIdentifier=matrix-synapse
[Install]
WantedBy=multi-user.target

View file

@ -57,7 +57,8 @@ RUN apt-get update -qq -o Acquire::Languages=none \
python3-pip \
python3-setuptools \
python3-venv \
sqlite3
sqlite3 \
libpq-dev
COPY --from=builder /dh-virtualenv_1.1-1_all.deb /

View file

@ -36,7 +36,7 @@ You can optionally include the following additional parameters:
* `state_key`: Setting this will result in a state event being sent.
Once the notice has been sent, the APU will return the following response:
Once the notice has been sent, the API will return the following response:
```json
{

View file

@ -10,8 +10,6 @@ The api is::
GET /_synapse/admin/v1/server_version
including an ``access_token`` of a server admin.
It returns a JSON body like the following:
.. code:: json
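The JSON body itself lies outside this hunk. As a rough illustration (the hostname is a placeholder, and per #5122 this endpoint no longer requires authentication), the relocated endpoint can be queried like so:

```python
# Minimal sketch, not part of the Synapse tree: query the relocated
# server_version admin endpoint. matrix.example.com is a placeholder.
import requests

resp = requests.get("https://matrix.example.com/_synapse/admin/v1/server_version")
resp.raise_for_status()
# Expected shape, per the VersionServlet change later in this diff:
# {"server_version": "...", "python_version": "..."}
print(resp.json())
```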

View file

@ -48,7 +48,10 @@ How to monitor Synapse metrics using Prometheus
- job_name: "synapse"
metrics_path: "/_synapse/metrics"
static_configs:
- targets: ["my.server.here:9092"]
- targets: ["my.server.here:port"]
where ``my.server.here`` is the IP address of Synapse, and ``port`` is the listener port
configured with the ``metrics`` resource.
If your prometheus is older than 1.5.2, you will need to replace
``static_configs`` in the above with ``target_groups``.

View file

@ -69,6 +69,7 @@ Let's assume that we expect clients to connect to our server at
SSLEngine on
ServerName matrix.example.com;
AllowEncodedSlashes NoDecode
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
@ -77,6 +78,7 @@ Let's assume that we expect clients to connect to our server at
SSLEngine on
ServerName example.com;
AllowEncodedSlashes NoDecode
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>

View file

@ -69,6 +69,20 @@ pid_file: DATADIR/homeserver.pid
#
#use_presence: false
# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API. Defaults to
# 'false'. Note that profile data is also available via the federation
# API, so this setting is of limited value if federation is enabled on
# the server.
#
#require_auth_for_profile_requests: true
# If set to 'true', requires authentication to access the server's
# public rooms directory through the client API, and forbids any other
# homeserver to fetch it via federation. Defaults to 'false'.
#
#restrict_public_rooms_to_local_users: true
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
#
#gc_thresholds: [700, 10, 10]
@ -566,7 +580,7 @@ uploads_path: "DATADIR/uploads"
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
# This must be specified if url_preview_enabled is set. It is recommended that
# This must be specified if url_preview_enabled is set. It is recommended that
# you uncomment the following list as a starting point.
#
#url_preview_ip_range_blacklist:

View file

@ -24,6 +24,7 @@ DISTS = (
"ubuntu:xenial",
"ubuntu:bionic",
"ubuntu:cosmic",
"ubuntu:disco",
)
DESC = '''\

View file

@ -27,4 +27,4 @@ try:
except ImportError:
pass
__version__ = "0.99.3"
__version__ = "0.99.4rc1"

View file

@ -20,6 +20,9 @@
# the "depth" field on events is limited to 2**63 - 1
MAX_DEPTH = 2**63 - 1
# the maximum length for a room alias is 255 characters
MAX_ALIAS_LENGTH = 255
class Membership(object):

View file

@ -22,13 +22,14 @@ import traceback
import psutil
from daemonize import Daemonize
from twisted.internet import error, reactor
from twisted.internet import defer, error, reactor
from twisted.protocols.tls import TLSMemoryBIOFactory
import synapse
from synapse.app import check_bind_error
from synapse.crypto import context_factory
from synapse.util import PreserveLoggingContext
from synapse.util.async_helpers import Linearizer
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
@ -99,6 +100,8 @@ def start_reactor(
logger (logging.Logger): logger instance to pass to Daemonize
"""
install_dns_limiter(reactor)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
@ -312,3 +315,81 @@ def setup_sentry(hs):
name = hs.config.worker_name if hs.config.worker_name else "master"
scope.set_tag("worker_app", app)
scope.set_tag("worker_name", name)
def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
"""Replaces the resolver with one that limits the number of in flight DNS
requests.
This is to workaround https://twistedmatrix.com/trac/ticket/9620, where we
can run out of file descriptors and infinite loop if we attempt to do too
many DNS queries at once
"""
new_resolver = _LimitedHostnameResolver(
reactor.nameResolver, max_dns_requests_in_flight,
)
reactor.installNameResolver(new_resolver)
class _LimitedHostnameResolver(object):
"""Wraps a IHostnameResolver, limiting the number of in-flight DNS lookups.
"""
def __init__(self, resolver, max_dns_requests_in_flight):
self._resolver = resolver
self._limiter = Linearizer(
name="dns_client_limiter", max_count=max_dns_requests_in_flight,
)
def resolveHostName(self, resolutionReceiver, hostName, portNumber=0,
addressTypes=None, transportSemantics='TCP'):
# Note this is happening deep within the reactor, so we don't need to
# worry about log contexts.
# We need this function to return `resolutionReceiver` so we do all the
# actual logic involving deferreds in a separate function.
self._resolve(
resolutionReceiver, hostName, portNumber,
addressTypes, transportSemantics,
)
return resolutionReceiver
@defer.inlineCallbacks
def _resolve(self, resolutionReceiver, hostName, portNumber=0,
addressTypes=None, transportSemantics='TCP'):
with (yield self._limiter.queue(())):
# resolveHostName doesn't return a Deferred, so we need to hook into
# the receiver interface to get told when resolution has finished.
deferred = defer.Deferred()
receiver = _DeferredResolutionReceiver(resolutionReceiver, deferred)
self._resolver.resolveHostName(
receiver, hostName, portNumber,
addressTypes, transportSemantics,
)
yield deferred
class _DeferredResolutionReceiver(object):
"""Wraps a IResolutionReceiver and simply resolves the given deferred when
resolution is complete
"""
def __init__(self, receiver, deferred):
self._receiver = receiver
self._deferred = deferred
def resolutionBegan(self, resolutionInProgress):
self._receiver.resolutionBegan(resolutionInProgress)
def addressResolved(self, address):
self._receiver.addressResolved(address)
def resolutionComplete(self):
self._deferred.callback(())
self._receiver.resolutionComplete()
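For readers unfamiliar with Twisted's resolver interfaces, here is a standalone sketch of the same idea (cap the number of in-flight DNS lookups) using asyncio primitives. The names and structure are illustrative only; the actual implementation is the `_LimitedHostnameResolver` above, built on `Linearizer`.

```python
# Illustration only: bound concurrent DNS lookups with a semaphore.
import asyncio
import socket

MAX_DNS_REQUESTS_IN_FLIGHT = 100  # same default as install_dns_limiter

async def resolve_limited(sem: asyncio.Semaphore, hostname: str, port: int = 0):
    """Resolve hostname while never exceeding the semaphore's limit."""
    async with sem:
        loop = asyncio.get_running_loop()
        return await loop.getaddrinfo(hostname, port, type=socket.SOCK_STREAM)

async def main():
    sem = asyncio.Semaphore(MAX_DNS_REQUESTS_IN_FLIGHT)
    # 500 requested lookups, but at most 100 are ever in flight at once.
    results = await asyncio.gather(
        *(resolve_limited(sem, "example.org", 443) for _ in range(500))
    )
    print(len(results), "lookups completed")

if __name__ == "__main__":
    asyncio.run(main())
```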

View file

@ -72,6 +72,19 @@ class ServerConfig(Config):
# master, potentially causing inconsistency.
self.enable_media_repo = config.get("enable_media_repo", True)
# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API.
self.require_auth_for_profile_requests = config.get(
"require_auth_for_profile_requests", False,
)
# If set to 'True', requires authentication to access the server's
# public rooms directory through the client API, and forbids any other
# homeserver to fetch it via federation.
self.restrict_public_rooms_to_local_users = config.get(
"restrict_public_rooms_to_local_users", False,
)
# whether to enable search. If disabled, new entries will not be inserted
# into the search tables and they will not be indexed. Users will receive
# errors when attempting to search for messages.
@ -327,6 +340,20 @@ class ServerConfig(Config):
#
#use_presence: false
# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API. Defaults to
# 'false'. Note that profile data is also available via the federation
# API, so this setting is of limited value if federation is enabled on
# the server.
#
#require_auth_for_profile_requests: true
# If set to 'true', requires authentication to access the server's
# public rooms directory through the client API, and forbids any other
# homeserver to fetch it via federation. Defaults to 'false'.
#
#restrict_public_rooms_to_local_users: true
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
#
#gc_thresholds: [700, 10, 10]

View file

@ -187,7 +187,9 @@ class EventContext(object):
Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
is None, which happens when the associated event is an outlier.
is None, which happens when the associated event is an outlier.
Maps a (type, state_key) to the event ID of the state event matching
this tuple.
"""
if not self._fetching_state_deferred:
@ -205,7 +207,9 @@ class EventContext(object):
Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
is None, which happens when the associated event is an outlier.
is None, which happens when the associated event is an outlier.
Maps a (type, state_key) to the event ID of the state event matching
this tuple.
"""
if not self._fetching_state_deferred:

View file

@ -15,8 +15,8 @@
from six import string_types
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import EventFormatVersions
from synapse.types import EventID, RoomID, UserID
@ -56,6 +56,17 @@ class EventValidator(object):
if not isinstance(getattr(event, s), string_types):
raise SynapseError(400, "'%s' not a string type" % (s,))
if event.type == EventTypes.Aliases:
if "aliases" in event.content:
for alias in event.content["aliases"]:
if len(alias) > MAX_ALIAS_LENGTH:
raise SynapseError(
400,
("Can't create aliases longer than"
" %d characters" % (MAX_ALIAS_LENGTH,)),
Codes.INVALID_PARAM,
)
def validate_builder(self, event):
"""Validates that the builder/event has roughly the right format. Only
checks values that we expect a proto event to have, rather than all the

View file

@ -33,12 +33,14 @@ from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage import UserPresenceState
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
# This is defined in the Matrix spec and enforced by the receiver.
MAX_EDUS_PER_TRANSACTION = 100
logger = logging.getLogger(__name__)
sent_edus_counter = Counter(
"synapse_federation_client_sent_edus",
"Total number of EDUs successfully sent",
"synapse_federation_client_sent_edus", "Total number of EDUs successfully sent"
)
sent_edus_by_type = Counter(
@ -58,6 +60,7 @@ class PerDestinationQueue(object):
destination (str): the server_name of the destination that we are managing
transmission for.
"""
def __init__(self, hs, transaction_manager, destination):
self._server_name = hs.hostname
self._clock = hs.get_clock()
@ -68,17 +71,17 @@ class PerDestinationQueue(object):
self.transmission_loop_running = False
# a list of tuples of (pending pdu, order)
self._pending_pdus = [] # type: list[tuple[EventBase, int]]
self._pending_edus = [] # type: list[Edu]
self._pending_pdus = [] # type: list[tuple[EventBase, int]]
self._pending_edus = [] # type: list[Edu]
# Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered
# based on their key (e.g. typing events by room_id)
# Map of (edu_type, key) -> Edu
self._pending_edus_keyed = {} # type: dict[tuple[str, str], Edu]
self._pending_edus_keyed = {} # type: dict[tuple[str, str], Edu]
# Map of user_id -> UserPresenceState of pending presence to be sent to this
# destination
self._pending_presence = {} # type: dict[str, UserPresenceState]
self._pending_presence = {} # type: dict[str, UserPresenceState]
# room_id -> receipt_type -> user_id -> receipt_dict
self._pending_rrs = {}
@ -120,9 +123,7 @@ class PerDestinationQueue(object):
Args:
states (iterable[UserPresenceState]): presence to send
"""
self._pending_presence.update({
state.user_id: state for state in states
})
self._pending_presence.update({state.user_id: state for state in states})
self.attempt_new_transaction()
def queue_read_receipt(self, receipt):
@ -132,14 +133,9 @@ class PerDestinationQueue(object):
Args:
receipt (synapse.api.receipt_info.ReceiptInfo): receipt to be queued
"""
self._pending_rrs.setdefault(
receipt.room_id, {},
).setdefault(
self._pending_rrs.setdefault(receipt.room_id, {}).setdefault(
receipt.receipt_type, {}
)[receipt.user_id] = {
"event_ids": receipt.event_ids,
"data": receipt.data,
}
)[receipt.user_id] = {"event_ids": receipt.event_ids, "data": receipt.data}
def flush_read_receipts_for_room(self, room_id):
# if we don't have any read-receipts for this room, it may be that we've already
@ -170,10 +166,7 @@ class PerDestinationQueue(object):
# request at which point pending_pdus just keeps growing.
# we need application-layer timeouts of some flavour of these
# requests
logger.debug(
"TX [%s] Transaction already in progress",
self._destination
)
logger.debug("TX [%s] Transaction already in progress", self._destination)
return
logger.debug("TX [%s] Starting transaction loop", self._destination)
@ -197,7 +190,8 @@ class PerDestinationQueue(object):
pending_pdus = []
while True:
device_message_edus, device_stream_id, dev_list_id = (
yield self._get_new_device_messages()
# We have to keep 2 free slots for presence and rr_edus
yield self._get_new_device_messages(MAX_EDUS_PER_TRANSACTION - 2)
)
# BEGIN CRITICAL SECTION
@ -216,19 +210,9 @@ class PerDestinationQueue(object):
pending_edus = []
pending_edus.extend(self._get_rr_edus(force_flush=False))
# We can only include at most 100 EDUs per transactions
pending_edus.extend(self._pop_pending_edus(100 - len(pending_edus)))
pending_edus.extend(
self._pending_edus_keyed.values()
)
self._pending_edus_keyed = {}
pending_edus.extend(device_message_edus)
# rr_edus and pending_presence take at most one slot each
pending_edus.extend(self._get_rr_edus(force_flush=False))
pending_presence = self._pending_presence
self._pending_presence = {}
if pending_presence:
@ -248,9 +232,23 @@ class PerDestinationQueue(object):
)
)
pending_edus.extend(device_message_edus)
pending_edus.extend(
self._pop_pending_edus(MAX_EDUS_PER_TRANSACTION - len(pending_edus))
)
while (
len(pending_edus) < MAX_EDUS_PER_TRANSACTION
and self._pending_edus_keyed
):
_, val = self._pending_edus_keyed.popitem()
pending_edus.append(val)
if pending_pdus:
logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
self._destination, len(pending_pdus))
logger.debug(
"TX [%s] len(pending_pdus_by_dest[dest]) = %d",
self._destination,
len(pending_pdus),
)
if not pending_pdus and not pending_edus:
logger.debug("TX [%s] Nothing to send", self._destination)
@ -259,7 +257,7 @@ class PerDestinationQueue(object):
# if we've decided to send a transaction anyway, and we have room, we
# may as well send any pending RRs
if len(pending_edus) < 100:
if len(pending_edus) < MAX_EDUS_PER_TRANSACTION:
pending_edus.extend(self._get_rr_edus(force_flush=True))
# END CRITICAL SECTION
@ -303,22 +301,25 @@ class PerDestinationQueue(object):
except HttpResponseException as e:
logger.warning(
"TX [%s] Received %d response to transaction: %s",
self._destination, e.code, e,
self._destination,
e.code,
e,
)
except RequestSendFailed as e:
logger.warning("TX [%s] Failed to send transaction: %s", self._destination, e)
logger.warning(
"TX [%s] Failed to send transaction: %s", self._destination, e
)
for p, _ in pending_pdus:
logger.info("Failed to send event %s to %s", p.event_id,
self._destination)
logger.info(
"Failed to send event %s to %s", p.event_id, self._destination
)
except Exception:
logger.exception(
"TX [%s] Failed to send transaction",
self._destination,
)
logger.exception("TX [%s] Failed to send transaction", self._destination)
for p, _ in pending_pdus:
logger.info("Failed to send event %s to %s", p.event_id,
self._destination)
logger.info(
"Failed to send event %s to %s", p.event_id, self._destination
)
finally:
# We want to be *very* sure we clear this after we stop processing
self.transmission_loop_running = False
@ -346,27 +347,13 @@ class PerDestinationQueue(object):
return pending_edus
@defer.inlineCallbacks
def _get_new_device_messages(self):
last_device_stream_id = self._last_device_stream_id
to_device_stream_id = self._store.get_to_device_stream_token()
contents, stream_id = yield self._store.get_new_device_msgs_for_remote(
self._destination, last_device_stream_id, to_device_stream_id
)
edus = [
Edu(
origin=self._server_name,
destination=self._destination,
edu_type="m.direct_to_device",
content=content,
)
for content in contents
]
def _get_new_device_messages(self, limit):
last_device_list = self._last_device_list_stream_id
# Will return at most 20 entries
now_stream_id, results = yield self._store.get_devices_by_remote(
self._destination, last_device_list
)
edus.extend(
edus = [
Edu(
origin=self._server_name,
destination=self._destination,
@ -374,5 +361,26 @@ class PerDestinationQueue(object):
content=content,
)
for content in results
]
assert len(edus) <= limit, "get_devices_by_remote returned too many EDUs"
last_device_stream_id = self._last_device_stream_id
to_device_stream_id = self._store.get_to_device_stream_token()
contents, stream_id = yield self._store.get_new_device_msgs_for_remote(
self._destination,
last_device_stream_id,
to_device_stream_id,
limit - len(edus),
)
edus.extend(
Edu(
origin=self._server_name,
destination=self._destination,
edu_type="m.direct_to_device",
content=content,
)
for content in contents
)
defer.returnValue((edus, stream_id, now_stream_id))
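To make the packing rules above easier to follow, here is a simplified standalone sketch of the budget logic (illustrative names, not the actual `PerDestinationQueue` API): device messages are fetched with two slots held back, read receipts and presence take at most one slot each, and ordinary then keyed EDUs fill whatever remains, never exceeding 100 in total.

```python
MAX_EDUS_PER_TRANSACTION = 100  # enforced by the receiving server

def pack_edus(device_edus, rr_edus, presence_edus, other_edus, keyed_edus):
    """Pack at most MAX_EDUS_PER_TRANSACTION EDUs into one transaction.

    device_edus is assumed to have been fetched with a limit of
    MAX_EDUS_PER_TRANSACTION - 2, keeping two slots free for read receipts
    and presence (each contributes at most one EDU).
    """
    pending = []
    pending.extend(rr_edus)        # at most one read-receipt EDU
    pending.extend(presence_edus)  # at most one presence EDU
    pending.extend(device_edus)

    # Fill what remains with ordinary queued EDUs...
    budget = MAX_EDUS_PER_TRANSACTION - len(pending)
    pending.extend(other_edus[:budget])

    # ...and then with keyed EDUs (e.g. typing), which clobber by key.
    for edu in keyed_edus:
        if len(pending) >= MAX_EDUS_PER_TRANSACTION:
            break
        pending.append(edu)
    return pending
```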

View file

@ -716,8 +716,17 @@ class PublicRoomList(BaseFederationServlet):
PATH = "/publicRooms"
def __init__(self, handler, authenticator, ratelimiter, server_name, deny_access):
super(PublicRoomList, self).__init__(
handler, authenticator, ratelimiter, server_name,
)
self.deny_access = deny_access
@defer.inlineCallbacks
def on_GET(self, origin, content, query):
if self.deny_access:
raise FederationDeniedError(origin)
limit = parse_integer_from_args(query, "limit", 0)
since_token = parse_string_from_args(query, "since", None)
include_all_networks = parse_boolean_from_args(
@ -1417,6 +1426,7 @@ def register_servlets(hs, resource, authenticator, ratelimiter, servlet_groups=N
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
deny_access=hs.config.restrict_public_rooms_to_local_users,
).register(resource)
if "group_server" in servlet_groups:

View file

@ -19,7 +19,7 @@ import string
from twisted.internet import defer
from synapse.api.constants import EventTypes
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes
from synapse.api.errors import (
AuthError,
CodeMessageException,
@ -36,7 +36,6 @@ logger = logging.getLogger(__name__)
class DirectoryHandler(BaseHandler):
MAX_ALIAS_LENGTH = 255
def __init__(self, hs):
super(DirectoryHandler, self).__init__(hs)
@ -105,10 +104,10 @@ class DirectoryHandler(BaseHandler):
user_id = requester.user.to_string()
if len(room_alias.to_string()) > self.MAX_ALIAS_LENGTH:
if len(room_alias.to_string()) > MAX_ALIAS_LENGTH:
raise SynapseError(
400,
"Can't create aliases longer than %s characters" % self.MAX_ALIAS_LENGTH,
"Can't create aliases longer than %s characters" % MAX_ALIAS_LENGTH,
Codes.INVALID_PARAM,
)

View file

@ -228,6 +228,7 @@ class EventCreationHandler(object):
self.ratelimiter = hs.get_ratelimiter()
self.notifier = hs.get_notifier()
self.config = hs.config
self.require_membership_for_aliases = hs.config.require_membership_for_aliases
self.send_event_to_master = ReplicationSendEventRestServlet.make_client(hs)
@ -336,6 +337,35 @@ class EventCreationHandler(object):
prev_events_and_hashes=prev_events_and_hashes,
)
# In an ideal world we wouldn't need the second part of this condition. However,
# this behaviour isn't spec'd yet, meaning we should be able to deactivate this
# behaviour. Another reason is that this code is also evaluated each time a new
# m.room.aliases event is created, which includes hitting a /directory route.
# Therefore not including this condition here would render the similar one in
# synapse.handlers.directory pointless.
if builder.type == EventTypes.Aliases and self.require_membership_for_aliases:
# Ideally we'd do the membership check in event_auth.check(), which
# describes a spec'd algorithm for authenticating events received over
# federation as well as those created locally. As of room v3, aliases events
# can be created by users that are not in the room, therefore we have to
# tolerate them in event_auth.check().
prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_event_id = prev_state_ids.get((EventTypes.Member, event.sender))
prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
if not prev_event or prev_event.membership != Membership.JOIN:
logger.warning(
("Attempt to send `m.room.aliases` in room %s by user %s but"
" membership is %s"),
event.room_id,
event.sender,
prev_event.membership if prev_event else None,
)
raise AuthError(
403,
"You must be in the room to create an alias for it",
)
self.validator.validate_new(event)
defer.returnValue((event, context))

View file

@ -53,6 +53,7 @@ class BaseProfileHandler(BaseHandler):
@defer.inlineCallbacks
def get_profile(self, user_id):
target_user = UserID.from_string(user_id)
if self.hs.is_mine(target_user):
try:
displayname = yield self.store.get_profile_displayname(
@ -283,6 +284,48 @@ class BaseProfileHandler(BaseHandler):
room_id, str(e)
)
@defer.inlineCallbacks
def check_profile_query_allowed(self, target_user, requester=None):
"""Checks whether a profile query is allowed. If the
'require_auth_for_profile_requests' config flag is set to True and a
'requester' is provided, the query is only allowed if the two users
share a room.
Args:
target_user (UserID): The owner of the queried profile.
requester (None|UserID): The user querying for the profile.
Raises:
SynapseError(403): The two users share no room, or one user couldn't
be found to be in any room the server is in, and therefore the query
is denied.
"""
# Implementation of MSC1301: don't allow looking up profiles if the
# requester isn't in the same room as the target. We expect requester to
# be None when this function is called outside of a profile query, e.g.
# when building a membership event. In this case, we must allow the
# lookup.
if not self.hs.config.require_auth_for_profile_requests or not requester:
return
try:
requester_rooms = yield self.store.get_rooms_for_user(
requester.to_string()
)
target_user_rooms = yield self.store.get_rooms_for_user(
target_user.to_string(),
)
# Check if the room lists have no elements in common.
if requester_rooms.isdisjoint(target_user_rooms):
raise SynapseError(403, "Profile isn't available", Codes.FORBIDDEN)
except StoreError as e:
if e.code == 404:
# This likely means that one of the users doesn't exist,
# so we act as if we couldn't find the profile.
raise SynapseError(403, "Profile isn't available", Codes.FORBIDDEN)
raise
class MasterProfileHandler(BaseProfileHandler):
PROFILE_UPDATE_MS = 60 * 1000

View file

@ -730,8 +730,9 @@ class RoomMemberHandler(object):
Codes.FORBIDDEN,
)
# Check whether we'll be ratelimited
yield self.base_handler.ratelimit(requester, update=False)
# We need to rate limit *before* we send out any 3PID invites, so we
# can't just rely on the standard ratelimiting of events.
yield self.base_handler.ratelimit(requester)
invitee = yield self._lookup_3pid(
id_server, medium, address

View file

@ -90,9 +90,32 @@ class IPBlacklistingResolver(object):
def resolveHostName(self, recv, hostname, portNumber=0):
r = recv()
d = defer.Deferred()
addresses = []
def _callback():
r.resolutionBegan(None)
has_bad_ip = False
for i in addresses:
ip_address = IPAddress(i.host)
if check_against_blacklist(
ip_address, self._ip_whitelist, self._ip_blacklist
):
logger.info(
"Dropped %s from DNS resolution to %s due to blacklist" %
(ip_address, hostname)
)
has_bad_ip = True
# if we have a blacklisted IP, we'd like to raise an error to block the
# request, but all we can really do from here is claim that there were no
# valid results.
if not has_bad_ip:
for i in addresses:
r.addressResolved(i)
r.resolutionComplete()
@provider(IResolutionReceiver)
class EndpointReceiver(object):
@staticmethod
@ -101,34 +124,16 @@ class IPBlacklistingResolver(object):
@staticmethod
def addressResolved(address):
ip_address = IPAddress(address.host)
if check_against_blacklist(
ip_address, self._ip_whitelist, self._ip_blacklist
):
logger.info(
"Dropped %s from DNS resolution to %s" % (ip_address, hostname)
)
raise SynapseError(403, "IP address blocked by IP blacklist entry")
addresses.append(address)
@staticmethod
def resolutionComplete():
d.callback(addresses)
_callback()
self._reactor.nameResolver.resolveHostName(
EndpointReceiver, hostname, portNumber=portNumber
)
def _callback(addrs):
r.resolutionBegan(None)
for i in addrs:
r.addressResolved(i)
r.resolutionComplete()
d.addCallback(_callback)
return r
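The shape of the new behaviour, as a standalone sketch using blocking stdlib resolution and netaddr (already a Synapse dependency); the ranges below are examples, and this is not the Twisted-based `IPBlacklistingResolver` itself:

```python
# Illustration of the "collect, then filter" approach the diff adopts:
# resolve all addresses first, and if any is blacklisted, report no results
# rather than raising from inside the resolution receiver.
import socket
from netaddr import IPAddress, IPSet

# Example ranges; 0.0.0.0 and :: are always blacklisted for URL previews.
IP_BLACKLIST = IPSet(["0.0.0.0/8", "::/128", "127.0.0.0/8"])

def resolve_or_drop(hostname: str):
    addresses = [
        info[4][0].split("%")[0]  # drop any IPv6 scope suffix
        for info in socket.getaddrinfo(hostname, None)
    ]
    if any(IPAddress(addr) in IP_BLACKLIST for addr in addresses):
        return []  # pretend there were no valid results
    return addresses
```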

View file

@ -69,6 +69,14 @@ REQUIREMENTS = [
"attrs>=17.4.0",
"netaddr>=0.7.18",
# requests is a transitive dep of treq, and urllib3 is a transitive dep
# of requests, as well as of sentry-sdk.
#
# As of requests 2.21, requests does not yet support urllib3 1.25.
# (If we do not pin it here, pip will give us the latest urllib3
# due to the dep via sentry-sdk.)
"urllib3<1.25",
]
CONDITIONAL_REQUIREMENTS = {

View file

@ -88,21 +88,16 @@ class UsersRestServlet(RestServlet):
class VersionServlet(RestServlet):
PATTERNS = historical_admin_path_patterns("/server_version")
PATTERNS = (re.compile("^/_synapse/admin/v1/server_version$"), )
def __init__(self, hs):
self.auth = hs.get_auth()
@defer.inlineCallbacks
def on_GET(self, request):
yield assert_requester_is_admin(self.auth, request)
ret = {
self.res = {
'server_version': get_version_string(synapse),
'python_version': platform.python_version(),
}
defer.returnValue((200, ret))
def on_GET(self, request):
return 200, self.res
class UserRegisterServlet(RestServlet):
@ -830,6 +825,7 @@ class AdminRestResource(JsonResource):
register_servlets_for_client_rest_resource(hs, self)
SendServerNoticeServlet(hs).register(self)
VersionServlet(hs).register(self)
def register_servlets_for_client_rest_resource(hs, http_server):
@ -847,7 +843,6 @@ def register_servlets_for_client_rest_resource(hs, http_server):
QuarantineMediaInRoom(hs).register(http_server)
ListMediaInRoom(hs).register(http_server)
UserRegisterServlet(hs).register(http_server)
VersionServlet(hs).register(http_server)
DeleteGroupAdminRestServlet(hs).register(http_server)
AccountValidityRenewServlet(hs).register(http_server)
# don't add more things here: new servlets should only be exposed on

View file

@ -31,11 +31,17 @@ class ProfileDisplaynameRestServlet(ClientV1RestServlet):
@defer.inlineCallbacks
def on_GET(self, request, user_id):
requester_user = None
if self.hs.config.require_auth_for_profile_requests:
requester = yield self.auth.get_user_by_req(request)
requester_user = requester.user
user = UserID.from_string(user_id)
displayname = yield self.profile_handler.get_displayname(
user,
)
yield self.profile_handler.check_profile_query_allowed(user, requester_user)
displayname = yield self.profile_handler.get_displayname(user)
ret = {}
if displayname is not None:
@ -74,11 +80,17 @@ class ProfileAvatarURLRestServlet(ClientV1RestServlet):
@defer.inlineCallbacks
def on_GET(self, request, user_id):
requester_user = None
if self.hs.config.require_auth_for_profile_requests:
requester = yield self.auth.get_user_by_req(request)
requester_user = requester.user
user = UserID.from_string(user_id)
avatar_url = yield self.profile_handler.get_avatar_url(
user,
)
yield self.profile_handler.check_profile_query_allowed(user, requester_user)
avatar_url = yield self.profile_handler.get_avatar_url(user)
ret = {}
if avatar_url is not None:
@ -116,14 +128,18 @@ class ProfileRestServlet(ClientV1RestServlet):
@defer.inlineCallbacks
def on_GET(self, request, user_id):
requester_user = None
if self.hs.config.require_auth_for_profile_requests:
requester = yield self.auth.get_user_by_req(request)
requester_user = requester.user
user = UserID.from_string(user_id)
displayname = yield self.profile_handler.get_displayname(
user,
)
avatar_url = yield self.profile_handler.get_avatar_url(
user,
)
yield self.profile_handler.check_profile_query_allowed(user, requester_user)
displayname = yield self.profile_handler.get_displayname(user)
avatar_url = yield self.profile_handler.get_avatar_url(user)
ret = {}
if displayname is not None:
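For context, a hedged client-side sketch of what this enables: with `require_auth_for_profile_requests` set, profile lookups over the client API must carry an access token. The hostname, user ID and token are placeholders, and the `r0` profile path comes from the Matrix client-server spec rather than from this diff.

```python
import requests

BASE_URL = "https://matrix.example.com"   # placeholder homeserver
USER_ID = "@alice:example.com"            # placeholder user
ACCESS_TOKEN = "<access token of a logged-in user>"

resp = requests.get(
    f"{BASE_URL}/_matrix/client/r0/profile/{USER_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
print(resp.json())  # e.g. {"displayname": "...", "avatar_url": "..."}
```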

View file

@ -301,6 +301,12 @@ class PublicRoomListRestServlet(ClientV1RestServlet):
try:
yield self.auth.get_user_by_req(request, allow_guest=True)
except AuthError as e:
# Option to allow servers to require auth when accessing
# /publicRooms via CS API. This is especially helpful in private
# federations.
if self.hs.config.restrict_public_rooms_to_local_users:
raise
# We allow people to not be authed if they're just looking at our
# room list, but require auth when we proxy the request.
# In both cases we call the auth function, as that has the side

View file

@ -31,6 +31,7 @@ from six.moves import urllib_parse as urlparse
from canonicaljson import json
from twisted.internet import defer
from twisted.internet.error import DNSLookupError
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
@ -328,9 +329,18 @@ class PreviewUrlResource(Resource):
# handler will return a SynapseError to the client instead of
# blank data or a 500.
raise
except DNSLookupError:
# DNS lookup returned no results
# Note: This will also be the case if one of the resolved IP
# addresses is blacklisted
raise SynapseError(
502, "DNS resolution failure during URL preview generation",
Codes.UNKNOWN
)
except Exception as e:
# FIXME: pass through 404s and other error messages nicely
logger.warn("Error downloading %s: %r", url, e)
raise SynapseError(
500, "Failed to download content: %s" % (
traceback.format_exception_only(sys.exc_info()[0], e),

View file

@ -18,7 +18,6 @@ import synapse.server_notices.server_notices_sender
import synapse.state
import synapse.storage
class HomeServer(object):
@property
def config(self) -> synapse.config.homeserver.HomeServerConfig:

View file

@ -118,7 +118,7 @@ class DeviceInboxWorkerStore(SQLBaseStore):
defer.returnValue(count)
def get_new_device_msgs_for_remote(
self, destination, last_stream_id, current_stream_id, limit=100
self, destination, last_stream_id, current_stream_id, limit
):
"""
Args:

View file

@ -109,7 +109,6 @@ class FilteringTestCase(unittest.TestCase):
"event_format": "client",
"event_fields": ["type", "content", "sender"],
},
# a single backslash should be permitted (though it is debatable whether
# it should be permitted before anything other than `.`, and what that
# actually means)

View file

@ -10,19 +10,19 @@ class TestRatelimiter(unittest.TestCase):
key="test_id", time_now_s=0, rate_hz=0.1, burst_count=1
)
self.assertTrue(allowed)
self.assertEquals(10., time_allowed)
self.assertEquals(10.0, time_allowed)
allowed, time_allowed = limiter.can_do_action(
key="test_id", time_now_s=5, rate_hz=0.1, burst_count=1
)
self.assertFalse(allowed)
self.assertEquals(10., time_allowed)
self.assertEquals(10.0, time_allowed)
allowed, time_allowed = limiter.can_do_action(
key="test_id", time_now_s=10, rate_hz=0.1, burst_count=1
)
self.assertTrue(allowed)
self.assertEquals(20., time_allowed)
self.assertEquals(20.0, time_allowed)
def test_pruning(self):
limiter = Ratelimiter()

Some files were not shown because too many files have changed in this diff.