Mirror of https://mau.dev/maunium/synapse.git, synced 2024-11-12 04:52:26 +01:00

Commit b4fd86a9b4: Merge branch 'develop' into rav/saml2_client

55 changed files with 835 additions and 453 deletions
(file name not shown in this view; the contents match a Docker ignore list)

@@ -1,9 +1,13 @@
-Dockerfile
-.travis.yml
-.gitignore
-demo/etc
-tox.ini
-.git/*
-.tox/*
-debian/matrix-synapse/
-debian/matrix-synapse-*/
+# ignore everything by default
+*
+
+# things to include
+!docker
+!scripts
+!synapse
+!MANIFEST.in
+!README.rst
+!setup.py
+!synctl
+
+**/__pycache__
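The rewritten ignore file above switches from a denylist to an allowlist: ignore everything with `*`, then re-include specific paths with `!` rules, with the last matching rule winning. A rough standard-library sketch of that rule ordering (real `.dockerignore` matching, done by the Docker CLI in Go, has more edge cases around `**` and path components):

```python
from fnmatch import fnmatchcase

def is_ignored(path, rules):
    # Simplified .dockerignore-style matching: rules apply in order,
    # a leading "!" re-includes, and the last matching rule wins.
    ignored = False
    for rule in rules:
        negate = rule.startswith("!")
        pattern = rule[1:] if negate else rule
        if fnmatchcase(path, pattern) or fnmatchcase(path.split("/")[0], pattern):
            ignored = not negate
    return ignored

rules = ["*", "!docker", "!scripts", "!synapse", "!MANIFEST.in",
         "!README.rst", "!setup.py", "!synctl", "**/__pycache__"]

print(is_ignored("tox.ini", rules))   # excluded by "*"
print(is_ignored("setup.py", rules))  # re-included by "!setup.py"
```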
19 new one-line files under changelog.d/:

changelog.d/5092.feature (new file)
+Added possibility to disable local password authentication. Contributed by Daniel Hoffend.

changelog.d/5313.misc (new file)
+Update example haproxy config to a more compatible setup.

changelog.d/5475.misc (new file)
+Synapse can now handle RestServlets that return coroutines.

changelog.d/5543.misc (new file)
+Make the config clearer in that email.template_dir is relative to Synapse's root directory, not the `synapse/` folder within it.

changelog.d/5550.feature (new file)
+The minimum TLS version used for outgoing federation requests can now be set with `federation_client_minimum_tls_version`.

changelog.d/5550.misc (new file)
+Synapse will now only allow TLS v1.2 connections when serving federation, if it terminates TLS. As Synapse's allowed ciphers were only able to be used in TLSv1.2 before, this does not change behaviour.

changelog.d/5555.bugfix (new file)
+Fixed m.login.jwt using unregistered user_id and added pyjwt>=1.6.4 as a conditional dependency for JWT support. Contributed by Pau Rodriguez-Estivill.

changelog.d/5557.misc (new file)
+Logging when running GC collection on generation 0 is now at DEBUG level, not INFO.

changelog.d/5559.feature (new file)
+Optimise devices changed query to not pull unnecessary rows from the database, reducing database load.

changelog.d/5561.feature (new file)
+Update Docker image to deprecate the use of environment variables for configuration, and make the use of a static configuration the default.

changelog.d/5562.feature (new file)
+Update Docker image to deprecate the use of environment variables for configuration, and make the use of a static configuration the default.

changelog.d/5563.bugfix (new file)
+Docker: Use a sensible location for data files when generating a config file.

changelog.d/5564.misc (new file)
+Reduce the amount of stuff we send in the docker context.

changelog.d/5565.feature (new file)
+Docker: Send synapse logs to the docker logging system, by default.

changelog.d/5566.feature (new file)
+Update Docker image to deprecate the use of environment variables for configuration, and make the use of a static configuration the default.

changelog.d/5567.feature (new file)
+Update Docker image to deprecate the use of environment variables for configuration, and make the use of a static configuration the default.

changelog.d/5568.feature (new file)
+Docker image: open the non-TLS port by default.

changelog.d/5570.misc (new file)
+Point the reverse links in the Purge History contrib scripts at the intended location.

changelog.d/5576.bugfix (new file)
+Fix a bug that would cause invited users to receive several emails for a single 3PID invite in case the inviter is rate limited.
(file name not shown in this view; appears to be a Docker-related README)

@@ -1,5 +1,9 @@
 # Synapse Docker
 
+FIXME: this is out-of-date as of
+https://github.com/matrix-org/synapse/issues/5518. Contributions to bring it up
+to date would be welcome.
+
 ### Automated configuration
 
 It is recommended that you use Docker Compose to run your containers, including
(file name not shown in this view; appears to be the Purge History contrib scripts README)

@@ -3,7 +3,7 @@ Purge history API examples
 
 # `purge_history.sh`
 
-A bash file, that uses the [purge history API](/docs/admin_api/README.rst) to
+A bash file, that uses the [purge history API](/docs/admin_api/purge_history_api.rst) to
 purge all messages in a list of rooms up to a certain event. You can select a
 timeframe or a number of messages that you want to keep in the room.
 
@@ -12,5 +12,5 @@ the script.
 
 # `purge_remote_media.sh`
 
-A bash file, that uses the [purge history API](/docs/admin_api/README.rst) to
+A bash file, that uses the [purge history API](/docs/admin_api/purge_history_api.rst) to
 purge all old cached remote media.
debian/build_virtualenv (vendored; 2 lines changed)

@@ -43,7 +43,7 @@ dh_virtualenv \
     --preinstall="mock" \
     --extra-pip-arg="--no-cache-dir" \
     --extra-pip-arg="--compile" \
-    --extras="all"
+    --extras="all,systemd"
 
 PACKAGE_BUILD_DIR="debian/matrix-synapse-py3"
 VIRTUALENV_DIR="${PACKAGE_BUILD_DIR}${DH_VIRTUALENV_INSTALL_ROOT}/matrix-synapse"
debian/changelog (vendored; 7 lines changed)

@@ -1,3 +1,10 @@
+matrix-synapse-py3 (1.0.0+nmu1) UNRELEASED; urgency=medium
+
+  [ Silke Hofstra ]
+  * Include systemd-python to allow logging to the systemd journal.
+
+ -- Silke Hofstra <silke@slxh.eu>  Wed, 29 May 2019 09:45:29 +0200
+
 matrix-synapse-py3 (1.0.0) stable; urgency=medium
 
   * New synapse release 1.0.0.
docker/README.md (227 lines changed)

@@ -6,39 +6,11 @@ postgres database.
 
 The image also does *not* provide a TURN server.
 
-## Run
-
-### Using docker-compose (easier)
-
-This image is designed to run either with an automatically generated
-configuration file or with a custom configuration that requires manual editing.
-
-An easy way to make use of this image is via docker-compose. See the
-[contrib/docker](https://github.com/matrix-org/synapse/tree/master/contrib/docker) section of the synapse project for
-examples.
-
-### Without Compose (harder)
-
-If you do not wish to use Compose, you may still run this image using plain
-Docker commands. Note that the following is just a guideline and you may need
-to add parameters to the docker run command to account for the network situation
-with your postgres database.
-
-```
-docker run \
-    -d \
-    --name synapse \
-    --mount type=volume,src=synapse-data,dst=/data \
-    -e SYNAPSE_SERVER_NAME=my.matrix.host \
-    -e SYNAPSE_REPORT_STATS=yes \
-    -p 8448:8448 \
-    matrixdotorg/synapse:latest
-```
-
 ## Volumes
 
-The image expects a single volume, located at ``/data``, that will hold:
+By default, the image expects a single volume, located at ``/data``, that will hold:
 
+* configuration files;
 * temporary files during uploads;
 * uploaded media and thumbnails;
 * the SQLite database if you do not configure postgres;

@@ -53,129 +25,106 @@ In order to setup an application service, simply create an ``appservices``
 directory in the data volume and write the application service Yaml
 configuration file there. Multiple application services are supported.
 
-## TLS certificates
+## Generating a configuration file
 
-Synapse requires a valid TLS certificate. You can do one of the following:
+The first step is to generate a valid config file. To do this, you can run the
+image with the `generate` commandline option.
 
-* Provide your own certificate and key (as
-  `${DATA_PATH}/${SYNAPSE_SERVER_NAME}.tls.crt` and
-  `${DATA_PATH}/${SYNAPSE_SERVER_NAME}.tls.key`, or elsewhere by providing an
-  entire config as `${SYNAPSE_CONFIG_PATH}`). In this case, you should forward
-  traffic to port 8448 in the container, for example with `-p 443:8448`.
-
-* Use a reverse proxy to terminate incoming TLS, and forward the plain http
-  traffic to port 8008 in the container. In this case you should set `-e
-  SYNAPSE_NO_TLS=1`.
-
-* Use the ACME (Let's Encrypt) support built into Synapse. This requires
-  `${SYNAPSE_SERVER_NAME}` port 80 to be forwarded to port 8009 in the
-  container, for example with `-p 80:8009`. To enable it in the docker
-  container, set `-e SYNAPSE_ACME=1`.
-
-If you don't do any of these, Synapse will fail to start with an error similar to:
-
-    synapse.config._base.ConfigError: Error accessing file '/data/<server_name>.tls.crt' (config for tls_certificate): No such file or directory
-
-## Environment
-
-Unless you specify a custom path for the configuration file, a very generic
-file will be generated, based on the following environment settings.
-These are a good starting point for setting up your own deployment.
-
-Global settings:
-
-* ``UID``, the user id Synapse will run as [default 991]
-* ``GID``, the group id Synapse will run as [default 991]
-* ``SYNAPSE_CONFIG_PATH``, path to a custom config file
-
-If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
-then customize it manually: see [Generating a config
-file](#generating-a-config-file).
-
-Otherwise, a dynamic configuration file will be used.
-
-### Environment variables used to build a dynamic configuration file
-
-The following environment variables are used to build the configuration file
-when ``SYNAPSE_CONFIG_PATH`` is not set.
-
-* ``SYNAPSE_SERVER_NAME`` (mandatory), the server public hostname.
-* ``SYNAPSE_REPORT_STATS``, (mandatory, ``yes`` or ``no``), enable anonymous
-  statistics reporting back to the Matrix project which helps us to get funding.
-* `SYNAPSE_NO_TLS`, (accepts `true`, `false`, `on`, `off`, `1`, `0`, `yes`, `no`): disable
-  TLS in Synapse (use this if you run your own TLS-capable reverse proxy). Defaults
-  to `false` (ie, TLS is enabled by default).
-* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
-  the Synapse instance.
-* ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guests to join this server.
-* ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
-* ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
-  key in order to enable recaptcha upon registration.
-* ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
-  key in order to enable recaptcha upon registration.
-* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
-  uris to enable TURN for this homeserver.
-* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
-* ``SYNAPSE_MAX_UPLOAD_SIZE``, set this variable to change the max upload size
-  [default `10M`].
-* ``SYNAPSE_ACME``: set this to enable the ACME certificate renewal support.
-
-Shared secrets, that will be initialized to random values if not set:
-
-* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users if
-  registration is disabled.
-* ``SYNAPSE_MACAROON_SECRET_KEY``, secret for signing access tokens
-  to the server.
-
-Database specific values (will use SQLite if not set):
-
-* `POSTGRES_DB` - The database name for the synapse postgres
-  database. [default: `synapse`]
-* `POSTGRES_HOST` - The host of the postgres database if you wish to use
-  postgresql instead of sqlite3. [default: `db` which is useful when using a
-  container on the same docker network in a compose file where the postgres
-  service is called `db`]
-* `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If
-  this is set then postgres will be used instead of sqlite3.** [default: none]
-  **NOTE**: You are highly encouraged to use postgresql! Please use the compose
-  file to make it easier to deploy.
-* `POSTGRES_USER` - The user for the synapse postgres database. [default:
-  `synapse`]
-
-Mail server specific values (will not send emails if not set):
-
-* ``SYNAPSE_SMTP_HOST``, hostname to the mail server.
-* ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default ``25``].
-* ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server if any.
-* ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail server if any.
-
-### Generating a config file
-
-It is possible to generate a basic configuration file for use with
-`SYNAPSE_CONFIG_PATH` using the `generate` commandline option. You will need to
-specify values for `SYNAPSE_CONFIG_PATH`, `SYNAPSE_SERVER_NAME` and
-`SYNAPSE_REPORT_STATS`, and mount a docker volume to store the data on. For
-example:
+You will need to specify values for the `SYNAPSE_SERVER_NAME` and
+`SYNAPSE_REPORT_STATS` environment variables, and mount a docker volume to store
+the configuration on. For example:
 
 ```
 docker run -it --rm \
     --mount type=volume,src=synapse-data,dst=/data \
-    -e SYNAPSE_CONFIG_PATH=/data/homeserver.yaml \
     -e SYNAPSE_SERVER_NAME=my.matrix.host \
     -e SYNAPSE_REPORT_STATS=yes \
     matrixdotorg/synapse:latest generate
 ```
 
-This will generate a `homeserver.yaml` in (typically)
-`/var/lib/docker/volumes/synapse-data/_data`, which you can then customise and
-use with:
+For information on picking a suitable server name, see
+https://github.com/matrix-org/synapse/blob/master/INSTALL.md.
+
+The above command will generate a `homeserver.yaml` in (typically)
+`/var/lib/docker/volumes/synapse-data/_data`. You should check this file, and
+customise it to your needs.
+
+The following environment variables are supported in `generate` mode:
+
+* `SYNAPSE_SERVER_NAME` (mandatory): the server public hostname.
+* `SYNAPSE_REPORT_STATS` (mandatory, `yes` or `no`): whether to enable
+  anonymous statistics reporting.
+* `SYNAPSE_CONFIG_DIR`: where additional config files (such as the log config
+  and event signing key) will be stored. Defaults to `/data`.
+* `SYNAPSE_CONFIG_PATH`: path to the file to be generated. Defaults to
+  `<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
+* `SYNAPSE_DATA_DIR`: where the generated config will put persistent data
+  such as the database and media store. Defaults to `/data`.
+* `UID`, `GID`: the user id and group id to use for creating the data
+  directories. Defaults to `991`, `991`.
+
+## Running synapse
+
+Once you have a valid configuration file, you can start synapse as follows:
 
 ```
 docker run -d --name synapse \
     --mount type=volume,src=synapse-data,dst=/data \
-    -e SYNAPSE_CONFIG_PATH=/data/homeserver.yaml \
+    -p 8008:8008 \
     matrixdotorg/synapse:latest
 ```
 
+You can then check that it has started correctly with:
+
+```
+docker logs synapse
+```
+
+If all is well, you should now be able to connect to http://localhost:8008 and
+see a confirmation message.
+
+The following environment variables are supported in run mode:
+
+* `SYNAPSE_CONFIG_DIR`: where additional config files are stored. Defaults to
+  `/data`.
+* `SYNAPSE_CONFIG_PATH`: path to the config file. Defaults to
+  `<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
+* `UID`, `GID`: the user and group id to run Synapse as. Defaults to `991`, `991`.
+
+## TLS support
+
+The default configuration exposes a single HTTP port: http://localhost:8008. It
+is suitable for local testing, but for any practical use, you will either need
+to use a reverse proxy, or configure Synapse to expose an HTTPS port.
+
+For documentation on using a reverse proxy, see
+https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.rst.
+
+For more information on enabling TLS support in synapse itself, see
+https://github.com/matrix-org/synapse/blob/master/INSTALL.md#tls-certificates. Of
+course, you will need to expose the TLS port from the container with a `-p`
+argument to `docker run`.
+
+## Legacy dynamic configuration file support
+
+For backwards-compatibility only, the docker image supports creating a dynamic
+configuration file based on environment variables. This is now deprecated, but
+is enabled when the `SYNAPSE_SERVER_NAME` variable is set (and `generate` is
+not given).
+
+To migrate from a dynamic configuration file to a static one, run the docker
+container once with the environment variables set and the `migrate_config`
+commandline option. For example:
+
+```
+docker run -it --rm \
+    --mount type=volume,src=synapse-data,dst=/data \
+    -e SYNAPSE_SERVER_NAME=my.matrix.host \
+    -e SYNAPSE_REPORT_STATS=yes \
+    matrixdotorg/synapse:latest migrate_config
+```
+
+This will generate the same configuration file as the legacy mode used, but
+will store it in `/data/homeserver.yaml` instead of a temporary location. You
+can then use it as shown above at [Running synapse](#running-synapse).
(file name not shown in this view; appears to be the Docker image's homeserver.yaml template)

@@ -21,7 +21,7 @@ server_name: "{{ SYNAPSE_SERVER_NAME }}"
 pid_file: /homeserver.pid
 web_client: False
 soft_file_limit: 0
-log_config: "/compiled/log.config"
+log_config: "{{ SYNAPSE_LOG_CONFIG }}"
 
 ## Ports ##
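The template above is rendered with jinja2 by the startup script's `convert()` helper, substituting `{{ VAR }}` placeholders from the environment. A minimal standard-library sketch of that substitution (using `re` rather than the real jinja2 engine, which also supports loops, filters, and conditionals):

```python
import re

def render(template, environ):
    # Replace each {{ VAR }} placeholder with its value from environ,
    # roughly mimicking jinja2.Template(template).render(**environ)
    # for the simple variable-only case used here.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(environ.get(m.group(1), "")),
        template,
    )

line = 'log_config: "{{ SYNAPSE_LOG_CONFIG }}"'
print(render(line, {"SYNAPSE_LOG_CONFIG": "/data/my.matrix.host.log.config"}))
# → log_config: "/data/my.matrix.host.log.config"
```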
docker/start.py (260 lines changed; indentation below is reconstructed, as the diff view stripped leading whitespace)

@@ -1,80 +1,84 @@
 #!/usr/local/bin/python
 
-import jinja2
-import os
-import sys
-import subprocess
-import glob
 import codecs
+import glob
+import os
+import subprocess
+import sys
+
+import jinja2
 
 
 # Utility functions
-convert = lambda src, dst, environ: open(dst, "w").write(
-    jinja2.Template(open(src).read()).render(**environ)
-)
+def log(txt):
+    print(txt, file=sys.stderr)
 
 
-def check_arguments(environ, args):
-    for argument in args:
-        if argument not in environ:
-            print("Environment variable %s is mandatory, exiting." % argument)
-            sys.exit(2)
+def error(txt):
+    log(txt)
+    sys.exit(2)
 
 
-def generate_secrets(environ, secrets):
+def convert(src, dst, environ):
+    """Generate a file from a template
+
+    Args:
+        src (str): path to input file
+        dst (str): path to file to write
+        environ (dict): environment dictionary, for replacement mappings.
+    """
+    with open(src) as infile:
+        template = infile.read()
+    rendered = jinja2.Template(template).render(**environ)
+    with open(dst, "w") as outfile:
+        outfile.write(rendered)
+
+
+def generate_config_from_template(config_dir, config_path, environ, ownership):
+    """Generate a homeserver.yaml from environment variables
+
+    Args:
+        config_dir (str): where to put generated config files
+        config_path (str): where to put the main config file
+        environ (dict): environment dictionary
+        ownership (str): "<user>:<group>" string which will be used to set
+            ownership of the generated configs
+    """
+    for v in ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"):
+        if v not in environ:
+            error(
+                "Environment variable '%s' is mandatory when generating a config file."
+                % (v,)
+            )
+
+    # populate some params from data files (if they exist, else create new ones)
+    environ = environ.copy()
+    secrets = {
+        "registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
+        "macaroon": "SYNAPSE_MACAROON_SECRET_KEY",
+    }
+
     for name, secret in secrets.items():
         if secret not in environ:
             filename = "/data/%s.%s.key" % (environ["SYNAPSE_SERVER_NAME"], name)
+
+            # if the file already exists, load in the existing value; otherwise,
+            # generate a new secret and write it to a file
+
             if os.path.exists(filename):
+                log("Reading %s from %s" % (secret, filename))
                 with open(filename) as handle:
                     value = handle.read()
             else:
-                print("Generating a random secret for {}".format(name))
+                log("Generating a random secret for {}".format(secret))
                 value = codecs.encode(os.urandom(32), "hex").decode()
                 with open(filename, "w") as handle:
                     handle.write(value)
             environ[secret] = value
 
-
-# Prepare the configuration
-mode = sys.argv[1] if len(sys.argv) > 1 else None
-environ = os.environ.copy()
-ownership = "{}:{}".format(environ.get("UID", 991), environ.get("GID", 991))
-args = ["python", "-m", "synapse.app.homeserver"]
-
-# In generate mode, generate a configuration, missing keys, then exit
-if mode == "generate":
-    check_arguments(
-        environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS", "SYNAPSE_CONFIG_PATH")
-    )
-    args += [
-        "--server-name",
-        environ["SYNAPSE_SERVER_NAME"],
-        "--report-stats",
-        environ["SYNAPSE_REPORT_STATS"],
-        "--config-path",
-        environ["SYNAPSE_CONFIG_PATH"],
-        "--generate-config",
-    ]
-    os.execv("/usr/local/bin/python", args)
-
-# In normal mode, generate missing keys if any, then run synapse
-else:
-    if "SYNAPSE_CONFIG_PATH" in environ:
-        config_path = environ["SYNAPSE_CONFIG_PATH"]
-    else:
-        check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"))
-        generate_secrets(
-            environ,
-            {
-                "registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
-                "macaroon": "SYNAPSE_MACAROON_SECRET_KEY",
-            },
-        )
     environ["SYNAPSE_APPSERVICES"] = glob.glob("/data/appservices/*.yaml")
-    if not os.path.exists("/compiled"):
-        os.mkdir("/compiled")
+    if not os.path.exists(config_dir):
+        os.mkdir(config_dir)
 
-    config_path = "/compiled/homeserver.yaml"
-
     # Convert SYNAPSE_NO_TLS to boolean if exists
     if "SYNAPSE_NO_TLS" in environ:

@@ -85,25 +89,155 @@ else:
         if tlsanswerstring in ("false", "off", "0", "no"):
             environ["SYNAPSE_NO_TLS"] = False
         else:
-            print(
+            error(
                 'Environment variable "SYNAPSE_NO_TLS" found but value "'
                 + tlsanswerstring
                 + '" unrecognized; exiting.'
             )
-            sys.exit(2)
 
+    if "SYNAPSE_LOG_CONFIG" not in environ:
+        environ["SYNAPSE_LOG_CONFIG"] = config_dir + "/log.config"
+
+    log("Generating synapse config file " + config_path)
     convert("/conf/homeserver.yaml", config_path, environ)
-    convert("/conf/log.config", "/compiled/log.config", environ)
+
+    log_config_file = environ["SYNAPSE_LOG_CONFIG"]
+    log("Generating log config file " + log_config_file)
+    convert("/conf/log.config", log_config_file, environ)
 
     subprocess.check_output(["chown", "-R", ownership, "/data"])
 
-    args += [
+    # Hopefully we already have a signing key, but generate one if not.
+    subprocess.check_output(
+        [
+            "su-exec",
+            ownership,
+            "python",
+            "-m",
+            "synapse.app.homeserver",
             "--config-path",
             config_path,
-        # tell synapse to put any generated keys in /data rather than /compiled
+            # tell synapse to put generated keys in /data rather than /compiled
             "--keys-directory",
-        "/data",
+            config_dir,
+            "--generate-keys",
         ]
+    )
 
-    # Generate missing keys and start synapse
-    subprocess.check_output(args + ["--generate-keys"])
-    os.execv("/sbin/su-exec", ["su-exec", ownership] + args)
+
+def run_generate_config(environ, ownership):
+    """Run synapse with a --generate-config param to generate a template config file
+
+    Args:
+        environ (dict): env var dict
+        ownership (str): "userid:groupid" arg for chmod
+
+    Never returns.
+    """
+    for v in ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"):
+        if v not in environ:
+            error("Environment variable '%s' is mandatory in `generate` mode." % (v,))
+
+    server_name = environ["SYNAPSE_SERVER_NAME"]
+    config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
+    config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
+    data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
+
+    # create a suitable log config from our template
+    log_config_file = "%s/%s.log.config" % (config_dir, server_name)
+    if not os.path.exists(log_config_file):
+        log("Creating log config %s" % (log_config_file,))
+        convert("/conf/log.config", log_config_file, environ)
+
+    # make sure that synapse has perms to write to the data dir.
+    subprocess.check_output(["chown", ownership, data_dir])
+
+    args = [
+        "python",
+        "-m",
+        "synapse.app.homeserver",
+        "--server-name",
+        server_name,
+        "--report-stats",
+        environ["SYNAPSE_REPORT_STATS"],
+        "--config-path",
+        config_path,
+        "--config-directory",
+        config_dir,
+        "--data-directory",
+        data_dir,
+        "--generate-config",
+        "--open-private-ports",
+    ]
+    # log("running %s" % (args, ))
+    os.execv("/usr/local/bin/python", args)
+
+
+def main(args, environ):
+    mode = args[1] if len(args) > 1 else None
+    ownership = "{}:{}".format(environ.get("UID", 991), environ.get("GID", 991))
+
+    # In generate mode, generate a configuration and missing keys, then exit
+    if mode == "generate":
+        return run_generate_config(environ, ownership)
+
+    if mode == "migrate_config":
+        # generate a config based on environment vars.
+        config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
+        config_path = environ.get(
+            "SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml"
+        )
+        return generate_config_from_template(
+            config_dir, config_path, environ, ownership
+        )
+
+    if mode is not None:
+        error("Unknown execution mode '%s'" % (mode,))
+
+    if "SYNAPSE_SERVER_NAME" in environ:
+        # backwards-compatibility generate-a-config-on-the-fly mode
+        if "SYNAPSE_CONFIG_PATH" in environ:
+            error(
+                "SYNAPSE_SERVER_NAME and SYNAPSE_CONFIG_PATH are mutually exclusive "
+                "except in `generate` or `migrate_config` mode."
+            )
+
+        config_path = "/compiled/homeserver.yaml"
+        log(
+            "Generating config file '%s' on-the-fly from environment variables.\n"
+            "Note that this mode is deprecated. You can migrate to a static config\n"
+            "file by running with 'migrate_config'. See the README for more details."
+            % (config_path,)
+        )
+
+        generate_config_from_template("/compiled", config_path, environ, ownership)
+    else:
+        config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
+        config_path = environ.get(
+            "SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml"
+        )
+        if not os.path.exists(config_path):
+            error(
+                "Config file '%s' does not exist. You should either create a new "
+                "config file by running with the `generate` argument (and then edit "
+                "the resulting file before restarting) or specify the path to an "
+                "existing config file with the SYNAPSE_CONFIG_PATH variable."
+                % (config_path,)
+            )
+
+    log("Starting synapse with config file " + config_path)
+
+    args = [
+        "su-exec",
+        ownership,
+        "python",
+        "-m",
+        "synapse.app.homeserver",
+        "--config-path",
+        config_path,
+    ]
+    os.execv("/sbin/su-exec", args)
+
+
+if __name__ == "__main__":
|
||||||
|
main(sys.argv, os.environ)
|
||||||
|
|
|
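The entrypoint above dispatches on the first CLI argument: `generate`, `migrate_config`, or no argument (run the homeserver). A minimal, self-contained sketch of that dispatch logic; the return strings here are stand-ins for what the real script does (generating configs and `exec()`-ing synapse):

```python
def dispatch(argv):
    """Return which action the docker entrypoint would take for the given argv (sketch)."""
    mode = argv[1] if len(argv) > 1 else None
    if mode == "generate":
        # generate a configuration and missing keys, then exit
        return "generate-config-and-exit"
    if mode == "migrate_config":
        # write a static config based on environment variables
        return "write-static-config"
    if mode is not None:
        raise ValueError("Unknown execution mode '%s'" % (mode,))
    # default: start the homeserver
    return "run-homeserver"
```

Any other positional argument is rejected, matching the `error("Unknown execution mode …")` branch above.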
@@ -89,8 +89,10 @@ Let's assume that we expect clients to connect to our server at
   bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
 
   # Matrix client traffic
-  acl matrix hdr(host) -i matrix.example.com
-  use_backend matrix if matrix
+  acl matrix-host hdr(host) -i matrix.example.com
+  acl matrix-path path_beg /_matrix
+
+  use_backend matrix if matrix-host matrix-path
 
 frontend matrix-federation
   bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
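The updated haproxy rules route to the `matrix` backend only when *both* ACLs match: the host header and the `/_matrix` path prefix. A rough Python model of that routing decision (the hostname is the example domain from the docs, not a real requirement):

```python
def route_backend(host, path):
    """Model of the haproxy frontend: both ACLs must match to pick the matrix backend."""
    matrix_host = host.lower() == "matrix.example.com"   # acl matrix-host
    matrix_path = path.startswith("/_matrix")            # acl matrix-path
    return "matrix" if (matrix_host and matrix_path) else "default"
```

This is why the change is described as "a more compatible setup": non-Matrix traffic to the same hostname no longer gets sent to Synapse.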
@@ -317,6 +317,15 @@ listeners:
 #
 #federation_verify_certificates: false
 
+# The minimum TLS version that will be used for outbound federation requests.
+#
+# Defaults to `1`. Configurable to `1`, `1.1`, `1.2`, or `1.3`. Note
+# that setting this value higher than `1.2` will prevent federation to most
+# of the public Matrix network: only configure it to `1.3` if you have an
+# entirely private federation setup and you can ensure TLS 1.3 support.
+#
+#federation_client_minimum_tls_version: 1.2
+
 # Skip federation certificate verification on the following whitelist
 # of domains.
 #
@@ -1066,6 +1075,12 @@ password_config:
 #
 #enabled: false
 
+# Uncomment to disable authentication against the local password
+# database. This is ignored if `enabled` is false, and is only useful
+# if you have other password_providers.
+#
+#localdb_enabled: false
+
 # Uncomment and change to a secret random string for extra security.
 # DO NOT CHANGE THIS AFTER INITIAL SETUP!
 #
@@ -1090,11 +1105,13 @@ password_config:
 #   app_name: Matrix
 #
 #   # Enable email notifications by default
+#   #
 #   notif_for_new_users: True
 #
 #   # Defining a custom URL for Riot is only needed if email notifications
 #   # should contain links to a self-hosted installation of Riot; when set
 #   # the "app_name" setting is ignored
+#   #
 #   riot_base_url: "http://localhost/riot"
 #
 #   # Enable sending password reset emails via the configured, trusted
@@ -1107,16 +1124,22 @@ password_config:
 #   #
 #   # If this option is set to false and SMTP options have not been
 #   # configured, resetting user passwords via email will be disabled
+#   #
 #   #trust_identity_server_for_password_resets: false
 #
 #   # Configure the time that a validation email or text message code
 #   # will expire after sending
 #   #
 #   # This is currently used for password resets
+#   #
 #   #validation_token_lifetime: 1h
 #
 #   # Template directory. All template files should be stored within this
-#   # directory
+#   # directory. If not set, default templates from within the Synapse
+#   # package will be used
+#   #
+#   # For the list of default templates, please see
+#   # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
 #   #
 #   #template_dir: res/templates
 #
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
 
 import argparse
 import shutil
@@ -233,11 +233,13 @@ class EmailConfig(Config):
 #   app_name: Matrix
 #
 #   # Enable email notifications by default
+#   #
 #   notif_for_new_users: True
 #
 #   # Defining a custom URL for Riot is only needed if email notifications
 #   # should contain links to a self-hosted installation of Riot; when set
 #   # the "app_name" setting is ignored
+#   #
 #   riot_base_url: "http://localhost/riot"
 #
 #   # Enable sending password reset emails via the configured, trusted
@@ -250,16 +252,22 @@ class EmailConfig(Config):
 #   #
 #   # If this option is set to false and SMTP options have not been
 #   # configured, resetting user passwords via email will be disabled
+#   #
 #   #trust_identity_server_for_password_resets: false
 #
 #   # Configure the time that a validation email or text message code
 #   # will expire after sending
 #   #
 #   # This is currently used for password resets
+#   #
 #   #validation_token_lifetime: 1h
 #
 #   # Template directory. All template files should be stored within this
-#   # directory
+#   # directory. If not set, default templates from within the Synapse
+#   # package will be used
+#   #
+#   # For the list of default templates, please see
+#   # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
 #   #
 #   #template_dir: res/templates
 #
@@ -26,6 +26,7 @@ class PasswordConfig(Config):
         password_config = {}
 
         self.password_enabled = password_config.get("enabled", True)
+        self.password_localdb_enabled = password_config.get("localdb_enabled", True)
         self.password_pepper = password_config.get("pepper", "")
 
     def generate_config_section(self, config_dir_path, server_name, **kwargs):
@@ -35,6 +36,12 @@ class PasswordConfig(Config):
         #
         #enabled: false
 
+        # Uncomment to disable authentication against the local password
+        # database. This is ignored if `enabled` is false, and is only useful
+        # if you have other password_providers.
+        #
+        #localdb_enabled: false
+
         # Uncomment and change to a secret random string for extra security.
         # DO NOT CHANGE THIS AFTER INITIAL SETUP!
         #
@@ -23,7 +23,7 @@ import six
 
 from unpaddedbase64 import encode_base64
 
-from OpenSSL import crypto
+from OpenSSL import SSL, crypto
 from twisted.internet._sslverify import Certificate, trustRootFromCertificates
 
 from synapse.config._base import Config, ConfigError
@@ -81,6 +81,27 @@ class TlsConfig(Config):
             "federation_verify_certificates", True
         )
 
+        # Minimum TLS version to use for outbound federation traffic
+        self.federation_client_minimum_tls_version = str(
+            config.get("federation_client_minimum_tls_version", 1)
+        )
+
+        if self.federation_client_minimum_tls_version not in ["1", "1.1", "1.2", "1.3"]:
+            raise ConfigError(
+                "federation_client_minimum_tls_version must be one of: 1, 1.1, 1.2, 1.3"
+            )
+
+        # Prevent people shooting themselves in the foot here by setting it to
+        # the biggest number blindly
+        if self.federation_client_minimum_tls_version == "1.3":
+            if getattr(SSL, "OP_NO_TLSv1_3", None) is None:
+                raise ConfigError(
+                    (
+                        "federation_client_minimum_tls_version cannot be 1.3, "
+                        "your OpenSSL does not support it"
+                    )
+                )
+
         # Whitelist of domains to not verify certificates for
         fed_whitelist_entries = config.get(
             "federation_certificate_verification_whitelist", []
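The validation added to `TlsConfig` normalises the config value to a string before checking it, so both `1.2` (YAML float) and `"1.2"` are accepted. A standalone sketch of just that normalise-and-validate step, detached from the `Config` machinery (the function name is a stand-in):

```python
def check_min_tls_version(config):
    """Sketch of the federation_client_minimum_tls_version check."""
    version = str(config.get("federation_client_minimum_tls_version", 1))
    if version not in ("1", "1.1", "1.2", "1.3"):
        raise ValueError(
            "federation_client_minimum_tls_version must be one of: 1, 1.1, 1.2, 1.3"
        )
    return version
```

Casting through `str()` is what makes the YAML-friendly bare numbers in the sample config work.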
@@ -261,6 +282,15 @@ class TlsConfig(Config):
         #
         #federation_verify_certificates: false
 
+        # The minimum TLS version that will be used for outbound federation requests.
+        #
+        # Defaults to `1`. Configurable to `1`, `1.1`, `1.2`, or `1.3`. Note
+        # that setting this value higher than `1.2` will prevent federation to most
+        # of the public Matrix network: only configure it to `1.3` if you have an
+        # entirely private federation setup and you can ensure TLS 1.3 support.
+        #
+        #federation_client_minimum_tls_version: 1.2
+
         # Skip federation certificate verification on the following whitelist
         # of domains.
         #
@@ -24,12 +24,25 @@ from OpenSSL import SSL, crypto
 from twisted.internet._sslverify import _defaultCurveName
 from twisted.internet.abstract import isIPAddress, isIPv6Address
 from twisted.internet.interfaces import IOpenSSLClientConnectionCreator
-from twisted.internet.ssl import CertificateOptions, ContextFactory, platformTrust
+from twisted.internet.ssl import (
+    CertificateOptions,
+    ContextFactory,
+    TLSVersion,
+    platformTrust,
+)
 from twisted.python.failure import Failure
 
 logger = logging.getLogger(__name__)
 
+
+_TLS_VERSION_MAP = {
+    "1": TLSVersion.TLSv1_0,
+    "1.1": TLSVersion.TLSv1_1,
+    "1.2": TLSVersion.TLSv1_2,
+    "1.3": TLSVersion.TLSv1_3,
+}
+
 
 class ServerContextFactory(ContextFactory):
     """Factory for PyOpenSSL SSL contexts that are used to handle incoming
     connections."""
@@ -43,16 +56,18 @@ class ServerContextFactory(ContextFactory):
         try:
             _ecCurve = crypto.get_elliptic_curve(_defaultCurveName)
             context.set_tmp_ecdh(_ecCurve)
 
         except Exception:
             logger.exception("Failed to enable elliptic curve for TLS")
-        context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
+
+        context.set_options(
+            SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3 | SSL.OP_NO_TLSv1 | SSL.OP_NO_TLSv1_1
+        )
         context.use_certificate_chain_file(config.tls_certificate_file)
         context.use_privatekey(config.tls_private_key)
 
         # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
         context.set_cipher_list(
-            "ECDH+AESGCM:ECDH+CHACHA20:ECDH+AES256:ECDH+AES128:!aNULL:!SHA1"
+            "ECDH+AESGCM:ECDH+CHACHA20:ECDH+AES256:ECDH+AES128:!aNULL:!SHA1:!AESCCM"
         )
 
     def getContext(self):
@@ -79,10 +94,22 @@ class ClientTLSOptionsFactory(object):
         # Use CA root certs provided by OpenSSL
         trust_root = platformTrust()
 
-        self._verify_ssl_context = CertificateOptions(trustRoot=trust_root).getContext()
+        # "insecurelyLowerMinimumTo" is the argument that will go lower than
+        # Twisted's default, which is why it is marked as "insecure" (since
+        # Twisted's defaults are reasonably secure). But, since Twisted is
+        # moving to TLS 1.2 by default, we want to respect the config option if
+        # it is set to 1.0 (which the alternate option, raiseMinimumTo, will not
+        # let us do).
+        minTLS = _TLS_VERSION_MAP[config.federation_client_minimum_tls_version]
+
+        self._verify_ssl = CertificateOptions(
+            trustRoot=trust_root, insecurelyLowerMinimumTo=minTLS
+        )
+        self._verify_ssl_context = self._verify_ssl.getContext()
         self._verify_ssl_context.set_info_callback(self._context_info_cb)
 
-        self._no_verify_ssl_context = CertificateOptions().getContext()
+        self._no_verify_ssl = CertificateOptions(insecurelyLowerMinimumTo=minTLS)
+        self._no_verify_ssl_context = self._no_verify_ssl.getContext()
         self._no_verify_ssl_context.set_info_callback(self._context_info_cb)
 
     def get_options(self, host):
@@ -743,7 +743,7 @@ class AuthHandler(BaseHandler):
                 result = (result, None)
             defer.returnValue(result)
 
-        if login_type == LoginType.PASSWORD:
+        if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled:
             known_login_type = True
 
             canonical_user_id = yield self._check_local_password(
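The effect of the new `localdb_enabled` flag on the login path can be modelled without the rest of `AuthHandler`: the local password database is only consulted for password-type logins when the flag is on. A minimal sketch (the function name is a stand-in; the login type string is Matrix's standard `m.login.password`):

```python
def should_check_local_password(login_type, localdb_enabled=True):
    """Sketch: the local password db is only consulted for password logins
    when localdb_enabled is true; otherwise other password_providers handle it."""
    return login_type == "m.login.password" and localdb_enabled
```

With `localdb_enabled: false`, password logins fall through to any configured `password_providers` instead of being treated as a known login type.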
@@ -101,9 +101,13 @@ class DeviceWorkerHandler(BaseHandler):
 
         room_ids = yield self.store.get_rooms_for_user(user_id)
 
-        # First we check if any devices have changed
-        changed = yield self.store.get_user_whose_devices_changed(
-            from_token.device_list_key
+        # First we check if any devices have changed for users that we share
+        # rooms with.
+        users_who_share_room = yield self.store.get_users_who_share_room_with_user(
+            user_id
+        )
+        changed = yield self.store.get_users_whose_devices_changed(
+            from_token.device_list_key, users_who_share_room
         )
 
         # Then work out if any users have since joined
@@ -188,10 +192,6 @@ class DeviceWorkerHandler(BaseHandler):
                 break
 
         if possibly_changed or possibly_left:
-            users_who_share_room = yield self.store.get_users_who_share_room_with_user(
-                user_id
-            )
-
             # Take the intersection of the users whose devices may have changed
             # and those that actually still share a room with the user
             possibly_joined = possibly_changed & users_who_share_room
@@ -823,6 +823,7 @@ class RoomMemberHandler(object):
                 "sender": user.to_string(),
                 "state_key": token,
             },
+            ratelimit=False,
             txn_id=txn_id,
         )
 
@@ -33,6 +33,9 @@ class SetPasswordHandler(BaseHandler):
 
     @defer.inlineCallbacks
     def set_password(self, user_id, newpassword, requester=None):
+        if not self.hs.config.password_localdb_enabled:
+            raise SynapseError(403, "Password change disabled", errcode=Codes.FORBIDDEN)
+
         password_hash = yield self._auth_handler.hash(newpassword)
 
         except_device_id = requester.device_id if requester else None
@@ -1058,40 +1058,74 @@ class SyncHandler(object):
         newly_left_rooms,
         newly_left_users,
     ):
+        """Generate the DeviceLists section of sync
+
+        Args:
+            sync_result_builder (SyncResultBuilder)
+            newly_joined_rooms (set[str]): Set of rooms user has joined since
+                previous sync
+            newly_joined_or_invited_users (set[str]): Set of users that have
+                joined or been invited to a room since previous sync.
+            newly_left_rooms (set[str]): Set of rooms user has left since
+                previous sync
+            newly_left_users (set[str]): Set of users that have left a room
+                we're in since previous sync
+
+        Returns:
+            Deferred[DeviceLists]
+        """
+
         user_id = sync_result_builder.sync_config.user.to_string()
         since_token = sync_result_builder.since_token
 
+        # We're going to mutate these fields, so lets copy them rather than
+        # assume they won't get used later.
+        newly_joined_or_invited_users = set(newly_joined_or_invited_users)
+        newly_left_users = set(newly_left_users)
+
         if since_token and since_token.device_list_key:
-            changed = yield self.store.get_user_whose_devices_changed(
-                since_token.device_list_key
-            )
-
-            # TODO: Be more clever than this, i.e. remove users who we already
-            # share a room with?
-            for room_id in newly_joined_rooms:
-                joined_users = yield self.state.get_current_users_in_room(room_id)
-                newly_joined_or_invited_users.update(joined_users)
-
-            for room_id in newly_left_rooms:
-                left_users = yield self.state.get_current_users_in_room(room_id)
-                newly_left_users.update(left_users)
-
-            # TODO: Check that these users are actually new, i.e. either they
-            # weren't in the previous sync *or* they left and rejoined.
-            changed.update(newly_joined_or_invited_users)
-
-            if not changed and not newly_left_users:
-                defer.returnValue(DeviceLists(changed=[], left=newly_left_users))
-
+            # We want to figure out what user IDs the client should refetch
+            # device keys for, and which users we aren't going to track changes
+            # for anymore.
+            #
+            # For the first step we check:
+            #   a. if any users we share a room with have updated their devices,
+            #      and
+            #   b. we also check if we've joined any new rooms, or if a user has
+            #      joined a room we're in.
+            #
+            # For the second step we just find any users we no longer share a
+            # room with by looking at all users that have left a room plus users
+            # that were in a room we've left.
             users_who_share_room = yield self.store.get_users_who_share_room_with_user(
                 user_id
             )
 
-            defer.returnValue(
-                DeviceLists(
-                    changed=users_who_share_room & changed,
-                    left=set(newly_left_users) - users_who_share_room,
-                )
+            # Step 1a, check for changes in devices of users we share a room with
+            users_that_have_changed = yield self.store.get_users_whose_devices_changed(
+                since_token.device_list_key, users_who_share_room
+            )
+
+            # Step 1b, check for newly joined rooms
+            for room_id in newly_joined_rooms:
+                joined_users = yield self.state.get_current_users_in_room(room_id)
+                newly_joined_or_invited_users.update(joined_users)
+
+            # TODO: Check that these users are actually new, i.e. either they
+            # weren't in the previous sync *or* they left and rejoined.
+            users_that_have_changed.update(newly_joined_or_invited_users)
+
+            # Now find users that we no longer track
+            for room_id in newly_left_rooms:
+                left_users = yield self.state.get_current_users_in_room(room_id)
+                newly_left_users.update(left_users)
+
+            # Remove any users that we still share a room with.
+            newly_left_users -= users_who_share_room
+
+            defer.returnValue(
+                DeviceLists(changed=users_that_have_changed, left=newly_left_users)
             )
         else:
            defer.returnValue(DeviceLists(changed=[], left=[]))
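Stripped of the Deferreds and store calls, the reworked device-list section is set arithmetic: union in the users from newly joined rooms, then subtract still-shared users from the "left" set. A pure-Python sketch with plain sets standing in for the store/state lookups (the function and parameter names are illustrative):

```python
def compute_device_lists(
    users_who_share_room,          # users we currently share a room with
    users_that_have_changed,       # users (among those) whose devices changed
    newly_joined_or_invited_users, # users who joined/were invited since last sync
    users_in_left_rooms,           # users in rooms we left, plus users who left our rooms
):
    """Sketch of the sets returned in the sync DeviceLists section."""
    changed = set(users_that_have_changed)
    # Step 1b: users in newly joined rooms may need their keys refetched
    changed |= set(newly_joined_or_invited_users)
    # Step 2: stop tracking users we no longer share any room with
    left = set(users_in_left_rooms) - set(users_who_share_room)
    return changed, left
```

The subtraction in the last step mirrors `newly_left_users -= users_who_share_room` above: a user who left one shared room but still shares another must not be reported as "left".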
@@ -16,10 +16,11 @@
 
 import cgi
 import collections
+import http.client
 import logging
-
-from six import PY3
-from six.moves import http_client, urllib
+import types
+import urllib
+from io import BytesIO
 
 from canonicaljson import encode_canonical_json, encode_pretty_printed_json, json
 
@@ -41,11 +42,6 @@ from synapse.api.errors import (
 from synapse.util.caches import intern_dict
 from synapse.util.logcontext import preserve_fn
 
-if PY3:
-    from io import BytesIO
-else:
-    from cStringIO import StringIO as BytesIO
-
 logger = logging.getLogger(__name__)
 
 HTML_ERROR_TEMPLATE = """<!DOCTYPE html>
@@ -75,10 +71,9 @@ def wrap_json_request_handler(h):
     deferred fails with any other type of error we send a 500 reponse.
     """
 
-    @defer.inlineCallbacks
-    def wrapped_request_handler(self, request):
+    async def wrapped_request_handler(self, request):
         try:
-            yield h(self, request)
+            await h(self, request)
         except SynapseError as e:
             code = e.code
             logger.info("%s SynapseError: %s - %s", request, code, e.msg)
@@ -142,10 +137,12 @@ def wrap_html_request_handler(h):
     where "request" must be a SynapseRequest.
     """
 
-    def wrapped_request_handler(self, request):
-        d = defer.maybeDeferred(h, self, request)
-        d.addErrback(_return_html_error, request)
-        return d
+    async def wrapped_request_handler(self, request):
+        try:
+            return await h(self, request)
+        except Exception:
+            f = failure.Failure()
+            return _return_html_error(f, request)
 
     return wrap_async_request_handler(wrapped_request_handler)
 
@@ -171,7 +168,7 @@ def _return_html_error(f, request):
             exc_info=(f.type, f.value, f.getTracebackObject()),
         )
     else:
-        code = http_client.INTERNAL_SERVER_ERROR
+        code = http.client.INTERNAL_SERVER_ERROR
         msg = "Internal server error"
 
         logger.error(
@@ -201,10 +198,9 @@ def wrap_async_request_handler(h):
     logged until the deferred completes.
     """
 
-    @defer.inlineCallbacks
-    def wrapped_async_request_handler(self, request):
+    async def wrapped_async_request_handler(self, request):
         with request.processing():
-            yield h(self, request)
+            await h(self, request)
 
     # we need to preserve_fn here, because the synchronous render method won't yield for
     # us (obviously)
|
||||||
def render(self, request):
|
def render(self, request):
|
||||||
""" This gets called by twisted every time someone sends us a request.
|
""" This gets called by twisted every time someone sends us a request.
|
||||||
"""
|
"""
|
||||||
self._async_render(request)
|
defer.ensureDeferred(self._async_render(request))
|
||||||
return NOT_DONE_YET
|
return NOT_DONE_YET
|
||||||
|
|
||||||
@wrap_json_request_handler
|
@wrap_json_request_handler
|
||||||
@defer.inlineCallbacks
|
async def _async_render(self, request):
|
||||||
def _async_render(self, request):
|
|
||||||
""" This gets called from render() every time someone sends us a request.
|
""" This gets called from render() every time someone sends us a request.
|
||||||
This checks if anyone has registered a callback for that method and
|
This checks if anyone has registered a callback for that method and
|
||||||
path.
|
path.
|
||||||
|
@@ -292,26 +287,19 @@ class JsonResource(HttpServer, resource.Resource):
         # Now trigger the callback. If it returns a response, we send it
         # here. If it throws an exception, that is handled by the wrapper
         # installed by @request_handler.
-
-        def _unquote(s):
-            if PY3:
-                # On Python 3, unquote is unicode -> unicode
-                return urllib.parse.unquote(s)
-            else:
-                # On Python 2, unquote is bytes -> bytes We need to encode the
-                # URL again (as it was decoded by _get_handler_for request), as
-                # ASCII because it's a URL, and then decode it to get the UTF-8
-                # characters that were quoted.
-                return urllib.parse.unquote(s.encode("ascii")).decode("utf8")
-
         kwargs = intern_dict(
             {
-                name: _unquote(value) if value else value
+                name: urllib.parse.unquote(value) if value else value
                 for name, value in group_dict.items()
             }
         )
 
-        callback_return = yield callback(request, **kwargs)
+        callback_return = callback(request, **kwargs)
+
+        # Is it synchronous? We'll allow this for now.
+        if isinstance(callback_return, (defer.Deferred, types.CoroutineType)):
+            callback_return = await callback_return
+
         if callback_return is not None:
             code, response = callback_return
             self._send_response(request, code, response)
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class DirectServeResource(resource.Resource):
|
||||||
|
def render(self, request):
|
||||||
|
"""
|
||||||
|
Render the request, using an asynchronous render handler if it exists.
|
||||||
|
"""
|
||||||
|
render_callback_name = "_async_render_" + request.method.decode("ascii")
|
||||||
|
|
||||||
|
if hasattr(self, render_callback_name):
|
||||||
|
# Call the handler
|
||||||
|
callback = getattr(self, render_callback_name)
|
||||||
|
defer.ensureDeferred(callback(request))
|
||||||
|
|
||||||
|
return NOT_DONE_YET
|
||||||
|
else:
|
||||||
|
super().render(request)
|
||||||
|
|
||||||
|
|
||||||
def _options_handler(request):
|
def _options_handler(request):
|
||||||
"""Request handler for OPTIONS requests
|
"""Request handler for OPTIONS requests
|
||||||
|
|
||||||
|
|
|
@@ -437,6 +437,9 @@ def runUntilCurrentTimer(func):
             counts = gc.get_count()
             for i in (2, 1, 0):
                 if threshold[i] < counts[i]:
-                    logger.info("Collecting gc %d", i)
+                    if i == 0:
+                        logger.debug("Collecting gc %d", i)
+                    else:
+                        logger.info("Collecting gc %d", i)

                     start = time.time()
@@ -95,6 +95,7 @@ CONDITIONAL_REQUIREMENTS = {
     "url_preview": ["lxml>=3.5.0"],
     "test": ["mock>=2.0", "parameterized"],
     "sentry": ["sentry-sdk>=0.7.2"],
+    "jwt": ["pyjwt>=1.6.4"],
 }

 ALL_OPTIONAL_REQUIREMENTS = set()
@@ -340,7 +340,7 @@ class LoginRestServlet(RestServlet):
             }
         else:
             user_id, access_token = (
-                yield self.handlers.registration_handler.register(localpart=user)
+                yield self.registration_handler.register(localpart=user)
             )

             device_id = login_submission.get("device_id")
@@ -23,13 +23,13 @@ from six.moves import http_client
 import jinja2
 from jinja2 import TemplateNotFound

-from twisted.internet import defer
-from twisted.web.resource import Resource
-from twisted.web.server import NOT_DONE_YET

 from synapse.api.errors import NotFoundError, StoreError, SynapseError
 from synapse.config import ConfigError
-from synapse.http.server import finish_request, wrap_html_request_handler
+from synapse.http.server import (
+    DirectServeResource,
+    finish_request,
+    wrap_html_request_handler,
+)
 from synapse.http.servlet import parse_string
 from synapse.types import UserID

@@ -47,7 +47,7 @@ else:
         return a == b


-class ConsentResource(Resource):
+class ConsentResource(DirectServeResource):
     """A twisted Resource to display a privacy policy and gather consent to it

     When accessed via GET, returns the privacy policy via a template.
@@ -87,7 +87,7 @@ class ConsentResource(Resource):
         Args:
             hs (synapse.server.HomeServer): homeserver
         """
-        Resource.__init__(self)
+        super().__init__()

         self.hs = hs
         self.store = hs.get_datastore()
@@ -118,18 +118,12 @@ class ConsentResource(Resource):

         self._hmac_secret = hs.config.form_secret.encode("utf-8")

-    def render_GET(self, request):
-        self._async_render_GET(request)
-        return NOT_DONE_YET
-
     @wrap_html_request_handler
-    @defer.inlineCallbacks
-    def _async_render_GET(self, request):
+    async def _async_render_GET(self, request):
         """
         Args:
             request (twisted.web.http.Request):
         """

         version = parse_string(request, "v", default=self._default_consent_version)
         username = parse_string(request, "u", required=False, default="")
         userhmac = None
@@ -145,7 +139,7 @@ class ConsentResource(Resource):
         else:
             qualified_user_id = UserID(username, self.hs.hostname).to_string()

-            u = yield self.store.get_user_by_id(qualified_user_id)
+            u = await self.store.get_user_by_id(qualified_user_id)
             if u is None:
                 raise NotFoundError("Unknown user")

@@ -165,13 +159,8 @@ class ConsentResource(Resource):
         except TemplateNotFound:
             raise NotFoundError("Unknown policy version")

-    def render_POST(self, request):
-        self._async_render_POST(request)
-        return NOT_DONE_YET
-
     @wrap_html_request_handler
-    @defer.inlineCallbacks
-    def _async_render_POST(self, request):
+    async def _async_render_POST(self, request):
         """
         Args:
             request (twisted.web.http.Request):
@@ -188,12 +177,12 @@ class ConsentResource(Resource):
         qualified_user_id = UserID(username, self.hs.hostname).to_string()

         try:
-            yield self.store.user_set_consent_version(qualified_user_id, version)
+            await self.store.user_set_consent_version(qualified_user_id, version)
         except StoreError as e:
             if e.code != 404:
                 raise
             raise NotFoundError("Unknown user")
-        yield self.registration_handler.post_consent_actions(qualified_user_id)
+        await self.registration_handler.post_consent_actions(qualified_user_id)

         try:
             self._render_template(request, "success.html")
@@ -16,18 +16,20 @@ import logging
 from io import BytesIO

 from twisted.internet import defer
-from twisted.web.resource import Resource
-from twisted.web.server import NOT_DONE_YET

 from synapse.api.errors import Codes, SynapseError
 from synapse.crypto.keyring import ServerKeyFetcher
-from synapse.http.server import respond_with_json_bytes, wrap_json_request_handler
+from synapse.http.server import (
+    DirectServeResource,
+    respond_with_json_bytes,
+    wrap_json_request_handler,
+)
 from synapse.http.servlet import parse_integer, parse_json_object_from_request

 logger = logging.getLogger(__name__)


-class RemoteKey(Resource):
+class RemoteKey(DirectServeResource):
     """HTTP resource for retreiving the TLS certificate and NACL signature
     verification keys for a collection of servers. Checks that the reported
     X.509 TLS certificate matches the one used in the HTTPS connection. Checks
@@ -94,13 +96,8 @@ class RemoteKey(Resource):
         self.clock = hs.get_clock()
         self.federation_domain_whitelist = hs.config.federation_domain_whitelist

-    def render_GET(self, request):
-        self.async_render_GET(request)
-        return NOT_DONE_YET
-
     @wrap_json_request_handler
-    @defer.inlineCallbacks
-    def async_render_GET(self, request):
+    async def _async_render_GET(self, request):
         if len(request.postpath) == 1:
             server, = request.postpath
             query = {server.decode("ascii"): {}}
@@ -114,20 +111,15 @@ class RemoteKey(Resource):
         else:
             raise SynapseError(404, "Not found %r" % request.postpath, Codes.NOT_FOUND)

-        yield self.query_keys(request, query, query_remote_on_cache_miss=True)
+        await self.query_keys(request, query, query_remote_on_cache_miss=True)

-    def render_POST(self, request):
-        self.async_render_POST(request)
-        return NOT_DONE_YET
-
     @wrap_json_request_handler
-    @defer.inlineCallbacks
-    def async_render_POST(self, request):
+    async def _async_render_POST(self, request):
         content = parse_json_object_from_request(request)

         query = content["server_keys"]

-        yield self.query_keys(request, query, query_remote_on_cache_miss=True)
+        await self.query_keys(request, query, query_remote_on_cache_miss=True)

     @defer.inlineCallbacks
     def query_keys(self, request, query, query_remote_on_cache_miss=False):
@@ -14,31 +14,28 @@
 # limitations under the License.
 #

-from twisted.internet import defer
-from twisted.web.resource import Resource
 from twisted.web.server import NOT_DONE_YET

-from synapse.http.server import respond_with_json, wrap_json_request_handler
+from synapse.http.server import (
+    DirectServeResource,
+    respond_with_json,
+    wrap_json_request_handler,
+)


-class MediaConfigResource(Resource):
+class MediaConfigResource(DirectServeResource):
     isLeaf = True

     def __init__(self, hs):
-        Resource.__init__(self)
+        super().__init__()
         config = hs.get_config()
         self.clock = hs.get_clock()
         self.auth = hs.get_auth()
         self.limits_dict = {"m.upload.size": config.max_upload_size}

-    def render_GET(self, request):
-        self._async_render_GET(request)
-        return NOT_DONE_YET
-
     @wrap_json_request_handler
-    @defer.inlineCallbacks
-    def _async_render_GET(self, request):
-        yield self.auth.get_user_by_req(request)
+    async def _async_render_GET(self, request):
+        await self.auth.get_user_by_req(request)
         respond_with_json(request, 200, self.limits_dict, send_cors=True)

     def render_OPTIONS(self, request):
@@ -14,37 +14,31 @@
 # limitations under the License.
 import logging

-from twisted.internet import defer
-from twisted.web.resource import Resource
-from twisted.web.server import NOT_DONE_YET
-
 import synapse.http.servlet
-from synapse.http.server import set_cors_headers, wrap_json_request_handler
+from synapse.http.server import (
+    DirectServeResource,
+    set_cors_headers,
+    wrap_json_request_handler,
+)

 from ._base import parse_media_id, respond_404

 logger = logging.getLogger(__name__)


-class DownloadResource(Resource):
+class DownloadResource(DirectServeResource):
     isLeaf = True

     def __init__(self, hs, media_repo):
-        Resource.__init__(self)
+        super().__init__()

         self.media_repo = media_repo
         self.server_name = hs.hostname

         # this is expected by @wrap_json_request_handler
         self.clock = hs.get_clock()

-    def render_GET(self, request):
-        self._async_render_GET(request)
-        return NOT_DONE_YET
-
     @wrap_json_request_handler
-    @defer.inlineCallbacks
-    def _async_render_GET(self, request):
+    async def _async_render_GET(self, request):
         set_cors_headers(request)
         request.setHeader(
             b"Content-Security-Policy",
@@ -58,7 +52,7 @@ class DownloadResource(Resource):
         )
         server_name, media_id, name = parse_media_id(request)
         if server_name == self.server_name:
-            yield self.media_repo.get_local_media(request, media_id, name)
+            await self.media_repo.get_local_media(request, media_id, name)
         else:
             allow_remote = synapse.http.servlet.parse_boolean(
                 request, "allow_remote", default=True
@@ -72,4 +66,4 @@ class DownloadResource(Resource):
                 respond_404(request)
                 return

-            yield self.media_repo.get_remote_media(request, server_name, media_id, name)
+            await self.media_repo.get_remote_media(request, server_name, media_id, name)
@@ -32,12 +32,11 @@ from canonicaljson import json

 from twisted.internet import defer
 from twisted.internet.error import DNSLookupError
-from twisted.web.resource import Resource
-from twisted.web.server import NOT_DONE_YET

 from synapse.api.errors import Codes, SynapseError
 from synapse.http.client import SimpleHttpClient
 from synapse.http.server import (
+    DirectServeResource,
     respond_with_json,
     respond_with_json_bytes,
     wrap_json_request_handler,
@@ -58,11 +57,11 @@ _charset_match = re.compile(br"<\s*meta[^>]*charset\s*=\s*([a-z0-9-]+)", flags=r
 _content_type_match = re.compile(r'.*; *charset="?(.*?)"?(;|$)', flags=re.I)


-class PreviewUrlResource(Resource):
+class PreviewUrlResource(DirectServeResource):
     isLeaf = True

     def __init__(self, hs, media_repo, media_storage):
-        Resource.__init__(self)
+        super().__init__()

         self.auth = hs.get_auth()
         self.clock = hs.get_clock()
@@ -98,16 +97,11 @@ class PreviewUrlResource(Resource):
     def render_OPTIONS(self, request):
         return respond_with_json(request, 200, {}, send_cors=True)

-    def render_GET(self, request):
-        self._async_render_GET(request)
-        return NOT_DONE_YET
-
     @wrap_json_request_handler
-    @defer.inlineCallbacks
-    def _async_render_GET(self, request):
+    async def _async_render_GET(self, request):

         # XXX: if get_user_by_req fails, what should we do in an async render?
-        requester = yield self.auth.get_user_by_req(request)
+        requester = await self.auth.get_user_by_req(request)
         url = parse_string(request, "url")
         if b"ts" in request.args:
             ts = parse_integer(request, "ts")
@@ -159,7 +153,7 @@ class PreviewUrlResource(Resource):
         else:
             logger.info("Returning cached response")

-        og = yield make_deferred_yieldable(observable.observe())
+        og = await make_deferred_yieldable(defer.maybeDeferred(observable.observe))
         respond_with_json_bytes(request, 200, og, send_cors=True)

     @defer.inlineCallbacks
@@ -17,10 +17,12 @@
 import logging

 from twisted.internet import defer
-from twisted.web.resource import Resource
-from twisted.web.server import NOT_DONE_YET

-from synapse.http.server import set_cors_headers, wrap_json_request_handler
+from synapse.http.server import (
+    DirectServeResource,
+    set_cors_headers,
+    wrap_json_request_handler,
+)
 from synapse.http.servlet import parse_integer, parse_string

 from ._base import (
@@ -34,11 +36,11 @@ from ._base import (
 logger = logging.getLogger(__name__)


-class ThumbnailResource(Resource):
+class ThumbnailResource(DirectServeResource):
     isLeaf = True

     def __init__(self, hs, media_repo, media_storage):
-        Resource.__init__(self)
+        super().__init__()

         self.store = hs.get_datastore()
         self.media_repo = media_repo
@@ -47,13 +49,8 @@ class ThumbnailResource(Resource):
         self.server_name = hs.hostname
         self.clock = hs.get_clock()

-    def render_GET(self, request):
-        self._async_render_GET(request)
-        return NOT_DONE_YET
-
     @wrap_json_request_handler
-    @defer.inlineCallbacks
-    def _async_render_GET(self, request):
+    async def _async_render_GET(self, request):
         set_cors_headers(request)
         server_name, media_id, _ = parse_media_id(request)
         width = parse_integer(request, "width", required=True)
@@ -63,21 +60,21 @@ class ThumbnailResource(Resource):

         if server_name == self.server_name:
             if self.dynamic_thumbnails:
-                yield self._select_or_generate_local_thumbnail(
+                await self._select_or_generate_local_thumbnail(
                     request, media_id, width, height, method, m_type
                 )
             else:
-                yield self._respond_local_thumbnail(
+                await self._respond_local_thumbnail(
                     request, media_id, width, height, method, m_type
                 )
             self.media_repo.mark_recently_accessed(None, media_id)
         else:
             if self.dynamic_thumbnails:
-                yield self._select_or_generate_remote_thumbnail(
+                await self._select_or_generate_remote_thumbnail(
                     request, server_name, media_id, width, height, method, m_type
                 )
             else:
-                yield self._respond_remote_thumbnail(
+                await self._respond_remote_thumbnail(
                     request, server_name, media_id, width, height, method, m_type
                 )
             self.media_repo.mark_recently_accessed(server_name, media_id)
@@ -15,22 +15,24 @@

 import logging

-from twisted.internet import defer
-from twisted.web.resource import Resource
 from twisted.web.server import NOT_DONE_YET

 from synapse.api.errors import SynapseError
-from synapse.http.server import respond_with_json, wrap_json_request_handler
+from synapse.http.server import (
+    DirectServeResource,
+    respond_with_json,
+    wrap_json_request_handler,
+)
 from synapse.http.servlet import parse_string

 logger = logging.getLogger(__name__)


-class UploadResource(Resource):
+class UploadResource(DirectServeResource):
     isLeaf = True

     def __init__(self, hs, media_repo):
-        Resource.__init__(self)
+        super().__init__()

         self.media_repo = media_repo
         self.filepaths = media_repo.filepaths
@@ -41,18 +43,13 @@ class UploadResource(Resource):
         self.max_upload_size = hs.config.max_upload_size
         self.clock = hs.get_clock()

-    def render_POST(self, request):
-        self._async_render_POST(request)
-        return NOT_DONE_YET
-
     def render_OPTIONS(self, request):
         respond_with_json(request, 200, {}, send_cors=True)
         return NOT_DONE_YET

     @wrap_json_request_handler
-    @defer.inlineCallbacks
-    def _async_render_POST(self, request):
-        requester = yield self.auth.get_user_by_req(request)
+    async def _async_render_POST(self, request):
+        requester = await self.auth.get_user_by_req(request)
         # TODO: The checks here are a bit late. The content will have
         # already been uploaded to a tmp file at this point
         content_length = request.getHeader(b"Content-Length").decode("ascii")
@@ -81,7 +78,7 @@ class UploadResource(Resource):
         # disposition = headers.getRawHeaders(b"Content-Disposition")[0]
         # TODO(markjh): parse content-dispostion

-        content_uri = yield self.media_repo.create_content(
+        content_uri = await self.media_repo.create_content(
             media_type, upload_name, request.content, content_length, requester.user
         )
@@ -14,25 +14,18 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from twisted.web.resource import Resource
-from twisted.web.server import NOT_DONE_YET
-
-from synapse.http.server import wrap_html_request_handler
+from synapse.http.server import DirectServeResource, wrap_html_request_handler


-class SAML2ResponseResource(Resource):
+class SAML2ResponseResource(DirectServeResource):
     """A Twisted web resource which handles the SAML response"""

     isLeaf = 1

     def __init__(self, hs):
-        Resource.__init__(self)
+        super().__init__()
         self._saml_handler = hs.get_saml_handler()

-    def render_POST(self, request):
-        self._async_render_POST(request)
-        return NOT_DONE_YET
-
     @wrap_html_request_handler
-    def _async_render_POST(self, request):
-        return self._saml_handler.handle_saml_response(request)
+    async def _async_render_POST(self, request):
+        return await self._saml_handler.handle_saml_response(request)
@@ -24,6 +24,7 @@ from synapse.api.errors import StoreError
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage._base import Cache, SQLBaseStore, db_to_json
 from synapse.storage.background_updates import BackgroundUpdateStore
+from synapse.util import batch_iter
 from synapse.util.caches.descriptors import cached, cachedInlineCallbacks, cachedList

 logger = logging.getLogger(__name__)
@@ -391,22 +392,47 @@ class DeviceWorkerStore(SQLBaseStore):

         return now_stream_id, []

-    @defer.inlineCallbacks
-    def get_user_whose_devices_changed(self, from_key):
-        """Get set of users whose devices have changed since `from_key`.
+    def get_users_whose_devices_changed(self, from_key, user_ids):
+        """Get set of users whose devices have changed since `from_key` that
+        are in the given list of user_ids.
+
+        Args:
+            from_key (str): The device lists stream token
+            user_ids (Iterable[str])
+
+        Returns:
+            Deferred[set[str]]: The set of user_ids whose devices have changed
+            since `from_key`
         """
         from_key = int(from_key)
-        changed = self._device_list_stream_cache.get_all_entities_changed(from_key)
-        if changed is not None:
-            defer.returnValue(set(changed))
+
+        # Get set of users who *may* have changed. Users not in the returned
+        # list have definitely not changed.
+        to_check = list(
+            self._device_list_stream_cache.get_entities_changed(user_ids, from_key)
+        )
+
+        if not to_check:
+            return defer.succeed(set())
+
+        def _get_users_whose_devices_changed_txn(txn):
+            changes = set()

             sql = """
-                SELECT DISTINCT user_id FROM device_lists_stream WHERE stream_id > ?
+                SELECT DISTINCT user_id FROM device_lists_stream
+                WHERE stream_id > ?
+                AND user_id IN (%s)
             """
-        rows = yield self._execute(
-            "get_user_whose_devices_changed", None, sql, from_key
-        )
-        defer.returnValue(set(row[0] for row in rows))
+
+            for chunk in batch_iter(to_check, 100):
+                txn.execute(sql % (",".join("?" for _ in chunk),), (from_key,) + chunk)
+                changes.update(user_id for user_id, in txn)
+
+            return changes
+
+        return self.runInteraction(
+            "get_users_whose_devices_changed", _get_users_whose_devices_changed_txn
+        )

     def get_all_device_list_changes_for_remotes(self, from_key, to_key):
         """Return a list of `(stream_id, user_id, destination)` which is the
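The new `get_users_whose_devices_changed` above avoids a single huge `IN (...)` clause by querying the candidate user list in batches of 100. The standalone sketch below demonstrates that chunked-query technique against an in-memory SQLite table; `batch_iter` is re-implemented here for illustration (Synapse's own helper lives in `synapse.util`), and the table schema is reduced to the two columns the query touches.

```python
import sqlite3
from itertools import islice


def batch_iter(iterable, size):
    """Yield successive tuples of at most `size` items (an illustrative
    re-implementation of synapse.util.batch_iter)."""
    it = iter(iterable)
    return iter(lambda: tuple(islice(it, size)), ())


def users_changed_since(conn, from_key, user_ids, batch_size=100):
    """Return the user_ids with a device_lists_stream row newer than from_key."""
    sql = """
        SELECT DISTINCT user_id FROM device_lists_stream
        WHERE stream_id > ?
        AND user_id IN (%s)
    """
    changes = set()
    for chunk in batch_iter(user_ids, batch_size):
        # One placeholder per user in this chunk, as in the diff above.
        placeholders = ",".join("?" for _ in chunk)
        rows = conn.execute(sql % (placeholders,), (from_key,) + chunk)
        changes.update(user_id for user_id, in rows)
    return changes


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE device_lists_stream (stream_id INTEGER, user_id TEXT)")
conn.executemany(
    "INSERT INTO device_lists_stream VALUES (?, ?)",
    [(1, "@a:hs"), (5, "@b:hs"), (9, "@c:hs")],
)

print(users_changed_since(conn, 4, ["@a:hs", "@b:hs", "@c:hs"], batch_size=2))
```

Batching keeps each statement's parameter count bounded, which matters because some databases limit the number of bound parameters per query.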
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2019 New Vector Ltd
+# Copyright 2019 Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -15,7 +16,10 @@
 
 import os
 
-from synapse.config.tls import TlsConfig
+from OpenSSL import SSL
+
+from synapse.config.tls import ConfigError, TlsConfig
+from synapse.crypto.context_factory import ClientTLSOptionsFactory
 
 from tests.unittest import TestCase
 
@@ -78,3 +82,112 @@ s4niecZKPBizL6aucT59CsunNmmb5Glq8rlAcU+1ZTZZzGYqVYhF6axB9Qg=
                 "or use Synapse's ACME support to provision one."
             ),
         )
+
+    def test_tls_client_minimum_default(self):
+        """
+        The default client TLS version is 1.0.
+        """
+        config = {}
+        t = TestConfig()
+        t.read_config(config, config_dir_path="", data_dir_path="")
+
+        self.assertEqual(t.federation_client_minimum_tls_version, "1")
+
+    def test_tls_client_minimum_set(self):
+        """
+        The default client TLS version can be set to 1.0, 1.1, and 1.2.
+        """
+        config = {"federation_client_minimum_tls_version": 1}
+        t = TestConfig()
+        t.read_config(config, config_dir_path="", data_dir_path="")
+        self.assertEqual(t.federation_client_minimum_tls_version, "1")
+
+        config = {"federation_client_minimum_tls_version": 1.1}
+        t = TestConfig()
+        t.read_config(config, config_dir_path="", data_dir_path="")
+        self.assertEqual(t.federation_client_minimum_tls_version, "1.1")
+
+        config = {"federation_client_minimum_tls_version": 1.2}
+        t = TestConfig()
+        t.read_config(config, config_dir_path="", data_dir_path="")
+        self.assertEqual(t.federation_client_minimum_tls_version, "1.2")
+
+        # Also test a string version
+        config = {"federation_client_minimum_tls_version": "1"}
+        t = TestConfig()
+        t.read_config(config, config_dir_path="", data_dir_path="")
+        self.assertEqual(t.federation_client_minimum_tls_version, "1")
+
+        config = {"federation_client_minimum_tls_version": "1.2"}
+        t = TestConfig()
+        t.read_config(config, config_dir_path="", data_dir_path="")
+        self.assertEqual(t.federation_client_minimum_tls_version, "1.2")
+
+    def test_tls_client_minimum_1_point_3_missing(self):
+        """
+        If TLS 1.3 support is missing and it's configured, it will raise a
+        ConfigError.
+        """
+        # thanks i hate it
+        if hasattr(SSL, "OP_NO_TLSv1_3"):
+            OP_NO_TLSv1_3 = SSL.OP_NO_TLSv1_3
+            delattr(SSL, "OP_NO_TLSv1_3")
+            self.addCleanup(setattr, SSL, "SSL.OP_NO_TLSv1_3", OP_NO_TLSv1_3)
+            assert not hasattr(SSL, "OP_NO_TLSv1_3")
+
+        config = {"federation_client_minimum_tls_version": 1.3}
+        t = TestConfig()
+        with self.assertRaises(ConfigError) as e:
+            t.read_config(config, config_dir_path="", data_dir_path="")
+        self.assertEqual(
+            e.exception.args[0],
+            (
+                "federation_client_minimum_tls_version cannot be 1.3, "
+                "your OpenSSL does not support it"
+            ),
+        )
+
+    def test_tls_client_minimum_1_point_3_exists(self):
+        """
+        If TLS 1.3 support exists and it's configured, it will be settable.
+        """
+        # thanks i hate it, still
+        if not hasattr(SSL, "OP_NO_TLSv1_3"):
+            SSL.OP_NO_TLSv1_3 = 0x00
+            self.addCleanup(lambda: delattr(SSL, "OP_NO_TLSv1_3"))
+            assert hasattr(SSL, "OP_NO_TLSv1_3")
+
+        config = {"federation_client_minimum_tls_version": 1.3}
+        t = TestConfig()
+        t.read_config(config, config_dir_path="", data_dir_path="")
+        self.assertEqual(t.federation_client_minimum_tls_version, "1.3")
+
+    def test_tls_client_minimum_set_passed_through_1_2(self):
+        """
+        The configured TLS version is correctly configured by the ContextFactory.
+        """
+        config = {"federation_client_minimum_tls_version": 1.2}
+        t = TestConfig()
+        t.read_config(config, config_dir_path="", data_dir_path="")
+
+        cf = ClientTLSOptionsFactory(t)
+
+        # The context has had NO_TLSv1_1 and NO_TLSv1_0 set, but not NO_TLSv1_2
+        self.assertNotEqual(cf._verify_ssl._options & SSL.OP_NO_TLSv1, 0)
+        self.assertNotEqual(cf._verify_ssl._options & SSL.OP_NO_TLSv1_1, 0)
+        self.assertEqual(cf._verify_ssl._options & SSL.OP_NO_TLSv1_2, 0)
+
+    def test_tls_client_minimum_set_passed_through_1_0(self):
+        """
+        The configured TLS version is correctly configured by the ContextFactory.
+        """
+        config = {"federation_client_minimum_tls_version": 1}
+        t = TestConfig()
+        t.read_config(config, config_dir_path="", data_dir_path="")
+
+        cf = ClientTLSOptionsFactory(t)
+
+        # The context has not had any of the NO_TLS set.
+        self.assertEqual(cf._verify_ssl._options & SSL.OP_NO_TLSv1, 0)
+        self.assertEqual(cf._verify_ssl._options & SSL.OP_NO_TLSv1_1, 0)
+        self.assertEqual(cf._verify_ssl._options & SSL.OP_NO_TLSv1_2, 0)
@@ -22,7 +22,6 @@ from binascii import unhexlify
 from mock import Mock
 from six.moves.urllib import parse
 
-from twisted.internet import defer, reactor
 from twisted.internet.defer import Deferred
 
 from synapse.rest.media.v1._base import FileInfo
@@ -34,15 +33,17 @@ from synapse.util.logcontext import make_deferred_yieldable
 from tests import unittest
 
 
-class MediaStorageTests(unittest.TestCase):
-    def setUp(self):
+class MediaStorageTests(unittest.HomeserverTestCase):
+
+    needs_threadpool = True
+
+    def prepare(self, reactor, clock, hs):
         self.test_dir = tempfile.mkdtemp(prefix="synapse-tests-")
+        self.addCleanup(shutil.rmtree, self.test_dir)
 
         self.primary_base_path = os.path.join(self.test_dir, "primary")
         self.secondary_base_path = os.path.join(self.test_dir, "secondary")
 
-        hs = Mock()
-        hs.get_reactor = Mock(return_value=reactor)
         hs.config.media_store_path = self.primary_base_path
 
         storage_providers = [FileStorageProviderBackend(hs, self.secondary_base_path)]
@@ -52,10 +53,6 @@ class MediaStorageTests(unittest.TestCase):
             hs, self.primary_base_path, self.filepaths, storage_providers
         )
 
-    def tearDown(self):
-        shutil.rmtree(self.test_dir)
-
-    @defer.inlineCallbacks
     def test_ensure_media_is_in_local_cache(self):
         media_id = "some_media_id"
         test_body = "Test\n"
@@ -73,7 +70,15 @@ class MediaStorageTests(unittest.TestCase):
         # Now we run ensure_media_is_in_local_cache, which should copy the file
         # to the local cache.
         file_info = FileInfo(None, media_id)
-        local_path = yield self.media_storage.ensure_media_is_in_local_cache(file_info)
+
+        # This uses a real blocking threadpool so we have to wait for it to be
+        # actually done :/
+        x = self.media_storage.ensure_media_is_in_local_cache(file_info)
+
+        # Hotloop until the threadpool does its job...
+        self.wait_on_thread(x)
+
+        local_path = self.get_success(x)
 
         self.assertTrue(os.path.exists(local_path))
 
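The test above exercises copying media from a secondary storage provider back into the primary cache. A filesystem-only sketch of that behaviour, assuming nothing about Synapse's actual `MediaStorage` internals (the helper name and signature here are hypothetical):

```python
import os
import shutil


def ensure_in_local_cache(primary, secondaries, rel_path):
    # If the file is already in the primary store, nothing to do;
    # otherwise copy it back from the first provider that has it.
    local = os.path.join(primary, rel_path)
    if os.path.exists(local):
        return local
    for base in secondaries:
        candidate = os.path.join(base, rel_path)
        if os.path.exists(candidate):
            os.makedirs(os.path.dirname(local), exist_ok=True)
            shutil.copyfile(candidate, local)
            return local
    raise FileNotFoundError(rel_path)
```

In Synapse the copy happens on a real threadpool, which is exactly why the test needs `needs_threadpool` and `wait_on_thread`.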
@@ -17,6 +17,7 @@ import gc
 import hashlib
 import hmac
 import logging
+import time
 
 from mock import Mock
 
@@ -24,7 +25,8 @@ from canonicaljson import json
 
 import twisted
 import twisted.logger
-from twisted.internet.defer import Deferred
+from twisted.internet.defer import Deferred, succeed
+from twisted.python.threadpool import ThreadPool
 from twisted.trial import unittest
 
 from synapse.api.constants import EventTypes
@@ -164,6 +166,7 @@ class HomeserverTestCase(TestCase):
 
     servlets = []
     hijack_auth = True
+    needs_threadpool = False
 
     def setUp(self):
         """
@@ -192,16 +195,20 @@ class HomeserverTestCase(TestCase):
         if self.hijack_auth:
 
             def get_user_by_access_token(token=None, allow_guest=False):
-                return {
-                    "user": UserID.from_string(self.helper.auth_user_id),
-                    "token_id": 1,
-                    "is_guest": False,
-                }
+                return succeed(
+                    {
+                        "user": UserID.from_string(self.helper.auth_user_id),
+                        "token_id": 1,
+                        "is_guest": False,
+                    }
+                )
 
             def get_user_by_req(request, allow_guest=False, rights="access"):
-                return create_requester(
-                    UserID.from_string(self.helper.auth_user_id), 1, False, None
-                )
+                return succeed(
+                    create_requester(
+                        UserID.from_string(self.helper.auth_user_id), 1, False, None
+                    )
+                )
 
             self.hs.get_auth().get_user_by_req = get_user_by_req
             self.hs.get_auth().get_user_by_access_token = get_user_by_access_token
@@ -209,9 +216,26 @@ class HomeserverTestCase(TestCase):
                 return_value="1234"
             )
 
+        if self.needs_threadpool:
+            self.reactor.threadpool = ThreadPool()
+            self.addCleanup(self.reactor.threadpool.stop)
+            self.reactor.threadpool.start()
+
         if hasattr(self, "prepare"):
             self.prepare(self.reactor, self.clock, self.hs)
 
+    def wait_on_thread(self, deferred, timeout=10):
+        """
+        Wait until a Deferred is done, where it's waiting on a real thread.
+        """
+        start_time = time.time()
+
+        while not deferred.called:
+            if start_time + timeout < time.time():
+                raise ValueError("Timed out waiting for threadpool")
+            self.reactor.advance(0.01)
+            time.sleep(0.01)
+
     def make_homeserver(self, reactor, clock):
         """
         Make and return a homeserver.
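`wait_on_thread` above has to pump both the fake reactor and real wall-clock time, because the threadpool runs outside the simulated clock. Stripped of the reactor, the polling-with-timeout shape looks like this (`FakeDeferred` is a stand-in for a real Deferred's `called` flag, and the names are illustrative):

```python
import time


class FakeDeferred:
    """Minimal stand-in exposing only the `called` flag we poll."""
    def __init__(self):
        self.called = False


def wait_on(deferred, timeout=10, tick=0.01):
    # Poll the Deferred's `called` flag until the background thread
    # fires it, raising once `timeout` seconds of real time elapse.
    start = time.time()
    while not deferred.called:
        if start + timeout < time.time():
            raise ValueError("Timed out waiting for threadpool")
        time.sleep(tick)
```

The real method additionally calls `self.reactor.advance(0.01)` each tick so that any fake-clock timers keep firing while the thread runs.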