Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes

This commit is contained in:
Erik Johnston 2019-10-25 10:19:09 +01:00
commit b409d51dee
375 changed files with 6860 additions and 5086 deletions


@ -1,48 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Render a TAP results file as GitHub-flavoured Markdown.
# Usage: python format_tap.py <results.tap> <job name>
import sys

from tap.parser import Parser
from tap.line import Result, Unknown, Diagnostic

out = ["### TAP Output for " + sys.argv[2]]

p = Parser()
in_error = False

for line in p.parse_file(sys.argv[1]):
    if isinstance(line, Result):
        if in_error:
            out.append("")
            out.append("</pre></code></details>")
            out.append("")
            out.append("----")
            out.append("")
        in_error = False

        if not line.ok and not line.todo:
            in_error = True
            out.append("FAILURE Test #%d: ``%s``" % (line.number, line.description))
            out.append("")
            out.append("<details><summary>Show log</summary><code><pre>")
    elif isinstance(line, Diagnostic) and in_error:
        out.append(line.text)

if out:
    # Drop the trailing separator emitted after the final failure block.
    for line in out[:-3]:
        print(line)


@ -1,3 +1,99 @@
Synapse 1.5.0rc1 (2019-10-24)
=============================
This release includes a database migration step **which may take a long time to complete**:
- Allow devices to be marked as hidden, for use by features such as cross-signing.
This adds a new field with a default value to the devices table in the database,
and so the database upgrade may take a long time depending on how many devices
are in the database. ([\#5759](https://github.com/matrix-org/synapse/issues/5759))
Features
--------
- Improve quality of thumbnails for 1-bit/8-bit color palette images. ([\#2142](https://github.com/matrix-org/synapse/issues/2142))
- Add ability to upload cross-signing signatures. ([\#5726](https://github.com/matrix-org/synapse/issues/5726))
- Allow uploading of cross-signing keys. ([\#5769](https://github.com/matrix-org/synapse/issues/5769))
- CAS login now provides a default display name for users if a `displayname_attribute` is set in the configuration file. ([\#6114](https://github.com/matrix-org/synapse/issues/6114))
- Reject all pending invites for a user during deactivation. ([\#6125](https://github.com/matrix-org/synapse/issues/6125))
- Add config option to suppress client-side resource limit alerting. ([\#6173](https://github.com/matrix-org/synapse/issues/6173))
Bugfixes
--------
- Return an HTTP 404 instead of 400 when requesting a filter by ID that is unknown to the server. Thanks to @krombel for contributing this! ([\#2380](https://github.com/matrix-org/synapse/issues/2380))
- Fix a bug where users could be invited twice to the same group. ([\#3436](https://github.com/matrix-org/synapse/issues/3436))
- Fix `/createRoom` failing with badly-formatted MXIDs in the invitee list. Thanks to @wener291! ([\#4088](https://github.com/matrix-org/synapse/issues/4088))
- Make the `synapse_port_db` script create the right indexes on a new PostgreSQL database. ([\#6102](https://github.com/matrix-org/synapse/issues/6102), [\#6178](https://github.com/matrix-org/synapse/issues/6178), [\#6243](https://github.com/matrix-org/synapse/issues/6243))
- Fix a bug when uploading a large file: Synapse responds with `M_UNKNOWN` while it should be `M_TOO_LARGE` according to the spec. Contributed by Anshul Angaria. ([\#6109](https://github.com/matrix-org/synapse/issues/6109))
- Fix user push rules being deleted from a room when it is upgraded. ([\#6144](https://github.com/matrix-org/synapse/issues/6144))
- Don't 500 when trying to exchange a revoked 3PID invite. ([\#6147](https://github.com/matrix-org/synapse/issues/6147))
- Fix transferring notifications and tags when joining an upgraded room that is new to your server. ([\#6155](https://github.com/matrix-org/synapse/issues/6155))
- Fix bug where guest account registration can wedge after restart. ([\#6161](https://github.com/matrix-org/synapse/issues/6161))
- Fix monthly active user reaping when reserved users are specified. ([\#6168](https://github.com/matrix-org/synapse/issues/6168))
- Fix `/federation/v1/state` endpoint not supporting newer room versions. ([\#6170](https://github.com/matrix-org/synapse/issues/6170))
- Fix a bug where we were updating censored events as bytes rather than text, occasionally causing invalid JSON to be inserted, breaking APIs that attempted to fetch such events. ([\#6186](https://github.com/matrix-org/synapse/issues/6186))
- Fix occasional missed updates in the room and user directories. ([\#6187](https://github.com/matrix-org/synapse/issues/6187))
- Fix tracing of non-JSON APIs, `/media`, `/key` etc. ([\#6195](https://github.com/matrix-org/synapse/issues/6195))
- Fix bug where presence would not get timed out correctly if a synchrotron worker is used and restarted. ([\#6212](https://github.com/matrix-org/synapse/issues/6212))
- Add two additional `BOOLEAN_COLUMNS` to the `synapse_port_db` script so it can convert from database schema v56. ([\#6216](https://github.com/matrix-org/synapse/issues/6216))
- Fix a bug where the Synapse demo script blacklisted `::1` (IPv6 localhost) from receiving federation traffic. ([\#6229](https://github.com/matrix-org/synapse/issues/6229))
Updates to the Docker image
---------------------------
- Fix logging getting lost for the docker image. ([\#6197](https://github.com/matrix-org/synapse/issues/6197))
Internal Changes
----------------
- Update `user_filters` table to have a unique index, and non-null columns. Thanks to @pik for contributing this. ([\#1172](https://github.com/matrix-org/synapse/issues/1172), [\#6175](https://github.com/matrix-org/synapse/issues/6175), [\#6184](https://github.com/matrix-org/synapse/issues/6184))
- Move lookup-related functions from RoomMemberHandler to IdentityHandler. ([\#5978](https://github.com/matrix-org/synapse/issues/5978))
- Improve performance of the public room list directory. ([\#6019](https://github.com/matrix-org/synapse/issues/6019), [\#6152](https://github.com/matrix-org/synapse/issues/6152), [\#6153](https://github.com/matrix-org/synapse/issues/6153), [\#6154](https://github.com/matrix-org/synapse/issues/6154))
- Edit header dicts docstrings in `SimpleHttpClient` to note that `str` or `bytes` can be passed as header keys. ([\#6077](https://github.com/matrix-org/synapse/issues/6077))
- Add snapcraft packaging information. Contributed by @devec0. ([\#6084](https://github.com/matrix-org/synapse/issues/6084), [\#6191](https://github.com/matrix-org/synapse/issues/6191))
- Kill off half-implemented password-reset via sms. ([\#6101](https://github.com/matrix-org/synapse/issues/6101))
- Remove `get_user_by_req` opentracing span and add some tags. ([\#6108](https://github.com/matrix-org/synapse/issues/6108))
- Drop some unused database tables. ([\#6115](https://github.com/matrix-org/synapse/issues/6115))
- Add env var to turn on tracking of log context changes. ([\#6127](https://github.com/matrix-org/synapse/issues/6127))
- Refactor configuration loading to allow better typechecking. ([\#6137](https://github.com/matrix-org/synapse/issues/6137))
- Log responder when responding to media request. ([\#6139](https://github.com/matrix-org/synapse/issues/6139))
- Improve performance of `find_next_generated_user_id` DB query. ([\#6148](https://github.com/matrix-org/synapse/issues/6148))
- Expand type-checking on modules imported by `synapse.config`. ([\#6150](https://github.com/matrix-org/synapse/issues/6150))
- Use Postgres ANY for selecting many values. ([\#6156](https://github.com/matrix-org/synapse/issues/6156))
- Add more caching to `_get_joined_users_from_context` DB query. ([\#6159](https://github.com/matrix-org/synapse/issues/6159))
- Add some metrics on the federation sender. ([\#6160](https://github.com/matrix-org/synapse/issues/6160))
- Add some logging to the rooms stats updates, to try to track down a flaky test. ([\#6167](https://github.com/matrix-org/synapse/issues/6167))
- Remove unused `timeout` parameter from `_get_public_room_list`. ([\#6179](https://github.com/matrix-org/synapse/issues/6179))
- Reject (accidental) attempts to insert bytes into postgres tables. ([\#6186](https://github.com/matrix-org/synapse/issues/6186))
- Make `version` optional in body of `PUT /room_keys/version/{version}`, since it's redundant. ([\#6189](https://github.com/matrix-org/synapse/issues/6189))
- Make storage layer responsible for adding device names to key, rather than the handler. ([\#6193](https://github.com/matrix-org/synapse/issues/6193))
- Port `synapse.rest.admin` module to use async/await. ([\#6196](https://github.com/matrix-org/synapse/issues/6196))
- Enforce that all boolean configuration values are lowercase in CI. ([\#6203](https://github.com/matrix-org/synapse/issues/6203))
- Remove some unused event-auth code. ([\#6214](https://github.com/matrix-org/synapse/issues/6214))
- Remove `Auth.check` method. ([\#6217](https://github.com/matrix-org/synapse/issues/6217))
- Remove `format_tap.py` script in favour of a perl reimplementation in Sytest's repo. ([\#6219](https://github.com/matrix-org/synapse/issues/6219))
- Refactor storage layer in preparation to support having multiple databases. ([\#6231](https://github.com/matrix-org/synapse/issues/6231))
- Remove some extra quotation marks across the codebase. ([\#6236](https://github.com/matrix-org/synapse/issues/6236))
Synapse 1.4.1 (2019-10-18)
==========================
No changes since 1.4.1rc1.
Synapse 1.4.1rc1 (2019-10-17)
=============================
Bugfixes
--------
- Fix bug where redacted events were sometimes incorrectly censored in the database, breaking APIs that attempted to fetch such events. ([\#6185](https://github.com/matrix-org/synapse/issues/6185), [5b0e9948](https://github.com/matrix-org/synapse/commit/5b0e9948eaae801643e594b5abc8ee4b10bd194e))
Synapse 1.4.0 (2019-10-03)
==========================


@ -8,11 +8,12 @@ include demo/demo.tls.dh
include demo/*.py
include demo/*.sh
recursive-include synapse/storage/schema *.sql
recursive-include synapse/storage/schema *.sql.postgres
recursive-include synapse/storage/schema *.sql.sqlite
recursive-include synapse/storage/schema *.py
recursive-include synapse/storage/schema *.txt
recursive-include synapse/storage *.sql
recursive-include synapse/storage *.sql.postgres
recursive-include synapse/storage *.sql.sqlite
recursive-include synapse/storage *.py
recursive-include synapse/storage *.txt
recursive-include synapse/storage *.md
recursive-include docs *
recursive-include scripts *


@ -1 +0,0 @@
Update `user_filters` table to have a unique index, and non-null columns. Thanks to @pik for contributing this.


@ -1 +0,0 @@
Improve quality of thumbnails for 1-bit/8-bit color palette images.


@ -1 +0,0 @@
Return an HTTP 404 instead of 400 when requesting a filter by ID that is unknown to the server. Thanks to @krombel for contributing this!


@ -1 +0,0 @@
Fix a problem where users could be invited twice to the same group.


@ -1 +0,0 @@
Added domain validation when including a list of invitees upon room creation.


@ -1 +0,0 @@
Move lookup-related functions from RoomMemberHandler to IdentityHandler.


@ -1 +0,0 @@
Improve performance of the public room list directory.


@ -1 +0,0 @@
Edit header dicts docstrings in SimpleHttpClient to note that `str` or `bytes` can be passed as header keys.


@ -1 +0,0 @@
Add snapcraft packaging information. Contributed by @devec0.


@ -1 +0,0 @@
Kill off half-implemented password-reset via sms.


@ -1 +0,0 @@
Remove `get_user_by_req` opentracing span and add some tags.


@ -1 +0,0 @@
Fix a bug when uploading a large file: Synapse responds with `M_UNKNOWN` while it should be `M_TOO_LARGE` according to the spec. Contributed by Anshul Angaria.


@ -1 +0,0 @@
Drop some unused database tables.


@ -1 +0,0 @@
Reject all pending invites for a user during deactivation.


@ -1 +0,0 @@
Add env var to turn on tracking of log context changes.


@ -1 +0,0 @@
Refactor configuration loading to allow better typechecking.


@ -1 +0,0 @@
Log responder when responding to media request.


@ -1 +0,0 @@
Prevent user push rules being deleted from a room when it is upgraded.


@ -1 +0,0 @@
Don't 500 when trying to exchange a revoked 3PID invite.


@ -1 +0,0 @@
Improve performance of `find_next_generated_user_id` DB query.


@ -1 +0,0 @@
Expand type-checking on modules imported by synapse.config.


@ -1 +0,0 @@
Improve performance of the public room list directory.


@ -1 +0,0 @@
Improve performance of the public room list directory.


@ -1 +0,0 @@
Improve performance of the public room list directory.


@ -1 +0,0 @@
Fix transferring notifications and tags when joining an upgraded room that is new to your server.


@ -1 +0,0 @@
Use Postgres ANY for selecting many values.


@ -1 +0,0 @@
Add more caching to `_get_joined_users_from_context` DB query.


@ -1 +0,0 @@
Add some metrics on the federation sender.


@ -1 +0,0 @@
Fix bug where guest account registration can wedge after restart.


@ -1 +0,0 @@
Add some logging to the rooms stats updates, to try to track down a flaky test.


@ -1 +0,0 @@
Fix monthly active user reaping where reserved users are specified.


@ -1 +0,0 @@
Fix /federation/v1/state endpoint for recent room versions.


@ -1 +0,0 @@
Update `user_filters` table to have a unique index, and non-null columns. Thanks to @pik for contributing this.


@ -1 +0,0 @@
Make the `synapse_port_db` script create the right indexes on a new PostgreSQL database.


@ -1 +0,0 @@
Remove unused `timeout` parameter from `_get_public_room_list`.


@ -1 +0,0 @@
Update `user_filters` table to have a unique index, and non-null columns. Thanks to @pik for contributing this.


@ -1 +0,0 @@
Fix bug where redacted events were sometimes incorrectly censored in the database, breaking APIs that attempted to fetch such events.


@ -1 +0,0 @@
Fix a bug where we were updating censored events as bytes rather than text, occasionally causing invalid JSON to be inserted, breaking APIs that attempted to fetch such events.


@ -1 +0,0 @@
Fix occasional missed updates in the room and user directories.


@ -1 +0,0 @@
Add snapcraft packaging information. Contributed by @devec0.

changelog.d/6247.bugfix

@ -0,0 +1 @@
Update list of boolean columns in `synapse_port_db`.

changelog.d/6248.misc

@ -0,0 +1 @@
Move schema delta files to the correct data store.

changelog.d/6250.misc

@ -0,0 +1 @@
Reduce verbosity of user/room stats.

changelog.d/6251.misc

@ -0,0 +1 @@
Reduce impact of debug logging.


@ -1,39 +1,26 @@
# Synapse Docker
FIXME: this is out-of-date as of
https://github.com/matrix-org/synapse/issues/5518. Contributions to bring it up
to date would be welcome.
### Automated configuration
It is recommended that you use Docker Compose to run your containers, including
this image and a Postgres server. A sample ``docker-compose.yml`` is provided,
including example labels for reverse proxying and other artifacts.
Read the section about environment variables and set at least mandatory variables,
then run the server:
```
docker-compose up -d
```
If secrets are not specified in the environment variables, they will be generated
as part of the startup. Please ensure these secrets are kept between launches of the
Docker container, as their loss may require users to log in again.
### Manual configuration
### Configuration
A sample ``docker-compose.yml`` is provided, including example labels for
reverse proxying and other artifacts. The docker-compose file is an example;
comment or uncomment sections as appropriate for your use case.
Specify a ``SYNAPSE_CONFIG_PATH``, preferably to a persistent path,
to use manual configuration. To generate a fresh ``homeserver.yaml``, simply run:
to use manual configuration.
To generate a fresh `homeserver.yaml`, you can use the `generate` command.
(See the [documentation](../../docker/README.md#generating-a-configuration-file)
for more information.) You will need to specify appropriate values for at least the
`SYNAPSE_SERVER_NAME` and `SYNAPSE_REPORT_STATS` environment variables. For example:
```
docker-compose run --rm -e SYNAPSE_SERVER_NAME=my.matrix.host synapse generate
docker-compose run --rm -e SYNAPSE_SERVER_NAME=my.matrix.host -e SYNAPSE_REPORT_STATS=yes synapse generate
```
(This will also generate necessary signing keys.)
Then, customize your configuration and run the server:
```


@ -15,13 +15,10 @@ services:
restart: unless-stopped
# See the readme for a full documentation of the environment settings
environment:
- SYNAPSE_SERVER_NAME=my.matrix.host
- SYNAPSE_REPORT_STATS=no
- SYNAPSE_ENABLE_REGISTRATION=yes
- SYNAPSE_LOG_LEVEL=INFO
- POSTGRES_PASSWORD=changeme
- SYNAPSE_CONFIG_PATH=/etc/homeserver.yaml
volumes:
# You may either store all the files in a local folder
- ./matrix-config:/etc
- ./files:/data
# .. or you may split this between different storage points
# - ./files:/data
@ -35,9 +32,23 @@ services:
- 8448:8448/tcp
# ... or use a reverse proxy, here is an example for traefik:
labels:
# The following lines are valid for Traefik version 1.x:
- traefik.enable=true
- traefik.frontend.rule=Host:my.matrix.host
- traefik.port=8008
# Alternatively, for Traefik version 2.0:
- traefik.enable=true
- traefik.http.routers.http-synapse.entryPoints=http
- traefik.http.routers.http-synapse.rule=Host(`my.matrix.host`)
- traefik.http.middlewares.https_redirect.redirectscheme.scheme=https
- traefik.http.middlewares.https_redirect.redirectscheme.permanent=true
- traefik.http.routers.http-synapse.middlewares=https_redirect
- traefik.http.routers.https-synapse.entryPoints=https
- traefik.http.routers.https-synapse.rule=Host(`my.matrix.host`)
- traefik.http.routers.https-synapse.service=synapse
- traefik.http.routers.https-synapse.tls=true
- traefik.http.services.synapse.loadbalancer.server.port=8008
- traefik.http.routers.https-synapse.tls.certResolver=le-ssl
db:
image: docker.io/postgres:10-alpine


@ -339,7 +339,7 @@ def main(stdscr):
    root_logger = logging.getLogger()

    formatter = logging.Formatter(
        "%(asctime)s - %(name)s - %(lineno)d - " "%(levelname)s - %(message)s"
        "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s"
    )
    if not os.path.exists("logs"):
        os.makedirs("logs")
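Several hunks in this commit, like the one above, simply merge adjacent string literals that an earlier black run had split. A minimal sketch of why the two spellings are interchangeable (plain Python, nothing Synapse-specific assumed):

```
# Adjacent string literals are concatenated at compile time, so the
# split and merged forms build the identical format string:
split = "%(asctime)s - %(name)s - %(lineno)d - " "%(levelname)s - %(message)s"
merged = "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s"
assert split == merged
```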


@ -36,7 +36,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
    args = [room_id]

    if limit:
        sql += " ORDER BY topological_ordering DESC, stream_ordering DESC " "LIMIT ?"
        sql += " ORDER BY topological_ordering DESC, stream_ordering DESC LIMIT ?"
        args.append(limit)
@ -53,7 +53,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
    for event in events:
        c = conn.execute(
            "SELECT state_group FROM event_to_state_groups " "WHERE event_id = ?",
            "SELECT state_group FROM event_to_state_groups WHERE event_id = ?",
            (event.event_id,),
        )

debian/changelog

@ -1,3 +1,9 @@
matrix-synapse-py3 (1.4.1) stable; urgency=medium
* New synapse release 1.4.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 18 Oct 2019 10:13:27 +0100
matrix-synapse-py3 (1.4.0) stable; urgency=medium
* New synapse release 1.4.0.


@ -77,14 +77,13 @@ for port in 8080 8081 8082; do
# Reduce the blacklist
blacklist=$(cat <<-BLACK
# Set the blacklist so that it doesn't include 127.0.0.1
# Set the blacklist so that it doesn't include 127.0.0.1, ::1
federation_ip_range_blacklist:
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
BLACK


@ -24,3 +24,5 @@ loggers:
root:
    level: {{ SYNAPSE_LOG_LEVEL or "INFO" }}
    handlers: [console]

disable_existing_loggers: false
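For context on why `disable_existing_loggers` matters: Python's `logging.config.dictConfig` disables every logger that already exists unless this flag is false, silently dropping their records. A minimal standard-library sketch (nothing Synapse-specific assumed):

```
import logging
import logging.config

early = logging.getLogger("synapse.created.before.config")

logging.config.dictConfig(
    {
        "version": 1,
        "handlers": {"console": {"class": "logging.StreamHandler"}},
        "root": {"level": "INFO", "handlers": ["console"]},
        # Without this, dictConfig would disable `early` (and every other
        # logger created before this call), silently dropping its records.
        "disable_existing_loggers": False,
    }
)

early.info("still emitted: existing loggers were not disabled")
```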


@ -27,17 +27,21 @@ connect to a postgres database.
## Set up database
Assuming your PostgreSQL database user is called `postgres`, create a
user `synapse_user` with:
Assuming your PostgreSQL database user is called `postgres`, first authenticate as the database user with:
su - postgres
# Or, if your system uses sudo to get administrative rights
sudo -u postgres bash
Then, create a user ``synapse_user`` with:
createuser --pwprompt synapse_user
Before you can authenticate with the `synapse_user`, you must create a
database that it can access. To create a database, first connect to the
database with your database user:
su - postgres
su - postgres # Or: sudo -u postgres bash
psql
and then run:


@ -86,7 +86,7 @@ pid_file: DATADIR/homeserver.pid
# Whether room invites to users on this server should be blocked
# (except those sent by local server admins). The default is False.
#
#block_non_admin_invites: True
#block_non_admin_invites: true
# Room searching
#
@ -239,9 +239,8 @@ listeners:
# Global blocking
#
#hs_disabled: False
#hs_disabled: false
#hs_disabled_message: 'Human readable reason for why the HS is blocked'
#hs_disabled_limit_type: 'error code(str), to help clients decode reason'
# Monthly Active User Blocking
#
@ -261,15 +260,22 @@ listeners:
# sign up in a short space of time never to return after their initial
# session.
#
#limit_usage_by_mau: False
# 'mau_limit_alerting' is a means of limiting client-side alerting
# should the mau limit be reached. This is useful for small instances
# where the admin has 5 mau seats (say) for 5 specific people and no
# interest in increasing the mau limit further. Defaults to True, which
# means that alerting is enabled.
#
#limit_usage_by_mau: false
#max_mau_value: 50
#mau_trial_days: 2
#mau_limit_alerting: false
# If enabled, the metrics for the number of monthly active users will
# be populated, however no one will be limited. If limit_usage_by_mau
# is true, this is implied to be true.
#
#mau_stats_only: False
#mau_stats_only: false
# Sometimes the server admin will want to ensure certain accounts are
# never blocked by mau checking. These accounts are specified here.
@ -294,7 +300,7 @@ listeners:
#
# Uncomment the below lines to enable:
#limit_remote_rooms:
# enabled: True
# enabled: true
# complexity: 1.0
# complexity_error: "This room is too complex."
@ -411,7 +417,7 @@ acme:
# ACME support is disabled by default. Set this to `true` and uncomment
# tls_certificate_path and tls_private_key_path above to enable it.
#
enabled: False
enabled: false
# Endpoint to use to request certificates. If you only want to test,
# use Let's Encrypt's staging url:
@ -786,7 +792,7 @@ uploads_path: "DATADIR/uploads"
# connect to arbitrary endpoints without having first signed up for a
# valid account (e.g. by passing a CAPTCHA).
#
#turn_allow_guests: True
#turn_allow_guests: true
## Registration ##
@ -829,7 +835,7 @@ uploads_path: "DATADIR/uploads"
# where d is equal to 10% of the validity period.
#
#account_validity:
# enabled: True
# enabled: true
# period: 6w
# renew_at: 1w
# renew_email_subject: "Renew your %(app)s account"
@ -971,7 +977,7 @@ account_threepid_delegates:
# Enable collection and rendering of performance metrics
#
#enable_metrics: False
#enable_metrics: false
# Enable sentry integration
# NOTE: While attempts are made to ensure that the logs don't contain
@ -1023,7 +1029,7 @@ metrics_flags:
# Uncomment to enable tracking of application service IP addresses. Implicitly
# enables MAU tracking for application service users.
#
#track_appservice_user_ips: True
#track_appservice_user_ips: true
# a secret which is used to sign access tokens. If none is specified,
@ -1149,7 +1155,7 @@ saml2_config:
# - url: https://our_idp/metadata.xml
#
# # By default, the user has to go to our login page first. If you'd like
# # to allow IdP-initiated login, set 'allow_unsolicited: True' in a
# # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# # 'service.sp' section:
# #
# #service:
@ -1220,6 +1226,7 @@ saml2_config:
# enabled: true
# server_url: "https://cas-server.com"
# service_url: "https://homeserver.domain.com:8448"
# #displayname_attribute: name
# #required_attributes:
# # name: value
@ -1262,13 +1269,13 @@ password_config:
# smtp_port: 25 # SSL: 465, STARTTLS: 587
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: False
# require_transport_security: false
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# app_name: Matrix
#
# # Enable email notifications by default
# #
# notif_for_new_users: True
# notif_for_new_users: true
#
# # Defining a custom URL for Riot is only needed if email notifications
# # should contain links to a self-hosted installation of Riot; when set
@ -1446,11 +1453,11 @@ password_config:
# body: >-
# To continue using this homeserver you must review and agree to the
# terms and conditions at %(consent_uri)s
# send_server_notice_to_guests: True
# send_server_notice_to_guests: true
# block_events_error: >-
# To continue using this homeserver you must review and agree to the
# terms and conditions at %(consent_uri)s
# require_at_registration: False
# require_at_registration: false
# policy_name: Privacy Policy
#


@ -1,58 +0,0 @@
from __future__ import print_function
import argparse
import itertools
import json
import sys
from mock import Mock
from synapse.api.auth import Auth
from synapse.events import FrozenEvent
def check_auth(auth, auth_chain, events):
    """Re-run the auth checks over a dumped auth chain and set of events."""
    auth_chain.sort(key=lambda e: e.depth)

    auth_map = {e.event_id: e for e in auth_chain}

    create_events = {}
    for e in auth_chain:
        if e.type == "m.room.create":
            create_events[e.room_id] = e

    for e in itertools.chain(auth_chain, events):
        auth_events_list = [auth_map[i] for i, _ in e.auth_events]
        auth_events = {(e.type, e.state_key): e for e in auth_events_list}
        auth_events[("m.room.create", "")] = create_events[e.room_id]

        try:
            auth.check(e, auth_events=auth_events)
        except Exception as ex:
            print("Failed:", e.event_id, e.type, e.state_key)
            print("Auth_events:", auth_events)
            print(ex)
            print(json.dumps(e.get_dict(), sort_keys=True, indent=4))
            # raise
        print("Success:", e.event_id, e.type, e.state_key)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "json", nargs="?", type=argparse.FileType("r"), default=sys.stdin
    )

    args = parser.parse_args()

    js = json.load(args.json)

    auth = Auth(Mock())
    check_auth(
        auth,
        [FrozenEvent(d) for d in js["auth_chain"]],
        [FrozenEvent(d) for d in js.get("pdus", [])],
    )
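Per the argparse setup above, this (now removed) script read a JSON dump from a file or stdin. A sketch of the expected input shape and invocation; the script path and values are illustrative, since the diff does not show the file's name:

```
import json
import subprocess

# The dump carries an auth chain plus, optionally, the events to check.
dump = {"auth_chain": [], "pdus": []}  # filled with event dicts in practice

subprocess.run(
    ["python", "check_auth.py"],  # script name is illustrative
    input=json.dumps(dump).encode(),
)
```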

scripts-dev/config-lint.sh

@ -0,0 +1,9 @@
#!/bin/bash
# Find linting errors in Synapse's default config file.
# Exits with 0 if there are no problems, or another code otherwise.
# Fix non-lowercase true/false values
sed -i -E "s/: +True/: true/g; s/: +False/: false/g;" docs/sample_config.yaml
# Check if anything changed
git diff --exit-code docs/sample_config.yaml
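The lint is purely cosmetic: YAML 1.1 parses `True` and `true` (and `False`/`false`) to the same boolean, so the sed rewrite cannot change the loaded configuration. A quick check of that assumption with PyYAML:

```
import yaml

# Both spellings load to the same Python value, so lowercasing the
# sample config only normalises style, never semantics.
assert yaml.safe_load("enabled: True") == {"enabled": True}
assert yaml.safe_load("enabled: true") == {"enabled": True}
```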


@ -10,3 +10,4 @@ set -e
isort -y -rc synapse tests scripts-dev scripts
flake8 synapse tests
python3 -m black synapse tests scripts-dev scripts
./scripts-dev/config-lint.sh


@ -2,6 +2,7 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -29,9 +30,33 @@ import yaml
from twisted.enterprise import adbapi
from twisted.internet import defer, reactor
from synapse.storage._base import LoggingTransaction, SQLBaseStore
from synapse.config.homeserver import HomeServerConfig
from synapse.logging.context import PreserveLoggingContext
from synapse.storage._base import LoggingTransaction
from synapse.storage.data_stores.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.data_stores.main.deviceinbox import (
    DeviceInboxBackgroundUpdateStore,
)
from synapse.storage.data_stores.main.devices import DeviceBackgroundUpdateStore
from synapse.storage.data_stores.main.events_bg_updates import (
    EventsBackgroundUpdatesStore,
)
from synapse.storage.data_stores.main.media_repository import (
    MediaRepositoryBackgroundUpdateStore,
)
from synapse.storage.data_stores.main.registration import (
    RegistrationBackgroundUpdateStore,
)
from synapse.storage.data_stores.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.data_stores.main.search import SearchBackgroundUpdateStore
from synapse.storage.data_stores.main.state import StateBackgroundUpdateStore
from synapse.storage.data_stores.main.stats import StatsStore
from synapse.storage.data_stores.main.user_directory import (
    UserDirectoryBackgroundUpdateStore,
)
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.util import Clock
logger = logging.getLogger("synapse_port_db")
@ -43,6 +68,7 @@ BOOLEAN_COLUMNS = {
    "presence_list": ["accepted"],
    "presence_stream": ["currently_active"],
    "public_room_list_stream": ["visibility"],
    "devices": ["hidden"],
    "device_lists_outbound_pokes": ["sent"],
    "users_who_share_rooms": ["share_private"],
    "groups": ["is_public"],
@ -55,6 +81,8 @@ BOOLEAN_COLUMNS = {
    "local_group_membership": ["is_publicised", "is_admin"],
    "e2e_room_keys": ["is_verified"],
    "account_validity": ["email_sent"],
    "redactions": ["have_censored"],
    "room_stats_state": ["is_federatable"],
}
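For context: SQLite stores booleans as 0/1 integers while PostgreSQL has a native boolean type, which is why the porter needs this table-to-columns map (the `conv` helper further down performs the cast). A minimal sketch of the idea; the function name here is illustrative, not the script's:

```
BOOLEAN_COLUMNS = {"devices": ["hidden"], "redactions": ["have_censored"]}

def convert_booleans(table, row):
    """Cast SQLite's 0/1 integers to real booleans before inserting
    into PostgreSQL."""
    for col in BOOLEAN_COLUMNS.get(table, ()):
        if row.get(col) is not None:
            row[col] = bool(row[col])
    return row

# e.g. a devices row ported from SQLite:
assert convert_booleans("devices", {"hidden": 0}) == {"hidden": False}
```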
@ -96,33 +124,24 @@ APPEND_ONLY_TABLES = [
end_error_exec_info = None
class Store(object):
    """This object is used to pull out some of the convenience API from the
    Storage layer.

    *All* database interactions should go through this object.
    """

    def __init__(self, db_pool, engine):
        self.db_pool = db_pool
        self.database_engine = engine

    _simple_insert_txn = SQLBaseStore.__dict__["_simple_insert_txn"]
    _simple_insert = SQLBaseStore.__dict__["_simple_insert"]
    _simple_select_onecol_txn = SQLBaseStore.__dict__["_simple_select_onecol_txn"]
    _simple_select_onecol = SQLBaseStore.__dict__["_simple_select_onecol"]
    _simple_select_one = SQLBaseStore.__dict__["_simple_select_one"]
    _simple_select_one_txn = SQLBaseStore.__dict__["_simple_select_one_txn"]
    _simple_select_one_onecol = SQLBaseStore.__dict__["_simple_select_one_onecol"]
    _simple_select_one_onecol_txn = SQLBaseStore.__dict__[
        "_simple_select_one_onecol_txn"
    ]
    _simple_update_one = SQLBaseStore.__dict__["_simple_update_one"]
    _simple_update_one_txn = SQLBaseStore.__dict__["_simple_update_one_txn"]
    _simple_update_txn = SQLBaseStore.__dict__["_simple_update_txn"]


class Store(
    ClientIpBackgroundUpdateStore,
    DeviceInboxBackgroundUpdateStore,
    DeviceBackgroundUpdateStore,
    EventsBackgroundUpdatesStore,
    MediaRepositoryBackgroundUpdateStore,
    RegistrationBackgroundUpdateStore,
    RoomMemberBackgroundUpdateStore,
    SearchBackgroundUpdateStore,
    StateBackgroundUpdateStore,
    UserDirectoryBackgroundUpdateStore,
    StatsStore,
):
    def __init__(self, db_conn, hs):
        super().__init__(db_conn, hs)
        self.db_pool = hs.get_db_pool()

    @defer.inlineCallbacks
    def runInteraction(self, desc, func, *args, **kwargs):
        def r(conn):
            try:
@ -148,7 +167,8 @@ class Store(object):
logger.debug("[TXN FAIL] {%s} %s", desc, e)
raise
return self.db_pool.runWithConnection(r)
with PreserveLoggingContext():
return (yield self.db_pool.runWithConnection(r))
def execute(self, f, *args, **kwargs):
return self.runInteraction(f.__name__, f, *args, **kwargs)
@ -174,6 +194,25 @@ class Store(object):
raise
class MockHomeserver:
    def __init__(self, config, database_engine, db_conn, db_pool):
        self.database_engine = database_engine
        self.db_conn = db_conn
        self.db_pool = db_pool
        self.clock = Clock(reactor)
        self.config = config
        self.hostname = config.server_name

    def get_db_conn(self):
        return self.db_conn

    def get_db_pool(self):
        return self.db_pool

    def get_clock(self):
        return self.clock


class Porter(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
@ -445,31 +484,75 @@ class Porter(object):
        db_conn.commit()

        return db_conn

    @defer.inlineCallbacks
    def build_db_store(self, config):
        """Builds and returns a database store using the provided configuration.

        Args:
            config: The database configuration, i.e. a dict following the structure of
                the "database" section of Synapse's configuration file.

        Returns:
            The built Store object.
        """
        engine = create_engine(config)

        self.progress.set_state("Preparing %s" % config["name"])
        conn = self.setup_db(config, engine)

        db_pool = adbapi.ConnectionPool(
            config["name"], **config["args"]
        )

        hs = MockHomeserver(self.hs_config, engine, conn, db_pool)

        store = Store(conn, hs)
        yield store.runInteraction(
            "%s_engine.check_database" % config["name"],
            engine.check_database,
        )

        return store

    @defer.inlineCallbacks
    def run_background_updates_on_postgres(self):
        # Manually apply all background updates on the PostgreSQL database.
        postgres_ready = yield self.postgres_store.has_completed_background_updates()

        if not postgres_ready:
            # Only say that we're running background updates when there are background
            # updates to run.
            self.progress.set_state("Running background updates on PostgreSQL")

        while not postgres_ready:
            yield self.postgres_store.do_next_background_update(100)
            postgres_ready = yield (
                self.postgres_store.has_completed_background_updates()
            )
@defer.inlineCallbacks
def run(self):
try:
sqlite_db_pool = adbapi.ConnectionPool(
self.sqlite_config["name"], **self.sqlite_config["args"]
self.sqlite_store = yield self.build_db_store(self.sqlite_config)
# Check if all background updates are done, abort if not.
updates_complete = yield self.sqlite_store.has_completed_background_updates()
if not updates_complete:
sys.stderr.write(
"Pending background updates exist in the SQLite3 database."
" Please start Synapse again and wait until every update has finished"
" before running this script.\n"
)
defer.returnValue(None)
self.postgres_store = yield self.build_db_store(
self.hs_config.database_config
)
postgres_db_pool = adbapi.ConnectionPool(
self.postgres_config["name"], **self.postgres_config["args"]
)
sqlite_engine = create_engine(sqlite_config)
postgres_engine = create_engine(postgres_config)
self.sqlite_store = Store(sqlite_db_pool, sqlite_engine)
self.postgres_store = Store(postgres_db_pool, postgres_engine)
yield self.postgres_store.execute(postgres_engine.check_database)
# Step 1. Set up databases.
self.progress.set_state("Preparing SQLite3")
self.setup_db(sqlite_config, sqlite_engine)
self.progress.set_state("Preparing PostgreSQL")
self.setup_db(postgres_config, postgres_engine)
yield self.run_background_updates_on_postgres()
self.progress.set_state("Creating port tables")
@ -561,6 +644,8 @@ class Porter(object):
        def conv(j, col):
            if j in bool_cols:
                return bool(col)
            if isinstance(col, bytes):
                return bytearray(col)
            elif isinstance(col, string_types) and "\0" in col:
                logger.warn(
                    "DROPPING ROW: NUL value in table %s col %s: %r",
@ -924,18 +1009,24 @@ if __name__ == "__main__":
},
}
    postgres_config = yaml.safe_load(args.postgres_config)
    hs_config = yaml.safe_load(args.postgres_config)

    if "database" in postgres_config:
        postgres_config = postgres_config["database"]
    if "database" not in hs_config:
        sys.stderr.write("The configuration file must have a 'database' section.\n")
        sys.exit(4)

    postgres_config = hs_config["database"]

    if "name" not in postgres_config:
        sys.stderr.write("Malformed database config: no 'name'")
        sys.stderr.write("Malformed database config: no 'name'\n")
        sys.exit(2)
    if postgres_config["name"] != "psycopg2":
        sys.stderr.write("Database must use 'psycopg2' connector.")
        sys.stderr.write("Database must use the 'psycopg2' connector.\n")
        sys.exit(3)

    config = HomeServerConfig()
    config.parse_config_dict(hs_config, "", "")
def start(stdscr=None):
if stdscr:
progress = CursesProgress(stdscr)
@ -944,9 +1035,9 @@ if __name__ == "__main__":
        porter = Porter(
            sqlite_config=sqlite_config,
            postgres_config=postgres_config,
            progress=progress,
            batch_size=args.batch_size,
            hs_config=config,
        )
reactor.callWhenRunning(porter.run)
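For reference, the script is normally driven from the command line; the two flags below are its long-documented ones, with illustrative file names (wrapper shown in Python only to match the surrounding code):

```
import subprocess

# Port a (stopped) homeserver's SQLite database to the PostgreSQL
# database named in the homeserver config.
subprocess.run(
    [
        "synapse_port_db",
        "--sqlite-database", "homeserver.db.snapshot",
        "--postgres-config", "homeserver.yaml",
    ],
    check=True,
)
```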


@ -36,7 +36,7 @@ try:
except ImportError:
pass
__version__ = "1.4.0"
__version__ = "1.5.0rc1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@ -25,7 +25,13 @@ from twisted.internet import defer
import synapse.logging.opentracing as opentracing
import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, JoinRules, Membership, UserTypes
from synapse.api.constants import (
    EventTypes,
    JoinRules,
    LimitBlockingTypes,
    Membership,
    UserTypes,
)
from synapse.api.errors import (
AuthError,
Codes,
@ -84,27 +90,10 @@ class Auth(object):
        )
        auth_events = yield self.store.get_events(auth_events_ids)
        auth_events = {(e.type, e.state_key): e for e in itervalues(auth_events)}

        self.check(
        event_auth.check(
            room_version, event, auth_events=auth_events, do_sig_check=do_sig_check
        )

    def check(self, room_version, event, auth_events, do_sig_check=True):
        """ Checks if this event is correctly authed.

        Args:
            room_version (str): version of the room
            event: the event being checked.
            auth_events (dict: event-key -> event): the existing room state.

        Returns:
            True if the auth checks pass.
        """
        with Measure(self.clock, "auth.check"):
            event_auth.check(
                room_version, event, auth_events, do_sig_check=do_sig_check
            )

    @defer.inlineCallbacks
    def check_joined_room(self, room_id, user_id, current_state=None):
        """Check if the user is currently joined in the room
@ -743,7 +732,7 @@ class Auth(object):
self.hs.config.hs_disabled_message,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_contact=self.hs.config.admin_contact,
limit_type=self.hs.config.hs_disabled_limit_type,
limit_type=LimitBlockingTypes.HS_DISABLED,
)
if self.hs.config.limit_usage_by_mau is True:
assert not (user_id and threepid)
@ -776,5 +765,5 @@ class Auth(object):
"Monthly Active User Limit Exceeded",
admin_contact=self.hs.config.admin_contact,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
limit_type="monthly_active_user",
limit_type=LimitBlockingTypes.MONTHLY_ACTIVE_USER,
)


@ -97,8 +97,6 @@ class EventTypes(object):
class RejectedReason(object):
    AUTH_ERROR = "auth_error"
    REPLACED = "replaced"
    NOT_ANCESTOR = "not_ancestor"


class RoomCreationPreset(object):
@ -133,3 +131,10 @@ class RelationTypes(object):
    ANNOTATION = "m.annotation"
    REPLACE = "m.replace"
    REFERENCE = "m.reference"


class LimitBlockingTypes(object):
    """Reasons that a server may be blocked"""

    MONTHLY_ACTIVE_USER = "monthly_active_user"
    HS_DISABLED = "hs_disabled"


@ -62,6 +62,7 @@ class Codes(object):
    INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
    WRONG_ROOM_KEYS_VERSION = "M_WRONG_ROOM_KEYS_VERSION"
    EXPIRED_ACCOUNT = "ORG_MATRIX_EXPIRED_ACCOUNT"
    INVALID_SIGNATURE = "M_INVALID_SIGNATURE"
    USER_DEACTIVATED = "M_USER_DEACTIVATED"


@ -56,8 +56,8 @@ from synapse.rest.client.v1.room import (
RoomStateEventRestServlet,
)
from synapse.server import HomeServer
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string


@ -39,8 +39,8 @@ from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
from synapse.storage.data_stores.main.media_repository import MediaRepositoryStore
from synapse.storage.engines import create_engine
from synapse.storage.media_repository import MediaRepositoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string


@ -54,8 +54,8 @@ from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v2_alpha import sync
from synapse.server import HomeServer
from synapse.storage.data_stores.main.presence import UserPresenceState
from synapse.storage.engines import create_engine
from synapse.storage.presence import UserPresenceState
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string


@ -42,8 +42,8 @@ from synapse.replication.tcp.streams.events import (
)
from synapse.rest.client.v2_alpha import user_directory
from synapse.server import HomeServer
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole


@ -48,7 +48,7 @@ class AppServiceConfig(Config):
# Uncomment to enable tracking of application service IP addresses. Implicitly
# enables MAU tracking for application service users.
#
#track_appservice_user_ips: True
#track_appservice_user_ips: true
"""


@ -30,11 +30,13 @@ class CasConfig(Config):
            self.cas_enabled = cas_config.get("enabled", True)
            self.cas_server_url = cas_config["server_url"]
            self.cas_service_url = cas_config["service_url"]
            self.cas_displayname_attribute = cas_config.get("displayname_attribute")
            self.cas_required_attributes = cas_config.get("required_attributes", {})
        else:
            self.cas_enabled = False
            self.cas_server_url = None
            self.cas_service_url = None
            self.cas_displayname_attribute = None
            self.cas_required_attributes = {}

    def generate_config_section(self, config_dir_path, server_name, **kwargs):
@ -45,6 +47,7 @@ class CasConfig(Config):
# enabled: true
# server_url: "https://cas-server.com"
# service_url: "https://homeserver.domain.com:8448"
# #displayname_attribute: name
# #required_attributes:
# # name: value
"""


@ -62,11 +62,11 @@ DEFAULT_CONFIG = """\
# body: >-
# To continue using this homeserver you must review and agree to the
# terms and conditions at %(consent_uri)s
# send_server_notice_to_guests: True
# send_server_notice_to_guests: true
# block_events_error: >-
# To continue using this homeserver you must review and agree to the
# terms and conditions at %(consent_uri)s
# require_at_registration: False
# require_at_registration: false
# policy_name: Privacy Policy
#
"""


@ -304,13 +304,13 @@ class EmailConfig(Config):
# smtp_port: 25 # SSL: 465, STARTTLS: 587
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: False
# require_transport_security: false
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# app_name: Matrix
#
# # Enable email notifications by default
# #
# notif_for_new_users: True
# notif_for_new_users: true
#
# # Defining a custom URL for Riot is only needed if email notifications
# # should contain links to a self-hosted installation of Riot; when set


@ -68,9 +68,6 @@ handlers:
filters: [context]
loggers:
synapse:
level: INFO
synapse.storage.SQL:
# beware: increasing this to DEBUG will make synapse log sensitive
# information such as access tokens.
@ -79,6 +76,8 @@ loggers:
root:
    level: INFO
    handlers: [file, console]

disable_existing_loggers: false
"""
)


@ -70,7 +70,7 @@ class MetricsConfig(Config):
# Enable collection and rendering of performance metrics
#
#enable_metrics: False
#enable_metrics: false
# Enable sentry integration
# NOTE: While attempts are made to ensure that the logs don't contain


@ -180,7 +180,7 @@ class RegistrationConfig(Config):
# where d is equal to 10%% of the validity period.
#
#account_validity:
# enabled: True
# enabled: true
# period: 6w
# renew_at: 1w
# renew_email_subject: "Renew your %%(app)s account"


@ -176,7 +176,7 @@ class SAML2Config(Config):
# - url: https://our_idp/metadata.xml
#
# # By default, the user has to go to our login page first. If you'd like
# # to allow IdP-initiated login, set 'allow_unsolicited: True' in a
# # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# # 'service.sp' section:
# #
# #service:


@ -171,6 +171,7 @@ class ServerConfig(Config):
)
self.mau_trial_days = config.get("mau_trial_days", 0)
self.mau_limit_alerting = config.get("mau_limit_alerting", True)
# How long to keep redacted events in the database in unredacted form
# before redacting them.
@ -192,7 +193,6 @@ class ServerConfig(Config):
# Options to disable HS
self.hs_disabled = config.get("hs_disabled", False)
self.hs_disabled_message = config.get("hs_disabled_message", "")
self.hs_disabled_limit_type = config.get("hs_disabled_limit_type", "")
# Admin uri to direct users at should their instance become blocked
# due to resource constraints
@ -532,7 +532,7 @@ class ServerConfig(Config):
# Whether room invites to users on this server should be blocked
# (except those sent by local server admins). The default is False.
#
#block_non_admin_invites: True
#block_non_admin_invites: true
# Room searching
#
@ -673,9 +673,8 @@ class ServerConfig(Config):
# Global blocking
#
#hs_disabled: False
#hs_disabled: false
#hs_disabled_message: 'Human readable reason for why the HS is blocked'
#hs_disabled_limit_type: 'error code(str), to help clients decode reason'
# Monthly Active User Blocking
#
@ -695,15 +694,22 @@ class ServerConfig(Config):
# sign up in a short space of time never to return after their initial
# session.
#
#limit_usage_by_mau: False
# 'mau_limit_alerting' is a means of limiting client-side alerting
# should the mau limit be reached. This is useful for small instances
# where the admin has 5 mau seats (say) for 5 specific people and no
# interest in increasing the mau limit further. Defaults to True, which
# means that alerting is enabled.
#
#limit_usage_by_mau: false
#max_mau_value: 50
#mau_trial_days: 2
#mau_limit_alerting: false
# If enabled, the metrics for the number of monthly active users will
# be populated, however no one will be limited. If limit_usage_by_mau
# is true, this is implied to be true.
#
#mau_stats_only: False
#mau_stats_only: false
# Sometimes the server admin will want to ensure certain accounts are
# never blocked by mau checking. These accounts are specified here.
@ -728,7 +734,7 @@ class ServerConfig(Config):
#
# Uncomment the below lines to enable:
#limit_remote_rooms:
# enabled: True
# enabled: true
# complexity: 1.0
# complexity_error: "This room is too complex."


@ -289,6 +289,9 @@ class TlsConfig(Config):
"http://localhost:8009/.well-known/acme-challenge"
)
# flake8 doesn't recognise that variables are used in the below string
_ = tls_enabled, proxypassline, acme_enabled, default_acme_account_file
return (
"""\
## TLS ##
@ -451,7 +454,11 @@ class TlsConfig(Config):
#tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}]
"""
        % locals()
        # Lowercase the string representation of boolean values
        % {
            x[0]: str(x[1]).lower() if isinstance(x[1], bool) else x[1]
            for x in locals().items()
        }
    )
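A standalone illustration of what that substitution map does: every boolean in `locals()` is rendered as its lowercase string before being `%`-interpolated into the sample YAML, so the generated config passes the new lint (values here are made up):

```
values = {"acme_enabled": False, "proxypassline": "ProxyPass ..."}
lowered = {
    k: str(v).lower() if isinstance(v, bool) else v
    for k, v in values.items()
}
assert lowered["acme_enabled"] == "false"  # YAML-style boolean
assert lowered["proxypassline"] == "ProxyPass ..."  # non-booleans untouched
```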
def read_tls_certificate(self):


@ -56,5 +56,5 @@ class VoipConfig(Config):
# connect to arbitrary endpoints without having first signed up for a
# valid account (e.g. by passing a CAPTCHA).
#
#turn_allow_guests: True
#turn_allow_guests: true
"""


@ -125,9 +125,11 @@ def compute_event_signature(event_dict, signature_name, signing_key):
    redact_json = prune_event_dict(event_dict)

    redact_json.pop("age_ts", None)
    redact_json.pop("unsigned", None)

    logger.debug("Signing event: %s", encode_canonical_json(redact_json))
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("Signing event: %s", encode_canonical_json(redact_json))

    redact_json = sign_json(redact_json, signature_name, signing_key)

    logger.debug("Signed event: %s", encode_canonical_json(redact_json))
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("Signed event: %s", encode_canonical_json(redact_json))

    return redact_json["signatures"]
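The `isEnabledFor` guard exists because arguments to a logging call are evaluated before the logger decides whether to drop the record; here that evaluation is a full canonical-JSON encode of the event. A minimal illustration with plain `logging`:

```
import logging

logger = logging.getLogger(__name__)

def expensive_encode():
    # stands in for encode_canonical_json(redact_json)
    return "x" * 1_000_000

# The payload is built even when DEBUG is disabled:
logger.debug("Signing event: %s", expensive_encode())

# Guarding skips the encode entirely unless DEBUG is actually on:
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("Signing event: %s", expensive_encode())
```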


@ -493,8 +493,7 @@ def _check_power_levels(event, auth_events):
new_level_too_big = new_level is not None and new_level > user_level
if old_level_too_big or new_level_too_big:
raise AuthError(
403,
"You don't have permission to add ops level greater " "than your own",
403, "You don't have permission to add ops level greater than your own"
)


@ -196,7 +196,7 @@ class FederationClient(FederationBase):
dest, room_id, extremities, limit
)
logger.debug("backfill transaction_data=%s", repr(transaction_data))
logger.debug("backfill transaction_data=%r", transaction_data)
room_version = yield self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
@ -878,44 +878,6 @@ class FederationClient(FederationBase):
third_party_instance_id=third_party_instance_id,
)
    @defer.inlineCallbacks
    def query_auth(self, destination, room_id, event_id, local_auth):
        """
        Params:
            destination (str)
            event_id (str)
            local_auth (list)
        """
        time_now = self._clock.time_msec()

        send_content = {"auth_chain": [e.get_pdu_json(time_now) for e in local_auth]}

        code, content = yield self.transport_layer.send_query_auth(
            destination=destination,
            room_id=room_id,
            event_id=event_id,
            content=send_content,
        )

        room_version = yield self.store.get_room_version(room_id)
        format_ver = room_version_to_event_format(room_version)

        auth_chain = [event_from_pdu_json(e, format_ver) for e in content["auth_chain"]]

        signed_auth = yield self._check_sigs_and_hash_and_fetch(
            destination, auth_chain, outlier=True, room_version=room_version
        )

        signed_auth.sort(key=lambda e: e.depth)

        ret = {
            "auth_chain": signed_auth,
            "rejects": content.get("rejects", []),
            "missing": content.get("missing", []),
        }

        return ret
@defer.inlineCallbacks
def get_missing_events(
self,


@ -31,7 +31,7 @@ from synapse.federation.units import Edu
from synapse.handlers.presence import format_user_presence_state
from synapse.metrics import sent_transactions_counter
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage import UserPresenceState
from synapse.storage.presence import UserPresenceState
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
# This is defined in the Matrix spec and enforced by the receiver.


@ -122,10 +122,10 @@ class TransportLayerClient(object):
Deferred: Results in a dict received from the remote homeserver.
"""
logger.debug(
"backfill dest=%s, room_id=%s, event_tuples=%s, limit=%s",
"backfill dest=%s, room_id=%s, event_tuples=%r, limit=%s",
destination,
room_id,
repr(event_tuples),
event_tuples,
str(limit),
)
@ -381,17 +381,6 @@ class TransportLayerClient(object):
return content
    @defer.inlineCallbacks
    @log_function
    def send_query_auth(self, destination, room_id, event_id, content):
        path = _create_v1_path("/query_auth/%s/%s", room_id, event_id)

        content = yield self.client.post_json(
            destination=destination, path=path, data=content
        )

        return content

    @defer.inlineCallbacks
    @log_function
    def query_client_keys(self, destination, query_content, timeout):


@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2019 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -438,6 +440,21 @@ class DeviceHandler(DeviceWorkerHandler):
self.federation_sender.send_device_messages(host)
log_kv({"message": "sent device update to host", "host": host})
    @defer.inlineCallbacks
    def notify_user_signature_update(self, from_user_id, user_ids):
        """Notify a user that they have made new signatures of other users.

        Args:
            from_user_id (str): the user who made the signature
            user_ids (list[str]): the user IDs that have new signatures
        """
        position = yield self.store.add_user_signature_change_to_streams(
            from_user_id, user_ids
        )
        self.notifier.on_new_event("device_list_key", position, users=[from_user_id])

    @defer.inlineCallbacks
    def on_federation_query_user_devices(self, user_id):
        stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id)


@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2018-2019 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -18,14 +19,22 @@ import logging
from six import iteritems
import attr
from canonicaljson import encode_canonical_json, json
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import SignatureVerifyException, verify_signed_json
from unpaddedbase64 import decode_base64
from twisted.internet import defer
from synapse.api.errors import CodeMessageException, SynapseError
from synapse.api.errors import CodeMessageException, Codes, NotFoundError, SynapseError
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import log_kv, set_tag, tag_args, trace
from synapse.types import UserID, get_domain_from_id
from synapse.types import (
    UserID,
    get_domain_from_id,
    get_verify_key_from_cross_signing_key,
)
from synapse.util import unwrapFirstError
from synapse.util.retryutils import NotRetryingDestination
@ -49,7 +58,7 @@ class E2eKeysHandler(object):
    @trace
    @defer.inlineCallbacks
    def query_devices(self, query_body, timeout):
    def query_devices(self, query_body, timeout, from_user_id):
        """ Handle a device key query from a client

        {
@ -67,6 +76,11 @@ class E2eKeysHandler(object):
                }
            }
        }

        Args:
            from_user_id (str): the user making the query. This is used when
                adding cross-signing signatures to limit what signatures users
                can see.
        """

        device_keys_query = query_body.get("device_keys", {})
@ -125,6 +139,11 @@ class E2eKeysHandler(object):
r = remote_queries_not_in_cache.setdefault(domain, {})
r[user_id] = remote_queries[user_id]
        # Get cached cross-signing keys
        cross_signing_keys = yield self.get_cross_signing_keys_from_cache(
            device_keys_query, from_user_id
        )

        # Now fetch any devices that we don't have in our cache
        @trace
        @defer.inlineCallbacks
@ -188,6 +207,14 @@ class E2eKeysHandler(object):
                        if user_id in destination_query:
                            results[user_id] = keys

                for user_id, key in remote_result["master_keys"].items():
                    if user_id in destination_query:
                        cross_signing_keys["master_keys"][user_id] = key

                for user_id, key in remote_result["self_signing_keys"].items():
                    if user_id in destination_query:
                        cross_signing_keys["self_signing_keys"][user_id] = key

            except Exception as e:
                failure = _exception_to_failure(e)
                failures[destination] = failure
@ -204,7 +231,61 @@ class E2eKeysHandler(object):
).addErrback(unwrapFirstError)
)
return {"device_keys": results, "failures": failures}
ret = {"device_keys": results, "failures": failures}
ret.update(cross_signing_keys)
return ret
@defer.inlineCallbacks
def get_cross_signing_keys_from_cache(self, query, from_user_id):
"""Get cross-signing keys for users from the database
Args:
query (Iterable[string]) an iterable of user IDs. A dict whose keys
are user IDs satisfies this, so the query format used for
query_devices can be used here.
from_user_id (str): the user making the query. This is used when
adding cross-signing signatures to limit what signatures users
can see.
Returns:
defer.Deferred[dict[str, dict[str, dict]]]: map from
(master|self_signing|user_signing) -> user_id -> key
"""
master_keys = {}
self_signing_keys = {}
user_signing_keys = {}
for user_id in query:
# XXX: consider changing the store functions to allow querying
# multiple users simultaneously.
key = yield self.store.get_e2e_cross_signing_key(
user_id, "master", from_user_id
)
if key:
master_keys[user_id] = key
key = yield self.store.get_e2e_cross_signing_key(
user_id, "self_signing", from_user_id
)
if key:
self_signing_keys[user_id] = key
# users can see other users' master and self-signing keys, but can
# only see their own user-signing keys
if from_user_id == user_id:
key = yield self.store.get_e2e_cross_signing_key(
user_id, "user_signing", from_user_id
)
if key:
user_signing_keys[user_id] = key
return {
"master_keys": master_keys,
"self_signing_keys": self_signing_keys,
"user_signing_keys": user_signing_keys,
}
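
For illustration, the structure this helper hands back, which query_devices then folds into its response next to "device_keys" and "failures", looks roughly like the following sketch (the user ID and key material are placeholders):

# Illustrative shape only; real values are base64-encoded ed25519 keys.
cross_signing_keys = {
    "master_keys": {
        "@alice:example.org": {
            "user_id": "@alice:example.org",
            "usage": ["master"],
            "keys": {"ed25519:master+pubkey": "master+pubkey"},
        }
    },
    "self_signing_keys": {},
    "user_signing_keys": {},
}
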
@trace
@defer.inlineCallbacks
@@ -248,16 +329,10 @@ class E2eKeysHandler(object):
results = yield self.store.get_e2e_device_keys(local_query)
-# Build the result structure, un-jsonify the results, and add the
-# "unsigned" section
+# Build the result structure
for user_id, device_keys in results.items():
for device_id, device_info in device_keys.items():
-r = dict(device_info["keys"])
-r["unsigned"] = {}
-display_name = device_info["device_display_name"]
-if display_name is not None:
-r["unsigned"]["device_display_name"] = display_name
-result_dict[user_id][device_id] = r
+result_dict[user_id][device_id] = device_info
log_kv(results)
return result_dict
@@ -447,8 +522,493 @@ class E2eKeysHandler(object):
log_kv({"message": "Inserting new one_time_keys.", "keys": new_keys})
yield self.store.add_e2e_one_time_keys(user_id, device_id, time_now, new_keys)
@defer.inlineCallbacks
def upload_signing_keys_for_user(self, user_id, keys):
"""Upload signing keys for cross-signing
Args:
user_id (string): the user uploading the keys
keys (dict[string, dict]): the signing keys
"""
# if a master key is uploaded, then check it. Otherwise, load the
# stored master key, to check signatures on other keys
if "master_key" in keys:
master_key = keys["master_key"]
_check_cross_signing_key(master_key, user_id, "master")
else:
master_key = yield self.store.get_e2e_cross_signing_key(user_id, "master")
# if there is no master key, then we can't do anything, because all the
# other cross-signing keys need to be signed by the master key
if not master_key:
raise SynapseError(400, "No master key available", Codes.MISSING_PARAM)
try:
master_key_id, master_verify_key = get_verify_key_from_cross_signing_key(
master_key
)
except ValueError:
if "master_key" in keys:
# the invalid key came from the request
raise SynapseError(400, "Invalid master key", Codes.INVALID_PARAM)
else:
# the invalid key came from the database
logger.error("Invalid master key found for user %s", user_id)
raise SynapseError(500, "Invalid master key")
# for the other cross-signing keys, make sure that they have valid
# signatures from the master key
if "self_signing_key" in keys:
self_signing_key = keys["self_signing_key"]
_check_cross_signing_key(
self_signing_key, user_id, "self_signing", master_verify_key
)
if "user_signing_key" in keys:
user_signing_key = keys["user_signing_key"]
_check_cross_signing_key(
user_signing_key, user_id, "user_signing", master_verify_key
)
# if everything checks out, then store the keys and send notifications
deviceids = []
if "master_key" in keys:
yield self.store.set_e2e_cross_signing_key(user_id, "master", master_key)
deviceids.append(master_verify_key.version)
if "self_signing_key" in keys:
yield self.store.set_e2e_cross_signing_key(
user_id, "self_signing", self_signing_key
)
try:
deviceids.append(
get_verify_key_from_cross_signing_key(self_signing_key)[1].version
)
except ValueError:
raise SynapseError(400, "Invalid self-signing key", Codes.INVALID_PARAM)
if "user_signing_key" in keys:
yield self.store.set_e2e_cross_signing_key(
user_id, "user_signing", user_signing_key
)
# the signature stream matches the semantics that we want for
# user-signing key updates: only the user themselves is notified of
# their own user-signing key updates
yield self.device_handler.notify_user_signature_update(user_id, [user_id])
# master key and self-signing key updates match the semantics of device
# list updates: all users who share an encrypted room are notified
if len(deviceids):
yield self.device_handler.notify_device_update(user_id, deviceids)
return {}
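
As a rough sketch, the keys argument this method expects mirrors the cross-signing upload body from the client-server API; everything below is placeholder data:

keys = {
    "master_key": {
        "user_id": "@alice:example.org",
        "usage": ["master"],
        "keys": {"ed25519:master+pubkey": "master+pubkey"},
    },
    "self_signing_key": {
        "user_id": "@alice:example.org",
        "usage": ["self_signing"],
        "keys": {"ed25519:selfsigning+pubkey": "selfsigning+pubkey"},
        # must carry a valid signature by the master key; checked by
        # _check_cross_signing_key above
        "signatures": {
            "@alice:example.org": {"ed25519:master+pubkey": "master+signature"}
        },
    },
}
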
@defer.inlineCallbacks
def upload_signatures_for_device_keys(self, user_id, signatures):
"""Upload device signatures for cross-signing
Args:
user_id (string): the user uploading the signatures
signatures (dict[string, dict[string, dict]]): map of users to
devices to signed keys. This is the submission from the user; an
exception will be raised if it is malformed.
Returns:
dict: response to be sent back to the client. The response will have
a "failures" key, which will be a dict mapping users to devices
to errors for the signatures that failed.
Raises:
SynapseError: if the signatures dict is not valid.
"""
failures = {}
# signatures to be stored. Each item will be a SignatureListItem
signature_list = []
# split between checking signatures for own user and signatures for
# other users, since we verify them with different keys
self_signatures = signatures.get(user_id, {})
other_signatures = {k: v for k, v in signatures.items() if k != user_id}
self_signature_list, self_failures = yield self._process_self_signatures(
user_id, self_signatures
)
signature_list.extend(self_signature_list)
failures.update(self_failures)
other_signature_list, other_failures = yield self._process_other_signatures(
user_id, other_signatures
)
signature_list.extend(other_signature_list)
failures.update(other_failures)
# store the signature, and send the appropriate notifications for sync
logger.debug("upload signature failures: %r", failures)
yield self.store.store_e2e_cross_signing_signatures(user_id, signature_list)
self_device_ids = [item.target_device_id for item in self_signature_list]
if self_device_ids:
yield self.device_handler.notify_device_update(user_id, self_device_ids)
signed_users = [item.target_user_id for item in other_signature_list]
if signed_users:
yield self.device_handler.notify_user_signature_update(
user_id, signed_users
)
return {"failures": failures}
@defer.inlineCallbacks
def _process_self_signatures(self, user_id, signatures):
"""Process uploaded signatures of the user's own keys.
Signatures of the user's own keys from this API come in two forms:
- signatures of the user's devices by the user's self-signing key,
- signatures of the user's master key by the user's devices.
Args:
user_id (string): the user uploading the keys
signatures (dict[string, dict]): map of devices to signed keys
Returns:
(list[SignatureListItem], dict[string, dict[string, dict]]):
a list of signatures to store, and a map of users to devices to failure
reasons
Raises:
SynapseError: if the input is malformed
"""
signature_list = []
failures = {}
if not signatures:
return signature_list, failures
if not isinstance(signatures, dict):
raise SynapseError(400, "Invalid parameter", Codes.INVALID_PARAM)
try:
# get our self-signing key to verify the signatures
_, self_signing_key_id, self_signing_verify_key = yield self._get_e2e_cross_signing_verify_key(
user_id, "self_signing"
)
# get our master key, since we may have received a signature of it.
# We need to fetch it here so that we know what its key ID is, so
# that we can check if a signature that was sent is a signature of
# the master key or of a device
master_key, _, master_verify_key = yield self._get_e2e_cross_signing_verify_key(
user_id, "master"
)
# fetch our stored devices. This is used to 1. verify
# signatures on the master key, and 2. to compare with what
# was sent if the device was signed
devices = yield self.store.get_e2e_device_keys([(user_id, None)])
if user_id not in devices:
raise NotFoundError("No device keys found")
devices = devices[user_id]
except SynapseError as e:
failure = _exception_to_failure(e)
failures[user_id] = {device: failure for device in signatures.keys()}
return signature_list, failures
for device_id, device in signatures.items():
# make sure submitted data is in the right form
if not isinstance(device, dict):
raise SynapseError(400, "Invalid parameter", Codes.INVALID_PARAM)
try:
if "signatures" not in device or user_id not in device["signatures"]:
# no signature was sent
raise SynapseError(
400, "Invalid signature", Codes.INVALID_SIGNATURE
)
if device_id == master_verify_key.version:
# The signature is of the master key. This needs to be
# handled differently from signatures of normal devices.
master_key_signature_list = self._check_master_key_signature(
user_id, device_id, device, master_key, devices
)
signature_list.extend(master_key_signature_list)
continue
# at this point, we have a device that should be signed
# by the self-signing key
if self_signing_key_id not in device["signatures"][user_id]:
# no signature was sent
raise SynapseError(
400, "Invalid signature", Codes.INVALID_SIGNATURE
)
try:
stored_device = devices[device_id]
except KeyError:
raise NotFoundError("Unknown device")
if self_signing_key_id in stored_device.get("signatures", {}).get(
user_id, {}
):
# we already have a signature on this device, so we
# can skip it, since it should be exactly the same
continue
_check_device_signature(
user_id, self_signing_verify_key, device, stored_device
)
signature = device["signatures"][user_id][self_signing_key_id]
signature_list.append(
SignatureListItem(
self_signing_key_id, user_id, device_id, signature
)
)
except SynapseError as e:
failures.setdefault(user_id, {})[device_id] = _exception_to_failure(e)
return signature_list, failures
def _check_master_key_signature(
self, user_id, master_key_id, signed_master_key, stored_master_key, devices
):
"""Check signatures of a user's master key made by their devices.
Args:
user_id (string): the user whose master key is being checked
master_key_id (string): the ID of the user's master key
signed_master_key (dict): the user's signed master key that was uploaded
stored_master_key (dict): our previously-stored copy of the user's master key
devices (iterable(dict)): the user's devices
Returns:
list[SignatureListItem]: a list of signatures to store
Raises:
SynapseError: if a signature is invalid
"""
# for each device that signed the master key, check the signature.
master_key_signature_list = []
sigs = signed_master_key["signatures"]
for signing_key_id, signature in sigs[user_id].items():
_, signing_device_id = signing_key_id.split(":", 1)
if (
signing_device_id not in devices
or signing_key_id not in devices[signing_device_id]["keys"]
):
# signed by an unknown device, or the
# device does not have the key
raise SynapseError(400, "Invalid signature", Codes.INVALID_SIGNATURE)
# get the key and check the signature
pubkey = devices[signing_device_id]["keys"][signing_key_id]
verify_key = decode_verify_key_bytes(signing_key_id, decode_base64(pubkey))
_check_device_signature(
user_id, verify_key, signed_master_key, stored_master_key
)
master_key_signature_list.append(
SignatureListItem(signing_key_id, user_id, master_key_id, signature)
)
return master_key_signature_list
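
The split on the first colon works because signing key IDs take the form "<algorithm>:<identifier>", and for a device signature the identifier is the device ID; a quick sketch:

signing_key_id = "ed25519:ALICEDEVICE"
algorithm, signing_device_id = signing_key_id.split(":", 1)
assert (algorithm, signing_device_id) == ("ed25519", "ALICEDEVICE")
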
@defer.inlineCallbacks
def _process_other_signatures(self, user_id, signatures):
"""Process uploaded signatures of other users' keys. These will be the
target user's master keys, signed by the uploading user's user-signing
key.
Args:
user_id (string): the user uploading the keys
signatures (dict[string, dict]): map of users to devices to signed keys
Returns:
(list[SignatureListItem], dict[string, dict[string, dict]]):
a list of signatures to store, and a map of users to devices to failure
reasons
Raises:
SynapseError: if the input is malformed
"""
signature_list = []
failures = {}
if not signatures:
return signature_list, failures
try:
# get our user-signing key to verify the signatures
user_signing_key, user_signing_key_id, user_signing_verify_key = yield self._get_e2e_cross_signing_verify_key(
user_id, "user_signing"
)
except SynapseError as e:
failure = _exception_to_failure(e)
for user, devicemap in signatures.items():
failures[user] = {device_id: failure for device_id in devicemap.keys()}
return signature_list, failures
for target_user, devicemap in signatures.items():
# make sure submitted data is in the right form
if not isinstance(devicemap, dict):
raise SynapseError(400, "Invalid parameter", Codes.INVALID_PARAM)
for device in devicemap.values():
if not isinstance(device, dict):
raise SynapseError(400, "Invalid parameter", Codes.INVALID_PARAM)
device_id = None
try:
# get the target user's master key, to make sure it matches
# what was sent
master_key, master_key_id, _ = yield self._get_e2e_cross_signing_verify_key(
target_user, "master", user_id
)
# make sure that the target user's master key is the one that
# was signed (and no others)
device_id = master_key_id.split(":", 1)[1]
if device_id not in devicemap:
logger.debug(
"upload signature: could not find signature for device %s",
device_id,
)
# set device to None so that the failure gets
# marked on all the signatures
device_id = None
raise NotFoundError("Unknown device")
key = devicemap[device_id]
other_devices = [k for k in devicemap.keys() if k != device_id]
if other_devices:
# other devices were signed -- mark those as failures
logger.debug("upload signature: too many devices specified")
failure = _exception_to_failure(NotFoundError("Unknown device"))
failures[target_user] = {
device: failure for device in other_devices
}
if user_signing_key_id in master_key.get("signatures", {}).get(
user_id, {}
):
# we already have the signature, so we can skip it
continue
_check_device_signature(
user_id, user_signing_verify_key, key, master_key
)
signature = key["signatures"][user_id][user_signing_key_id]
signature_list.append(
SignatureListItem(
user_signing_key_id, target_user, device_id, signature
)
)
except SynapseError as e:
failure = _exception_to_failure(e)
if device_id is None:
failures[target_user] = {
device_id: failure for device_id in devicemap.keys()
}
else:
failures.setdefault(target_user, {})[device_id] = failure
return signature_list, failures
@defer.inlineCallbacks
def _get_e2e_cross_signing_verify_key(self, user_id, key_type, from_user_id=None):
"""Fetch the cross-signing public key from storage and interpret it.
Args:
user_id (str): the user whose key should be fetched
key_type (str): the type of key to fetch
from_user_id (str): the user that we are fetching the keys for.
This affects what signatures are fetched.
Returns:
dict, str, VerifyKey: the raw key data, the key ID, and the
signedjson verify key
Raises:
NotFoundError: if the key is not found
"""
key = yield self.store.get_e2e_cross_signing_key(
user_id, key_type, from_user_id
)
if key is None:
logger.debug("no %s key found for %s", key_type, user_id)
raise NotFoundError("No %s key found for %s" % (key_type, user_id))
key_id, verify_key = get_verify_key_from_cross_signing_key(key)
return key, key_id, verify_key
def _check_cross_signing_key(key, user_id, key_type, signing_key=None):
"""Check a cross-signing key uploaded by a user. Performs some basic sanity
checking, and ensures that it is signed, if a signature is required.
Args:
key (dict): the key data to verify
user_id (str): the user whose key is being checked
key_type (str): the type of key that the key should be
signing_key (VerifyKey): (optional) the signing key that the key should
be signed with. If omitted, signatures will not be checked.
"""
if (
key.get("user_id") != user_id
or key_type not in key.get("usage", [])
or len(key.get("keys", {})) != 1
):
raise SynapseError(400, ("Invalid %s key" % (key_type,)), Codes.INVALID_PARAM)
if signing_key:
try:
verify_signed_json(key, user_id, signing_key)
except SignatureVerifyException:
raise SynapseError(
400, ("Invalid signature on %s key" % key_type), Codes.INVALID_SIGNATURE
)
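
As a sketch, a key dict that passes these sanity checks for key_type="master" (placeholder key material; the signature check is skipped here because no signing_key is supplied):

valid_master_key = {
    "user_id": "@alice:example.org",
    "usage": ["master"],
    "keys": {"ed25519:base64+pubkey": "base64+pubkey"},
}
_check_cross_signing_key(valid_master_key, "@alice:example.org", "master")
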
def _check_device_signature(user_id, verify_key, signed_device, stored_device):
"""Check that a signature on a device or cross-signing key is correct and
matches the copy of the device/key that we have stored. Throws an
exception if an error is detected.
Args:
user_id (str): the user ID whose signature is being checked
verify_key (VerifyKey): the key to verify the device with
signed_device (dict): the uploaded signed device data
stored_device (dict): our previously stored copy of the device
Raises:
SynapseError: if the signature was invalid or the sent device is not the
same as the stored device
"""
# make sure that the device submitted matches what we have stored
stripped_signed_device = {
k: v for k, v in signed_device.items() if k not in ["signatures", "unsigned"]
}
stripped_stored_device = {
k: v for k, v in stored_device.items() if k not in ["signatures", "unsigned"]
}
if stripped_signed_device != stripped_stored_device:
logger.debug(
"upload signatures: key does not match %s vs %s",
signed_device,
stored_device,
)
raise SynapseError(400, "Key does not match")
try:
verify_signed_json(signed_device, user_id, verify_key)
except SignatureVerifyException:
logger.debug("invalid signature on key")
raise SynapseError(400, "Invalid signature", Codes.INVALID_SIGNATURE)
def _exception_to_failure(e):
if isinstance(e, SynapseError):
return {"status": e.code, "errcode": e.errcode, "message": str(e)}
if isinstance(e, CodeMessageException):
return {"status": e.code, "message": str(e)}
@@ -476,3 +1036,14 @@ def _one_time_keys_match(old_key_json, new_key):
new_key_copy.pop("signatures", None)
return old_key == new_key_copy
@attr.s
class SignatureListItem:
"""An item in the signature list as used by upload_signatures_for_device_keys.
"""
signing_key_id = attr.ib()
target_user_id = attr.ib()
target_device_id = attr.ib()
signature = attr.ib()
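
Since attr.s generates an __init__ in field-declaration order, items can be built positionally; a hypothetical example:

item = SignatureListItem(
    "ed25519:selfsigning+pubkey",  # signing_key_id
    "@alice:example.org",          # target_user_id
    "ALICEDEVICE",                 # target_device_id
    "signature+bytes",             # signature
)
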


@@ -352,8 +352,8 @@ class E2eRoomKeysHandler(object):
A deferred of an empty dict.
"""
if "version" not in version_info:
raise SynapseError(400, "Missing version in body", Codes.MISSING_PARAM)
if version_info["version"] != version:
version_info["version"] = version
elif version_info["version"] != version:
raise SynapseError(
400, "Version in body does not match", Codes.INVALID_PARAM
)
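
The effect of this change, sketched as a standalone helper (the name is hypothetical): a body without a "version" field now inherits the version from the URL instead of being rejected with M_MISSING_PARAM, while a mismatched version remains an error:

def normalise_version_info(version_info, version):
    if "version" not in version_info:
        version_info["version"] = version
    elif version_info["version"] != version:
        raise ValueError("Version in body does not match")
    return version_info

assert normalise_version_info({}, "2") == {"version": "2"}
assert normalise_version_info({"version": "2"}, "2") == {"version": "2"}
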


@@ -30,6 +30,7 @@ from unpaddedbase64 import decode_base64
from twisted.internet import defer
from synapse import event_auth
from synapse.api.constants import EventTypes, Membership, RejectedReason
from synapse.api.errors import (
AuthError,
@@ -1763,7 +1764,7 @@ class FederationHandler(BaseHandler):
auth_for_e[(EventTypes.Create, "")] = create_event
try:
-self.auth.check(room_version, e, auth_events=auth_for_e)
+event_auth.check(room_version, e, auth_events=auth_for_e)
except SynapseError as err:
# we may get SynapseErrors here as well as AuthErrors. For
# instance, there are a couple of (ancient) events in some
@@ -1919,7 +1920,7 @@ class FederationHandler(BaseHandler):
}
try:
-self.auth.check(room_version, event, auth_events=current_auth_events)
+event_auth.check(room_version, event, auth_events=current_auth_events)
except AuthError as e:
logger.warn("Soft-failing %r because %s", event, e)
event.internal_metadata.soft_failed = True
@@ -2018,7 +2019,7 @@ class FederationHandler(BaseHandler):
)
try:
-self.auth.check(room_version, event, auth_events=auth_events)
+event_auth.check(room_version, event, auth_events=auth_events)
except AuthError as e:
logger.warn("Failed auth resolution for %r because %s", event, e)
raise e
@@ -2181,103 +2182,10 @@ class FederationHandler(BaseHandler):
auth_events.update(new_state)
different_auth = event_auth_events.difference(
e.event_id for e in auth_events.values()
)
+yield self._update_context_for_auth_events(
+event, context, auth_events, event_key
+)
-if not different_auth:
-# we're done
-return
-logger.info(
-"auth_events still refers to events which are not in the calculated auth "
-"chain after state resolution: %s",
-different_auth,
-)
-# Only do auth resolution if we have something new to say.
-# We can't prove an auth failure.
-do_resolution = False
-for e_id in different_auth:
-if e_id in have_events:
-if have_events[e_id] == RejectedReason.NOT_ANCESTOR:
-do_resolution = True
-break
-if not do_resolution:
-logger.info(
-"Skipping auth resolution due to lack of provable rejection reasons"
-)
-return
-logger.info("Doing auth resolution")
-prev_state_ids = yield context.get_prev_state_ids(self.store)
-# 1. Get what we think is the auth chain.
-auth_ids = yield self.auth.compute_auth_events(event, prev_state_ids)
-local_auth_chain = yield self.store.get_auth_chain(auth_ids, include_given=True)
-try:
-# 2. Get remote difference.
-try:
-result = yield self.federation_client.query_auth(
-origin, event.room_id, event.event_id, local_auth_chain
-)
-except RequestSendFailed as e:
-# The other side isn't around or doesn't implement the
-# endpoint, so lets just bail out.
-logger.info("Failed to query auth from remote: %s", e)
-return
-seen_remotes = yield self.store.have_seen_events(
-[e.event_id for e in result["auth_chain"]]
-)
-# 3. Process any remote auth chain events we haven't seen.
-for ev in result["auth_chain"]:
-if ev.event_id in seen_remotes:
-continue
-if ev.event_id == event.event_id:
-continue
-try:
-auth_ids = ev.auth_event_ids()
-auth = {
-(e.type, e.state_key): e
-for e in result["auth_chain"]
-if e.event_id in auth_ids or event.type == EventTypes.Create
-}
-ev.internal_metadata.outlier = True
-logger.debug(
-"do_auth %s different_auth: %s", event.event_id, e.event_id
-)
-yield self._handle_new_event(origin, ev, auth_events=auth)
-if ev.event_id in event_auth_events:
-auth_events[(ev.type, ev.state_key)] = ev
-except AuthError:
-pass
-except Exception:
-# FIXME:
-logger.exception("Failed to query auth chain")
-# 4. Look at rejects and their proofs.
-# TODO.
-yield self._update_context_for_auth_events(
-event, context, auth_events, event_key
-)
@defer.inlineCallbacks
def _update_context_for_auth_events(self, event, context, auth_events, event_key):
"""Update the state_ids in an event context after auth event resolution,
@@ -2444,15 +2352,6 @@ class FederationHandler(BaseHandler):
reason_map[e.event_id] = reason
if reason == RejectedReason.AUTH_ERROR:
pass
-elif reason == RejectedReason.REPLACED:
-# TODO: Get proof
-pass
-elif reason == RejectedReason.NOT_ANCESTOR:
-# TODO: Get proof.
-pass
logger.debug("construct_auth_difference returning")
return {


@@ -24,6 +24,7 @@ The methods that define policy are:
import logging
from contextlib import contextmanager
from typing import Dict, Set
from six import iteritems, itervalues
@@ -179,8 +180,9 @@ class PresenceHandler(object):
# we assume that all the sync requests on that process have stopped.
# Stored as a dict from process_id to set of user_id, and a dict of
# process_id to millisecond timestamp last updated.
-self.external_process_to_current_syncs = {}
-self.external_process_last_updated_ms = {}
+self.external_process_to_current_syncs = {}  # type: Dict[int, Set[str]]
+self.external_process_last_updated_ms = {}  # type: Dict[int, int]
self.external_sync_linearizer = Linearizer(name="external_sync_linearizer")
# Start a LoopingCall in 30s that fires every 5s.
@@ -349,10 +351,13 @@ class PresenceHandler(object):
if now - last_update > EXTERNAL_PROCESS_EXPIRY
]
for process_id in expired_process_ids:
+# For each expired process drop tracking info and check the users
+# that were syncing on that process to see if they need to be timed
+# out.
users_to_check.update(
-self.external_process_last_updated_ms.pop(process_id, ())
+self.external_process_to_current_syncs.pop(process_id, ())
)
-self.external_process_last_update.pop(process_id)
+self.external_process_last_updated_ms.pop(process_id)
states = [
self.user_to_current_state.get(user_id, UserPresenceState.default(user_id))


@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
-# Copyright 2018 New Vector Ltd
+# Copyright 2018, 2019 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -1127,6 +1127,11 @@ class SyncHandler(object):
# weren't in the previous sync *or* they left and rejoined.
users_that_have_changed.update(newly_joined_or_invited_users)
user_signatures_changed = yield self.store.get_users_whose_signatures_changed(
user_id, since_token.device_list_key
)
users_that_have_changed.update(user_signatures_changed)
# Now find users that we no longer track
for room_id in newly_left_rooms:
left_users = yield self.state.get_current_users_in_room(room_id)


@@ -388,7 +388,7 @@ class DirectServeResource(resource.Resource):
if not callback:
return super().render(request)
-resp = callback(request)
+resp = trace_servlet(self.__class__.__name__)(callback)(request)
# If it's a coroutine, turn it into a Deferred
if isinstance(resp, types.CoroutineType):


@@ -169,6 +169,7 @@ import contextlib
import inspect
import logging
import re
import types
from functools import wraps
from typing import Dict
@@ -778,8 +779,7 @@ def trace_servlet(servlet_name, extract_context=False):
return func
@wraps(func)
-@defer.inlineCallbacks
-def _trace_servlet_inner(request, *args, **kwargs):
+async def _trace_servlet_inner(request, *args, **kwargs):
request_tags = {
"request_id": request.get_request_id(),
tags.SPAN_KIND: tags.SPAN_KIND_RPC_SERVER,
@@ -796,8 +796,14 @@ def trace_servlet(servlet_name, extract_context=False):
scope = start_active_span(servlet_name, tags=request_tags)
with scope:
-result = yield defer.maybeDeferred(func, request, *args, **kwargs)
-return result
+result = func(request, *args, **kwargs)
+if not isinstance(result, (types.CoroutineType, defer.Deferred)):
+# Some servlets aren't async and just return results
+# directly, so we handle that here.
+return result
+return await result
return _trace_servlet_inner
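
The rewritten wrapper has to cope with three kinds of servlet callback: plain functions, coroutines, and functions returning Deferreds. A reduced sketch of that dispatch outside Synapse (the helper name is hypothetical):

import types

from twisted.internet import defer

async def call_and_maybe_await(func, *args, **kwargs):
    result = func(*args, **kwargs)
    if not isinstance(result, (types.CoroutineType, defer.Deferred)):
        # synchronous callback: pass the result straight through
        return result
    # coroutines and Deferreds are both awaitable under Twisted
    return await result
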


@@ -16,8 +16,8 @@
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
-from synapse.storage.account_data import AccountDataWorkerStore
-from synapse.storage.tags import TagsWorkerStore
+from synapse.storage.data_stores.main.account_data import AccountDataWorkerStore
+from synapse.storage.data_stores.main.tags import TagsWorkerStore
class SlavedAccountDataStore(TagsWorkerStore, AccountDataWorkerStore, BaseSlavedStore):


@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from synapse.storage.appservice import (
+from synapse.storage.data_stores.main.appservice import (
ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore,
)


@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from synapse.storage.client_ips import LAST_SEEN_GRANULARITY
+from synapse.storage.data_stores.main.client_ips import LAST_SEEN_GRANULARITY
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.caches.descriptors import Cache


@@ -15,7 +15,7 @@
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
-from synapse.storage.deviceinbox import DeviceInboxWorkerStore
+from synapse.storage.data_stores.main.deviceinbox import DeviceInboxWorkerStore
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.caches.stream_change_cache import StreamChangeCache


@@ -15,8 +15,8 @@
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
-from synapse.storage.devices import DeviceWorkerStore
-from synapse.storage.end_to_end_keys import EndToEndKeyWorkerStore
+from synapse.storage.data_stores.main.devices import DeviceWorkerStore
+from synapse.storage.data_stores.main.end_to_end_keys import EndToEndKeyWorkerStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
@@ -33,6 +33,9 @@ class SlavedDeviceStore(EndToEndKeyWorkerStore, DeviceWorkerStore, BaseSlavedStore):
self._device_list_stream_cache = StreamChangeCache(
"DeviceListStreamChangeCache", device_list_max
)
self._user_signature_stream_cache = StreamChangeCache(
"UserSignatureStreamChangeCache", device_list_max
)
self._device_list_federation_stream_cache = StreamChangeCache(
"DeviceListFederationStreamChangeCache", device_list_max
)
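
A rough sketch of how such a StreamChangeCache is consulted (method names as in Synapse of this era; treat the details as approximate): writers record changes against stream positions, and readers ask whether an entity may have changed since the position they last saw:

from synapse.util.caches.stream_change_cache import StreamChangeCache

cache = StreamChangeCache("UserSignatureStreamChangeCache", 100)
cache.entity_has_changed("@alice:example.org", 105)  # change at stream position 105
assert cache.has_entity_changed("@alice:example.org", 101)      # changed since 101
assert not cache.has_entity_changed("@alice:example.org", 106)  # nothing after 106
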


@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from synapse.storage.directory import DirectoryWorkerStore
+from synapse.storage.data_stores.main.directory import DirectoryWorkerStore
from ._base import BaseSlavedStore


@@ -20,15 +20,17 @@ from synapse.replication.tcp.streams.events import (
EventsStreamCurrentStateRow,
EventsStreamEventRow,
)
-from synapse.storage.event_federation import EventFederationWorkerStore
-from synapse.storage.event_push_actions import EventPushActionsWorkerStore
-from synapse.storage.events_worker import EventsWorkerStore
-from synapse.storage.relations import RelationsWorkerStore
-from synapse.storage.roommember import RoomMemberWorkerStore
-from synapse.storage.signatures import SignatureWorkerStore
-from synapse.storage.state import StateGroupWorkerStore
-from synapse.storage.stream import StreamWorkerStore
-from synapse.storage.user_erasure_store import UserErasureWorkerStore
+from synapse.storage.data_stores.main.event_federation import EventFederationWorkerStore
+from synapse.storage.data_stores.main.event_push_actions import (
+EventPushActionsWorkerStore,
+)
+from synapse.storage.data_stores.main.events_worker import EventsWorkerStore
+from synapse.storage.data_stores.main.relations import RelationsWorkerStore
+from synapse.storage.data_stores.main.roommember import RoomMemberWorkerStore
+from synapse.storage.data_stores.main.signatures import SignatureWorkerStore
+from synapse.storage.data_stores.main.state import StateGroupWorkerStore
+from synapse.storage.data_stores.main.stream import StreamWorkerStore
+from synapse.storage.data_stores.main.user_erasure_store import UserErasureWorkerStore
from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker


@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from synapse.storage.filtering import FilteringStore
+from synapse.storage.data_stores.main.filtering import FilteringStore
from ._base import BaseSlavedStore

Some files were not shown because too many files have changed in this diff.