Merge branch 'develop' into rav/enforce_report_api

This commit is contained in:
Richard van der Hoff 2018-07-12 09:56:28 +01:00
commit 482d17b58b
370 changed files with 5309 additions and 2679 deletions

.gitignore (vendored)

@@ -43,6 +43,7 @@ media_store/
build/
venv/
venv*/
localhost-800*/
static/client/register/register_config.js


@@ -4,7 +4,12 @@ language: python
# tell travis to cache ~/.cache/pip
cache: pip

before_script:
  - git remote set-branches --add origin develop
  - git fetch origin develop

matrix:
  fast_finish: true
  include:
    - python: 2.7
      env: TOX_ENV=packaging

@@ -14,10 +19,13 @@ matrix:
    - python: 2.7
      env: TOX_ENV=py27
    - python: 3.6
      env: TOX_ENV=py36
    - python: 3.6
      env: TOX_ENV=check-newsfragment

install:
  - pip install tox


@@ -1,3 +1,150 @@
Synapse 0.32.2 (2018-07-07)
===========================

Bugfixes
--------

- Amend the Python dependencies to depend on attrs from PyPI, not attr (`#3492 <https://github.com/matrix-org/synapse/issues/3492>`_)

Synapse 0.32.1 (2018-07-06)
===========================

Bugfixes
--------

- Add explicit dependency on netaddr (`#3488 <https://github.com/matrix-org/synapse/issues/3488>`_)

Changes in synapse v0.32.0 (2018-07-06)
=======================================

No changes since 0.32.0rc1

Synapse 0.32.0rc1 (2018-07-05)
==============================

Features
--------

- Add blacklist & whitelist of servers allowed to send events to a room via ``m.room.server_acl`` event.
- Cache factor override system for specific caches (`#3334 <https://github.com/matrix-org/synapse/issues/3334>`_)
- Add metrics to track appservice transactions (`#3344 <https://github.com/matrix-org/synapse/issues/3344>`_)
- Try to log more helpful info when a sig verification fails (`#3372 <https://github.com/matrix-org/synapse/issues/3372>`_)
- Synapse now uses the best performing JSON encoder/decoder according to your runtime (simplejson on CPython, stdlib json on PyPy). (`#3462 <https://github.com/matrix-org/synapse/issues/3462>`_)
- Add optional ip_range_whitelist param to AS registration files to lock AS IP access (`#3465 <https://github.com/matrix-org/synapse/issues/3465>`_)
- Reject invalid server names in federation requests (`#3480 <https://github.com/matrix-org/synapse/issues/3480>`_)
- Reject invalid server names in homeserver.yaml (`#3483 <https://github.com/matrix-org/synapse/issues/3483>`_)

Bugfixes
--------

- Strip access_token from outgoing requests (`#3327 <https://github.com/matrix-org/synapse/issues/3327>`_)
- Redact AS tokens in logs (`#3349 <https://github.com/matrix-org/synapse/issues/3349>`_)
- Fix federation backfill from SQLite servers (`#3355 <https://github.com/matrix-org/synapse/issues/3355>`_)
- Fix event-purge-by-ts admin API (`#3363 <https://github.com/matrix-org/synapse/issues/3363>`_)
- Fix event filtering in get_missing_events handler (`#3371 <https://github.com/matrix-org/synapse/issues/3371>`_)
- Synapse is now stricter regarding accepting events which it cannot retrieve the prev_events for. (`#3456 <https://github.com/matrix-org/synapse/issues/3456>`_)
- Fix bug where synapse would explode when receiving unicode in HTTP User-Agent header (`#3470 <https://github.com/matrix-org/synapse/issues/3470>`_)
- Invalidate cache on correct thread to avoid race (`#3473 <https://github.com/matrix-org/synapse/issues/3473>`_)

Improved Documentation
----------------------

- ``doc/postgres.rst``: fix display of the last command block. Thanks to @ArchangeGabriel! (`#3340 <https://github.com/matrix-org/synapse/issues/3340>`_)

Deprecations and Removals
-------------------------

- Remove was_forgotten_at (`#3324 <https://github.com/matrix-org/synapse/issues/3324>`_)

Misc
----

- `#3332 <https://github.com/matrix-org/synapse/issues/3332>`_, `#3341 <https://github.com/matrix-org/synapse/issues/3341>`_, `#3347 <https://github.com/matrix-org/synapse/issues/3347>`_, `#3348 <https://github.com/matrix-org/synapse/issues/3348>`_, `#3356 <https://github.com/matrix-org/synapse/issues/3356>`_, `#3385 <https://github.com/matrix-org/synapse/issues/3385>`_, `#3446 <https://github.com/matrix-org/synapse/issues/3446>`_, `#3447 <https://github.com/matrix-org/synapse/issues/3447>`_, `#3467 <https://github.com/matrix-org/synapse/issues/3467>`_, `#3474 <https://github.com/matrix-org/synapse/issues/3474>`_

Changes in synapse v0.31.2 (2018-06-14)
=======================================

SECURITY UPDATE: Prevent unauthorised users from setting state events in a room
when there is no ``m.room.power_levels`` event in force in the room. (PR #3397)

Discussion around the Matrix Spec change proposal for this change can be
followed at https://github.com/matrix-org/matrix-doc/issues/1304.

Changes in synapse v0.31.1 (2018-06-08)
=======================================

v0.31.1 fixes a security bug in the ``get_missing_events`` federation API
where event visibility rules were not applied correctly.

We are not aware of it being actively exploited but please upgrade asap.

Bug Fixes:

* Fix event filtering in get_missing_events handler (PR #3371)

Changes in synapse v0.31.0 (2018-06-06)
=======================================

Most notable change from v0.30.0 is to switch to the python prometheus library to improve system
stats reporting. WARNING: this changes a number of prometheus metrics in a
backwards-incompatible manner. For more details, see
`docs/metrics-howto.rst <docs/metrics-howto.rst#removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310>`_.

Bug Fixes:

* Fix metric documentation tables (PR #3341)
* Fix LaterGauge error handling (694968f)
* Fix replication metrics (b7e7fd2)

Changes in synapse v0.31.0-rc1 (2018-06-04)
===========================================

Features:

* Switch to the Python Prometheus library (PR #3256, #3274)
* Let users leave the server notice room after joining (PR #3287)

Changes:

* daily user type phone home stats (PR #3264)
* Use iter* methods for _filter_events_for_server (PR #3267)
* Docs on consent bits (PR #3268)
* Remove users from user directory on deactivate (PR #3277)
* Avoid sending consent notice to guest users (PR #3288)
* disable CPUMetrics if no /proc/self/stat (PR #3299)
* Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (PR #3307)
* Add private IPv6 addresses to example config for url preview blacklist (PR #3317) Thanks to @thegcat!
* Reduce stuck read-receipts: ignore depth when updating (PR #3318)
* Put python's logs into Trial when running unit tests (PR #3319)

Changes, python 3 migration:

* Replace some more comparisons with six (PR #3243) Thanks to @NotAFile!
* replace some iteritems with six (PR #3244) Thanks to @NotAFile!
* Add batch_iter to utils (PR #3245) Thanks to @NotAFile!
* use repr, not str (PR #3246) Thanks to @NotAFile!
* Misc Python3 fixes (PR #3247) Thanks to @NotAFile!
* Py3 storage/_base.py (PR #3278) Thanks to @NotAFile!
* more six iteritems (PR #3279) Thanks to @NotAFile!
* More Misc. py3 fixes (PR #3280) Thanks to @NotAFile!
* remaining isintance fixes (PR #3281) Thanks to @NotAFile!
* py3-ize state.py (PR #3283) Thanks to @NotAFile!
* extend tox testing for py3 to avoid regressions (PR #3302) Thanks to @krombel!
* use memoryview in py3 (PR #3303) Thanks to @NotAFile!

Bugs:

* Fix federation backfill bugs (PR #3261)
* federation: fix LaterGauge usage (PR #3328) Thanks to @intelfx!

Changes in synapse v0.30.0 (2018-05-24)
=======================================


@@ -48,6 +48,26 @@ Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.

Changelog
~~~~~~~~~

All changes, even minor ones, need a corresponding changelog
entry. These are managed by Towncrier
(https://github.com/hawkowl/towncrier).

To create a changelog entry, make a new file in the ``changelog.d``
directory named in the format of ``issuenumberOrPR.type``. The type can be
one of ``feature``, ``bugfix``, ``removal`` (also used for
deprecations), or ``misc`` (for internal-only changes). The content of
the file is your changelog entry, which can contain RestructuredText
formatting. A note of contributors is welcomed in changelogs for
non-misc changes (the content of misc changes is not displayed).

For example, a fix for a bug reported in #1234 would have its
changelog entry in ``changelog.d/1234.bugfix``, and contain content
like "The security levels of Florbs are now validated when
received over federation. Contributed by Jane Matrix".
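The naming convention can be sketched in a few lines of Python; the PR number, type, and entry text below are the hypothetical examples from the paragraph above, not real entries:

```python
import os

# The four entry types towncrier recognises in this project.
ALLOWED_TYPES = {"feature", "bugfix", "removal", "misc"}

def write_changelog_entry(directory, pr_number, entry_type, text):
    """Write a changelog.d entry named <PR-number>.<type>, as towncrier expects."""
    if entry_type not in ALLOWED_TYPES:
        raise ValueError("unknown changelog type: %s" % entry_type)
    path = os.path.join(directory, "%d.%s" % (pr_number, entry_type))
    with open(path, "w") as f:
        f.write(text)
    return path

os.makedirs("changelog.d", exist_ok=True)
path = write_changelog_entry(
    "changelog.d", 1234, "bugfix",
    "The security levels of Florbs are now validated when received "
    "over federation. Contributed by Jane Matrix",
)
```

Running towncrier at release time then collects these files into CHANGES.rst and deletes them.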

Attribution
~~~~~~~~~~~

@@ -110,11 +130,15 @@ If you agree to this for your contribution, then all that's needed is to
include the line in your commit or pull request comment::

    Signed-off-by: Your Name <your@email.example.org>

We accept contributions under a legally identifiable name, such as
your name on government documentation or common-law names (names
claimed by legitimate usage or repute). Unfortunately, we cannot
accept anonymous contributions at this time.

Git allows you to add this signoff automatically when using the ``-s``
flag to ``git commit``, which uses the name and email set in your
``user.name`` and ``user.email`` git configs.

Conclusion
~~~~~~~~~~


@@ -29,5 +29,8 @@ exclude Dockerfile
exclude .dockerignore
recursive-exclude jenkins *.sh

include pyproject.toml
recursive-include changelog.d *

prune .github
prune demo/etc

changelog.d/.gitignore (new file, vendored)

@@ -0,0 +1 @@
!.gitignore

changelog.d/3463.misc (new, empty file)

changelog.d/3464.misc (new, empty file)

changelog.d/3496.feature (new file)

@@ -0,0 +1 @@
Include CPU time from database threads in request/block metrics.

changelog.d/3497.feature (new file)

@@ -0,0 +1 @@
Add CPU metrics for _fetch_event_list

changelog.d/3498.misc (new, empty file)

changelog.d/3501.misc (new, empty file)

changelog.d/3505.feature (new file)

@@ -0,0 +1 @@
Reduce database consumption when processing large numbers of receipts


@@ -44,13 +44,26 @@ Deactivate Account

This API deactivates an account. It removes active access tokens, resets the
password, and deletes third-party IDs (to prevent the user requesting a
password reset). It can also mark the user as GDPR-erased (stopping their data
from being distributed further, and deleting it entirely if there are no other
references to it).

The api is::

    POST /_matrix/client/r0/admin/deactivate/<user_id>

with a body of:

.. code:: json

    {
        "erase": true
    }

including an ``access_token`` of a server admin.

The ``erase`` parameter is optional and defaults to ``false``.
An empty body may be passed for backwards compatibility.
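As a sketch, such a request could be assembled from Python as follows; the homeserver URL, user ID, and token are placeholders, and passing the token as a query parameter mirrors the ``access_token`` convention used by the admin API:

```python
import json

def build_deactivate_request(base_url, user_id, access_token, erase=False):
    """Return (url, body) for the admin deactivate endpoint sketched above."""
    url = "%s/_matrix/client/r0/admin/deactivate/%s?access_token=%s" % (
        base_url, user_id, access_token,
    )
    # "erase" is optional and defaults to false server-side; an empty
    # body is also accepted for backwards compatibility.
    body = json.dumps({"erase": erase})
    return url, body

url, body = build_deactivate_request(
    "https://matrix.example.com", "@user:example.com", "ADMIN_TOKEN", erase=True,
)
```

The resulting pair can be POSTed with any HTTP client by a server admin.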

Reset password


@@ -16,7 +16,7 @@

    print("I am a fish %s" %
          "moo")

and this::

    print(
        "I am a fish %s" %


@ -1,25 +1,47 @@
How to monitor Synapse metrics using Prometheus How to monitor Synapse metrics using Prometheus
=============================================== ===============================================
1. Install prometheus: 1. Install Prometheus:
Follow instructions at http://prometheus.io/docs/introduction/install/ Follow instructions at http://prometheus.io/docs/introduction/install/
2. Enable synapse metrics: 2. Enable Synapse metrics:
Simply setting a (local) port number will enable it. Pick a port. There are two methods of enabling metrics in Synapse.
prometheus itself defaults to 9090, so starting just above that for
locally monitored services seems reasonable. E.g. 9092:
Add to homeserver.yaml:: The first serves the metrics as a part of the usual web server and can be
enabled by adding the "metrics" resource to the existing listener as such::
metrics_port: 9092 resources:
- names:
- client
- metrics
Also ensure that ``enable_metrics`` is set to ``True``. This provides a simple way of adding metrics to your Synapse installation,
and serves under ``/_synapse/metrics``. If you do not wish your metrics be
publicly exposed, you will need to either filter it out at your load
balancer, or use the second method.
Restart synapse. The second method runs the metrics server on a different port, in a
different thread to Synapse. This can make it more resilient to heavy load
meaning metrics cannot be retrieved, and can be exposed to just internal
networks easier. The served metrics are available over HTTP only, and will
be available at ``/``.
3. Add a prometheus target for synapse. Add a new listener to homeserver.yaml::
listeners:
- type: metrics
port: 9000
bind_addresses:
- '0.0.0.0'
For both options, you will need to ensure that ``enable_metrics`` is set to
``True``.
Restart Synapse.
3. Add a Prometheus target for Synapse.
It needs to set the ``metrics_path`` to a non-default value (under ``scrape_configs``):: It needs to set the ``metrics_path`` to a non-default value (under ``scrape_configs``)::
@ -31,7 +53,50 @@ How to monitor Synapse metrics using Prometheus
If your prometheus is older than 1.5.2, you will need to replace If your prometheus is older than 1.5.2, you will need to replace
``static_configs`` in the above with ``target_groups``. ``static_configs`` in the above with ``target_groups``.
Restart prometheus. Restart Prometheus.
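For illustration, the scrape job might look like the following, expressed here as the Python equivalent of a prometheus.yml fragment; the job name and target host/port are placeholder assumptions, and ``metrics_path`` corresponds to the first method's ``/_synapse/metrics`` endpoint:

```python
# Hypothetical Prometheus scrape job for a Synapse homeserver. With the
# dedicated-listener method, metrics_path would instead be "/" and the
# target port would match the metrics listener (9000 above).
scrape_config = {
    "job_name": "synapse",
    # Synapse does not serve metrics at Prometheus's default /metrics
    # path, so metrics_path must be set explicitly.
    "metrics_path": "/_synapse/metrics",
    "static_configs": [
        {"targets": ["localhost:9092"]},
    ],
}
```

This dict serialises directly to the YAML shape Prometheus expects under ``scrape_configs``.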

Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
---------------------------------------------------------------------------------

The duplicated metrics deprecated in Synapse 0.27.0 have been removed.

All time duration-based metrics have been changed to be seconds. This affects:

+----------------------------------+
| msec -> sec metrics |
+==================================+
| python_gc_time |
+----------------------------------+
| python_twisted_reactor_tick_time |
+----------------------------------+
| synapse_storage_query_time |
+----------------------------------+
| synapse_storage_schedule_time |
+----------------------------------+
| synapse_storage_transaction_time |
+----------------------------------+

Several metrics have been changed to be histograms, which sort entries into
buckets and allow better analysis. The following metrics are now histograms:

+-------------------------------------------+
| Altered metrics |
+===========================================+
| python_gc_time |
+-------------------------------------------+
| python_twisted_reactor_pending_calls |
+-------------------------------------------+
| python_twisted_reactor_tick_time |
+-------------------------------------------+
| synapse_http_server_response_time_seconds |
+-------------------------------------------+
| synapse_storage_query_time |
+-------------------------------------------+
| synapse_storage_schedule_time |
+-------------------------------------------+
| synapse_storage_transaction_time |
+-------------------------------------------+

Block and response metrics renamed for 0.27.0


@@ -9,19 +9,19 @@ Set up database

Assuming your PostgreSQL database user is called ``postgres``, create a user
``synapse_user`` with::

    su - postgres
    createuser --pwprompt synapse_user

The PostgreSQL database used *must* have the correct encoding set, otherwise it
would not be able to store UTF8 strings. To create a database with the correct
encoding use, e.g.::

    CREATE DATABASE synapse
     ENCODING 'UTF8'
     LC_COLLATE='C'
     LC_CTYPE='C'
     template=template0
     OWNER synapse_user;

This would create an appropriate database named ``synapse`` owned by the
``synapse_user`` user (which must already exist).

@@ -126,7 +126,7 @@ run::

    --postgres-config homeserver-postgres.yaml

Once that has completed, change the synapse config to point at the PostgreSQL
database configuration file ``homeserver-postgres.yaml``::

    ./synctl stop
    mv homeserver.yaml homeserver-old-sqlite.yaml

pyproject.toml (new file)

@@ -0,0 +1,5 @@
[tool.towncrier]
package = "synapse"
filename = "CHANGES.rst"
directory = "changelog.d"
issue_format = "`#{issue} <https://github.com/matrix-org/synapse/issues/{issue}>`_"
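The ``issue_format`` value is a Python-style format string; as a quick sanity check, this sketch renders the reStructuredText link it would produce for one of the issue numbers used above:

```python
# Render the towncrier issue_format for issue 3496 (one of the new
# changelog.d entries in this commit).
issue_format = "`#{issue} <https://github.com/matrix-org/synapse/issues/{issue}>`_"
link = issue_format.format(issue=3496)

assert link == "`#3496 <https://github.com/matrix-org/synapse/issues/3496>`_"
```

Towncrier substitutes each entry's filename-derived issue number into this template when building CHANGES.rst.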


@@ -18,14 +18,22 @@
from __future__ import print_function

import argparse
from urlparse import urlparse, urlunparse

import nacl.signing
import json
import base64
import requests
import sys
from requests.adapters import HTTPAdapter
import srvlookup
import yaml

# uncomment the following to enable debug logging of http requests
#from httplib import HTTPConnection
#HTTPConnection.debuglevel = 1

def encode_base64(input_bytes):
    """Encode bytes as a base64 string without any padding."""
@@ -113,17 +121,6 @@ def read_signing_keys(stream):
    return keys
def lookup(destination, path):
    if ":" in destination:
        return "https://%s%s" % (destination, path)
    else:
        try:
            srv = srvlookup.lookup("matrix", "tcp", destination)[0]
            return "https://%s:%d%s" % (srv.host, srv.port, path)
        except:
            return "https://%s:%d%s" % (destination, 8448, path)
def request_json(method, origin_name, origin_key, destination, path, content):
    if method is None:
        if content is None:

@@ -152,13 +149,19 @@ def request_json(method, origin_name, origin_key, destination, path, content):
    authorization_headers.append(bytes(header))
    print ("Authorization: %s" % header, file=sys.stderr)

    dest = "matrix://%s%s" % (destination, path)
    print ("Requesting %s" % dest, file=sys.stderr)

    s = requests.Session()
    s.mount("matrix://", MatrixConnectionAdapter())

    result = s.request(
        method=method,
        url=dest,
        headers={
            "Host": destination,
            "Authorization": authorization_headers[0]
        },
        verify=False,
        data=content,
    )
@@ -242,5 +245,39 @@ def read_args_from_config(args):
        args.signing_key_path = config['signing_key_path']

class MatrixConnectionAdapter(HTTPAdapter):
    @staticmethod
    def lookup(s):
        if s[-1] == ']':
            # ipv6 literal (with no port)
            return s, 8448

        if ":" in s:
            out = s.rsplit(":", 1)
            try:
                port = int(out[1])
            except ValueError:
                raise ValueError("Invalid host:port '%s'" % s)
            return out[0], port

        try:
            srv = srvlookup.lookup("matrix", "tcp", s)[0]
            return srv.host, srv.port
        except:
            return s, 8448

    def get_connection(self, url, proxies=None):
        parsed = urlparse(url)

        (host, port) = self.lookup(parsed.netloc)
        netloc = "%s:%d" % (host, port)
        print("Connecting to %s" % (netloc,), file=sys.stderr)
        url = urlunparse((
            "https", netloc, parsed.path, parsed.params, parsed.query,
            parsed.fragment,
        ))
        return super(MatrixConnectionAdapter, self).get_connection(url, proxies)

if __name__ == "__main__":
    main()
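The ``lookup`` parsing in the adapter above can be exercised standalone; this sketch reimplements just the explicit host:port branches (the SRV fallback is omitted so it needs no network access or ``srvlookup`` dependency):

```python
# Standalone sketch of MatrixConnectionAdapter.lookup's host:port parsing.
def parse_host_port(s, default_port=8448):
    if s.endswith("]"):
        # IPv6 literal with no port, e.g. "[::1]"
        return s, default_port
    if ":" in s:
        host, _, port = s.rpartition(":")
        try:
            return host, int(port)
        except ValueError:
            raise ValueError("Invalid host:port '%s'" % s)
    # A bare server name would normally fall back to an SRV lookup,
    # then to the default federation port.
    return s, default_port

assert parse_host_port("example.com:8449") == ("example.com", 8449)
assert parse_host_port("[::1]") == ("[::1]", 8448)
assert parse_host_port("example.com") == ("example.com", 8448)
```

Note how an IPv6 literal without a port is recognised by its trailing ``]`` before the colon check, so its internal colons are not mistaken for a port separator.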


@@ -17,4 +17,17 @@ ignore =

[flake8]
max-line-length = 90
# W503 requires that binary operators be at the end, not start, of lines. Erik doesn't like it.
# E203 is contrary to PEP8.
ignore = W503,E203

[isort]
line_length = 89
not_skip = __init__.py
sections=FUTURE,STDLIB,COMPAT,THIRDPARTY,TWISTED,FIRSTPARTY,TESTS,LOCALFOLDER
default_section=THIRDPARTY
known_first_party = synapse
known_tests=tests
known_compat = mock,six
known_twisted=twisted,OpenSSL
multi_line_output=3
include_trailing_comma=true


@ -1,5 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd # Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# #
# Licensed under the Apache License, Version 2.0 (the "License"); # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. # you may not use this file except in compliance with the License.
@ -16,4 +17,4 @@
""" This is a reference implementation of a Matrix home server. """ This is a reference implementation of a Matrix home server.
""" """
__version__ = "0.30.0" __version__ = "0.32.2"


@@ -15,15 +15,19 @@

import logging

from six import itervalues

import pymacaroons
from netaddr import IPAddress

from twisted.internet import defer

import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, Codes
from synapse.types import UserID
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
from synapse.util.caches.lrucache import LruCache
from synapse.util.metrics import Measure
@@ -66,7 +70,7 @@ class Auth(object):
        )
        auth_events = yield self.store.get_events(auth_events_ids)
        auth_events = {
            (e.type, e.state_key): e for e in itervalues(auth_events)
        }
        self.check(event, auth_events=auth_events, do_sig_check=do_sig_check)
@@ -242,6 +246,11 @@ class Auth(object):
            if app_service is None:
                defer.returnValue((None, None))

            if app_service.ip_range_whitelist:
                ip_address = IPAddress(self.hs.get_ip_from_request(request))
                if ip_address not in app_service.ip_range_whitelist:
                    defer.returnValue((None, None))

            if "user_id" not in request.args:
                defer.returnValue((app_service.sender, app_service))
@@ -486,7 +495,7 @@ class Auth(object):
    def _look_up_user_by_access_token(self, token):
        ret = yield self.store.get_user_by_access_token(token)
        if not ret:
            logger.warn("Unrecognised access token - not in store.")
            raise AuthError(
                self.TOKEN_NOT_FOUND_HTTP_STATUS, "Unrecognised access token.",
                errcode=Codes.UNKNOWN_TOKEN
@@ -509,7 +518,7 @@ class Auth(object):
        )
        service = self.store.get_app_service_by_token(token)
        if not service:
            logger.warn("Unrecognised appservice access token.")
            raise AuthError(
                self.TOKEN_NOT_FOUND_HTTP_STATUS,
                "Unrecognised access token.",
@@ -653,7 +662,7 @@ class Auth(object):
        auth_events[(EventTypes.PowerLevels, "")] = power_level_event

        send_level = event_auth.get_send_level(
            EventTypes.Aliases, "", power_level_event,
        )
        user_level = event_auth.get_user_power_level(user_id, auth_events)


@@ -76,6 +76,8 @@ class EventTypes(object):
    Topic = "m.room.topic"
    Name = "m.room.name"

    ServerACL = "m.room.server_acl"


class RejectedReason(object):
    AUTH_ERROR = "auth_error"


@@ -17,10 +17,11 @@

import logging

from six import iteritems
from six.moves import http_client

from canonicaljson import json

logger = logging.getLogger(__name__)


@@ -12,14 +12,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import jsonschema
from canonicaljson import json
from jsonschema import FormatChecker

from twisted.internet import defer

from synapse.api.errors import SynapseError
from synapse.storage.presence import UserPresenceState
from synapse.types import RoomID, UserID

FILTER_SCHEMA = {
    "additionalProperties": False,

@@ -411,7 +412,7 @@ class Filter(object):
        return room_ids

    def filter(self, events):
        return list(filter(self.check, events))

    def limit(self):
        return self.filter_json.get("limit", 10)
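The ``list(...)`` wrapper in ``Filter.filter`` matters on Python 3, where the built-in ``filter`` returns a lazy one-shot iterator rather than a list; a minimal illustration (the event IDs are made up):

```python
# On Python 3, filter() yields a lazy iterator that can only be
# consumed once; wrapping it in list() gives callers a reusable,
# indexable result, matching Python 2 behaviour.
events = ["$a:hs", "$b:hs", "$c:hs"]

lazy = filter(lambda e: e != "$b:hs", events)
materialised = list(filter(lambda e: e != "$b:hs", events))

assert materialised == ["$a:hs", "$c:hs"]
assert list(lazy) == ["$a:hs", "$c:hs"]
# The lazy iterator is now exhausted; a second pass yields nothing.
assert list(lazy) == []
```

This is the same class of py2/py3 fix as the ``iteritems``/``list()`` changes noted in the changelog.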


@@ -15,8 +15,8 @@
# limitations under the License.

"""Contains the URL paths to prefix various aspects of the server with. """
import hmac
from hashlib import sha256

from six.moves.urllib.parse import urlencode


@@ -14,9 +14,11 @@
# limitations under the License.

import sys

sys.dont_write_bytecode = True

from synapse import python_dependencies  # noqa: E402

try:
    python_dependencies.check_requirements()


@@ -17,15 +17,18 @@ import gc
import logging
import sys

try:
    import affinity
except Exception:
    affinity = None

from daemonize import Daemonize

from synapse.util import PreserveLoggingContext
from synapse.util.rlimit import change_resource_limit

from twisted.internet import error, reactor

logger = logging.getLogger(__name__)
@@ -124,6 +127,19 @@ def quit_with_error(error_string):
    sys.exit(1)


def listen_metrics(bind_addresses, port):
    """
    Start Prometheus metrics server.
    """
    from synapse.metrics import RegistryProxy
    from prometheus_client import start_http_server

    for host in bind_addresses:
        reactor.callInThread(start_http_server, int(port),
                             addr=host, registry=RegistryProxy)
        logger.info("Metrics now reporting on %s:%d", host, port)


def listen_tcp(bind_addresses, port, factory, backlog=50):
    """
    Create a TCP socket for a port and several addresses


@ -16,6 +16,9 @@
import logging import logging
import sys import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse import synapse
from synapse import events from synapse import events
from synapse.app import _base from synapse.app import _base
@ -23,6 +26,7 @@ from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore from synapse.replication.slave.storage.directory import DirectoryStore
@ -35,8 +39,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.appservice") logger = logging.getLogger("synapse.app.appservice")
@ -62,7 +64,7 @@ class AppserviceServer(HomeServer):
for res in listener_config["resources"]: for res in listener_config["resources"]:
for name in res["names"]: for name in res["names"]:
if name == "metrics": if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self) resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource()) root_resource = create_resource_tree(resources, NoResource())
@ -94,6 +96,13 @@ class AppserviceServer(HomeServer):
globals={"hs": self}, globals={"hs": self},
) )
) )
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else: else:
logger.warn("Unrecognized listener type: %s", listener["type"]) logger.warn("Unrecognized listener type: %s", listener["type"])
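The same `elif listener["type"] == "metrics"` branch is added to every worker's `start_listening` loop in this merge. Below is a minimal, stand-alone sketch of that dispatch pattern; `listen_metrics` here is a hypothetical stand-in for `synapse.app._base.listen_metrics` (whose body is not shown in this diff), and the listener dicts are illustrative.

```python
import logging

logger = logging.getLogger("demo")

started = []


def listen_metrics(bind_addresses, port):
    # Stand-in for synapse.app._base.listen_metrics: in Synapse this
    # would start a Prometheus scrape endpoint on each address.
    for host in bind_addresses:
        started.append((host, port))


def start_listening(listeners, enable_metrics):
    for listener in listeners:
        if listener["type"] == "http":
            pass  # set up a SynapseSite (elided)
        elif listener["type"] == "manhole":
            pass  # set up a manhole (elided)
        elif listener["type"] == "metrics":
            # A metrics listener is ignored unless metrics are enabled
            # globally -- exactly the guard the diff adds.
            if not enable_metrics:
                logger.warning("Metrics listener configured, but "
                               "enable_metrics is not True!")
            else:
                listen_metrics(listener["bind_addresses"], listener["port"])
        else:
            logger.warning("Unrecognized listener type: %s", listener["type"])


start_listening(
    [{"type": "metrics", "bind_addresses": ["127.0.0.1"], "port": 9101}],
    enable_metrics=True,
)
print(started)  # -> [('127.0.0.1', 9101)]
```

Note the `elif` chain means a listener dict is handled by at most one branch, so an unrecognized type is only warned about once.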

synapse/app/client_reader.py

@@ -16,6 +16,9 @@
 import logging
 import sys
 
+from twisted.internet import reactor
+from twisted.web.resource import NoResource
+
 import synapse
 from synapse import events
 from synapse.app import _base
@@ -25,6 +28,7 @@ from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.http.server import JsonResource
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -43,8 +47,6 @@ from synapse.util.httpresourcetree import create_resource_tree
 from synapse.util.logcontext import LoggingContext
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
-from twisted.internet import reactor
-from twisted.web.resource import NoResource
 
 logger = logging.getLogger("synapse.app.client_reader")
@@ -77,7 +79,7 @@ class ClientReaderServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "client":
                     resource = JsonResource(self, canonical_json=False)
                     PublicRoomListRestServlet(self).register(resource)
@@ -118,7 +120,13 @@ class ClientReaderServer(HomeServer):
                        globals={"hs": self},
                    )
                )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "enable_metrics is not True!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])

synapse/app/event_creator.py

@@ -16,6 +16,9 @@
 import logging
 import sys
 
+from twisted.internet import reactor
+from twisted.web.resource import NoResource
+
 import synapse
 from synapse import events
 from synapse.app import _base
@@ -25,6 +28,7 @@ from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.http.server import JsonResource
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -42,8 +46,10 @@ from synapse.replication.slave.storage.room import RoomStore
 from synapse.replication.slave.storage.transactions import TransactionStore
 from synapse.replication.tcp.client import ReplicationClientHandler
 from synapse.rest.client.v1.room import (
-    RoomSendEventRestServlet, RoomMembershipRestServlet, RoomStateEventRestServlet,
     JoinRoomAliasServlet,
+    RoomMembershipRestServlet,
+    RoomSendEventRestServlet,
+    RoomStateEventRestServlet,
 )
 from synapse.server import HomeServer
 from synapse.storage.engines import create_engine
@@ -51,8 +57,6 @@ from synapse.util.httpresourcetree import create_resource_tree
 from synapse.util.logcontext import LoggingContext
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
-from twisted.internet import reactor
-from twisted.web.resource import NoResource
 
 logger = logging.getLogger("synapse.app.event_creator")
@@ -90,7 +94,7 @@ class EventCreatorServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "client":
                     resource = JsonResource(self, canonical_json=False)
                     RoomSendEventRestServlet(self).register(resource)
@@ -134,6 +138,13 @@ class EventCreatorServer(HomeServer):
                        globals={"hs": self},
                    )
                )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "enable_metrics is not True!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])
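Most of the churn in these files is import reordering into the style an isort-like tool enforces: standard library, third-party, then first-party groups, each alphabetized, with multi-line imports listing one name per line and a trailing comma. A small illustrative sketch (the import text built here is just a string, not executed) of the property the reordered block now satisfies:

```python
# The names from the reordered synapse.rest.client.v1.room import,
# exactly as they appear in the new version of the diff above.
names = [
    "JoinRoomAliasServlet",
    "RoomMembershipRestServlet",
    "RoomSendEventRestServlet",
    "RoomStateEventRestServlet",
]

# The one-name-per-line style makes the list trivially checkable:
# it must equal its own sorted form.
assert names == sorted(names)

# Rendering the block back out in the parenthesized, trailing-comma style:
block = "from synapse.rest.client.v1.room import (\n"
block += "".join("    {},\n".format(n) for n in names)
block += ")\n"
print(block)
```

The trailing comma keeps future one-line additions to a single-line diff.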

synapse/app/federation_reader.py

@@ -16,6 +16,9 @@
 import logging
 import sys
 
+from twisted.internet import reactor
+from twisted.web.resource import NoResource
+
 import synapse
 from synapse import events
 from synapse.api.urls import FEDERATION_PREFIX
@@ -26,6 +29,7 @@ from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.federation.transport.server import TransportLayerServer
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.directory import DirectoryStore
@@ -40,8 +44,6 @@ from synapse.util.httpresourcetree import create_resource_tree
 from synapse.util.logcontext import LoggingContext
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
-from twisted.internet import reactor
-from twisted.web.resource import NoResource
 
 logger = logging.getLogger("synapse.app.federation_reader")
@@ -71,7 +73,7 @@ class FederationReaderServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "federation":
                     resources.update({
                         FEDERATION_PREFIX: TransportLayerServer(self),
@@ -107,6 +109,13 @@ class FederationReaderServer(HomeServer):
                        globals={"hs": self},
                    )
                )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "enable_metrics is not True!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])

synapse/app/federation_sender.py

@@ -16,6 +16,9 @@
 import logging
 import sys
 
+from twisted.internet import defer, reactor
+from twisted.web.resource import NoResource
+
 import synapse
 from synapse import events
 from synapse.app import _base
@@ -25,6 +28,7 @@ from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.federation import send_queue
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
 from synapse.replication.slave.storage.devices import SlavedDeviceStore
@@ -41,8 +45,6 @@ from synapse.util.httpresourcetree import create_resource_tree
 from synapse.util.logcontext import LoggingContext, run_in_background
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
-from twisted.internet import defer, reactor
-from twisted.web.resource import NoResource
 
 logger = logging.getLogger("synapse.app.federation_sender")
@@ -89,7 +91,7 @@ class FederationSenderServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
 
         root_resource = create_resource_tree(resources, NoResource())
@@ -121,6 +123,13 @@ class FederationSenderServer(HomeServer):
                        globals={"hs": self},
                    )
                )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "enable_metrics is not True!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])

synapse/app/frontend_proxy.py

@@ -16,6 +16,9 @@
 import logging
 import sys
 
+from twisted.internet import defer, reactor
+from twisted.web.resource import NoResource
+
 import synapse
 from synapse import events
 from synapse.api.errors import SynapseError
@@ -25,10 +28,9 @@ from synapse.config.homeserver import HomeServerConfig
 from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.http.server import JsonResource
-from synapse.http.servlet import (
-    RestServlet, parse_json_object_from_request,
-)
+from synapse.http.servlet import RestServlet, parse_json_object_from_request
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -43,8 +45,6 @@ from synapse.util.httpresourcetree import create_resource_tree
 from synapse.util.logcontext import LoggingContext
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
-from twisted.internet import defer, reactor
-from twisted.web.resource import NoResource
 
 logger = logging.getLogger("synapse.app.frontend_proxy")
@@ -131,7 +131,7 @@ class FrontendProxyServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "client":
                     resource = JsonResource(self, canonical_json=False)
                     KeyUploadServlet(self).register(resource)
@@ -172,6 +172,13 @@ class FrontendProxyServer(HomeServer):
                        globals={"hs": self},
                    )
                )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "enable_metrics is not True!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])

synapse/app/homeserver.py

@@ -18,27 +18,39 @@ import logging
 import os
 import sys
 
+from twisted.application import service
+from twisted.internet import defer, reactor
+from twisted.web.resource import EncodingResourceWrapper, NoResource
+from twisted.web.server import GzipEncoderFactory
+from twisted.web.static import File
+
 import synapse
 import synapse.config.logger
 from synapse import events
-from synapse.api.urls import CONTENT_REPO_PREFIX, FEDERATION_PREFIX, \
-    LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, SERVER_KEY_PREFIX, SERVER_KEY_V2_PREFIX, \
-    STATIC_PREFIX, WEB_CLIENT_PREFIX
+from synapse.api.urls import (
+    CONTENT_REPO_PREFIX,
+    FEDERATION_PREFIX,
+    LEGACY_MEDIA_PREFIX,
+    MEDIA_PREFIX,
+    SERVER_KEY_PREFIX,
+    SERVER_KEY_V2_PREFIX,
+    STATIC_PREFIX,
+    WEB_CLIENT_PREFIX,
+)
 from synapse.app import _base
-from synapse.app._base import quit_with_error, listen_ssl, listen_tcp
+from synapse.app._base import listen_ssl, listen_tcp, quit_with_error
 from synapse.config._base import ConfigError
 from synapse.config.homeserver import HomeServerConfig
 from synapse.crypto import context_factory
 from synapse.federation.transport.server import TransportLayerServer
-from synapse.module_api import ModuleApi
 from synapse.http.additional_resource import AdditionalResource
 from synapse.http.server import RootRedirect
 from synapse.http.site import SynapseSite
 from synapse.metrics import RegistryProxy
-from synapse.metrics.resource import METRICS_PREFIX
-from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
-    check_requirements
-from synapse.replication.http import ReplicationRestResource, REPLICATION_PREFIX
+from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
+from synapse.module_api import ModuleApi
+from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, check_requirements
+from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
 from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
 from synapse.rest import ClientRestResource
 from synapse.rest.key.v1.server_key_resource import LocalKey
@@ -55,13 +67,6 @@ from synapse.util.manhole import manhole
 from synapse.util.module_loader import load_module
 from synapse.util.rlimit import change_resource_limit
 from synapse.util.versionstring import get_version_string
-from twisted.application import service
-from twisted.internet import defer, reactor
-from twisted.web.resource import EncodingResourceWrapper, NoResource
-from twisted.web.server import GzipEncoderFactory
-from twisted.web.static import File
-
-from prometheus_client.twisted import MetricsResource
 
 logger = logging.getLogger("synapse.app.homeserver")
@@ -232,7 +237,7 @@ class SynapseHomeServer(HomeServer):
                 resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)
 
             if name == "metrics" and self.get_config().enable_metrics:
-                resources[METRICS_PREFIX] = MetricsResource(RegistryProxy())
+                resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
 
             if name == "replication":
                 resources[REPLICATION_PREFIX] = ReplicationRestResource(self)
@@ -265,6 +270,13 @@ class SynapseHomeServer(HomeServer):
                reactor.addSystemEventTrigger(
                    "before", "shutdown", server_listener.stopListening,
                )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "enable_metrics is not True!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -313,11 +325,6 @@ def setup(config_options):
     # check any extra requirements we have now we have a config
     check_requirements(config)
 
-    version_string = "Synapse/" + get_version_string(synapse)
-
-    logger.info("Server hostname: %s", config.server_name)
-    logger.info("Server version: %s", version_string)
-
     events.USE_FROZEN_DICTS = config.use_frozen_dicts
 
     tls_server_context_factory = context_factory.ServerContextFactory(config)
@@ -330,7 +337,7 @@ def setup(config_options):
         db_config=config.database_config,
         tls_server_context_factory=tls_server_context_factory,
         config=config,
-        version_string=version_string,
+        version_string="Synapse/" + get_version_string(synapse),
         database_engine=database_engine,
     )
@@ -434,6 +441,10 @@ def run(hs):
         total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
         stats["total_nonbridged_users"] = total_nonbridged_users
 
+        daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
+        for name, count in daily_user_type_results.iteritems():
+            stats["daily_user_type_" + name] = count
+
        room_count = yield hs.get_datastore().get_room_count()
        stats["total_room_count"] = room_count
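A pattern worth noting in the homeserver diff: `MetricsResource(RegistryProxy)` passes the `RegistryProxy` class itself, not an instance (the old `RegistryProxy()` call is removed). This works because the metrics resource only needs an object exposing a `collect()` method, and a classmethod satisfies that. A stdlib-only sketch of the idea, with hypothetical names (`Registry`, `Metric`, `GLOBAL_REGISTRY`) that are not Synapse's actual internals:

```python
# A registry proxy only has to *look like* a metrics registry: the
# scrape resource calls registry.collect() and iterates the result.
# Proxying lets every caller share one global registry indirectly.

class Metric:
    def __init__(self, name, value):
        self.name = name
        self.value = value


class Registry:
    def __init__(self):
        self._metrics = []

    def register(self, metric):
        self._metrics.append(metric)

    def collect(self):
        for m in self._metrics:
            yield m


GLOBAL_REGISTRY = Registry()


class RegistryProxy(object):
    # Implemented on the class, so the class object itself can be
    # handed around wherever a registry is expected -- which is why
    # the diff writes MetricsResource(RegistryProxy) rather than
    # MetricsResource(RegistryProxy()).
    @classmethod
    def collect(cls):
        for metric in GLOBAL_REGISTRY.collect():
            yield metric


GLOBAL_REGISTRY.register(Metric("up", 1))
scraped = [m.name for m in RegistryProxy.collect()]
print(scraped)  # -> ['up']
```

The real proxy in `synapse.metrics` additionally filters what it yields from the global registry; this sketch keeps only the delegation shape.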

synapse/app/media_repository.py

@@ -16,17 +16,19 @@
 import logging
 import sys
 
+from twisted.internet import reactor
+from twisted.web.resource import NoResource
+
 import synapse
 from synapse import events
-from synapse.api.urls import (
-    CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
-)
+from synapse.api.urls import CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
 from synapse.app import _base
 from synapse.config._base import ConfigError
 from synapse.config.homeserver import HomeServerConfig
 from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -42,8 +44,6 @@ from synapse.util.httpresourcetree import create_resource_tree
 from synapse.util.logcontext import LoggingContext
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
-from twisted.internet import reactor
-from twisted.web.resource import NoResource
 
 logger = logging.getLogger("synapse.app.media_repository")
@@ -73,7 +73,7 @@ class MediaRepositoryServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "media":
                     media_repo = self.get_media_repository_resource()
                     resources.update({
@@ -114,6 +114,13 @@ class MediaRepositoryServer(HomeServer):
                        globals={"hs": self},
                    )
                )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "enable_metrics is not True!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])

synapse/app/pusher.py

@@ -16,6 +16,9 @@
 import logging
 import sys
 
+from twisted.internet import defer, reactor
+from twisted.web.resource import NoResource
+
 import synapse
 from synapse import events
 from synapse.app import _base
@@ -23,6 +26,7 @@ from synapse.config._base import ConfigError
 from synapse.config.homeserver import HomeServerConfig
 from synapse.config.logger import setup_logging
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
 from synapse.replication.slave.storage.events import SlavedEventStore
@@ -36,8 +40,6 @@ from synapse.util.httpresourcetree import create_resource_tree
 from synapse.util.logcontext import LoggingContext, run_in_background
 from synapse.util.manhole import manhole
 from synapse.util.versionstring import get_version_string
-from twisted.internet import defer, reactor
-from twisted.web.resource import NoResource
 
 logger = logging.getLogger("synapse.app.pusher")
@@ -92,7 +94,7 @@ class PusherServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
 
         root_resource = create_resource_tree(resources, NoResource())
@@ -124,6 +126,13 @@ class PusherServer(HomeServer):
                        globals={"hs": self},
                    )
                )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "enable_metrics is not True!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])

synapse/app/synchrotron.py

@@ -17,6 +17,11 @@ import contextlib
 import logging
 import sys
 
+from six import iteritems
+
+from twisted.internet import defer, reactor
+from twisted.web.resource import NoResource
+
 import synapse
 from synapse.api.constants import EventTypes
 from synapse.app import _base
@@ -26,6 +31,7 @@ from synapse.config.logger import setup_logging
 from synapse.handlers.presence import PresenceHandler, get_interested_parties
 from synapse.http.server import JsonResource
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -35,12 +41,12 @@ from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
 from synapse.replication.slave.storage.devices import SlavedDeviceStore
 from synapse.replication.slave.storage.events import SlavedEventStore
 from synapse.replication.slave.storage.filtering import SlavedFilteringStore
+from synapse.replication.slave.storage.groups import SlavedGroupServerStore
 from synapse.replication.slave.storage.presence import SlavedPresenceStore
 from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
 from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
 from synapse.replication.slave.storage.room import RoomStore
-from synapse.replication.slave.storage.groups import SlavedGroupServerStore
 from synapse.replication.tcp.client import ReplicationClientHandler
 from synapse.rest.client.v1 import events
 from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
@@ -55,10 +61,6 @@ from synapse.util.logcontext import LoggingContext, run_in_background
 from synapse.util.manhole import manhole
 from synapse.util.stringutils import random_string
 from synapse.util.versionstring import get_version_string
-from twisted.internet import defer, reactor
-from twisted.web.resource import NoResource
-
-from six import iteritems
 
 logger = logging.getLogger("synapse.app.synchrotron")
@@ -257,7 +259,7 @@ class SynchrotronServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "client":
                     resource = JsonResource(self, canonical_json=False)
                     sync.register_servlets(self, resource)
@@ -301,6 +303,13 @@ class SynchrotronServer(HomeServer):
                        globals={"hs": self},
                    )
                )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "enable_metrics is not True!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -16,16 +16,17 @@
import argparse
import collections
import errno
import glob
import os
import os.path
import signal
import subprocess
import sys
import yaml
import errno
import time
import yaml
SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
GREEN = "\x1b[1;32m"
@@ -171,6 +172,10 @@ def main():
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
cache_factors = config.get("synctl_cache_factors", {})
for cache_name, factor in cache_factors.iteritems():
os.environ["SYNAPSE_CACHE_FACTOR_" + cache_name.upper()] = str(factor)
worker_configfiles = []
if options.worker:
start_stop_synapse = False


@@ -17,6 +17,9 @@
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -26,6 +29,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -42,8 +46,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.user_dir")
@@ -105,7 +107,7 @@ class UserDirectoryServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
-resources[METRICS_PREFIX] = MetricsResource(self)
+resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
user_directory.register_servlets(self, resource)
@@ -146,6 +148,13 @@ class UserDirectoryServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -12,17 +12,17 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.constants import EventTypes
from synapse.util.caches.descriptors import cachedInlineCallbacks
from synapse.types import GroupID, get_domain_from_id
from twisted.internet import defer
import logging
import re
from six import string_types
from twisted.internet import defer
from synapse.api.constants import EventTypes
from synapse.types import GroupID, get_domain_from_id
from synapse.util.caches.descriptors import cachedInlineCallbacks
logger = logging.getLogger(__name__)
@@ -85,7 +85,8 @@ class ApplicationService(object):
NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]
def __init__(self, token, hostname, url=None, namespaces=None, hs_token=None,
-sender=None, id=None, protocols=None, rate_limited=True):
+sender=None, id=None, protocols=None, rate_limited=True,
+ip_range_whitelist=None):
self.token = token
self.url = url
self.hs_token = hs_token
@@ -93,6 +94,7 @@ class ApplicationService(object):
self.server_name = hostname
self.namespaces = self._check_namespaces(namespaces)
self.id = id
self.ip_range_whitelist = ip_range_whitelist
if "|" in self.id:
raise Exception("application service ID cannot contain '|' character")
@@ -292,4 +294,8 @@ class ApplicationService(object):
return self.rate_limited
def __str__(self):
-return "ApplicationService: %s" % (self.__dict__,)
+# copy dictionary and redact token fields so they don't get logged
dict_copy = self.__dict__.copy()
dict_copy["token"] = "<redacted>"
dict_copy["hs_token"] = "<redacted>"
return "ApplicationService: %s" % (dict_copy,)
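The redaction added to `__str__` above is a general pattern: copy the instance `__dict__` and overwrite secret fields before the object is ever interpolated into a log line. A standalone sketch of the same idea (the `Service` class here is a hypothetical minimal example, not Synapse's full `ApplicationService`):

```python
class Service:
    def __init__(self, token, hs_token, url):
        self.token = token
        self.hs_token = hs_token
        self.url = url

    def __str__(self):
        # copy the instance dict and redact the secrets before rendering,
        # so str(service) is safe to pass to a logger
        d = self.__dict__.copy()
        d["token"] = "<redacted>"
        d["hs_token"] = "<redacted>"
        return "Service: %s" % (d,)


s = Service("secret-as-token", "secret-hs-token", "http://localhost:5000")
print(str(s))  # url is shown, both tokens appear as <redacted>
```

Because only the copy is mutated, the live attributes still hold the real tokens for request signing; only the logged representation changes.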


@@ -12,20 +12,39 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import urllib
from prometheus_client import Counter
from twisted.internet import defer
from synapse.api.constants import ThirdPartyEntityKind
from synapse.api.errors import CodeMessageException
from synapse.http.client import SimpleHttpClient
from synapse.events.utils import serialize_event
-from synapse.util.caches.response_cache import ResponseCache
+from synapse.http.client import SimpleHttpClient
from synapse.types import ThirdPartyInstanceID
from synapse.util.caches.response_cache import ResponseCache
import logging
import urllib
logger = logging.getLogger(__name__)
sent_transactions_counter = Counter(
"synapse_appservice_api_sent_transactions",
"Number of /transactions/ requests sent",
["service"]
)
failed_transactions_counter = Counter(
"synapse_appservice_api_failed_transactions",
"Number of /transactions/ requests that failed to send",
["service"]
)
sent_events_counter = Counter(
"synapse_appservice_api_sent_events",
"Number of events sent to the AS",
["service"]
)
HOUR_IN_MS = 60 * 60 * 1000
@@ -219,12 +238,15 @@ class ApplicationServiceApi(SimpleHttpClient):
args={
"access_token": service.hs_token
})
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(events))
defer.returnValue(True)
return
except CodeMessageException as e:
logger.warning("push_bulk to %s received %s", uri, e.code)
except Exception as ex:
logger.warning("push_bulk to %s threw exception %s", uri, ex)
failed_transactions_counter.labels(service.id).inc()
defer.returnValue(False)
def _serialize(self, events):


@@ -48,14 +48,14 @@ UP & quit +---------- YES SUCCESS
This is all tied together by the AppServiceScheduler which DIs the required
components.
"""
import logging
from twisted.internet import defer
from synapse.appservice import ApplicationServiceState
from synapse.util.logcontext import run_in_background
from synapse.util.metrics import Measure
import logging
logger = logging.getLogger(__name__)


@@ -16,11 +16,12 @@
import argparse
import errno
import os
import yaml
from textwrap import dedent
from six import integer_types
import yaml
class ConfigError(Exception):
pass


@@ -12,10 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.api.constants import EventTypes
from ._base import Config
class ApiConfig(Config):


@@ -12,17 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
from synapse.appservice import ApplicationService
from synapse.types import UserID
import yaml
import logging
from six import string_types
from six.moves.urllib import parse as urlparse
import yaml
from netaddr import IPSet
from synapse.appservice import ApplicationService
from synapse.types import UserID
from ._base import Config, ConfigError
logger = logging.getLogger(__name__)
@@ -154,6 +156,13 @@ def _load_appservice(hostname, as_info, config_filename):
" will not receive events or queries.",
config_filename,
)
ip_range_whitelist = None
if as_info.get('ip_range_whitelist'):
ip_range_whitelist = IPSet(
as_info.get('ip_range_whitelist')
)
return ApplicationService(
token=as_info["as_token"],
hostname=hostname,
@@ -163,5 +172,6 @@ def _load_appservice(hostname, as_info, config_filename):
sender=user_id,
id=as_info["id"],
protocols=protocols,
-rate_limited=rate_limited
+rate_limited=rate_limited,
ip_range_whitelist=ip_range_whitelist,
)
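The `ip_range_whitelist` option above is parsed into a netaddr `IPSet`, against which the appservice's source IP can later be checked for membership. The same shape of check can be sketched with the stdlib `ipaddress` module instead (names here are illustrative, not Synapse's API):

```python
import ipaddress


def build_whitelist(cidrs):
    """Parse a list of CIDR strings into network objects."""
    return [ipaddress.ip_network(c) for c in cidrs]


def ip_allowed(whitelist, ip):
    """True if ip falls inside any whitelisted range.

    An empty/absent whitelist means "allow everything", matching the
    opt-in semantics of a whitelist config option.
    """
    if not whitelist:
        return True
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in whitelist)


wl = build_whitelist(["192.168.0.0/16", "10.0.0.0/8"])
print(ip_allowed(wl, "192.168.1.5"))  # True
print(ip_allowed(wl, "8.8.8.8"))      # False
```

netaddr's `IPSet` additionally merges overlapping ranges and supports fast set operations, which is why the real config uses it rather than a plain list of networks.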


@@ -18,6 +18,9 @@ from ._base import Config
DEFAULT_CONFIG = """\
# User Consent configuration
#
# for detailed instructions, see
# https://github.com/matrix-org/synapse/blob/master/docs/consent_tracking.md
#
# Parts of this section are required if enabling the 'consent' resource under
# 'listeners', in particular 'template_dir' and 'version'.
#


@@ -13,32 +13,32 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .tls import TlsConfig
from .server import ServerConfig
from .logger import LoggingConfig
from .database import DatabaseConfig
from .ratelimiting import RatelimitConfig
from .repository import ContentRepositoryConfig
from .captcha import CaptchaConfig
from .voip import VoipConfig
from .registration import RegistrationConfig
from .metrics import MetricsConfig
from .api import ApiConfig
from .appservice import AppServiceConfig
-from .key import KeyConfig
+from .captcha import CaptchaConfig
from .saml2 import SAML2Config
from .cas import CasConfig
from .password import PasswordConfig
from .jwt import JWTConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .emailconfig import EmailConfig
from .workers import WorkerConfig
from .push import PushConfig
from .spam_checker import SpamCheckerConfig
from .groups import GroupsConfig
from .user_directory import UserDirectoryConfig
from .consent_config import ConsentConfig
from .database import DatabaseConfig
from .emailconfig import EmailConfig
from .groups import GroupsConfig
from .jwt import JWTConfig
from .key import KeyConfig
from .logger import LoggingConfig
from .metrics import MetricsConfig
from .password import PasswordConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .push import PushConfig
from .ratelimiting import RatelimitConfig
from .registration import RegistrationConfig
from .repository import ContentRepositoryConfig
from .saml2 import SAML2Config
from .server import ServerConfig
from .server_notices_config import ServerNoticesConfig
from .spam_checker import SpamCheckerConfig
from .tls import TlsConfig
from .user_directory import UserDirectoryConfig
from .voip import VoipConfig
from .workers import WorkerConfig
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,


@@ -15,7 +15,6 @@
from ._base import Config, ConfigError
MISSING_JWT = (
"""Missing jwt library. This is required for jwt login.


@@ -13,21 +13,24 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
from synapse.util.stringutils import random_string
from signedjson.key import (
generate_signing_key, is_signing_algorithm_supported,
decode_signing_key_base64, decode_verify_key_bytes,
read_signing_keys, write_signing_keys, NACL_ED25519
)
from unpaddedbase64 import decode_base64
from synapse.util.stringutils import random_string_with_symbols
import os
import hashlib
import logging
import os
from signedjson.key import (
NACL_ED25519,
decode_signing_key_base64,
decode_verify_key_bytes,
generate_signing_key,
is_signing_algorithm_supported,
read_signing_keys,
write_signing_keys,
)
from unpaddedbase64 import decode_base64
from synapse.util.stringutils import random_string, random_string_with_symbols
from ._base import Config, ConfigError
logger = logging.getLogger(__name__)


@@ -12,17 +12,22 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.util.logcontext import LoggingContextFilter
from twisted.logger import globalLogBeginner, STDLibLogObserver
import logging
import logging.config
import yaml
from string import Template
import os
import signal
import sys
from string import Template
import yaml
from twisted.logger import STDLibLogObserver, globalLogBeginner
import synapse
from synapse.util.logcontext import LoggingContextFilter
from synapse.util.versionstring import get_version_string
from ._base import Config
DEFAULT_LOG_CONFIG = Template("""
version: 1
@@ -202,6 +207,15 @@ def setup_logging(config, use_worker_options=False):
if getattr(signal, "SIGHUP"):
signal.signal(signal.SIGHUP, sighup)
# make sure that the first thing we log is a thing we can grep backwards
# for
logging.warn("***** STARTING SERVER *****")
logging.warn(
"Server %s version %s",
sys.argv[0], get_version_string(synapse),
)
logging.info("Server hostname: %s", config.server_name)
# It's critical to point twisted's internal logging somewhere, otherwise it
# stacks up and leaks kup to 64K object;
# see: https://twistedmatrix.com/trac/ticket/8164


@@ -13,10 +13,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.util.module_loader import load_module
from ._base import Config
LDAP_PROVIDER = 'ldap_auth_provider.LdapAuthProvider'


@@ -13,11 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from ._base import Config
+from distutils.util import strtobool
from synapse.util.stringutils import random_string_with_symbols
-from distutils.util import strtobool
+from ._base import Config
class RegistrationConfig(Config):


@@ -13,11 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
from collections import namedtuple
from synapse.util.module_loader import load_module
from ._base import Config, ConfigError
MISSING_NETADDR = (
"Missing netaddr library. This is required for URL preview API."
@@ -250,6 +250,9 @@ class ContentRepositoryConfig(Config):
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '169.254.0.0/16'
# - '::1/128'
# - 'fe80::/64'
# - 'fc00::/7'
#
# List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist.


@@ -14,13 +14,25 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from synapse.http.endpoint import parse_and_validate_server_name
from ._base import Config, ConfigError
logger = logging.Logger(__name__)
class ServerConfig(Config):
def read_config(self, config):
self.server_name = config["server_name"]
try:
parse_and_validate_server_name(self.server_name)
except ValueError as e:
raise ConfigError(str(e))
self.pid_file = self.abspath(config.get("pid_file"))
self.web_client = config["web_client"]
self.web_client_location = config.get("web_client_location", None)
@@ -138,6 +150,12 @@ class ServerConfig(Config):
metrics_port = config.get("metrics_port")
if metrics_port:
logger.warn(
("The metrics_port configuration option is deprecated in Synapse 0.31 "
"in favour of a listener. Please see "
"http://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.rst"
" on how to configure the new listener."))
self.listeners.append({
"port": metrics_port,
"bind_addresses": [config.get("metrics_bind_host", "127.0.0.1")],
@@ -152,8 +170,8 @@ class ServerConfig(Config):
})
def default_config(self, server_name, **kwargs):
-if ":" in server_name:
-bind_port = int(server_name.split(":")[1])
+_, bind_port = parse_and_validate_server_name(server_name)
+if bind_port is not None:
unsecure_port = bind_port - 400
else:
bind_port = 8448
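The `default_config` change above replaces a bare `":" in server_name` check with a validated host/optional-port split. A simplified sketch of that split (ignoring IPv6 literals, which Synapse's real `parse_and_validate_server_name` handles):

```python
def parse_server_name(server_name):
    """Split "host" or "host:port" into (host, port-or-None)."""
    if ":" in server_name:
        host, port = server_name.rsplit(":", 1)
        return host, int(port)
    return server_name, None


host, bind_port = parse_server_name("example.com:8448")
# per the config logic above, the plain-HTTP port defaults to 400 below
# the TLS port; with no explicit port, Synapse falls back to 8448/8008
unsecure_port = bind_port - 400 if bind_port is not None else 8008
print(host, bind_port, unsecure_port)  # example.com 8448 8048
```

Centralising the parse means a malformed `server_name` is rejected once at config-load time (as a `ConfigError`) rather than failing obscurely later.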


@@ -12,9 +12,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.types import UserID
from ._base import Config
DEFAULT_CONFIG = """\
# Server Notices room configuration
#


@@ -13,14 +13,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from ._base import Config
+import os
import subprocess
from hashlib import sha256
from unpaddedbase64 import encode_base64
from OpenSSL import crypto
import subprocess
import os
-from hashlib import sha256
+from ._base import Config
from unpaddedbase64 import encode_base64
GENERATE_DH_PARAMS = False


@@ -12,12 +12,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import ssl
from OpenSSL import SSL, crypto
from twisted.internet._sslverify import _defaultCurveName
import logging
from OpenSSL import SSL, crypto
from twisted.internet import ssl
from twisted.internet._sslverify import _defaultCurveName
logger = logging.getLogger(__name__)


@@ -15,16 +15,16 @@
# limitations under the License.
from synapse.api.errors import SynapseError, Codes
from synapse.events.utils import prune_event
from canonicaljson import encode_canonical_json
from unpaddedbase64 import encode_base64, decode_base64
from signedjson.sign import sign_json
import hashlib
import logging
from canonicaljson import encode_canonical_json
from signedjson.sign import sign_json
from unpaddedbase64 import decode_base64, encode_base64
from synapse.api.errors import Codes, SynapseError
from synapse.events.utils import prune_event
logger = logging.getLogger(__name__)


@@ -13,14 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util import logcontext
from twisted.web.http import HTTPClient
from twisted.internet.protocol import Factory
from twisted.internet import defer, reactor
from synapse.http.endpoint import matrix_federation_endpoint
import simplejson as json
import logging
from canonicaljson import json
from twisted.internet import defer, reactor
from twisted.internet.protocol import Factory
from twisted.web.http import HTTPClient
from synapse.http.endpoint import matrix_federation_endpoint
from synapse.util import logcontext
logger = logging.getLogger(__name__)


@@ -14,9 +14,31 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import hashlib
import logging
import urllib
from collections import namedtuple
from signedjson.key import (
decode_verify_key_bytes,
encode_verify_key_base64,
is_signing_algorithm_supported,
)
from signedjson.sign import (
SignatureVerifyException,
encode_canonical_json,
sign_json,
signature_ids,
verify_signed_json,
)
from unpaddedbase64 import decode_base64, encode_base64
from OpenSSL import crypto
from twisted.internet import defer
from synapse.api.errors import Codes, SynapseError
from synapse.crypto.keyclient import fetch_server_key
-from synapse.api.errors import SynapseError, Codes
+from synapse.util import logcontext, unwrapFirstError
from synapse.util import unwrapFirstError, logcontext
from synapse.util.logcontext import (
PreserveLoggingContext,
preserve_fn,
@@ -24,24 +46,6 @@ from synapse.util.logcontext import (
)
from synapse.util.metrics import Measure
from twisted.internet import defer
from signedjson.sign import (
verify_signed_json, signature_ids, sign_json, encode_canonical_json
)
from signedjson.key import (
is_signing_algorithm_supported, decode_verify_key_bytes
)
from unpaddedbase64 import decode_base64, encode_base64
from OpenSSL import crypto
from collections import namedtuple
import urllib
import hashlib
import logging
logger = logging.getLogger(__name__)
@@ -56,7 +60,7 @@ Attributes:
key_ids(set(str)): The set of key_ids to that could be used to verify the
JSON object
json_object(dict): The JSON object to verify.
-deferred(twisted.internet.defer.Deferred):
+deferred(Deferred[str, str, nacl.signing.VerifyKey]):
A deferred (server_name, key_id, verify_key) tuple that resolves when
a verify key has been fetched. The deferreds' callbacks are run with no
logcontext.
@@ -736,6 +740,17 @@ class Keyring(object):
@defer.inlineCallbacks
def _handle_key_deferred(verify_request):
"""Waits for the key to become available, and then performs a verification
Args:
verify_request (VerifyKeyRequest):
Returns:
Deferred[None]
Raises:
SynapseError if there was a problem performing the verification
"""
server_name = verify_request.server_name
try:
with PreserveLoggingContext():
@@ -768,11 +783,17 @@ def _handle_key_deferred(verify_request):
))
try:
verify_signed_json(json_object, server_name, verify_key)
-except Exception:
+except SignatureVerifyException as e:
logger.debug(
"Error verifying signature for %s:%s:%s with key %s: %s",
server_name, verify_key.alg, verify_key.version,
encode_verify_key_base64(verify_key),
str(e),
)
raise SynapseError(
401,
-"Invalid signature for server %s with key %s:%s" % (
-server_name, verify_key.alg, verify_key.version
+"Invalid signature for server %s with key %s:%s: %s" % (
+server_name, verify_key.alg, verify_key.version, str(e),
),
Codes.UNAUTHORIZED,
)


@ -17,11 +17,11 @@ import logging
from canonicaljson import encode_canonical_json from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json, SignatureVerifyException from signedjson.sign import SignatureVerifyException, verify_signed_json
from unpaddedbase64 import decode_base64 from unpaddedbase64 import decode_base64
from synapse.api.constants import EventTypes, Membership, JoinRules from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, SynapseError, EventSizeError from synapse.api.errors import AuthError, EventSizeError, SynapseError
from synapse.types import UserID, get_domain_from_id from synapse.types import UserID, get_domain_from_id
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -34,9 +34,11 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
event: the event being checked. event: the event being checked.
auth_events (dict: event-key -> event): the existing room state. auth_events (dict: event-key -> event): the existing room state.
Raises:
AuthError if the checks fail
Returns: Returns:
True if the auth checks pass. if the auth checks pass.
""" """
if do_size_check: if do_size_check:
_check_size_limits(event) _check_size_limits(event)
@ -71,9 +73,10 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
# Oh, we don't know what the state of the room was, so we # Oh, we don't know what the state of the room was, so we
# are trusting that this is allowed (at least for now) # are trusting that this is allowed (at least for now)
logger.warn("Trusting event: %s", event.event_id) logger.warn("Trusting event: %s", event.event_id)
return True return
     if event.type == EventTypes.Create:
+        sender_domain = get_domain_from_id(event.sender)
         room_id_domain = get_domain_from_id(event.room_id)
         if room_id_domain != sender_domain:
             raise AuthError(
@@ -81,7 +84,8 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
                 "Creation event's room_id domain does not match sender's"
             )
         # FIXME
-        return True
+        logger.debug("Allowing! %s", event)
+        return

     creation_event = auth_events.get((EventTypes.Create, ""), None)
@@ -118,7 +122,8 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
             403,
             "Alias event's state_key does not match sender's domain"
         )
-        return True
+        logger.debug("Allowing! %s", event)
+        return
     if logger.isEnabledFor(logging.DEBUG):
         logger.debug(
@@ -127,14 +132,9 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
         )

     if event.type == EventTypes.Member:
-        allowed = _is_membership_change_allowed(
-            event, auth_events
-        )
-        if allowed:
-            logger.debug("Allowing! %s", event)
-        else:
-            logger.debug("Denying! %s", event)
-        return allowed
+        _is_membership_change_allowed(event, auth_events)
+        logger.debug("Allowing! %s", event)
+        return

     _check_event_sender_in_room(event, auth_events)
@@ -153,7 +153,8 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
             )
         )
     else:
-        return True
+        logger.debug("Allowing! %s", event)
+        return

     _can_send_event(event, auth_events)
@@ -200,7 +201,7 @@ def _is_membership_change_allowed(event, auth_events):
     create = auth_events.get(key)
     if create and event.prev_events[0][0] == create.event_id:
         if create.content["creator"] == event.state_key:
-            return True
+            return

     target_user_id = event.state_key
@@ -265,13 +266,13 @@ def _is_membership_change_allowed(event, auth_events):
             raise AuthError(
                 403, "%s is banned from the room" % (target_user_id,)
             )
-        return True
+        return

     if Membership.JOIN != membership:
         if (caller_invited
                 and Membership.LEAVE == membership
                 and target_user_id == event.user_id):
-            return True
+            return

         if not caller_in_room:  # caller isn't joined
             raise AuthError(
@@ -334,8 +335,6 @@ def _is_membership_change_allowed(event, auth_events):
     else:
         raise AuthError(500, "Unknown membership %s" % membership)
-
-    return True
 def _check_event_sender_in_room(event, auth_events):
     key = (EventTypes.Member, event.user_id, )
@@ -355,35 +354,46 @@ def _check_joined_room(member, user_id, room_id):
     ))
-def get_send_level(etype, state_key, auth_events):
-    key = (EventTypes.PowerLevels, "", )
-    send_level_event = auth_events.get(key)
-    send_level = None
-    if send_level_event:
-        send_level = send_level_event.content.get("events", {}).get(
-            etype
-        )
-        if send_level is None:
-            if state_key is not None:
-                send_level = send_level_event.content.get(
-                    "state_default", 50
-                )
-            else:
-                send_level = send_level_event.content.get(
-                    "events_default", 0
-                )
-
-    if send_level:
-        send_level = int(send_level)
-    else:
-        send_level = 0
-
-    return send_level
+def get_send_level(etype, state_key, power_levels_event):
+    """Get the power level required to send an event of a given type
+
+    The federation spec [1] refers to this as "Required Power Level".
+
+    https://matrix.org/docs/spec/server_server/unstable.html#definitions
+
+    Args:
+        etype (str): type of event
+        state_key (str|None): state_key of state event, or None if it is not
+            a state event.
+        power_levels_event (synapse.events.EventBase|None): power levels event
+            in force at this point in the room
+
+    Returns:
+        int: power level required to send this event.
+    """
+    if power_levels_event:
+        power_levels_content = power_levels_event.content
+    else:
+        power_levels_content = {}
+
+    # see if we have a custom level for this event type
+    send_level = power_levels_content.get("events", {}).get(etype)
+
+    # otherwise, fall back to the state_default/events_default.
+    if send_level is None:
+        if state_key is not None:
+            send_level = power_levels_content.get("state_default", 50)
+        else:
+            send_level = power_levels_content.get("events_default", 0)
+
+    return int(send_level)
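The rewritten helper separates the content lookup from the defaulting. A standalone sketch of that fallback logic (plain dicts stand in for synapse event objects; the function name and sample content are hypothetical, for illustration only):

```python
# Sketch of the fallback logic in the rewritten get_send_level, operating
# on a plain power-levels content dict rather than a synapse EventBase.
def send_level_from_content(etype, state_key, power_levels_content):
    # a custom per-event-type level takes precedence
    send_level = power_levels_content.get("events", {}).get(etype)
    if send_level is None:
        if state_key is not None:
            # state events default to 50
            send_level = power_levels_content.get("state_default", 50)
        else:
            # non-state events default to 0
            send_level = power_levels_content.get("events_default", 0)
    return int(send_level)

content = {"events": {"m.room.name": 75}, "state_default": 50}
print(send_level_from_content("m.room.name", "", content))       # 75: explicit entry
print(send_level_from_content("m.room.topic", "", content))      # 50: state_default
print(send_level_from_content("m.room.message", None, content))  # 0: events_default
```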
 def _can_send_event(event, auth_events):
+    power_levels_event = _get_power_level_event(auth_events)
     send_level = get_send_level(
-        event.type, event.get("state_key", None), auth_events
+        event.type, event.get("state_key"), power_levels_event,
     )
     user_level = get_user_power_level(event.user_id, auth_events)
@@ -471,14 +481,14 @@ def _check_power_levels(event, auth_events):
     ]

     old_list = current_state.content.get("users", {})
-    for user in set(old_list.keys() + user_list.keys()):
+    for user in set(list(old_list) + list(user_list)):
         levels_to_check.append(
             (user, "users")
         )

     old_list = current_state.content.get("events", {})
     new_list = event.content.get("events", {})
-    for ev_id in set(old_list.keys() + new_list.keys()):
+    for ev_id in set(list(old_list) + list(new_list)):
         levels_to_check.append(
             (ev_id, "events")
         )
@@ -515,7 +525,11 @@ def _check_power_levels(event, auth_events):
                 "to your own"
             )

-        if old_level > user_level or new_level > user_level:
+        # Check if the old and new levels are greater than the user level
+        # (if defined)
+        old_level_too_big = old_level is not None and old_level > user_level
+        new_level_too_big = new_level is not None and new_level > user_level
+        if old_level_too_big or new_level_too_big:
             raise AuthError(
                 403,
                 "You don't have permission to add ops level greater "
@@ -524,13 +538,22 @@ def _check_power_levels(event, auth_events):
 def _get_power_level_event(auth_events):
-    key = (EventTypes.PowerLevels, "", )
-    return auth_events.get(key)
+    return auth_events.get((EventTypes.PowerLevels, ""))
 def get_user_power_level(user_id, auth_events):
+    """Get a user's power level
+
+    Args:
+        user_id (str): user's id to look up in power_levels
+        auth_events (dict[(str, str), synapse.events.EventBase]):
+            state in force at this point in the room (or rather, a subset of
+            it including at least the create event and power levels event)
+
+    Returns:
+        int: the user's power level in this room.
+    """
     power_level_event = _get_power_level_event(auth_events)
     if power_level_event:
         level = power_level_event.content.get("users", {}).get(user_id)
         if not level:
@@ -541,6 +564,11 @@ def get_user_power_level(user_id, auth_events):
         else:
             return int(level)
     else:
+        # if there is no power levels event, the creator gets 100 and everyone
+        # else gets 0.
+
+        # some things which call this don't pass the create event: hack around
+        # that.
         key = (EventTypes.Create, "", )
         create_event = auth_events.get(key)
         if (create_event is not None and

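The new default path means that, with no m.room.power_levels event in force, the room creator is treated as level 100 and everyone else as 0. A minimal sketch of those defaulting rules, using plain dicts in place of synapse event objects (the function name and the `users_default` fallback are assumptions for illustration, since the hunk above is truncated):

```python
def user_power_level(user_id, power_levels_content, create_content):
    """Sketch of get_user_power_level's defaulting rules with plain dicts."""
    if power_levels_content is not None:
        level = power_levels_content.get("users", {}).get(user_id)
        if level is None:
            # assumed fallback when the user has no explicit entry
            level = power_levels_content.get("users_default", 0)
        return int(level)
    # no power levels event: the creator gets 100, everyone else 0
    if create_content is not None and create_content.get("creator") == user_id:
        return 100
    return 0

print(user_power_level("@alice:hs", None, {"creator": "@alice:hs"}))  # 100
print(user_power_level("@bob:hs", None, {"creator": "@alice:hs"}))    # 0
```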
View file

@ -13,9 +13,8 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
-from synapse.util.frozenutils import freeze
 from synapse.util.caches import intern_dict
+from synapse.util.frozenutils import freeze
# Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents # Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents
# bugs where we accidentally share e.g. signature dicts. However, converting # bugs where we accidentally share e.g. signature dicts. However, converting
@ -146,7 +145,7 @@ class EventBase(object):
         return field in self._event_dict

     def items(self):
-        return self._event_dict.items()
+        return list(self._event_dict.items())
class FrozenEvent(EventBase): class FrozenEvent(EventBase):

View file

@ -13,13 +13,12 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
-from . import EventBase, FrozenEvent, _event_dict_property
+import copy
+
 from synapse.types import EventID
 from synapse.util.stringutils import random_string
-import copy
+
+from . import EventBase, FrozenEvent, _event_dict_property
class EventBuilder(EventBase): class EventBuilder(EventBase):

View file

@ -13,10 +13,10 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
-from twisted.internet import defer
 from frozendict import frozendict
+
+from twisted.internet import defer
class EventContext(object): class EventContext(object):
""" """

View file

@ -13,15 +13,16 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
-from synapse.api.constants import EventTypes
-from . import EventBase
-from frozendict import frozendict
 import re

 from six import string_types
+
+from frozendict import frozendict
+
+from synapse.api.constants import EventTypes
+
+from . import EventBase
# Split strings on "." but not "\." This uses a negative lookbehind assertion for '\' # Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
# (?<!stuff) matches if the current position in the string is not preceded # (?<!stuff) matches if the current position in the string is not preceded
# by a match for 'stuff'. # by a match for 'stuff'.

View file

@ -13,12 +13,12 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
-from synapse.types import EventID, RoomID, UserID
-from synapse.api.errors import SynapseError
-from synapse.api.constants import EventTypes, Membership
 from six import string_types
+
+from synapse.api.constants import EventTypes, Membership
+from synapse.api.errors import SynapseError
+from synapse.types import EventID, RoomID, UserID
class EventValidator(object): class EventValidator(object):

View file

@ -16,14 +16,15 @@ import logging
 import six

+from twisted.internet import defer
+
 from synapse.api.constants import MAX_DEPTH
-from synapse.api.errors import SynapseError, Codes
+from synapse.api.errors import Codes, SynapseError
 from synapse.crypto.event_signing import check_event_content_hash
 from synapse.events import FrozenEvent
 from synapse.events.utils import prune_event
 from synapse.http.servlet import assert_params_in_request
-from synapse.util import unwrapFirstError, logcontext
+from synapse.util import logcontext, unwrapFirstError
-from twisted.internet import defer
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)

View file

@ -21,25 +21,25 @@ import random
 from six.moves import range

+from prometheus_client import Counter
+
 from twisted.internet import defer

 from synapse.api.constants import Membership
 from synapse.api.errors import (
-    CodeMessageException, HttpResponseException, SynapseError, FederationDeniedError
+    CodeMessageException,
+    FederationDeniedError,
+    HttpResponseException,
+    SynapseError,
 )
 from synapse.events import builder
-from synapse.federation.federation_base import (
-    FederationBase,
-    event_from_pdu_json,
-)
+from synapse.federation.federation_base import FederationBase, event_from_pdu_json
 from synapse.util import logcontext, unwrapFirstError
 from synapse.util.caches.expiringcache import ExpiringCache
 from synapse.util.logcontext import make_deferred_yieldable, run_in_background
 from synapse.util.logutils import log_function
 from synapse.util.retryutils import NotRetryingDestination
-from prometheus_client import Counter
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
sent_queries_counter = Counter("synapse_federation_client_sent_queries", "", ["type"]) sent_queries_counter = Counter("synapse_federation_client_sent_queries", "", ["type"])
@ -391,7 +391,7 @@ class FederationClient(FederationBase):
""" """
         if return_local:
             seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
-            signed_events = seen_events.values()
+            signed_events = list(seen_events.values())
         else:
             seen_events = yield self.store.have_seen_events(event_ids)
             signed_events = []
@ -589,7 +589,7 @@ class FederationClient(FederationBase):
} }
         valid_pdus = yield self._check_sigs_and_hash_and_fetch(
-            destination, pdus.values(),
+            destination, list(pdus.values()),
             outlier=True,
         )

View file

@ -14,28 +14,29 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
 import logging
+import re
+
+import six
+from six import iteritems
+
+from canonicaljson import json
+from prometheus_client import Counter

-import simplejson as json
 from twisted.internet import defer
+from twisted.internet.abstract import isIPAddress

+from synapse.api.constants import EventTypes
-from synapse.api.errors import AuthError, FederationError, SynapseError, NotFoundError
+from synapse.api.errors import AuthError, FederationError, NotFoundError, SynapseError
 from synapse.crypto.event_signing import compute_event_signature
-from synapse.federation.federation_base import (
-    FederationBase,
-    event_from_pdu_json,
-)
+from synapse.federation.federation_base import FederationBase, event_from_pdu_json
 from synapse.federation.persistence import TransactionActions
 from synapse.federation.units import Edu, Transaction
+from synapse.http.endpoint import parse_server_name
 from synapse.types import get_domain_from_id
 from synapse.util import async
 from synapse.util.caches.response_cache import ResponseCache
 from synapse.util.logutils import log_function
-from prometheus_client import Counter
-from six import iteritems
# when processing incoming transactions, we try to handle multiple rooms in # when processing incoming transactions, we try to handle multiple rooms in
# parallel, up to this limit. # parallel, up to this limit.
TRANSACTION_CONCURRENCY_LIMIT = 10 TRANSACTION_CONCURRENCY_LIMIT = 10
@ -74,6 +75,9 @@ class FederationServer(FederationBase):
@log_function @log_function
def on_backfill_request(self, origin, room_id, versions, limit): def on_backfill_request(self, origin, room_id, versions, limit):
with (yield self._server_linearizer.queue((origin, room_id))): with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdus = yield self.handler.on_backfill_request( pdus = yield self.handler.on_backfill_request(
origin, room_id, versions, limit origin, room_id, versions, limit
) )
@ -134,6 +138,8 @@ class FederationServer(FederationBase):
received_pdus_counter.inc(len(transaction.pdus)) received_pdus_counter.inc(len(transaction.pdus))
origin_host, _ = parse_server_name(transaction.origin)
pdus_by_room = {} pdus_by_room = {}
for p in transaction.pdus: for p in transaction.pdus:
@ -154,9 +160,21 @@ class FederationServer(FederationBase):
# we can process different rooms in parallel (which is useful if they # we can process different rooms in parallel (which is useful if they
# require callouts to other servers to fetch missing events), but # require callouts to other servers to fetch missing events), but
# impose a limit to avoid going too crazy with ram/cpu. # impose a limit to avoid going too crazy with ram/cpu.
@defer.inlineCallbacks @defer.inlineCallbacks
def process_pdus_for_room(room_id): def process_pdus_for_room(room_id):
logger.debug("Processing PDUs for %s", room_id) logger.debug("Processing PDUs for %s", room_id)
try:
yield self.check_server_matches_acl(origin_host, room_id)
except AuthError as e:
logger.warn(
"Ignoring PDUs for room %s from banned server", room_id,
)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
pdu_results[event_id] = e.error_dict()
return
for pdu in pdus_by_room[room_id]: for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id event_id = pdu.event_id
try: try:
@ -211,6 +229,9 @@ class FederationServer(FederationBase):
if not event_id: if not event_id:
raise NotImplementedError("Specify an event") raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin) in_room = yield self.auth.check_host_in_room(room_id, origin)
if not in_room: if not in_room:
raise AuthError(403, "Host not in room.") raise AuthError(403, "Host not in room.")
@ -234,6 +255,9 @@ class FederationServer(FederationBase):
if not event_id: if not event_id:
raise NotImplementedError("Specify an event") raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin) in_room = yield self.auth.check_host_in_room(room_id, origin)
if not in_room: if not in_room:
raise AuthError(403, "Host not in room.") raise AuthError(403, "Host not in room.")
@ -277,7 +301,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks @defer.inlineCallbacks
@log_function @log_function
def on_pdu_request(self, origin, event_id): def on_pdu_request(self, origin, event_id):
pdu = yield self._get_persisted_pdu(origin, event_id) pdu = yield self.handler.get_persisted_pdu(origin, event_id)
if pdu: if pdu:
defer.returnValue( defer.returnValue(
@ -298,7 +322,9 @@ class FederationServer(FederationBase):
defer.returnValue((200, resp)) defer.returnValue((200, resp))
@defer.inlineCallbacks @defer.inlineCallbacks
def on_make_join_request(self, room_id, user_id): def on_make_join_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdu = yield self.handler.on_make_join_request(room_id, user_id) pdu = yield self.handler.on_make_join_request(room_id, user_id)
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
defer.returnValue({"event": pdu.get_pdu_json(time_now)}) defer.returnValue({"event": pdu.get_pdu_json(time_now)})
@ -306,6 +332,8 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks @defer.inlineCallbacks
def on_invite_request(self, origin, content): def on_invite_request(self, origin, content):
pdu = event_from_pdu_json(content) pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
ret_pdu = yield self.handler.on_invite_request(origin, pdu) ret_pdu = yield self.handler.on_invite_request(origin, pdu)
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
defer.returnValue((200, {"event": ret_pdu.get_pdu_json(time_now)})) defer.returnValue((200, {"event": ret_pdu.get_pdu_json(time_now)}))
@ -314,6 +342,10 @@ class FederationServer(FederationBase):
def on_send_join_request(self, origin, content): def on_send_join_request(self, origin, content):
logger.debug("on_send_join_request: content: %s", content) logger.debug("on_send_join_request: content: %s", content)
pdu = event_from_pdu_json(content) pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures) logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
res_pdus = yield self.handler.on_send_join_request(origin, pdu) res_pdus = yield self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
@ -325,7 +357,9 @@ class FederationServer(FederationBase):
})) }))
@defer.inlineCallbacks @defer.inlineCallbacks
def on_make_leave_request(self, room_id, user_id): def on_make_leave_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdu = yield self.handler.on_make_leave_request(room_id, user_id) pdu = yield self.handler.on_make_leave_request(room_id, user_id)
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
defer.returnValue({"event": pdu.get_pdu_json(time_now)}) defer.returnValue({"event": pdu.get_pdu_json(time_now)})
@ -334,6 +368,10 @@ class FederationServer(FederationBase):
def on_send_leave_request(self, origin, content): def on_send_leave_request(self, origin, content):
logger.debug("on_send_leave_request: content: %s", content) logger.debug("on_send_leave_request: content: %s", content)
pdu = event_from_pdu_json(content) pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures) logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
yield self.handler.on_send_leave_request(origin, pdu) yield self.handler.on_send_leave_request(origin, pdu)
defer.returnValue((200, {})) defer.returnValue((200, {}))
@ -341,6 +379,9 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks @defer.inlineCallbacks
def on_event_auth(self, origin, room_id, event_id): def on_event_auth(self, origin, room_id, event_id):
with (yield self._server_linearizer.queue((origin, room_id))): with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id) auth_pdus = yield self.handler.on_event_auth(event_id)
res = { res = {
@ -369,6 +410,9 @@ class FederationServer(FederationBase):
Deferred: Results in `dict` with the same format as `content` Deferred: Results in `dict` with the same format as `content`
""" """
with (yield self._server_linearizer.queue((origin, room_id))): with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
auth_chain = [ auth_chain = [
event_from_pdu_json(e) event_from_pdu_json(e)
for e in content["auth_chain"] for e in content["auth_chain"]
@ -442,6 +486,9 @@ class FederationServer(FederationBase):
def on_get_missing_events(self, origin, room_id, earliest_events, def on_get_missing_events(self, origin, room_id, earliest_events,
latest_events, limit, min_depth): latest_events, limit, min_depth):
with (yield self._server_linearizer.queue((origin, room_id))): with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
logger.info( logger.info(
"on_get_missing_events: earliest_events: %r, latest_events: %r," "on_get_missing_events: earliest_events: %r, latest_events: %r,"
" limit: %d, min_depth: %d", " limit: %d, min_depth: %d",
@ -470,17 +517,6 @@ class FederationServer(FederationBase):
ts_now_ms = self._clock.time_msec() ts_now_ms = self._clock.time_msec()
return self.store.get_user_id_for_open_id_token(token, ts_now_ms) return self.store.get_user_id_for_open_id_token(token, ts_now_ms)
@log_function
def _get_persisted_pdu(self, origin, event_id, do_auth=True):
""" Get a PDU from the database with given origin and id.
Returns:
Deferred: Results in a `Pdu`.
"""
return self.handler.get_persisted_pdu(
origin, event_id, do_auth=do_auth
)
def _transaction_from_pdus(self, pdu_list): def _transaction_from_pdus(self, pdu_list):
"""Returns a new Transaction containing the given PDUs suitable for """Returns a new Transaction containing the given PDUs suitable for
transmission. transmission.
@ -560,7 +596,9 @@ class FederationServer(FederationBase):
affected=pdu.event_id, affected=pdu.event_id,
) )
yield self.handler.on_receive_pdu(origin, pdu, get_missing=True) yield self.handler.on_receive_pdu(
origin, pdu, get_missing=True, sent_to_us_directly=True,
)
def __str__(self): def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name return "<ReplicationLayer(%s)>" % self.server_name
@ -588,6 +626,101 @@ class FederationServer(FederationBase):
) )
defer.returnValue(ret) defer.returnValue(ret)
@defer.inlineCallbacks
def check_server_matches_acl(self, server_name, room_id):
"""Check if the given server is allowed by the server ACLs in the room
Args:
server_name (str): name of server, *without any port part*
room_id (str): ID of the room to check
Raises:
AuthError if the server does not match the ACL
"""
state_ids = yield self.store.get_current_state_ids(room_id)
acl_event_id = state_ids.get((EventTypes.ServerACL, ""))
if not acl_event_id:
return
acl_event = yield self.store.get_event(acl_event_id)
if server_matches_acl_event(server_name, acl_event):
return
raise AuthError(code=403, msg="Server is banned from room")
def server_matches_acl_event(server_name, acl_event):
"""Check if the given server is allowed by the ACL event
Args:
server_name (str): name of server, without any port part
acl_event (EventBase): m.room.server_acl event
Returns:
bool: True if this server is allowed by the ACLs
"""
logger.debug("Checking %s against acl %s", server_name, acl_event.content)
# first of all, check if literal IPs are blocked, and if so, whether the
# server name is a literal IP
allow_ip_literals = acl_event.content.get("allow_ip_literals", True)
if not isinstance(allow_ip_literals, bool):
logger.warn("Ignoring non-bool allow_ip_literals flag")
allow_ip_literals = True
if not allow_ip_literals:
# check for ipv6 literals. These start with '['.
if server_name[0] == '[':
return False
# check for ipv4 literals. We can just lift the routine from twisted.
if isIPAddress(server_name):
return False
# next, check the deny list
deny = acl_event.content.get("deny", [])
if not isinstance(deny, (list, tuple)):
logger.warn("Ignoring non-list deny ACL %s", deny)
deny = []
for e in deny:
if _acl_entry_matches(server_name, e):
# logger.info("%s matched deny rule %s", server_name, e)
return False
# then the allow list.
allow = acl_event.content.get("allow", [])
if not isinstance(allow, (list, tuple)):
logger.warn("Ignoring non-list allow ACL %s", allow)
allow = []
for e in allow:
if _acl_entry_matches(server_name, e):
# logger.info("%s matched allow rule %s", server_name, e)
return True
# everything else should be rejected.
# logger.info("%s fell through", server_name)
return False
def _acl_entry_matches(server_name, acl_entry):
if not isinstance(acl_entry, six.string_types):
logger.warn("Ignoring non-str ACL entry '%s' (is %s)", acl_entry, type(acl_entry))
return False
regex = _glob_to_regex(acl_entry)
return regex.match(server_name)
def _glob_to_regex(glob):
res = ''
for c in glob:
if c == '*':
res = res + '.*'
elif c == '?':
res = res + '.'
else:
res = res + re.escape(c)
return re.compile(res + "\\Z", re.IGNORECASE)
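The helpers above evaluate an ACL event in a fixed order: IP-literal check, then deny entries, then allow entries, rejecting everything that matches nothing. A self-contained sketch of that matching logic (the names `glob_to_regex` and `matches_acl` are stand-ins, not synapse's API):

```python
import re

def glob_to_regex(glob):
    # '*' -> '.*', '?' -> '.', everything else escaped; anchored, case-insensitive
    res = "".join(
        ".*" if c == "*" else "." if c == "?" else re.escape(c)
        for c in glob
    )
    return re.compile(res + r"\Z", re.IGNORECASE)

def matches_acl(server_name, acl_content):
    """Sketch of the evaluation order: deny entries first, then allow."""
    for entry in acl_content.get("deny", []):
        if glob_to_regex(entry).match(server_name):
            return False
    for entry in acl_content.get("allow", []):
        if glob_to_regex(entry).match(server_name):
            return True
    return False  # everything else is rejected

acl = {"allow": ["*"], "deny": ["evil.example.com", "*.evil.example.org"]}
print(matches_acl("matrix.org", acl))             # True
print(matches_acl("evil.example.com", acl))       # False
print(matches_acl("spam.evil.example.org", acl))  # False
```

Note that a room with no `allow` key denies every server, which is why the real event is expected to carry at least `"allow": ["*"]`.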
class FederationHandlerRegistry(object): class FederationHandlerRegistry(object):
"""Allows classes to register themselves as handlers for a given EDU or """Allows classes to register themselves as handlers for a given EDU or

View file

@ -19,13 +19,12 @@ package.
These actions are mostly only used by the :py:mod:`.replication` module. These actions are mostly only used by the :py:mod:`.replication` module.
""" """
+import logging
+
 from twisted.internet import defer

 from synapse.util.logutils import log_function
-import logging
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)

View file

@ -29,18 +29,18 @@ dead worker doesn't cause the queues to grow limitlessly.
Events are replicated via a separate events stream. Events are replicated via a separate events stream.
""" """
-from .units import Edu
-from synapse.storage.presence import UserPresenceState
-from synapse.util.metrics import Measure
-from synapse.metrics import LaterGauge
-from blist import sorteddict
-from collections import namedtuple
-import logging
-from six import itervalues, iteritems
+import logging
+from collections import namedtuple
+
+from six import iteritems, itervalues
+from sortedcontainers import SortedDict
+
+from synapse.metrics import LaterGauge
+from synapse.storage.presence import UserPresenceState
+from synapse.util.metrics import Measure
+
+from .units import Edu
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -55,19 +55,19 @@ class FederationRemoteSendQueue(object):
         self.is_mine_id = hs.is_mine_id

         self.presence_map = {}  # Pending presence map user_id -> UserPresenceState
-        self.presence_changed = sorteddict()  # Stream position -> user_id
+        self.presence_changed = SortedDict()  # Stream position -> user_id

         self.keyed_edu = {}  # (destination, key) -> EDU
-        self.keyed_edu_changed = sorteddict()  # stream position -> (destination, key)
+        self.keyed_edu_changed = SortedDict()  # stream position -> (destination, key)

-        self.edus = sorteddict()  # stream position -> Edu
+        self.edus = SortedDict()  # stream position -> Edu

-        self.failures = sorteddict()  # stream position -> (destination, Failure)
+        self.failures = SortedDict()  # stream position -> (destination, Failure)

-        self.device_messages = sorteddict()  # stream position -> destination
+        self.device_messages = SortedDict()  # stream position -> destination

         self.pos = 1
-        self.pos_time = sorteddict()
+        self.pos_time = SortedDict()
# EVERYTHING IS SAD. In particular, python only makes new scopes when # EVERYTHING IS SAD. In particular, python only makes new scopes when
# we make a new function, so we need to make a new function so the inner # we make a new function, so we need to make a new function so the inner
@@ -75,7 +75,7 @@ class FederationRemoteSendQueue(object):
        # changes. ARGH.
        def register(name, queue):
            LaterGauge("synapse_federation_send_queue_%s_size" % (queue_name,),
-                       "", lambda: len(queue))
+                       "", [], lambda: len(queue))

        for queue_name in [
            "presence_map", "presence_changed", "keyed_edu", "keyed_edu_changed",
@@ -98,7 +98,7 @@ class FederationRemoteSendQueue(object):
        now = self.clock.time_msec()

        keys = self.pos_time.keys()
-        time = keys.bisect_left(now - FIVE_MINUTES_AGO)
+        time = self.pos_time.bisect_left(now - FIVE_MINUTES_AGO)
        if not keys[:time]:
            return
@@ -113,7 +113,7 @@ class FederationRemoteSendQueue(object):
        with Measure(self.clock, "send_queue._clear"):
            # Delete things out of presence maps
            keys = self.presence_changed.keys()
-            i = keys.bisect_left(position_to_delete)
+            i = self.presence_changed.bisect_left(position_to_delete)
            for key in keys[:i]:
                del self.presence_changed[key]
@@ -131,7 +131,7 @@ class FederationRemoteSendQueue(object):
            # Delete things out of keyed edus
            keys = self.keyed_edu_changed.keys()
-            i = keys.bisect_left(position_to_delete)
+            i = self.keyed_edu_changed.bisect_left(position_to_delete)
            for key in keys[:i]:
                del self.keyed_edu_changed[key]
@@ -145,19 +145,19 @@ class FederationRemoteSendQueue(object):
            # Delete things out of edu map
            keys = self.edus.keys()
-            i = keys.bisect_left(position_to_delete)
+            i = self.edus.bisect_left(position_to_delete)
            for key in keys[:i]:
                del self.edus[key]

            # Delete things out of failure map
            keys = self.failures.keys()
-            i = keys.bisect_left(position_to_delete)
+            i = self.failures.bisect_left(position_to_delete)
            for key in keys[:i]:
                del self.failures[key]

            # Delete things out of device map
            keys = self.device_messages.keys()
-            i = keys.bisect_left(position_to_delete)
+            i = self.device_messages.bisect_left(position_to_delete)
            for key in keys[:i]:
                del self.device_messages[key]
@@ -197,7 +197,7 @@ class FederationRemoteSendQueue(object):
        # We only want to send presence for our own users, so lets always just
        # filter here just in case.
-        local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
+        local_states = list(filter(lambda s: self.is_mine_id(s.user_id), states))

        self.presence_map.update({state.user_id: state for state in local_states})
        self.presence_changed[pos] = [state.user_id for state in local_states]
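The `list()` wrapper matters on Python 3, where `filter()` returns a lazy, one-shot iterator: `local_states` is consumed twice afterwards (once for `presence_map`, once for `presence_changed`), and without materialising it the second pass would see nothing. A toy demonstration (names here are illustrative, not Synapse's):

```python
def is_mine(user_id):
    # illustrative stand-in for self.is_mine_id
    return user_id.endswith(":local")

states = ["@a:local", "@b:remote", "@c:local"]

lazy = filter(is_mine, states)
first_pass = list(lazy)
second_pass = list(lazy)  # already exhausted on Python 3

local_states = list(filter(is_mine, states))  # materialise once, reuse freely

print(first_pass)   # ['@a:local', '@c:local']
print(second_pass)  # []
```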
@@ -250,13 +250,12 @@ class FederationRemoteSendQueue(object):
        self._clear_queue_before_pos(federation_ack)

        # Fetch changed presence
-        keys = self.presence_changed.keys()
-        i = keys.bisect_right(from_token)
-        j = keys.bisect_right(to_token) + 1
+        i = self.presence_changed.bisect_right(from_token)
+        j = self.presence_changed.bisect_right(to_token) + 1
        dest_user_ids = [
            (pos, user_id)
-            for pos in keys[i:j]
-            for user_id in self.presence_changed[pos]
+            for pos, user_id_list in self.presence_changed.items()[i:j]
+            for user_id in user_id_list
        ]

        for (key, user_id) in dest_user_ids:
@ -265,13 +264,12 @@ class FederationRemoteSendQueue(object):
))) )))
# Fetch changes keyed edus # Fetch changes keyed edus
keys = self.keyed_edu_changed.keys() i = self.keyed_edu_changed.bisect_right(from_token)
i = keys.bisect_right(from_token) j = self.keyed_edu_changed.bisect_right(to_token) + 1
j = keys.bisect_right(to_token) + 1
# We purposefully clobber based on the key here, python dict comprehensions # We purposefully clobber based on the key here, python dict comprehensions
# always use the last value, so this will correctly point to the last # always use the last value, so this will correctly point to the last
# stream position. # stream position.
keyed_edus = {self.keyed_edu_changed[k]: k for k in keys[i:j]} keyed_edus = {v: k for k, v in self.keyed_edu_changed.items()[i:j]}
for ((destination, edu_key), pos) in iteritems(keyed_edus): for ((destination, edu_key), pos) in iteritems(keyed_edus):
rows.append((pos, KeyedEduRow( rows.append((pos, KeyedEduRow(
@@ -280,19 +278,17 @@ class FederationRemoteSendQueue(object):
            )))

        # Fetch changed edus
-        keys = self.edus.keys()
-        i = keys.bisect_right(from_token)
-        j = keys.bisect_right(to_token) + 1
-        edus = ((k, self.edus[k]) for k in keys[i:j])
+        i = self.edus.bisect_right(from_token)
+        j = self.edus.bisect_right(to_token) + 1
+        edus = self.edus.items()[i:j]

        for (pos, edu) in edus:
            rows.append((pos, EduRow(edu)))

        # Fetch changed failures
-        keys = self.failures.keys()
-        i = keys.bisect_right(from_token)
-        j = keys.bisect_right(to_token) + 1
-        failures = ((k, self.failures[k]) for k in keys[i:j])
+        i = self.failures.bisect_right(from_token)
+        j = self.failures.bisect_right(to_token) + 1
+        failures = self.failures.items()[i:j]

        for (pos, (destination, failure)) in failures:
            rows.append((pos, FailureRow(
@@ -301,10 +297,9 @@ class FederationRemoteSendQueue(object):
            )))

        # Fetch changed device messages
-        keys = self.device_messages.keys()
-        i = keys.bisect_right(from_token)
-        j = keys.bisect_right(to_token) + 1
-        device_messages = {self.device_messages[k]: k for k in keys[i:j]}
+        i = self.device_messages.bisect_right(from_token)
+        j = self.device_messages.bisect_right(to_token) + 1
+        device_messages = {v: k for k, v in self.device_messages.items()[i:j]}

        for (destination, pos) in iteritems(device_messages):
            rows.append((pos, DeviceRow(


@@ -13,35 +13,37 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import datetime
+import logging
+
+from six import itervalues
+
-from twisted.internet import defer
-from .persistence import TransactionActions
-from .units import Transaction, Edu
-from synapse.api.errors import HttpResponseException, FederationDeniedError
-from synapse.util import logcontext, PreserveLoggingContext
-from synapse.util.async import run_on_reactor
-from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
-from synapse.util.metrics import measure_func
-from synapse.handlers.presence import format_user_presence_state, get_interested_remotes
-import synapse.metrics
-from synapse.metrics import LaterGauge
-from synapse.metrics import (
-    sent_edus_counter,
-    sent_transactions_counter,
-    events_processed_counter,
-)
 from prometheus_client import Counter
-import logging
+
+from twisted.internet import defer
+
+import synapse.metrics
+from synapse.api.errors import FederationDeniedError, HttpResponseException
+from synapse.handlers.presence import format_user_presence_state, get_interested_remotes
+from synapse.metrics import (
+    LaterGauge,
+    events_processed_counter,
+    sent_edus_counter,
+    sent_transactions_counter,
+)
+from synapse.util import PreserveLoggingContext, logcontext
+from synapse.util.metrics import measure_func
+from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
+
+from .persistence import TransactionActions
+from .units import Edu, Transaction

 logger = logging.getLogger(__name__)

-sent_pdus_destination_dist = Counter(
-    "synapse_federation_transaction_queue_sent_pdu_destinations", ""
+sent_pdus_destination_dist_count = Counter(
+    "synapse_federation_client_sent_pdu_destinations:count", ""
+)
+
+sent_pdus_destination_dist_total = Counter(
+    "synapse_federation_client_sent_pdu_destinations:total", ""
 )
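The single distribution counter is split into a `:total`/`:count` pair: one tracks the total destination fan-out, the other the number of send operations, so their ratio gives average destinations per PDU. A sketch of the same pattern with illustrative metric names (the real ones are the `synapse_federation_client_sent_pdu_destinations` pair above):

```python
from prometheus_client import Counter

dest_total = Counter("demo_sent_pdu_destinations", "Total destinations fanned out to")
send_count = Counter("demo_sent_pdu_sends", "Number of PDU send operations")

destinations = ["a.example.com", "b.example.com", "c.example.com"]
dest_total.inc(len(destinations))  # how many destinations this PDU went to
send_count.inc()                   # one send operation
```

Dividing the two series at query time recovers the mean fan-out that the old single counter could not express.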
@@ -234,7 +236,7 @@ class TransactionQueue(object):
                yield logcontext.make_deferred_yieldable(defer.gatherResults(
                    [
                        logcontext.run_in_background(handle_room_events, evs)
-                        for evs in events_by_room.itervalues()
+                        for evs in itervalues(events_by_room)
                    ],
                    consumeErrors=True
                ))
@@ -278,7 +280,8 @@ class TransactionQueue(object):
        if not destinations:
            return

-        sent_pdus_destination_dist.inc(len(destinations))
+        sent_pdus_destination_dist_total.inc(len(destinations))
+        sent_pdus_destination_dist_count.inc()

        for destination in destinations:
            self.pending_pdus_by_dest.setdefault(destination, []).append(
@@ -325,7 +328,7 @@ class TransactionQueue(object):
                if not states_map:
                    break

-                yield self._process_presence_inner(states_map.values())
+                yield self._process_presence_inner(list(states_map.values()))
            except Exception:
                logger.exception("Error sending presence states to servers")
            finally:
@@ -449,9 +452,6 @@ class TransactionQueue(object):
            # hence why we throw the result away.
            yield get_retry_limiter(destination, self.clock, self.store)

-            # XXX: what's this for?
-            yield run_on_reactor()
-
            pending_pdus = []
            while True:
                device_message_edus, device_stream_id, dev_list_id = (


@@ -14,15 +14,14 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from twisted.internet import defer
-
-from synapse.api.constants import Membership
-from synapse.api.urls import FEDERATION_PREFIX as PREFIX
-from synapse.util.logutils import log_function
-
 import logging
 import urllib

+from twisted.internet import defer
+
+from synapse.api.constants import Membership
+from synapse.api.urls import FEDERATION_PREFIX as PREFIX
+from synapse.util.logutils import log_function

 logger = logging.getLogger(__name__)


@@ -14,25 +14,27 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from twisted.internet import defer
-from synapse.api.urls import FEDERATION_PREFIX as PREFIX
-from synapse.api.errors import Codes, SynapseError, FederationDeniedError
-from synapse.http.server import JsonResource
-from synapse.http.servlet import (
-    parse_json_object_from_request, parse_integer_from_args, parse_string_from_args,
-    parse_boolean_from_args,
-)
-from synapse.util.ratelimitutils import FederationRateLimiter
-from synapse.util.versionstring import get_version_string
-from synapse.util.logcontext import run_in_background
-from synapse.types import ThirdPartyInstanceID, get_domain_from_id
 import functools
 import logging
 import re
-import synapse
+
+from twisted.internet import defer
+
+import synapse
+from synapse.api.errors import Codes, FederationDeniedError, SynapseError
+from synapse.api.urls import FEDERATION_PREFIX as PREFIX
+from synapse.http.endpoint import parse_and_validate_server_name
+from synapse.http.server import JsonResource
+from synapse.http.servlet import (
+    parse_boolean_from_args,
+    parse_integer_from_args,
+    parse_json_object_from_request,
+    parse_string_from_args,
+)
+from synapse.types import ThirdPartyInstanceID, get_domain_from_id
+from synapse.util.logcontext import run_in_background
+from synapse.util.ratelimitutils import FederationRateLimiter
+from synapse.util.versionstring import get_version_string

 logger = logging.getLogger(__name__)
@@ -99,26 +101,6 @@ class Authenticator(object):
        origin = None

-        def parse_auth_header(header_str):
-            try:
-                params = auth.split(" ")[1].split(",")
-                param_dict = dict(kv.split("=") for kv in params)
-
-                def strip_quotes(value):
-                    if value.startswith("\""):
-                        return value[1:-1]
-                    else:
-                        return value
-
-                origin = strip_quotes(param_dict["origin"])
-                key = strip_quotes(param_dict["key"])
-                sig = strip_quotes(param_dict["sig"])
-                return (origin, key, sig)
-            except Exception:
-                raise AuthenticationError(
-                    400, "Malformed Authorization header", Codes.UNAUTHORIZED
-                )
-
        auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")

        if not auth_headers:
@@ -127,8 +109,8 @@ class Authenticator(object):
            )

        for auth in auth_headers:
-            if auth.startswith("X-Matrix"):
-                (origin, key, sig) = parse_auth_header(auth)
+            if auth.startswith(b"X-Matrix"):
+                (origin, key, sig) = _parse_auth_header(auth)
                json_request["origin"] = origin
                json_request["signatures"].setdefault(origin, {})[key] = sig
@@ -165,6 +147,48 @@ class Authenticator(object):
            logger.exception("Error resetting retry timings on %s", origin)


+def _parse_auth_header(header_bytes):
+    """Parse an X-Matrix auth header
+
+    Args:
+        header_bytes (bytes): header value
+
+    Returns:
+        Tuple[str, str, str]: origin, key id, signature.
+
+    Raises:
+        AuthenticationError if the header could not be parsed
+    """
+    try:
+        header_str = header_bytes.decode('utf-8')
+        params = header_str.split(" ")[1].split(",")
+        param_dict = dict(kv.split("=") for kv in params)
+
+        def strip_quotes(value):
+            if value.startswith(b"\""):
+                return value[1:-1]
+            else:
+                return value
+
+        origin = strip_quotes(param_dict["origin"])
+
+        # ensure that the origin is a valid server name
+        parse_and_validate_server_name(origin)
+
+        key = strip_quotes(param_dict["key"])
+        sig = strip_quotes(param_dict["sig"])
+        return origin, key, sig
+    except Exception as e:
+        logger.warn(
+            "Error parsing auth header '%s': %s",
+            header_bytes.decode('ascii', 'replace'),
+            e,
+        )
+        raise AuthenticationError(
+            400, "Malformed Authorization header", Codes.UNAUTHORIZED,
+        )
 class BaseFederationServlet(object):
     REQUIRE_AUTH = True

@@ -362,7 +386,9 @@ class FederationMakeJoinServlet(BaseFederationServlet):

    @defer.inlineCallbacks
    def on_GET(self, origin, content, query, context, user_id):
-        content = yield self.handler.on_make_join_request(context, user_id)
+        content = yield self.handler.on_make_join_request(
+            origin, context, user_id,
+        )
        defer.returnValue((200, content))

@@ -371,7 +397,9 @@ class FederationMakeLeaveServlet(BaseFederationServlet):

    @defer.inlineCallbacks
    def on_GET(self, origin, content, query, context, user_id):
-        content = yield self.handler.on_make_leave_request(context, user_id)
+        content = yield self.handler.on_make_leave_request(
+            origin, context, user_id,
+        )
        defer.returnValue((200, content))


@@ -17,10 +17,9 @@
 server protocol.
 """
-from synapse.util.jsonobject import JsonEncodedObject
-
 import logging
+
+from synapse.util.jsonobject import JsonEncodedObject

 logger = logging.getLogger(__name__)


@@ -23,9 +23,9 @@ If a user leaves (or gets kicked out of) a group, either side can still use
 their attestation to "prove" their membership, until the attestation expires.
 Therefore attestations shouldn't be relied on to prove membership in important
 cases, but can for less important situtations, e.g. showing a users membership
-of groups on their profile, showing flairs, etc.abs
+of groups on their profile, showing flairs, etc.

-An attestsation is a signed blob of json that looks like:
+An attestation is a signed blob of json that looks like:

 {
     "user_id": "@foo:a.example.com",
@@ -38,15 +38,14 @@ An attestsation is a signed blob of json that looks like:
 import logging
 import random

+from signedjson.sign import sign_json
+
 from twisted.internet import defer

 from synapse.api.errors import SynapseError
 from synapse.types import get_domain_from_id
 from synapse.util.logcontext import run_in_background
-from signedjson.sign import sign_json

 logger = logging.getLogger(__name__)


@@ -16,11 +16,12 @@
 import logging

-from synapse.api.errors import SynapseError
-from synapse.types import GroupID, RoomID, UserID, get_domain_from_id
+from six import string_types
+
 from twisted.internet import defer
-from six import string_types
+
+from synapse.api.errors import SynapseError
+from synapse.types import GroupID, RoomID, UserID, get_domain_from_id

 logger = logging.getLogger(__name__)


@@ -13,13 +13,13 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
+from .admin import AdminHandler
+from .directory import DirectoryHandler
+from .federation import FederationHandler
+from .identity import IdentityHandler
+from .message import MessageHandler
 from .register import RegistrationHandler
 from .room import RoomContextHandler
-from .message import MessageHandler
-from .federation import FederationHandler
-from .directory import DirectoryHandler
-from .admin import AdminHandler
-from .identity import IdentityHandler
 from .search import SearchHandler


@@ -18,11 +18,10 @@ import logging
 from twisted.internet import defer

 import synapse.types
-from synapse.api.constants import Membership, EventTypes
+from synapse.api.constants import EventTypes, Membership
 from synapse.api.errors import LimitExceededError
 from synapse.types import UserID

 logger = logging.getLogger(__name__)
@@ -114,14 +113,14 @@ class BaseHandler(object):
        if guest_access != "can_join":
            if context:
                current_state = yield self.store.get_events(
-                    context.current_state_ids.values()
+                    list(context.current_state_ids.values())
                )
            else:
                current_state = yield self.state_handler.get_current_state(
                    event.room_id
                )
-                current_state = current_state.values()
+                current_state = list(current_state.values())

            logger.info("maybe_kick_guest_users %r", current_state)

            yield self.kick_guest_users(current_state)


@@ -13,12 +13,12 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
+import logging
+
 from twisted.internet import defer

 from ._base import BaseHandler
-
-import logging

 logger = logging.getLogger(__name__)


@@ -13,17 +13,18 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
+import logging
+
+from six import itervalues
+
+from prometheus_client import Counter
+
 from twisted.internet import defer

 import synapse
 from synapse.api.constants import EventTypes
+from synapse.util.logcontext import make_deferred_yieldable, run_in_background
 from synapse.util.metrics import Measure
-from synapse.util.logcontext import (
-    make_deferred_yieldable, run_in_background,
-)
-from prometheus_client import Counter
-import logging

 logger = logging.getLogger(__name__)
@@ -119,7 +120,7 @@ class ApplicationServicesHandler(object):
                yield make_deferred_yieldable(defer.gatherResults([
                    run_in_background(handle_room_events, evs)
-                    for evs in events_by_room.itervalues()
+                    for evs in itervalues(events_by_room)
                ], consumeErrors=True))

                yield self.store.set_appservice_last_pos(upper_bound)


@@ -13,29 +13,33 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from twisted.internet import defer, threads
-
-from ._base import BaseHandler
+import logging
+
+import attr
+import bcrypt
+import pymacaroons
+from canonicaljson import json
+
+from twisted.internet import defer, threads
+from twisted.web.client import PartialDownloadError
+
+import synapse.util.stringutils as stringutils
 from synapse.api.constants import LoginType
 from synapse.api.errors import (
-    AuthError, Codes, InteractiveAuthIncompleteError, LoginError, StoreError,
+    AuthError,
+    Codes,
+    InteractiveAuthIncompleteError,
+    LoginError,
+    StoreError,
     SynapseError,
 )
 from synapse.module_api import ModuleApi
 from synapse.types import UserID
-from synapse.util.async import run_on_reactor
 from synapse.util.caches.expiringcache import ExpiringCache
 from synapse.util.logcontext import make_deferred_yieldable
-from twisted.web.client import PartialDownloadError
-
-import logging
-import bcrypt
-import pymacaroons
-import simplejson
-
-import synapse.util.stringutils as stringutils
+
+from ._base import BaseHandler

 logger = logging.getLogger(__name__)
@@ -249,7 +253,7 @@ class AuthHandler(BaseHandler):
                errordict = e.error_dict()

        for f in flows:
-            if len(set(f) - set(creds.keys())) == 0:
+            if len(set(f) - set(creds)) == 0:
                # it's very useful to know what args are stored, but this can
                # include the password in the case of registering, so only log
                # the keys (confusingly, clientdict may contain a password
@@ -257,12 +261,12 @@ class AuthHandler(BaseHandler):
                # and is not sensitive).
                logger.info(
                    "Auth completed with creds: %r. Client dict has keys: %r",
-                    creds, clientdict.keys()
+                    creds, list(clientdict)
                )
                defer.returnValue((creds, clientdict, session['id']))

        ret = self._auth_dict_for_flows(flows, session)
-        ret['completed'] = creds.keys()
+        ret['completed'] = list(creds)
        ret.update(errordict)
        raise InteractiveAuthIncompleteError(
            ret,
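The `.keys()` removals in this hunk are Python 3 porting: there, `dict.keys()` is a live view rather than a list, and a view is not JSON-serialisable. Iterating the dict directly — `set(d)`, `list(d)` — yields keys on both Python 2 and 3. A small demonstration with made-up flow data:

```python
creds = {"m.login.password": True, "m.login.dummy": True}
flows = [["m.login.password", "m.login.dummy"]]

# set(creds) == set(creds.keys()) on both major versions
unfinished = [f for f in flows if set(f) - set(creds)]

# a real list, safe to drop into a JSON response body
completed = list(creds)

print(unfinished)         # []
print(sorted(completed))  # ['m.login.dummy', 'm.login.password']
```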
@@ -402,7 +406,7 @@ class AuthHandler(BaseHandler):
        except PartialDownloadError as pde:
            # Twisted is silly
            data = pde.response
-            resp_body = simplejson.loads(data)
+            resp_body = json.loads(data)

        if 'success' in resp_body:
            # Note that we do NOT check the hostname here: we explicitly
def _check_msisdn(self, authdict, _): def _check_msisdn(self, authdict, _):
return self._check_threepid('msisdn', authdict) return self._check_threepid('msisdn', authdict)
@defer.inlineCallbacks
def _check_dummy_auth(self, authdict, _): def _check_dummy_auth(self, authdict, _):
yield run_on_reactor() return defer.succeed(True)
defer.returnValue(True)
@defer.inlineCallbacks @defer.inlineCallbacks
def _check_threepid(self, medium, authdict): def _check_threepid(self, medium, authdict):
yield run_on_reactor()
if 'threepid_creds' not in authdict: if 'threepid_creds' not in authdict:
raise LoginError(400, "Missing threepid_creds", Codes.MISSING_PARAM) raise LoginError(400, "Missing threepid_creds", Codes.MISSING_PARAM)
@@ -825,6 +825,15 @@ class AuthHandler(BaseHandler):
        if medium == 'email':
            address = address.lower()

+        identity_handler = self.hs.get_handlers().identity_handler
+        yield identity_handler.unbind_threepid(
+            user_id,
+            {
+                'medium': medium,
+                'address': address,
+            },
+        )
+
        ret = yield self.store.user_delete_threepid(
            user_id, medium, address,
        )
@@ -849,7 +858,11 @@ class AuthHandler(BaseHandler):
            return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper,
                                 bcrypt.gensalt(self.bcrypt_rounds))

-        return make_deferred_yieldable(threads.deferToThread(_do_hash))
+        return make_deferred_yieldable(
+            threads.deferToThreadPool(
+                self.hs.get_reactor(), self.hs.get_reactor().getThreadPool(), _do_hash
+            ),
+        )

    def validate_hash(self, password, stored_hash):
        """Validates that self.hash(password) == stored_hash.
@@ -869,16 +882,21 @@ class AuthHandler(BaseHandler):
            )

        if stored_hash:
-            return make_deferred_yieldable(threads.deferToThread(_do_validate_hash))
+            return make_deferred_yieldable(
+                threads.deferToThreadPool(
+                    self.hs.get_reactor(),
+                    self.hs.get_reactor().getThreadPool(),
+                    _do_validate_hash,
+                ),
+            )
        else:
            return defer.succeed(False)
-class MacaroonGeneartor(object):
-    def __init__(self, hs):
-        self.clock = hs.get_clock()
-        self.server_name = hs.config.server_name
-        self.macaroon_secret_key = hs.config.macaroon_secret_key
+@attr.s
+class MacaroonGenerator(object):
+
+    hs = attr.ib()

    def generate_access_token(self, user_id, extra_caveats=None):
        extra_caveats = extra_caveats or []
@@ -896,7 +914,7 @@ class MacaroonGeneartor(object):
    def generate_short_term_login_token(self, user_id, duration_in_ms=(2 * 60 * 1000)):
        macaroon = self._generate_base_macaroon(user_id)
        macaroon.add_first_party_caveat("type = login")
-        now = self.clock.time_msec()
+        now = self.hs.get_clock().time_msec()
        expiry = now + duration_in_ms
        macaroon.add_first_party_caveat("time < %d" % (expiry,))
        return macaroon.serialize()
@@ -908,9 +926,9 @@ class MacaroonGeneartor(object):
    def _generate_base_macaroon(self, user_id):
        macaroon = pymacaroons.Macaroon(
-            location=self.server_name,
+            location=self.hs.config.server_name,
            identifier="key",
-            key=self.macaroon_secret_key)
+            key=self.hs.config.macaroon_secret_key)
        macaroon.add_first_party_caveat("gen = 1")
        macaroon.add_first_party_caveat("user_id = %s" % (user_id,))
        return macaroon
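Besides fixing the `MacaroonGeneartor` typo, the `@attr.s`/`attr.ib()` rewrite drops the hand-written `__init__` and stops copying config at construction time: everything is read through `hs` on each call, which makes the class trivial to build in tests. A sketch of that pattern with a stand-in homeserver object (assumes the `attrs` library; the class and method names here are illustrative):

```python
import types

import attr


@attr.s
class MacaroonGeneratorSketch(object):
    # attrs generates __init__(self, hs) for us
    hs = attr.ib()

    def location(self):
        # config is looked up lazily via hs, not cached in __init__
        return self.hs.config.server_name


fake_hs = types.SimpleNamespace(
    config=types.SimpleNamespace(server_name="example.com"),
)
print(MacaroonGeneratorSketch(fake_hs).location())  # example.com
```

Because no config is copied at construction, a test can swap in any object with the right attributes, as `fake_hs` shows.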


@ -12,13 +12,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
from twisted.internet import defer, reactor import logging
from ._base import BaseHandler from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.types import UserID, create_requester from synapse.types import UserID, create_requester
from synapse.util.logcontext import run_in_background from synapse.util.logcontext import run_in_background
import logging from ._base import BaseHandler
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -30,6 +32,7 @@ class DeactivateAccountHandler(BaseHandler):
self._auth_handler = hs.get_auth_handler() self._auth_handler = hs.get_auth_handler()
self._device_handler = hs.get_device_handler() self._device_handler = hs.get_device_handler()
self._room_member_handler = hs.get_room_member_handler() self._room_member_handler = hs.get_room_member_handler()
self._identity_handler = hs.get_handlers().identity_handler
self.user_directory_handler = hs.get_user_directory_handler() self.user_directory_handler = hs.get_user_directory_handler()
# Flag that indicates whether the process to part users from rooms is running # Flag that indicates whether the process to part users from rooms is running
@@ -37,14 +40,15 @@ class DeactivateAccountHandler(BaseHandler):
         # Start the user parter loop so it can resume parting users from rooms where
         # it left off (if it has work left to do).
-        reactor.callWhenRunning(self._start_user_parting)
+        hs.get_reactor().callWhenRunning(self._start_user_parting)

     @defer.inlineCallbacks
-    def deactivate_account(self, user_id):
+    def deactivate_account(self, user_id, erase_data):
         """Deactivate a user's account

         Args:
             user_id (str): ID of user to be deactivated
+            erase_data (bool): whether to GDPR-erase the user's data

         Returns:
             Deferred
@@ -52,14 +56,35 @@ class DeactivateAccountHandler(BaseHandler):
         # FIXME: Theoretically there is a race here wherein user resets
         # password using threepid.

-        # first delete any devices belonging to the user, which will also
+        # delete threepids first. We remove these from the IS so if this fails,
+        # leave the user still active so they can try again.
+        # Ideally we would prevent password resets and then do this in the
+        # background thread.
+        threepids = yield self.store.user_get_threepids(user_id)
+        for threepid in threepids:
+            try:
+                yield self._identity_handler.unbind_threepid(
+                    user_id,
+                    {
+                        'medium': threepid['medium'],
+                        'address': threepid['address'],
+                    },
+                )
+            except Exception:
+                # Do we want this to be a fatal error or should we carry on?
+                logger.exception("Failed to remove threepid from ID server")
+                raise SynapseError(400, "Failed to remove threepid from ID server")
+            yield self.store.user_delete_threepid(
+                user_id, threepid['medium'], threepid['address'],
+            )
+
+        # delete any devices belonging to the user, which will also
         # delete corresponding access tokens.
         yield self._device_handler.delete_all_devices_for_user(user_id)
         # then delete any remaining access tokens which weren't associated with
         # a device.
         yield self._auth_handler.delete_access_tokens_for_user(user_id)

-        yield self.store.user_delete_threepids(user_id)
         yield self.store.user_set_password_hash(user_id, None)

         # Add the user to a table of users pending deactivation (ie.
@@ -69,6 +94,11 @@ class DeactivateAccountHandler(BaseHandler):
         # delete from user directory
         yield self.user_directory_handler.handle_user_deactivated(user_id)

+        # Mark the user as erased, if they asked for that
+        if erase_data:
+            logger.info("Marking %s as erased", user_id)
+            yield self.store.mark_user_erased(user_id)
+
         # Now start the process that goes through that list and
         # parts users from rooms (if it isn't already running)
         self._start_user_parting()

View file

@@ -12,22 +12,24 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+import logging
+
+from six import iteritems, itervalues
+
+from twisted.internet import defer
+
 from synapse.api import errors
 from synapse.api.constants import EventTypes
 from synapse.api.errors import FederationDeniedError
+from synapse.types import RoomStreamToken, get_domain_from_id
 from synapse.util import stringutils
 from synapse.util.async import Linearizer
 from synapse.util.caches.expiringcache import ExpiringCache
-from synapse.util.retryutils import NotRetryingDestination
 from synapse.util.metrics import measure_func
-from synapse.types import get_domain_from_id, RoomStreamToken
-from twisted.internet import defer
+from synapse.util.retryutils import NotRetryingDestination
+
 from ._base import BaseHandler
-import logging
-from six import itervalues, iteritems

 logger = logging.getLogger(__name__)
@@ -114,7 +116,7 @@ class DeviceHandler(BaseHandler):
             user_id, device_id=None
         )

-        devices = device_map.values()
+        devices = list(device_map.values())
         for device in devices:
             _update_device_from_client_ips(device, ips)
@@ -187,7 +189,7 @@ class DeviceHandler(BaseHandler):
             defer.Deferred:
         """
         device_map = yield self.store.get_devices_by_user(user_id)
-        device_ids = device_map.keys()
+        device_ids = list(device_map)
         if except_device_id is not None:
             device_ids = [d for d in device_ids if d != except_device_id]
         yield self.delete_devices(user_id, device_ids)
@@ -537,7 +539,7 @@ class DeviceListEduUpdater(object):
                 yield self.device_handler.notify_device_update(user_id, device_ids)
             else:
                 # Simply update the single device, since we know that is the only
-                # change (becuase of the single prev_id matching the current cache)
+                # change (because of the single prev_id matching the current cache)
                 for device_id, stream_id, prev_ids, content in pending_updates:
                     yield self.store.update_remote_device_list_cache_entry(
                         user_id, device_id, content, stream_id,
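The `list(...)` wrappers added in the hunks above are Python 3 compatibility fixes: on Python 3, `dict.keys()` and `dict.values()` return live view objects rather than lists, so code that indexes the result or mutates the dict while iterating needs an explicit copy. A small sketch with illustrative device data:

```python
# On Python 3, dict.values()/keys() are views, not lists; wrapping in
# list() gives a concrete snapshot that is safe to index or reuse.
device_map = {"DEV1": {"id": "DEV1"}, "DEV2": {"id": "DEV2"}}

devices = list(device_map.values())   # concrete list of device dicts
device_ids = list(device_map)         # iterating a dict yields its keys

assert sorted(device_ids) == ["DEV1", "DEV2"]
assert all(d["id"] in device_map for d in devices)
```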

View file

@@ -18,10 +18,9 @@ import logging
 from twisted.internet import defer

 from synapse.api.errors import SynapseError
-from synapse.types import get_domain_from_id, UserID
+from synapse.types import UserID, get_domain_from_id
 from synapse.util.stringutils import random_string

 logger = logging.getLogger(__name__)

View file

@@ -14,16 +14,17 @@
 # limitations under the License.

-from twisted.internet import defer
-from ._base import BaseHandler
-from synapse.api.errors import SynapseError, Codes, CodeMessageException, AuthError
-from synapse.api.constants import EventTypes
-from synapse.types import RoomAlias, UserID, get_domain_from_id
-
 import logging
 import string

+from twisted.internet import defer
+
+from synapse.api.constants import EventTypes
+from synapse.api.errors import AuthError, CodeMessageException, Codes, SynapseError
+from synapse.types import RoomAlias, UserID, get_domain_from_id
+
+from ._base import BaseHandler
+
 logger = logging.getLogger(__name__)

View file

@@ -14,17 +14,16 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import simplejson as json
 import logging

-from canonicaljson import encode_canonical_json
-from twisted.internet import defer
 from six import iteritems

-from synapse.api.errors import (
-    SynapseError, CodeMessageException, FederationDeniedError,
-)
-from synapse.types import get_domain_from_id, UserID
+from canonicaljson import encode_canonical_json, json
+
+from twisted.internet import defer
+
+from synapse.api.errors import CodeMessageException, FederationDeniedError, SynapseError
+from synapse.types import UserID, get_domain_from_id
 from synapse.util.logcontext import make_deferred_yieldable, run_in_background
 from synapse.util.retryutils import NotRetryingDestination
@@ -80,7 +79,7 @@ class E2eKeysHandler(object):
             else:
                 remote_queries[user_id] = device_ids

-        # Firt get local devices.
+        # First get local devices.
         failures = {}
         results = {}
         if local_query:
@@ -357,7 +356,7 @@ def _exception_to_failure(e):
     # include ConnectionRefused and other errors
     #
     # Note that some Exceptions (notably twisted's ResponseFailed etc) don't
-    # give a string for e.message, which simplejson then fails to serialize.
+    # give a string for e.message, which json then fails to serialize.
     return {
         "status": 503, "message": str(e.message),
     }
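The comment in the last hunk above notes why the handler wraps the message in `str(...)`: JSON encoders cannot serialize arbitrary exception objects. A self-contained sketch of the failure mode and the coercion (the `ResponseFailed` class here is a stand-in, not twisted's real one):

```python
import json

# Stand-in for an exception whose .message may not be a plain string.
class ResponseFailed(Exception):
    pass

err = ResponseFailed("connection lost")

# Serializing the exception object itself fails...
try:
    json.dumps({"status": 503, "message": err})
except TypeError:
    pass  # exception objects aren't JSON-serializable

# ...so the handler coerces it to str first, as in the hunk above.
payload = json.dumps({"status": 503, "message": str(err)})
```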

View file

@@ -13,19 +13,18 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from twisted.internet import defer
-
-from synapse.util.logutils import log_function
-from synapse.types import UserID
-from synapse.events.utils import serialize_event
-from synapse.api.constants import Membership, EventTypes
-from synapse.events import EventBase
-
-from ._base import BaseHandler
-
 import logging
 import random

+from twisted.internet import defer
+
+from synapse.api.constants import EventTypes, Membership
+from synapse.events import EventBase
+from synapse.events.utils import serialize_event
+from synapse.types import UserID
+from synapse.util.logutils import log_function
+
+from ._base import BaseHandler

 logger = logging.getLogger(__name__)

View file

@@ -20,38 +20,42 @@ import itertools
 import logging
 import sys

+import six
+from six import iteritems
+from six.moves import http_client
+
 from signedjson.key import decode_verify_key_bytes
 from signedjson.sign import verify_signed_json
-
-import six
-from six.moves import http_client
-from six import iteritems
-
-from twisted.internet import defer
 from unpaddedbase64 import decode_base64

-from ._base import BaseHandler
+from twisted.internet import defer

-from synapse.api.errors import (
-    AuthError, FederationError, StoreError, CodeMessageException, SynapseError,
-    FederationDeniedError,
-)
 from synapse.api.constants import EventTypes, Membership, RejectedReason
-from synapse.events.validator import EventValidator
-from synapse.util import unwrapFirstError, logcontext
-from synapse.util.metrics import measure_func
-from synapse.util.logutils import log_function
-from synapse.util.async import run_on_reactor, Linearizer
-from synapse.util.frozenutils import unfreeze
+from synapse.api.errors import (
+    AuthError,
+    CodeMessageException,
+    FederationDeniedError,
+    FederationError,
+    StoreError,
+    SynapseError,
+)
 from synapse.crypto.event_signing import (
-    compute_event_signature, add_hashes_and_signatures,
+    add_hashes_and_signatures,
+    compute_event_signature,
 )
-from synapse.types import UserID, get_domain_from_id
 from synapse.events.utils import prune_event
+from synapse.events.validator import EventValidator
+from synapse.state import resolve_events_with_factory
+from synapse.types import UserID, get_domain_from_id
+from synapse.util import logcontext, unwrapFirstError
+from synapse.util.async import Linearizer
+from synapse.util.distributor import user_joined_room
+from synapse.util.frozenutils import unfreeze
+from synapse.util.logutils import log_function
+from synapse.util.metrics import measure_func
 from synapse.util.retryutils import NotRetryingDestination
-from synapse.util.distributor import user_joined_room
+
+from ._base import BaseHandler

 logger = logging.getLogger(__name__)
@@ -90,7 +94,9 @@ class FederationHandler(BaseHandler):

     @defer.inlineCallbacks
     @log_function
-    def on_receive_pdu(self, origin, pdu, get_missing=True):
+    def on_receive_pdu(
+        self, origin, pdu, get_missing=True, sent_to_us_directly=False,
+    ):
         """ Process a PDU received via a federation /send/ transaction, or
         via backfill of missing prev_events
@@ -104,8 +110,10 @@ class FederationHandler(BaseHandler):
         """

         # We reprocess pdus when we have seen them only as outliers
-        existing = yield self.get_persisted_pdu(
-            origin, pdu.event_id, do_auth=False
+        existing = yield self.store.get_event(
+            pdu.event_id,
+            allow_none=True,
+            allow_rejected=True,
         )

         # FIXME: Currently we fetch an event again when we already have it
@@ -162,14 +170,11 @@ class FederationHandler(BaseHandler):
                 "Ignoring PDU %s for room %s from %s as we've left the room!",
                 pdu.event_id, pdu.room_id, origin,
             )
-            return
+            defer.returnValue(None)

         state = None
         auth_chain = []

-        fetch_state = False
-
         # Get missing pdus if necessary.
         if not pdu.internal_metadata.is_outlier():
             # We only backfill backwards to the min depth.
@@ -224,26 +229,60 @@
                 list(prevs - seen)[:5],
             )

-        if prevs - seen:
-            logger.info(
-                "Still missing %d events for room %r: %r...",
-                len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
+        if sent_to_us_directly and prevs - seen:
+            # If they have sent it to us directly, and the server
+            # isn't telling us about the auth events that it's
+            # made a message referencing, we explode
+            raise FederationError(
+                "ERROR",
+                403,
+                (
+                    "Your server isn't divulging details about prev_events "
+                    "referenced in this event."
+                ),
+                affected=pdu.event_id,
             )
-            fetch_state = True
+        elif prevs - seen:
+            # Calculate the state of the previous events, and
+            # de-conflict them to find the current state.
+            state_groups = []
+            auth_chains = set()
+            try:
+                # Get the state of the events we know about
+                ours = yield self.store.get_state_groups(pdu.room_id, list(seen))
+                state_groups.append(ours)

-        if fetch_state:
-            # We need to get the state at this event, since we haven't
-            # processed all the prev events.
-            logger.debug(
-                "_handle_new_pdu getting state for %s",
-                pdu.room_id
-            )
-            try:
-                state, auth_chain = yield self.replication_layer.get_state_for_room(
-                    origin, pdu.room_id, pdu.event_id,
-                )
-            except Exception:
-                logger.exception("Failed to get state for event: %s", pdu.event_id)
+                # Ask the remote server for the states we don't
+                # know about
+                for p in prevs - seen:
+                    state, got_auth_chain = (
+                        yield self.replication_layer.get_state_for_room(
+                            origin, pdu.room_id, p
+                        )
+                    )
+                    auth_chains.update(got_auth_chain)
+                    state_group = {(x.type, x.state_key): x.event_id for x in state}
+                    state_groups.append(state_group)
+
+                # Resolve any conflicting state
+                def fetch(ev_ids):
+                    return self.store.get_events(
+                        ev_ids, get_prev_content=False, check_redacted=False
+                    )
+
+                state_map = yield resolve_events_with_factory(
+                    state_groups, {pdu.event_id: pdu}, fetch
+                )
+
+                state = (yield self.store.get_events(state_map.values())).values()
+                auth_chain = list(auth_chains)
+            except Exception:
+                raise FederationError(
+                    "ERROR",
+                    403,
+                    "We can't get valid state history.",
+                    affected=pdu.event_id,
+                )

         yield self._process_received_pdu(
             origin,
@@ -321,11 +360,17 @@ class FederationHandler(BaseHandler):

         for e in missing_events:
             logger.info("Handling found event %s", e.event_id)
-            yield self.on_receive_pdu(
-                origin,
-                e,
-                get_missing=False
-            )
+            try:
+                yield self.on_receive_pdu(
+                    origin,
+                    e,
+                    get_missing=False
+                )
+            except FederationError as e:
+                if e.code == 403:
+                    logger.warn("Event %s failed history check.", e.affected)
+                else:
+                    raise

     @log_function
     @defer.inlineCallbacks
@@ -459,6 +504,47 @@ class FederationHandler(BaseHandler):
     @measure_func("_filter_events_for_server")
     @defer.inlineCallbacks
     def _filter_events_for_server(self, server_name, room_id, events):
+        """Filter the given events for the given server, redacting those the
+        server can't see.
+
+        Assumes the server is currently in the room.
+
+        Returns
+            list[FrozenEvent]
+        """
+        # First let's check to see if all the events have a history visibility
+        # of "shared" or "world_readable". If that's the case then we don't
+        # need to check membership (as we know the server is in the room).
+        event_to_state_ids = yield self.store.get_state_ids_for_events(
+            frozenset(e.event_id for e in events),
+            types=(
+                (EventTypes.RoomHistoryVisibility, ""),
+            )
+        )
+
+        visibility_ids = set()
+        for sids in event_to_state_ids.itervalues():
+            hist = sids.get((EventTypes.RoomHistoryVisibility, ""))
+            if hist:
+                visibility_ids.add(hist)
+
+        # If we failed to find any history visibility events then the default
+        # is "shared" visibility.
+        if not visibility_ids:
+            defer.returnValue(events)
+
+        event_map = yield self.store.get_events(visibility_ids)
+        all_open = all(
+            e.content.get("history_visibility") in (None, "shared", "world_readable")
+            for e in event_map.itervalues()
+        )
+
+        if all_open:
+            defer.returnValue(events)
+
+        # Ok, so we're dealing with events that have non-trivial visibility
+        # rules, so we need to also get the memberships of the room.
+
         event_to_state_ids = yield self.store.get_state_ids_for_events(
             frozenset(e.event_id for e in events),
             types=(
@@ -480,8 +566,8 @@ class FederationHandler(BaseHandler):
             # to get all state ids that we're interested in.
             event_map = yield self.store.get_events([
                 e_id
-                for key_to_eid in event_to_state_ids.itervalues()
-                for key, e_id in key_to_eid.iteritems()
+                for key_to_eid in list(event_to_state_ids.values())
+                for key, e_id in key_to_eid.items()
                 if key[0] != EventTypes.Member or check_match(key[1])
             ])
@@ -494,7 +580,20 @@ class FederationHandler(BaseHandler):
             for e_id, key_to_eid in event_to_state_ids.iteritems()
         }

+        erased_senders = yield self.store.are_users_erased(
+            (e.sender for e in events),
+        )
+
         def redact_disallowed(event, state):
+            # if the sender has been gdpr17ed, always return a redacted
+            # copy of the event.
+            if erased_senders[event.sender]:
+                logger.info(
+                    "Sender of %s has been erased, redacting",
+                    event.event_id,
+                )
+                return prune_event(event)
+
             if not state:
                 return event
@@ -1149,13 +1248,13 @@ class FederationHandler(BaseHandler):
                 user = UserID.from_string(event.state_key)
                 yield user_joined_room(self.distributor, user, event.room_id)

-            state_ids = context.prev_state_ids.values()
+            state_ids = list(context.prev_state_ids.values())
             auth_chain = yield self.store.get_auth_chain(state_ids)

-            state = yield self.store.get_events(context.prev_state_ids.values())
+            state = yield self.store.get_events(list(context.prev_state_ids.values()))

             defer.returnValue({
-                "state": state.values(),
+                "state": list(state.values()),
                 "auth_chain": auth_chain,
             })
@@ -1382,8 +1481,6 @@ class FederationHandler(BaseHandler):
     def get_state_for_pdu(self, room_id, event_id):
         """Returns the state at the event. i.e. not including said event.
         """
-        yield run_on_reactor()
-
         state_groups = yield self.store.get_state_groups(
             room_id, [event_id]
         )
@@ -1405,7 +1502,7 @@ class FederationHandler(BaseHandler):
                 else:
                     del results[(event.type, event.state_key)]

-            res = results.values()
+            res = list(results.values())
             for event in res:
                 # We sign these again because there was a bug where we
                 # incorrectly signed things the first time round
@@ -1426,8 +1523,6 @@ class FederationHandler(BaseHandler):
     def get_state_ids_for_pdu(self, room_id, event_id):
         """Returns the state at the event. i.e. not including said event.
         """
-        yield run_on_reactor()
-
         state_groups = yield self.store.get_state_groups_ids(
             room_id, [event_id]
         )
@@ -1446,7 +1541,7 @@ class FederationHandler(BaseHandler):
             else:
                 results.pop((event.type, event.state_key), None)

-            defer.returnValue(results.values())
+            defer.returnValue(list(results.values()))
         else:
             defer.returnValue([])
@@ -1469,11 +1564,20 @@ class FederationHandler(BaseHandler):

     @defer.inlineCallbacks
     @log_function
-    def get_persisted_pdu(self, origin, event_id, do_auth=True):
-        """ Get a PDU from the database with given origin and id.
+    def get_persisted_pdu(self, origin, event_id):
+        """Get an event from the database for the given server.
+
+        Args:
+            origin [str]: hostname of server which is requesting the event; we
+                will check that the server is allowed to see it.
+            event_id [str]: id of the event being requested

         Returns:
-            Deferred: Results in a `Pdu`.
+            Deferred[EventBase|None]: None if we know nothing about the event;
+                otherwise the (possibly-redacted) event.
+
+        Raises:
+            AuthError if the server is not currently in the room
         """
         event = yield self.store.get_event(
             event_id,
@@ -1494,20 +1598,17 @@ class FederationHandler(BaseHandler):
                     )
                 )

-            if do_auth:
-                in_room = yield self.auth.check_host_in_room(
-                    event.room_id,
-                    origin
-                )
-                if not in_room:
-                    raise AuthError(403, "Host not in room.")
-
-                events = yield self._filter_events_for_server(
-                    origin, event.room_id, [event]
-                )
-
-                event = events[0]
+            in_room = yield self.auth.check_host_in_room(
+                event.room_id,
+                origin
+            )
+            if not in_room:
+                raise AuthError(403, "Host not in room.")
+
+            events = yield self._filter_events_for_server(
+                origin, event.room_id, [event]
+            )
+            event = events[0]

             defer.returnValue(event)
         else:
             defer.returnValue(None)
@@ -1795,6 +1896,10 @@ class FederationHandler(BaseHandler):
             min_depth=min_depth,
         )

+        missing_events = yield self._filter_events_for_server(
+            origin, room_id, missing_events,
+        )
+
         defer.returnValue(missing_events)

     @defer.inlineCallbacks
@@ -1915,7 +2020,7 @@ class FederationHandler(BaseHandler):
         })

         new_state = self.state_handler.resolve_events(
-            [local_view.values(), remote_view.values()],
+            [list(local_view.values()), list(remote_view.values())],
             event
         )
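The `_filter_events_for_server` changes above add a fast path: if every event's room history visibility is unset, "shared", or "world_readable", no per-member filtering is needed. A plain-dict sketch of that check (event shapes here are illustrative stand-ins, not Synapse's real event classes):

```python
# Sketch of the "all_open" fast-path test: history_visibility of None,
# "shared", or "world_readable" means any server in the room may see
# the events without membership checks.
def all_open(visibility_events):
    return all(
        ev.get("content", {}).get("history_visibility")
        in (None, "shared", "world_readable")
        for ev in visibility_events
    )

open_room = [{"content": {"history_visibility": "shared"}}, {"content": {}}]
closed_room = [{"content": {"history_visibility": "joined"}}]

assert all_open(open_room)
assert not all_open(closed_room)
```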

View file

@@ -14,14 +14,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from twisted.internet import defer
+import logging

 from six import iteritems

+from twisted.internet import defer
+
 from synapse.api.errors import SynapseError
 from synapse.types import get_domain_from_id
-
-import logging

 logger = logging.getLogger(__name__)

View file

@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2017 Vector Creations Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -18,16 +19,18 @@
 import logging

-import simplejson as json
+from canonicaljson import json

 from twisted.internet import defer

 from synapse.api.errors import (
-    MatrixCodeMessageException, CodeMessageException
+    CodeMessageException,
+    Codes,
+    MatrixCodeMessageException,
+    SynapseError,
 )

 from ._base import BaseHandler
-from synapse.util.async import run_on_reactor
-from synapse.api.errors import SynapseError, Codes

 logger = logging.getLogger(__name__)
@@ -38,6 +41,7 @@ class IdentityHandler(BaseHandler):
         super(IdentityHandler, self).__init__(hs)

         self.http_client = hs.get_simple_http_client()
+        self.federation_http_client = hs.get_http_client()

         self.trusted_id_servers = set(hs.config.trusted_third_party_id_servers)
         self.trust_any_id_server_just_for_testing_do_not_use = (
@defer.inlineCallbacks @defer.inlineCallbacks
def threepid_from_creds(self, creds): def threepid_from_creds(self, creds):
yield run_on_reactor()
if 'id_server' in creds: if 'id_server' in creds:
id_server = creds['id_server'] id_server = creds['id_server']
elif 'idServer' in creds: elif 'idServer' in creds:
@ -104,7 +106,6 @@ class IdentityHandler(BaseHandler):
@defer.inlineCallbacks @defer.inlineCallbacks
def bind_threepid(self, creds, mxid): def bind_threepid(self, creds, mxid):
yield run_on_reactor()
logger.debug("binding threepid %r to %s", creds, mxid) logger.debug("binding threepid %r to %s", creds, mxid)
data = None data = None
@@ -139,9 +140,53 @@ class IdentityHandler(BaseHandler):
         defer.returnValue(data)

     @defer.inlineCallbacks
-    def requestEmailToken(self, id_server, email, client_secret, send_attempt, **kwargs):
-        yield run_on_reactor()
+    def unbind_threepid(self, mxid, threepid):
+        """
+        Removes a binding from an identity server
+
+        Args:
+            mxid (str): Matrix user ID of binding to be removed
+            threepid (dict): Dict with medium & address of binding to be removed
+
+        Returns:
+            Deferred[bool]: True on success, otherwise False
+        """
+        logger.debug("unbinding threepid %r from %s", threepid, mxid)
+        if not self.trusted_id_servers:
+            logger.warn("Can't unbind threepid: no trusted ID servers set in config")
+            defer.returnValue(False)
+
+        # We don't track what ID server we added 3pids on (perhaps we ought to)
+        # but we assume that any of the servers in the trusted list are in the
+        # same ID server federation, so we can pick any one of them to send the
+        # deletion request to.
+        id_server = next(iter(self.trusted_id_servers))
+
+        url = "https://%s/_matrix/identity/api/v1/3pid/unbind" % (id_server,)
+        content = {
+            "mxid": mxid,
+            "threepid": threepid,
+        }
+        headers = {}
+
+        # we abuse the federation http client to sign the request, but we have to send it
+        # using the normal http client since we don't want the SRV lookup and want normal
+        # 'browser-like' HTTPS.
+        self.federation_http_client.sign_request(
+            destination=None,
+            method='POST',
+            url_bytes='/_matrix/identity/api/v1/3pid/unbind'.encode('ascii'),
+            headers_dict=headers,
+            content=content,
+            destination_is=id_server,
+        )
+        yield self.http_client.post_json_get_json(
+            url,
+            content,
+            headers,
+        )
+
+        defer.returnValue(True)
+
+    @defer.inlineCallbacks
+    def requestEmailToken(self, id_server, email, client_secret, send_attempt, **kwargs):
         if not self._should_trust_id_server(id_server):
             raise SynapseError(
                 400, "Untrusted ID server '%s'" % id_server,
@@ -176,8 +221,6 @@ class IdentityHandler(BaseHandler):
         self, id_server, country, phone_number,
         client_secret, send_attempt, **kwargs
     ):
-        yield run_on_reactor()
-
         if not self._should_trust_id_server(id_server):
             raise SynapseError(
                 400, "Untrusted ID server '%s'" % id_server,

View file

@@ -13,6 +13,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import logging
+
 from twisted.internet import defer

 from synapse.api.constants import EventTypes, Membership
@@ -21,9 +23,7 @@ from synapse.events.utils import serialize_event
 from synapse.events.validator import EventValidator
 from synapse.handlers.presence import format_user_presence_state
 from synapse.streams.config import PaginationConfig
-from synapse.types import (
-    UserID, StreamToken,
-)
+from synapse.types import StreamToken, UserID
 from synapse.util import unwrapFirstError
 from synapse.util.async import concurrently_execute
 from synapse.util.caches.snapshot_cache import SnapshotCache
@@ -32,9 +32,6 @@ from synapse.visibility import filter_events_for_client

 from ._base import BaseHandler

-import logging
-

 logger = logging.getLogger(__name__)

View file

@@ -14,35 +14,31 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
-import simplejson
 import sys

-from canonicaljson import encode_canonical_json
 import six
-from six import string_types, itervalues, iteritems
-from twisted.internet import defer, reactor
+from six import iteritems, itervalues

+from canonicaljson import encode_canonical_json, json

+from twisted.internet import defer
 from twisted.internet.defer import succeed
 from twisted.python.failure import Failure

-from synapse.api.constants import EventTypes, Membership, MAX_DEPTH
-from synapse.api.errors import (
-    AuthError, Codes, SynapseError,
-    ConsentNotGivenError,
-)
+from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
+from synapse.api.errors import AuthError, Codes, ConsentNotGivenError, SynapseError
 from synapse.api.urls import ConsentURIBuilder
 from synapse.crypto.event_signing import add_hashes_and_signatures
 from synapse.events.utils import serialize_event
 from synapse.events.validator import EventValidator
-from synapse.types import (
-    UserID, RoomAlias, RoomStreamToken,
-)
-from synapse.util.async import run_on_reactor, ReadWriteLock, Limiter
+from synapse.replication.http.send_event import send_event_to_master
+from synapse.types import RoomAlias, RoomStreamToken, UserID
+from synapse.util.async import Limiter, ReadWriteLock
+from synapse.util.frozenutils import frozendict_json_encoder
 from synapse.util.logcontext import run_in_background
 from synapse.util.metrics import measure_func
-from synapse.util.frozenutils import frozendict_json_encoder
 from synapse.util.stringutils import random_string
 from synapse.visibility import filter_events_for_client
-from synapse.replication.http.send_event import send_event_to_master

 from ._base import BaseHandler
@@ -157,7 +153,7 @@ class MessageHandler(BaseHandler):
         # remove the purge from the list 24 hours after it completes
         def clear_purge():
             del self._purges_by_id[purge_id]
-        reactor.callLater(24 * 3600, clear_purge)
+        self.hs.get_reactor().callLater(24 * 3600, clear_purge)

     def get_purge_status(self, purge_id):
         """Get the current status of an active purge
@@ -388,7 +384,7 @@ class MessageHandler(BaseHandler):
        users_with_profile = yield self.state.get_current_user_in_room(room_id)

        # If this is an AS, double check that they are allowed to see the members.
-       # This can either be because the AS user is in the room or becuase there
+       # This can either be because the AS user is in the room or because there
        # is a user in the room that the AS is "interested in"
        if requester.app_service and user_id not in users_with_profile:
            for uid in users_with_profile:
@@ -491,7 +487,7 @@ class EventCreationHandler(object):
                    target, e
                )

-        is_exempt = yield self._is_exempt_from_privacy_policy(builder)
+        is_exempt = yield self._is_exempt_from_privacy_policy(builder, requester)
         if not is_exempt:
             yield self.assert_accepted_privacy_policy(requester)
@@ -509,12 +505,13 @@
         defer.returnValue((event, context))

-    def _is_exempt_from_privacy_policy(self, builder):
+    def _is_exempt_from_privacy_policy(self, builder, requester):
         """"Determine if an event to be sent is exempt from having to consent
         to the privacy policy

         Args:
             builder (synapse.events.builder.EventBuilder): event being created
+            requester (Requster): user requesting this event

         Returns:
             Deferred[bool]: true if the event can be sent without the user
@@ -525,6 +522,9 @@
             membership = builder.content.get("membership", None)
             if membership == Membership.JOIN:
                 return self._is_server_notices_room(builder.room_id)
+            elif membership == Membership.LEAVE:
+                # the user is always allowed to leave (but not kick people)
+                return builder.state_key == requester.user.to_string()

         return succeed(False)

     @defer.inlineCallbacks
@@ -793,7 +793,7 @@
         # Ensure that we can round trip before trying to persist in db
         try:
             dump = frozendict_json_encoder.encode(event.content)
-            simplejson.loads(dump)
+            json.loads(dump)
         except Exception:
             logger.exception("Failed to encode content: %r", event.content)
             raise
@@ -806,6 +806,7 @@
         # If we're a worker we need to hit out to the master.
         if self.config.worker_app:
             yield send_event_to_master(
+                self.hs.get_clock(),
                 self.http_client,
                 host=self.config.worker_replication_host,
                 port=self.config.worker_replication_http_port,
@@ -959,9 +960,7 @@
            event_stream_id, max_stream_id
        )

-        @defer.inlineCallbacks
         def _notify():
-            yield run_on_reactor()
             try:
                 self.notifier.on_new_room_event(
                     event, event_stream_id, max_stream_id,
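One hunk above swaps `simplejson.loads` for the `json` module re-exported by `canonicaljson`, while keeping the "encode then decode before persisting" guard intact. A minimal sketch of that round-trip-check pattern, using only the stdlib `json` module and a hypothetical encoder for read-only mappings (the class and function names here are illustrative, not Synapse's actual `frozendict_json_encoder`):

```python
import json
from types import MappingProxyType


class ImmutableMappingEncoder(json.JSONEncoder):
    """Illustrative stand-in for an encoder like frozendict_json_encoder:
    serialise read-only mappings as plain dicts."""

    def default(self, o):
        if isinstance(o, MappingProxyType):
            return dict(o)
        return super().default(o)


def check_round_trip(content):
    """Fail fast (raise) if content cannot survive an encode/decode cycle,
    rather than discovering the problem after it is persisted."""
    dump = json.dumps(content, cls=ImmutableMappingEncoder)
    return json.loads(dump)


content = MappingProxyType({"body": "hello", "msgtype": "m.text"})
roundtripped = check_round_trip(content)
```

The point of doing the decode eagerly, as in the hunk, is that a serialisation bug surfaces as an exception at send time instead of as corrupt rows in the database.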

View file

@@ -22,27 +22,26 @@ The methods that define policy are:
 - should_notify
 """
+import logging
+from contextlib import contextmanager

-from twisted.internet import defer, reactor
-from contextlib import contextmanager
-from six import itervalues, iteritems
+from six import iteritems, itervalues

+from prometheus_client import Counter

+from twisted.internet import defer

-from synapse.api.errors import SynapseError
 from synapse.api.constants import PresenceState
+from synapse.api.errors import SynapseError
+from synapse.metrics import LaterGauge
 from synapse.storage.presence import UserPresenceState
-from synapse.util.caches.descriptors import cachedInlineCallbacks
+from synapse.types import UserID, get_domain_from_id
 from synapse.util.async import Linearizer
+from synapse.util.caches.descriptors import cachedInlineCallbacks
 from synapse.util.logcontext import run_in_background
 from synapse.util.logutils import log_function
 from synapse.util.metrics import Measure
 from synapse.util.wheel_timer import WheelTimer
-from synapse.types import UserID, get_domain_from_id
-from synapse.metrics import LaterGauge
-
-import logging
-
-from prometheus_client import Counter

 logger = logging.getLogger(__name__)
@@ -179,7 +178,7 @@ class PresenceHandler(object):
         # have not yet been persisted
         self.unpersisted_users_changes = set()

-        reactor.addSystemEventTrigger("before", "shutdown", self._on_shutdown)
+        hs.get_reactor().addSystemEventTrigger("before", "shutdown", self._on_shutdown)

         self.serial_to_user = {}
         self._next_serial = 1
@@ -325,7 +324,7 @@
         if to_notify:
             notified_presence_counter.inc(len(to_notify))
-            yield self._persist_and_notify(to_notify.values())
+            yield self._persist_and_notify(list(to_notify.values()))

         self.unpersisted_users_changes |= set(s.user_id for s in new_states)
         self.unpersisted_users_changes -= set(to_notify.keys())
@@ -687,7 +686,7 @@
         """
         updates = yield self.current_state_for_users(target_user_ids)
-        updates = updates.values()
+        updates = list(updates.values())

         for user_id in set(target_user_ids) - set(u.user_id for u in updates):
             updates.append(UserPresenceState.default(user_id))
@@ -753,11 +752,11 @@
             self._push_to_remotes([state])
         else:
             user_ids = yield self.store.get_users_in_room(room_id)
-            user_ids = filter(self.is_mine_id, user_ids)
+            user_ids = list(filter(self.is_mine_id, user_ids))

             states = yield self.current_state_for_users(user_ids)

-            self._push_to_remotes(states.values())
+            self._push_to_remotes(list(states.values()))

     @defer.inlineCallbacks
     def get_presence_list(self, observer_user, accepted=None):
@@ -1051,7 +1050,7 @@ class PresenceEventSource(object):
            updates = yield presence.current_state_for_users(user_ids_changed)

            if include_offline:
-               defer.returnValue((updates.values(), max_token))
+               defer.returnValue((list(updates.values()), max_token))
            else:
                defer.returnValue(([
                    s for s in itervalues(updates)
@@ -1112,7 +1111,7 @@ def handle_timeouts(user_states, is_mine_fn, syncing_user_ids, now):
         if new_state:
             changes[state.user_id] = new_state

-    return changes.values()
+    return list(changes.values())


 def handle_timeout(state, is_mine, syncing_user_ids, now):
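Many of the hunks above wrap `dict.values()` and `filter()` in `list(...)`. This is Python 3 compatibility work: under Python 3 these return lazy views and one-shot iterators rather than lists, so code that appends to the result, mutates the dict afterwards, or iterates twice silently changes behaviour. A small stdlib-only illustration of why the wrapping matters (the user IDs are made up):

```python
updates = {"@a:hs": "online", "@b:hs": "unavailable"}

# Py3: .values() is a live view, not a list. It has no .append(),
# and it reflects later mutations of the dict.
view = updates.values()

# Taking a list() makes a snapshot that behaves like the old Py2 list.
vals = list(updates.values())
vals.append("offline")  # calling .append() on `view` would raise AttributeError

# Py3: filter() returns a one-shot iterator; a second pass yields nothing.
mine = filter(lambda u: u.endswith(":hs"), ["@a:hs", "@x:other"])
first_pass = list(mine)
second_pass = list(mine)  # iterator already exhausted at this point
```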

View file

@@ -17,8 +17,9 @@ import logging

 from twisted.internet import defer

-from synapse.api.errors import SynapseError, AuthError, CodeMessageException
+from synapse.api.errors import AuthError, CodeMessageException, SynapseError
 from synapse.types import UserID, get_domain_from_id
+
 from ._base import BaseHandler

 logger = logging.getLogger(__name__)

View file

@@ -13,13 +13,14 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from ._base import BaseHandler
+import logging

 from twisted.internet import defer

 from synapse.util.async import Linearizer

-import logging
+from ._base import BaseHandler

 logger = logging.getLogger(__name__)

View file

@@ -12,17 +12,15 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from synapse.util import logcontext
-from ._base import BaseHandler
+import logging

 from twisted.internet import defer

-from synapse.util.logcontext import PreserveLoggingContext
 from synapse.types import get_domain_from_id
+from synapse.util import logcontext
+from synapse.util.logcontext import PreserveLoggingContext

-import logging
+from ._base import BaseHandler

 logger = logging.getLogger(__name__)

View file

@@ -18,14 +18,19 @@ import logging

 from twisted.internet import defer

+from synapse import types
 from synapse.api.errors import (
-    AuthError, Codes, SynapseError, RegistrationError, InvalidCaptchaError
+    AuthError,
+    Codes,
+    InvalidCaptchaError,
+    RegistrationError,
+    SynapseError,
 )
 from synapse.http.client import CaptchaServerHttpClient
-from synapse import types
-from synapse.types import UserID, create_requester, RoomID, RoomAlias
-from synapse.util.async import run_on_reactor, Linearizer
+from synapse.types import RoomAlias, RoomID, UserID, create_requester
+from synapse.util.async import Linearizer
 from synapse.util.threepids import check_3pid_allowed

 from ._base import BaseHandler

 logger = logging.getLogger(__name__)
@@ -139,7 +144,6 @@ class RegistrationHandler(BaseHandler):
         Raises:
             RegistrationError if there was a problem registering.
         """
-        yield run_on_reactor()
         password_hash = None
         if password:
             password_hash = yield self.auth_handler().hash(password)
@@ -431,8 +435,6 @@
         Raises:
             RegistrationError if there was a problem registering.
         """
-        yield run_on_reactor()
-
         if localpart is None:
             raise SynapseError(400, "Request must include user id")

View file

@@ -15,23 +15,20 @@
 # limitations under the License.

 """Contains functions for performing events on rooms."""
-from twisted.internet import defer
-from ._base import BaseHandler
-
-from synapse.types import UserID, RoomAlias, RoomID, RoomStreamToken
-from synapse.api.constants import (
-    EventTypes, JoinRules, RoomCreationPreset
-)
-from synapse.api.errors import AuthError, StoreError, SynapseError
-from synapse.util import stringutils
-from synapse.visibility import filter_events_for_client
-
-from collections import OrderedDict
 import logging
 import math
 import string
+from collections import OrderedDict

+from twisted.internet import defer

+from synapse.api.constants import EventTypes, JoinRules, RoomCreationPreset
+from synapse.api.errors import AuthError, Codes, StoreError, SynapseError
+from synapse.types import RoomAlias, RoomID, RoomStreamToken, UserID
+from synapse.util import stringutils
+from synapse.visibility import filter_events_for_client

+from ._base import BaseHandler

 logger = logging.getLogger(__name__)
@@ -115,7 +112,11 @@ class RoomCreationHandler(BaseHandler):
            )

            if mapping:
-               raise SynapseError(400, "Room alias already taken")
+               raise SynapseError(
+                   400,
+                   "Room alias already taken",
+                   Codes.ROOM_IN_USE
+               )
        else:
            room_alias = None
@@ -455,7 +456,7 @@ class RoomContextHandler(BaseHandler):
        state = yield self.store.get_state_for_events(
            [last_event_id], None
        )
-       results["state"] = state[last_event_id].values()
+       results["state"] = list(state[last_event_id].values())

        results["start"] = now_token.copy_and_replace(
            "room_key", results["start"]
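The "Room alias already taken" hunk above upgrades a bare HTTP 400 to one that also carries a machine-readable errcode (`Codes.ROOM_IN_USE`), which clients can switch on without parsing the human-readable message. A sketch of that pattern with a hypothetical error type, modelled loosely on the shape of `SynapseError` (the class and field names here are assumptions, not the real API):

```python
import json


class ApiError(Exception):
    """Hypothetical HTTP API error carrying a stable machine-readable
    errcode alongside the status and human-readable message."""

    def __init__(self, code, msg, errcode="M_UNKNOWN"):
        super().__init__(msg)
        self.code = code        # HTTP status to return
        self.msg = msg          # human-readable message (may change freely)
        self.errcode = errcode  # stable identifier clients can switch on

    def to_json(self):
        return json.dumps({"errcode": self.errcode, "error": self.msg})


try:
    raise ApiError(400, "Room alias already taken", "M_ROOM_IN_USE")
except ApiError as err:
    body = err.to_json()
    status = err.code
```

Keeping the errcode separate from the message means the wording can be improved later without breaking clients that match on it.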

Some files were not shown because too many files have changed in this diff.