
Merge branch 'develop' into travis/fix-federated-group-requests

Travis Ralston 2018-10-12 14:49:58 -06:00
commit e3586f7c06
120 changed files with 1166 additions and 923 deletions

@ -1,5 +1,27 @@
 version: 2
 jobs:
+  dockerhubuploadrelease:
+    machine: true
+    steps:
+      - checkout
+      - run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_TAG} .
+      - run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 --build-arg PYTHON_VERSION=3.6 .
+      - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
+      - run: docker push matrixdotorg/synapse:${CIRCLE_TAG}
+      - run: docker push matrixdotorg/synapse:${CIRCLE_TAG}-py3
+  dockerhubuploadlatest:
+    machine: true
+    steps:
+      - checkout
+      - run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_SHA1} .
+      - run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_SHA1}-py3 --build-arg PYTHON_VERSION=3.6 .
+      - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
+      - run: docker tag matrixdotorg/synapse:${CIRCLE_SHA1} matrixdotorg/synapse:latest
+      - run: docker tag matrixdotorg/synapse:${CIRCLE_SHA1}-py3 matrixdotorg/synapse:latest-py3
+      - run: docker push matrixdotorg/synapse:${CIRCLE_SHA1}
+      - run: docker push matrixdotorg/synapse:${CIRCLE_SHA1}-py3
+      - run: docker push matrixdotorg/synapse:latest
+      - run: docker push matrixdotorg/synapse:latest-py3
   sytestpy2:
     machine: true
     steps:
@ -131,3 +153,13 @@ workflows:
       filters:
         branches:
           ignore: /develop|master|release-.*/
+      - dockerhubuploadrelease:
+          filters:
+            tags:
+              only: /v[0-9].[0-9]+.[0-9]+.*/
+            branches:
+              ignore: /.*/
+      - dockerhubuploadlatest:
+          filters:
+            branches:
+              only: master
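The release job above only fires on tags matching the version filter. Note that the dots in /v[0-9].[0-9]+.[0-9]+.*/ are unescaped, so they match any character; a quick Python check (an approximation only, since CircleCI applies its own full-match regex semantics)::

    import re

    # The tag filter from the config above; the unescaped dots act as
    # single-character wildcards rather than literal '.' characters.
    TAG_PATTERN = re.compile(r"v[0-9].[0-9]+.[0-9]+.*")

    print(bool(TAG_PATTERN.match("v0.33.6")))  # True - the intended case
    print(bool(TAG_PATTERN.match("v0x33x6")))  # True - dots match anything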

@ -9,12 +9,15 @@ source $BASH_ENV
 if [[ -z "${CIRCLE_PR_NUMBER}" ]]
 then
-    echo "Can't figure out what the PR number is!"
-    exit 1
-fi
-
-# Get the reference, using the GitHub API
-GITBASE=`curl -q https://api.github.com/repos/matrix-org/synapse/pulls/${CIRCLE_PR_NUMBER} | jq -r '.base.ref'`
+    echo "Can't figure out what the PR number is! Assuming merge target is develop."
+
+    # It probably hasn't had a PR opened yet. Since all PRs land on develop, we
+    # can probably assume it's based on it and will be merged into it.
+    GITBASE="develop"
+else
+    # Get the reference, using the GitHub API
+    GITBASE=`curl -q https://api.github.com/repos/matrix-org/synapse/pulls/${CIRCLE_PR_NUMBER} | jq -r '.base.ref'`
+fi

 # Show what we are before
 git show -s
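The same fallback logic, sketched in Python for comparison (the URL and JSON shape are those of the public GitHub pulls API; requests is assumed to be available)::

    import os

    import requests

    pr_number = os.environ.get("CIRCLE_PR_NUMBER")
    if not pr_number:
        # No PR has been opened yet; every PR lands on develop, so assume it.
        gitbase = "develop"
    else:
        # Ask the GitHub API which branch the PR targets.
        url = "https://api.github.com/repos/matrix-org/synapse/pulls/%s" % pr_number
        gitbase = requests.get(url).json()["base"]["ref"]
    print(gitbase)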

@ -20,6 +20,9 @@ matrix:
     - python: 2.7
       env: TOX_ENV=py27

+    - python: 2.7
+      env: TOX_ENV=py27-old
+
     - python: 2.7
       env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"
       services:

@ -1,11 +1,91 @@
+Synapse 0.33.6 (2018-10-04)
+===========================
+
+Internal Changes
+----------------
+
+- Pin to prometheus_client<0.4 to avoid renaming all of our metrics ([\#4002](https://github.com/matrix-org/synapse/issues/4002))
+
+
+Synapse 0.33.6rc1 (2018-10-03)
+==============================
+
+Features
+--------
+
+- Adding the ability to change MAX_UPLOAD_SIZE for the docker container variables. ([\#3883](https://github.com/matrix-org/synapse/issues/3883))
+- Report "python_version" in the phone home stats ([\#3894](https://github.com/matrix-org/synapse/issues/3894))
+- Always LL ourselves if we're in a room ([\#3916](https://github.com/matrix-org/synapse/issues/3916))
+- Include eventid in log lines when processing incoming federation transactions ([\#3959](https://github.com/matrix-org/synapse/issues/3959))
+- Remove spurious check which made 'localhost' servers not work ([\#3964](https://github.com/matrix-org/synapse/issues/3964))
+
+
+Bugfixes
+--------
+
+- Fix problem when playing media from Chrome using direct URL (thanks @remjey!) ([\#3578](https://github.com/matrix-org/synapse/issues/3578))
+- support registering regular users non-interactively with register_new_matrix_user script ([\#3836](https://github.com/matrix-org/synapse/issues/3836))
+- Fix broken invite email links for self hosted riots ([\#3868](https://github.com/matrix-org/synapse/issues/3868))
+- Don't ratelimit autojoins ([\#3879](https://github.com/matrix-org/synapse/issues/3879))
+- Fix 500 error when deleting unknown room alias ([\#3889](https://github.com/matrix-org/synapse/issues/3889))
+- Fix some b'abcd' noise in logs and metrics ([\#3892](https://github.com/matrix-org/synapse/issues/3892), [\#3895](https://github.com/matrix-org/synapse/issues/3895))
+- When we join a room, always try the server we used for the alias lookup first, to avoid unresponsive and out-of-date servers. ([\#3899](https://github.com/matrix-org/synapse/issues/3899))
+- Fix incorrect server-name indication for outgoing federation requests ([\#3907](https://github.com/matrix-org/synapse/issues/3907))
+- Fix adding client IPs to the database failing on Python 3. ([\#3908](https://github.com/matrix-org/synapse/issues/3908))
+- Fix bug where things occasionally were not being timed out correctly. ([\#3910](https://github.com/matrix-org/synapse/issues/3910))
+- Fix bug where outbound federation would stop talking to some servers when using workers ([\#3914](https://github.com/matrix-org/synapse/issues/3914))
+- Fix some instances of ExpiringCache not expiring cache items ([\#3932](https://github.com/matrix-org/synapse/issues/3932), [\#3980](https://github.com/matrix-org/synapse/issues/3980))
+- Fix out-of-bounds error when LLing yourself ([\#3936](https://github.com/matrix-org/synapse/issues/3936))
+- Sending server notices regarding user consent now works on Python 3. ([\#3938](https://github.com/matrix-org/synapse/issues/3938))
+- Fix exceptions from metrics handler ([\#3956](https://github.com/matrix-org/synapse/issues/3956))
+- Fix error message for events with m.room.create missing from auth_events ([\#3960](https://github.com/matrix-org/synapse/issues/3960))
+- Fix errors due to concurrent monthly_active_user upserts ([\#3961](https://github.com/matrix-org/synapse/issues/3961))
+- Fix exceptions when processing incoming events over federation ([\#3968](https://github.com/matrix-org/synapse/issues/3968))
+- Replaced all occurrences of e.message with str(e). Contributed by Schnuffle ([\#3970](https://github.com/matrix-org/synapse/issues/3970))
+- Fix lazy loaded sync in the presence of rejected state events ([\#3986](https://github.com/matrix-org/synapse/issues/3986))
+- Fix error when logging incomplete HTTP requests ([\#3990](https://github.com/matrix-org/synapse/issues/3990))
+
+
+Internal Changes
+----------------
+
+- Unit tests can now be run under PostgreSQL in Docker using ``test_postgresql.sh``. ([\#3699](https://github.com/matrix-org/synapse/issues/3699))
+- Speed up calculation of typing updates for replication ([\#3794](https://github.com/matrix-org/synapse/issues/3794))
+- Remove documentation regarding installation on Cygwin, the use of WSL is recommended instead. ([\#3873](https://github.com/matrix-org/synapse/issues/3873))
+- Fix typo in README, synaspse -> synapse ([\#3897](https://github.com/matrix-org/synapse/issues/3897))
+- Increase the timeout when filling missing events in federation requests ([\#3903](https://github.com/matrix-org/synapse/issues/3903))
+- Improve the logging when handling a federation transaction ([\#3904](https://github.com/matrix-org/synapse/issues/3904), [\#3966](https://github.com/matrix-org/synapse/issues/3966))
+- Improve logging of outbound federation requests ([\#3906](https://github.com/matrix-org/synapse/issues/3906), [\#3909](https://github.com/matrix-org/synapse/issues/3909))
+- Fix the docker image building on python 3 ([\#3911](https://github.com/matrix-org/synapse/issues/3911))
+- Add a regression test for logging failed HTTP requests on Python 3. ([\#3912](https://github.com/matrix-org/synapse/issues/3912))
+- Comments and interface cleanup for on_receive_pdu ([\#3924](https://github.com/matrix-org/synapse/issues/3924))
+- Fix spurious exceptions when remote http client closes connection ([\#3925](https://github.com/matrix-org/synapse/issues/3925))
+- Log exceptions thrown by background tasks ([\#3927](https://github.com/matrix-org/synapse/issues/3927))
+- Add a cache to get_destination_retry_timings ([\#3933](https://github.com/matrix-org/synapse/issues/3933), [\#3991](https://github.com/matrix-org/synapse/issues/3991))
+- Automate pushes to docker hub ([\#3946](https://github.com/matrix-org/synapse/issues/3946))
+- Require attrs 16.0.0 or later ([\#3947](https://github.com/matrix-org/synapse/issues/3947))
+- Fix incompatibility with python3 on alpine ([\#3948](https://github.com/matrix-org/synapse/issues/3948))
+- Run the test suite on the oldest supported versions of our dependencies in CI. ([\#3952](https://github.com/matrix-org/synapse/issues/3952))
+- CircleCI now only runs merged jobs on PRs, and commit jobs on develop, master, and release branches. ([\#3957](https://github.com/matrix-org/synapse/issues/3957))
+- Fix docstrings and add tests for state store methods ([\#3958](https://github.com/matrix-org/synapse/issues/3958))
+- fix docstring for FederationClient.get_state_for_room ([\#3963](https://github.com/matrix-org/synapse/issues/3963))
+- Run notify_app_services as a bg process ([\#3965](https://github.com/matrix-org/synapse/issues/3965))
+- Clarifications in FederationHandler ([\#3967](https://github.com/matrix-org/synapse/issues/3967))
+- Further reduce the docker image size ([\#3972](https://github.com/matrix-org/synapse/issues/3972))
+- Build py3 docker images for docker hub too ([\#3976](https://github.com/matrix-org/synapse/issues/3976))
+- Updated the installation instructions to point to the matrix-synapse package on PyPI. ([\#3985](https://github.com/matrix-org/synapse/issues/3985))
+- Disable USE_FROZEN_DICTS for unittests by default. ([\#3987](https://github.com/matrix-org/synapse/issues/3987))
+- Remove unused Jenkins and development related files from the repo. ([\#3988](https://github.com/matrix-org/synapse/issues/3988))
+- Improve stacktraces in certain exceptions in the logs ([\#3989](https://github.com/matrix-org/synapse/issues/3989))

 Synapse 0.33.5.1 (2018-09-25)
 =============================

 Internal Changes
 ----------------

-- Fix incompatibility with older Twisted version in tests. Thanks
-  @OlegGirko! ([\#3940](https://github.com/matrix-org/synapse/issues/3940))
+- Fix incompatibility with older Twisted version in tests. Thanks @OlegGirko! ([\#3940](https://github.com/matrix-org/synapse/issues/3940))

 Synapse 0.33.5 (2018-09-24)

@ -23,13 +23,9 @@ recursive-include synapse/static *.gif
 recursive-include synapse/static *.html
 recursive-include synapse/static *.js

-exclude jenkins.sh
-exclude jenkins*.sh
-exclude jenkins*
 exclude Dockerfile
 exclude .dockerignore
 exclude test_postgresql.sh
-recursive-exclude jenkins *.sh

 include pyproject.toml
 recursive-include changelog.d *

@ -38,3 +34,6 @@ prune .github
 prune demo/etc
 prune docker
 prune .circleci
+
+exclude jenkins*
+recursive-exclude jenkins *.sh

MAP.rst

@ -1,35 +0,0 @@
Directory Structure
===================
Warning: this may be a bit stale...
::
.
├── cmdclient Basic CLI python Matrix client
├── demo Scripts for running standalone Matrix demos
├── docs All doc, including the draft Matrix API spec
│   ├── client-server The client-server Matrix API spec
│   ├── model Domain-specific elements of the Matrix API spec
│   ├── server-server The server-server model of the Matrix API spec
│   └── sphinx The internal API doc of the Synapse homeserver
├── experiments Early experiments of using Synapse's internal APIs
├── graph Visualisation of Matrix's distributed message store
├── synapse The reference Matrix homeserver implementation
│   ├── api Common building blocks for the APIs
│   │   ├── events Definition of state representation Events
│   │   └── streams Definition of streamable Event objects
│   ├── app The __main__ entry point for the homeserver
│   ├── crypto The PKI client/server used for secure federation
│   │   └── resource PKI helper objects (e.g. keys)
│   ├── federation Server-server state replication logic
│   ├── handlers The main business logic of the homeserver
│   ├── http Wrappers around Twisted's HTTP server & client
│   ├── rest Servlet-style RESTful API
│   ├── storage Persistence subsystem (currently only sqlite3)
│   │   └── schema sqlite persistence schema
│   └── util Synapse-specific utilities
├── tests Unit tests for the Synapse homeserver
└── webclient Basic AngularJS Matrix web client

@ -81,7 +81,7 @@ Thanks for using Matrix!
 Synapse Installation
 ====================

-Synapse is the reference python/twisted Matrix homeserver implementation.
+Synapse is the reference Python/Twisted Matrix homeserver implementation.

 System requirements:
@ -91,12 +91,13 @@ System requirements:
 Installing from source
 ----------------------

 (Prebuilt packages are available for some platforms - see `Platform-Specific
 Instructions`_.)

-Synapse is written in python but some of the libraries it uses are written in
-C. So before we can install synapse itself we need a working C compiler and the
-header files for python C extensions.
+Synapse is written in Python but some of the libraries it uses are written in
+C. So before we can install Synapse itself we need a working C compiler and the
+header files for Python C extensions.

 Installing prerequisites on Ubuntu or Debian::
@ -143,18 +144,24 @@ Installing prerequisites on OpenBSD::
     doas pkg_add python libffi py-pip py-setuptools sqlite3 py-virtualenv \
                  libxslt

-To install the synapse homeserver run::
+To install the Synapse homeserver run::

     virtualenv -p python2.7 ~/.synapse
     source ~/.synapse/bin/activate
     pip install --upgrade pip
     pip install --upgrade setuptools
-    pip install https://github.com/matrix-org/synapse/tarball/master
+    pip install matrix-synapse

-This installs synapse, along with the libraries it uses, into a virtual
+This installs Synapse, along with the libraries it uses, into a virtual
 environment under ``~/.synapse``. Feel free to pick a different directory
 if you prefer.

+This Synapse installation can then be later upgraded by using pip again with the
+update flag::
+
+    source ~/.synapse/bin/activate
+    pip install -U matrix-synapse
+
 In case of problems, please see the _`Troubleshooting` section below.
There is an official synapse image available at
@ -167,7 +174,7 @@ Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
 Dockerfile to automate a synapse server in a single Docker image, at
 https://hub.docker.com/r/avhost/docker-matrix/tags/

-Configuring synapse
+Configuring Synapse
 -------------------

 Before you can start Synapse, you will need to generate a configuration
@ -249,26 +256,6 @@ Setting up a TURN server
 For reliable VoIP calls to be routed via this homeserver, you MUST configure
 a TURN server. See `<docs/turn-howto.rst>`_ for details.
IPv6
----
As of Synapse 0.19 we finally support IPv6, many thanks to @kyrias and @glyph
for providing PR #1696.
However, for federation to work on hosts with IPv6 DNS servers you **must**
be running Twisted 17.1.0 or later - see https://github.com/matrix-org/synapse/issues/1002
for details. We can't make Synapse depend on Twisted 17.1 by default
yet as it will break most older distributions (see https://github.com/matrix-org/synapse/pull/1909)
so if you are using operating system dependencies you'll have to install your
own Twisted 17.1 package via pip or backports etc.
If you're running in a virtualenv then pip should have installed the newest
Twisted automatically, but if your virtualenv is old you will need to manually
upgrade to a newer Twisted dependency via:
pip install Twisted>=17.1.0
 Running Synapse
 ===============
@ -444,8 +431,7 @@ settings require a slightly more difficult installation process.
    using the ``.`` command, rather than ``bash``'s ``source``.
 5) Optionally, use ``pip`` to install ``lxml``, which Synapse needs to parse
    webpages for their titles.
-6) Use ``pip`` to install this repository: ``pip install
-   https://github.com/matrix-org/synapse/tarball/master``
+6) Use ``pip`` to install this repository: ``pip install matrix-synapse``
 7) Optionally, change ``_synapse``'s shell to ``/bin/false`` to reduce the
    chance of a compromised Synapse server being used to take over your box.
@ -473,7 +459,7 @@ Troubleshooting
 Troubleshooting Installation
 ----------------------------

-Synapse requires pip 1.7 or later, so if your OS provides too old a version you
+Synapse requires pip 8 or later, so if your OS provides too old a version you
 may need to manually upgrade it::

     sudo pip install --upgrade pip
@ -508,28 +494,6 @@ failing, e.g.::
    pip install twisted
On OS X, if you encounter clang: error: unknown argument: '-mno-fused-madd' you
will need to export CFLAGS=-Qunused-arguments.
Troubleshooting Running
-----------------------
If synapse fails with ``missing "sodium.h"`` crypto errors, you may need
to manually upgrade PyNaCL, as synapse uses NaCl (https://nacl.cr.yp.to/) for
encryption and digital signatures.
Unfortunately PyNACL currently has a few issues
(https://github.com/pyca/pynacl/issues/53) and
(https://github.com/pyca/pynacl/issues/79) that mean it may not install
correctly, causing all tests to fail with errors about missing "sodium.h". To
fix try re-installing from PyPI or directly from
(https://github.com/pyca/pynacl)::
# Install from PyPI
pip install --user --upgrade --force pynacl
# Install from github
pip install --user https://github.com/pyca/pynacl/tarball/master
 Running out of File Handles
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

@ -18,7 +18,7 @@ instructions that may be required are listed later in this document.
 .. code:: bash

-    pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
+    pip install --upgrade --process-dependency-links matrix-synapse

     # restart synapse
     synctl restart

@ -48,11 +48,11 @@ returned by the Client-Server API:

     # configured on port 443.
     curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"

-Upgrading to $NEXT_VERSION
+Upgrading to v0.27.3
 ====================

 This release expands the anonymous usage stats sent if the opt-in
 ``report_stats`` configuration is set to ``true``. We now capture RSS memory
 and cpu use at a very coarse level. This requires administrators to install
 the optional ``psutil`` python module.

@ -1 +0,0 @@
Fix problem when playing media from Chrome using direct URL (thanks @remjey!)

@ -1,2 +0,0 @@
Unit tests can now be run under PostgreSQL in Docker using
``test_postgresql.sh``.

@ -1 +0,0 @@
Fix broken invite email links for self hosted riots

@ -1,2 +0,0 @@
Remove documentation regarding installation on Cygwin, the use of WSL is
recommended instead.

@ -1 +0,0 @@
Don't ratelimit autojoins

@ -1 +0,0 @@
Adding the ability to change MAX_UPLOAD_SIZE for the docker container variables.

@ -1 +0,0 @@
Fix 500 error when deleting unknown room alias

@ -1 +0,0 @@
Fix some b'abcd' noise in logs and metrics

@ -1 +0,0 @@
Report "python_version" in the phone home stats

@ -1 +0,0 @@
Fix some b'abcd' noise in logs and metrics

@ -1 +0,0 @@
Fix typo in README, synaspse -> synapse

@ -1 +0,0 @@
When we join a room, always try the server we used for the alias lookup first, to avoid unresponsive and out-of-date servers.

@ -1 +0,0 @@
Increase the timeout when filling missing events in federation requests

@ -1 +0,0 @@
Improve the logging when handling a federation transaction

@ -1 +0,0 @@
Improve logging of outbound federation requests

@ -1 +0,0 @@
Fix incorrect server-name indication for outgoing federation requests

@ -1 +0,0 @@
Fix adding client IPs to the database failing on Python 3.

@ -1 +0,0 @@
Improve logging of outbound federation requests

@ -1 +0,0 @@
Fix bug where things occasionally were not being timed out correctly.

@ -1 +0,0 @@
Fix the docker image building on python 3

@ -1 +0,0 @@
Add a regression test for logging failed HTTP requests on Python 3.

@ -1 +0,0 @@
Fix bug where outbound federation would stop talking to some servers when using workers

@ -1 +0,0 @@
Always LL ourselves if we're in a room

@ -1 +0,0 @@
Comments and interface cleanup for on_receive_pdu

@ -1 +0,0 @@
Fix spurious exceptions when remote http client closes connection

@ -1 +0,0 @@
Log exceptions thrown by background tasks

@ -1 +0,0 @@
Fix some instances of ExpiringCache not expiring cache items

@ -1 +0,0 @@
Fix out-of-bounds error when LLing yourself

@ -1 +0,0 @@
Require attrs 16.0.0 or later

@ -1 +0,0 @@
Fix incompatibility with python3 on alpine

@ -1 +0,0 @@
Fix exceptions from metrics handler

@ -1 +0,0 @@
CircleCI now only runs merged jobs on PRs, and commit jobs on develop, master, and release branches.

changelog.d/3995.bugfix (new file)
@ -0,0 +1 @@
Fix bug in event persistence logic which caused 'NoneType is not iterable'

changelog.d/3996.bugfix (new file)
@ -0,0 +1 @@
Fix exception in background metrics collection

changelog.d/3997.bugfix (new file)
@ -0,0 +1 @@
Fix exception handling in fetching remote profiles

changelog.d/3999.bugfix (new file)
@ -0,0 +1 @@
Fix handling of rejected threepid invites

changelog.d/4008.misc (new file)
@ -0,0 +1 @@
Log exceptions in looping calls

changelog.d/4017.misc (new file)
@ -0,0 +1 @@
Optimisation for serving federation requests

changelog.d/4022.misc (new file)
@ -0,0 +1 @@
Add metric to count number of non-empty sync responses

changelog.d/4027.bugfix (new file)
@ -0,0 +1 @@
Workers now start on Python 3.

@ -1,9 +1,13 @@
 ARG PYTHON_VERSION=2
-FROM docker.io/python:${PYTHON_VERSION}-alpine3.8

-COPY . /synapse
+###
+### Stage 0: builder
+###
+FROM docker.io/python:${PYTHON_VERSION}-alpine3.8 as builder

-RUN apk add --no-cache --virtual .build_deps \
+# install the OS build deps
+RUN apk add \
     build-base \
     libffi-dev \
     libjpeg-turbo-dev \
@ -11,30 +15,47 @@ RUN apk add --no-cache --virtual .build_deps \
     libxslt-dev \
     linux-headers \
     postgresql-dev \
-    zlib-dev \
- && cd /synapse \
- && apk add --no-cache --virtual .runtime_deps \
-    libffi \
-    libjpeg-turbo \
-    libressl \
-    libxslt \
-    libpq \
-    zlib \
-    su-exec \
- && pip install --upgrade \
-    lxml \
-    pip \
-    psycopg2 \
-    setuptools \
- && mkdir -p /synapse/cache \
- && pip install -f /synapse/cache --upgrade --process-dependency-links . \
- && mv /synapse/docker/start.py /synapse/docker/conf / \
- && rm -rf \
-    setup.cfg \
-    setup.py \
-    synapse \
- && apk del .build_deps
+    zlib-dev
+
+# build things which have slow build steps, before we copy synapse, so that
+# the layer can be cached.
+#
+# (we really just care about caching a wheel here, as the "pip install" below
+# will install them again.)
+
+RUN pip install --prefix="/install" --no-warn-script-location \
+        cryptography \
+        msgpack-python \
+        pillow \
+        pynacl
+
+# now install synapse and all of the python deps to /install.
+COPY . /synapse
+RUN pip install --prefix="/install" --no-warn-script-location \
+        lxml \
+        psycopg2 \
+        /synapse
+
+###
+### Stage 1: runtime
+###
+
+FROM docker.io/python:${PYTHON_VERSION}-alpine3.8
+
+RUN apk add --no-cache --virtual .runtime_deps \
+    libffi \
+    libjpeg-turbo \
+    libressl \
+    libxslt \
+    libpq \
+    zlib \
+    su-exec
+
+COPY --from=builder /install /usr/local
+COPY ./docker/start.py /start.py
+COPY ./docker/conf /conf

 VOLUME ["/data"]

 EXPOSE 8008/tcp 8448/tcp


@ -1,23 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
export HAPROXY_BIN=/home/haproxy/haproxy-1.6.11/haproxy
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./jenkins/clone.sh dendron https://github.com/matrix-org/dendron.git
./dendron/jenkins/build_dendron.sh
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \
--haproxy \


@ -1,20 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./jenkins/clone.sh dendron https://github.com/matrix-org/dendron.git
./dendron/jenkins/build_dendron.sh
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \


@ -1,22 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Write coverage reports to a separate file for each process
export COVERAGE_OPTS="-p"
export DUMP_COVERAGE_COMMAND="coverage help"
# Output flake8 violations to violations.flake8.log
export PEP8SUFFIX="--output-file=violations.flake8.log"
rm .coverage* || echo "No coverage files to remove"
tox -e packaging -e pep8


@ -1,18 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \


@ -1,16 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \


@ -1,30 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Write coverage reports to a separate file for each process
export COVERAGE_OPTS="-p"
export DUMP_COVERAGE_COMMAND="coverage help"
# Output flake8 violations to violations.flake8.log
# Don't exit with non-0 status code on Jenkins,
# so that the build steps continue and a later step can decide whether to
# UNSTABLE or FAILURE this build.
export PEP8SUFFIX="--output-file=violations.flake8.log || echo flake8 finished with status code \$?"
rm .coverage* || echo "No coverage files to remove"
tox --notest -e py27
TOX_BIN=$WORKSPACE/.tox/py27/bin
python synapse/python_dependencies.py | xargs -n1 $TOX_BIN/pip install
$TOX_BIN/pip install lxml
tox -e py27


@ -1,44 +0,0 @@
#! /bin/bash
# This clones a project from github into a named subdirectory
# If the project has a branch with the same name as this branch
# then it will checkout that branch after cloning.
# Otherwise it will checkout "origin/develop."
# The first argument is the name of the directory to checkout
# the branch into.
# The second argument is the URL of the remote repository to checkout.
# Usually something like https://github.com/matrix-org/sytest.git
set -eux
NAME=$1
PROJECT=$2
BASE=".$NAME-base"
# Update our mirror.
if [ ! -d ".$NAME-base" ]; then
# Create a local mirror of the source repository.
# This saves us from having to download the entire repository
# when this script is next run.
git clone "$PROJECT" "$BASE" --mirror
else
# Fetch any updates from the source repository.
(cd "$BASE"; git fetch -p)
fi
# Remove the existing repository so that we have a clean copy
rm -rf "$NAME"
# Cloning with --shared means that we will share portions of the
# .git directory with our local mirror.
git clone "$BASE" "$NAME" --shared
# Jenkins may have supplied us with the name of the branch in the
# environment. Otherwise we will have to guess based on the current
# commit.
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"}
cd "$NAME"
# check out the relevant branch
git checkout "${GIT_BRANCH}" || (
echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop"
git checkout "origin/develop"
)


@ -1,33 +0,0 @@
#!/usr/bin/perl -pi
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
$copyright = <<EOT;
/* Copyright 2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
EOT
s/^(# -\*- coding: utf-8 -\*-\n)?/$1$copyright/ if ($. == 1);


@ -1,33 +0,0 @@
#!/usr/bin/perl -pi
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
$copyright = <<EOT;
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
EOT
s/^(# -\*- coding: utf-8 -\*-\n)?/$1$copyright/ if ($. == 1);


@ -21,4 +21,4 @@ try:
     verifier.verify(macaroon, key)
     print "Signature is correct"
 except Exception as e:
-    print e.message
+    print str(e)
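This is the first of many e.message -> str(e) changes in this commit: BaseException.message was removed in Python 3, while str(e) behaves the same on both major versions. A minimal illustration::

    try:
        raise ValueError("something went wrong")
    except ValueError as e:
        print(str(e))       # "something went wrong" on Python 2 and 3
        # print(e.message)  # AttributeError on Python 3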


@ -133,7 +133,7 @@ def register_new_user(user, password, server_location, shared_secret, admin):
print "Passwords do not match" print "Passwords do not match"
sys.exit(1) sys.exit(1)
if not admin: if admin is None:
admin = raw_input("Make admin [no]: ") admin = raw_input("Make admin [no]: ")
if admin in ("y", "yes", "true"): if admin in ("y", "yes", "true"):
admin = True admin = True
@ -160,10 +160,16 @@ if __name__ == "__main__":
default=None, default=None,
help="New password for user. Will prompt if omitted.", help="New password for user. Will prompt if omitted.",
) )
parser.add_argument( admin_group = parser.add_mutually_exclusive_group()
admin_group.add_argument(
"-a", "--admin", "-a", "--admin",
action="store_true", action="store_true",
help="Register new user as an admin. Will prompt if omitted.", help="Register new user as an admin. Will prompt if --no-admin is not set either.",
)
admin_group.add_argument(
"--no-admin",
action="store_true",
help="Register new user as a regular user. Will prompt if --admin is not set either.",
) )
group = parser.add_mutually_exclusive_group(required=True) group = parser.add_mutually_exclusive_group(required=True)
@ -197,4 +203,8 @@ if __name__ == "__main__":
else: else:
secret = args.shared_secret secret = args.shared_secret
register_new_user(args.user, args.password, args.server_url, secret, args.admin) admin = None
if args.admin or args.no_admin:
admin = args.admin
register_new_user(args.user, args.password, args.server_url, secret, admin)
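The mutually exclusive group yields a tri-state: --admin, --no-admin, or neither, in which case the script prompts interactively. The same pattern in isolation (argument names mirror the script above)::

    import argparse

    parser = argparse.ArgumentParser()
    admin_group = parser.add_mutually_exclusive_group()
    admin_group.add_argument("-a", "--admin", action="store_true")
    admin_group.add_argument("--no-admin", action="store_true")

    args = parser.parse_args(["--no-admin"])

    # Collapse the two flags into True, False, or None ("ask the user").
    admin = None
    if args.admin or args.no_admin:
        admin = args.admin
    print(admin)  # False here; None would mean "prompt"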


@ -27,4 +27,4 @@ try:
 except ImportError:
     pass

-__version__ = "0.33.5.1"
+__version__ = "0.33.6"


@ -226,7 +226,7 @@ class Filtering(object):
             jsonschema.validate(user_filter_json, USER_FILTER_SCHEMA,
                                 format_checker=FormatChecker())
         except jsonschema.ValidationError as e:
-            raise SynapseError(400, e.message)
+            raise SynapseError(400, str(e))

 class FilterCollection(object):


@ -64,7 +64,7 @@ class ConsentURIBuilder(object):
""" """
mac = hmac.new( mac = hmac.new(
key=self._hmac_secret, key=self._hmac_secret,
msg=user_id, msg=user_id.encode('ascii'),
digestmod=sha256, digestmod=sha256,
).hexdigest() ).hexdigest()
consent_uri = "%s_matrix/consent?%s" % ( consent_uri = "%s_matrix/consent?%s" % (
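The new .encode('ascii') matters because hmac.new() on Python 3 requires bytes for both the key and the message; passing a str raises TypeError. For example::

    import hmac
    from hashlib import sha256

    secret = b"hmac-secret"           # illustrative key
    user_id = "@alice:example.com"    # illustrative Matrix user ID

    # The message must be bytes on Python 3, hence the encode().
    mac = hmac.new(key=secret, msg=user_id.encode("ascii"), digestmod=sha256)
    print(mac.hexdigest())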


@ -24,7 +24,7 @@ try:
     python_dependencies.check_requirements()
 except python_dependencies.MissingRequirementError as e:
     message = "\n".join([
-        "Missing Requirement: %s" % (e.message,),
+        "Missing Requirement: %s" % (str(e),),
         "To install run:",
         "  pip install --upgrade --force \"%s\"" % (e.dependency,),
         "",


@ -17,6 +17,7 @@ import gc
 import logging
 import sys

+import psutil
 from daemonize import Daemonize

 from twisted.internet import error, reactor

@ -24,12 +25,6 @@ from twisted.internet import error, reactor
 from synapse.util import PreserveLoggingContext
 from synapse.util.rlimit import change_resource_limit

-try:
-    import affinity
-except Exception:
-    affinity = None
-
 logger = logging.getLogger(__name__)
@ -89,15 +84,20 @@ def start_reactor(
         with PreserveLoggingContext():
             logger.info("Running")

             if cpu_affinity is not None:
-                if not affinity:
-                    quit_with_error(
-                        "Missing package 'affinity' required for cpu_affinity\n"
-                        "option\n\n"
-                        "Install by running:\n\n"
-                        "   pip install affinity\n\n"
-                    )
-                logger.info("Setting CPU affinity to %s" % cpu_affinity)
-                affinity.set_process_affinity_mask(0, cpu_affinity)
+                # Turn the bitmask into bits, reverse it so we go from 0 up
+                mask_to_bits = bin(cpu_affinity)[2:][::-1]
+
+                cpus = []
+                cpu_num = 0
+
+                for i in mask_to_bits:
+                    if i == "1":
+                        cpus.append(cpu_num)
+                    cpu_num += 1
+
+                p = psutil.Process()
+                p.cpu_affinity(cpus)
+
             change_resource_limit(soft_file_limit)
             if gc_thresholds:
                 gc.set_threshold(*gc_thresholds)
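The psutil-based replacement decodes the affinity bitmask into a list of CPU indices, dropping the unmaintained affinity package. The decoding step on its own, with an illustrative mask value::

    # Bit n of the mask selects CPU n, e.g. 0x5 (binary 101) -> CPUs 0 and 2.
    cpu_affinity = 0x5

    mask_to_bits = bin(cpu_affinity)[2:][::-1]  # reverse: low bit first
    cpus = [cpu_num for cpu_num, bit in enumerate(mask_to_bits) if bit == "1"]

    assert cpus == [0, 2]
    # psutil.Process().cpu_affinity(cpus) then applies it to this process.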


@ -136,7 +136,7 @@ def start(config_options):
"Synapse appservice", config_options "Synapse appservice", config_options
) )
except ConfigError as e: except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n") sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1) sys.exit(1)
assert config.worker_app == "synapse.app.appservice" assert config.worker_app == "synapse.app.appservice"


@ -153,7 +153,7 @@ def start(config_options):
"Synapse client reader", config_options "Synapse client reader", config_options
) )
except ConfigError as e: except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n") sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1) sys.exit(1)
assert config.worker_app == "synapse.app.client_reader" assert config.worker_app == "synapse.app.client_reader"


@ -169,7 +169,7 @@ def start(config_options):
"Synapse event creator", config_options "Synapse event creator", config_options
) )
except ConfigError as e: except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n") sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1) sys.exit(1)
assert config.worker_app == "synapse.app.event_creator" assert config.worker_app == "synapse.app.event_creator"
@ -178,6 +178,9 @@ def start(config_options):
setup_logging(config, use_worker_options=True) setup_logging(config, use_worker_options=True)
# This should only be done on the user directory worker or the master
config.update_user_directory = False
events.USE_FROZEN_DICTS = config.use_frozen_dicts events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config) database_engine = create_engine(config.database_config)


@ -140,7 +140,7 @@ def start(config_options):
"Synapse federation reader", config_options "Synapse federation reader", config_options
) )
except ConfigError as e: except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n") sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1) sys.exit(1)
assert config.worker_app == "synapse.app.federation_reader" assert config.worker_app == "synapse.app.federation_reader"


@ -160,7 +160,7 @@ def start(config_options):
"Synapse federation sender", config_options "Synapse federation sender", config_options
) )
except ConfigError as e: except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n") sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1) sys.exit(1)
assert config.worker_app == "synapse.app.federation_sender" assert config.worker_app == "synapse.app.federation_sender"


@ -228,7 +228,7 @@ def start(config_options):
"Synapse frontend proxy", config_options "Synapse frontend proxy", config_options
) )
except ConfigError as e: except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n") sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1) sys.exit(1)
assert config.worker_app == "synapse.app.frontend_proxy" assert config.worker_app == "synapse.app.frontend_proxy"


@ -301,7 +301,7 @@ class SynapseHomeServer(HomeServer):
         try:
             database_engine.check_database(db_conn.cursor())
         except IncorrectDatabaseSetup as e:
-            quit_with_error(e.message)
+            quit_with_error(str(e))

 # Gauges to expose monthly active user control metrics

@ -328,7 +328,7 @@ def setup(config_options):
             config_options,
         )
     except ConfigError as e:
-        sys.stderr.write("\n" + e.message + "\n")
+        sys.stderr.write("\n" + str(e) + "\n")
         sys.exit(1)

     if not config:

@ -386,7 +386,6 @@ def setup(config_options):
         hs.get_pusherpool().start()
         hs.get_datastore().start_profiling()
         hs.get_datastore().start_doing_background_updates()
-        hs.get_federation_client().start_get_pdu_cache()

     reactor.callWhenRunning(start)


@ -133,7 +133,7 @@ def start(config_options):
"Synapse media repository", config_options "Synapse media repository", config_options
) )
except ConfigError as e: except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n") sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1) sys.exit(1)
assert config.worker_app == "synapse.app.media_repository" assert config.worker_app == "synapse.app.media_repository"


@ -28,6 +28,7 @@ from synapse.config.logger import setup_logging
 from synapse.http.site import SynapseSite
 from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
+from synapse.replication.slave.storage._base import __func__
 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
 from synapse.replication.slave.storage.events import SlavedEventStore
 from synapse.replication.slave.storage.pushers import SlavedPusherStore

@ -49,31 +50,31 @@ class PusherSlaveStore(
     SlavedAccountDataStore
 ):
     update_pusher_last_stream_ordering_and_success = (
-        DataStore.update_pusher_last_stream_ordering_and_success.__func__
+        __func__(DataStore.update_pusher_last_stream_ordering_and_success)
     )

     update_pusher_failing_since = (
-        DataStore.update_pusher_failing_since.__func__
+        __func__(DataStore.update_pusher_failing_since)
     )

     update_pusher_last_stream_ordering = (
-        DataStore.update_pusher_last_stream_ordering.__func__
+        __func__(DataStore.update_pusher_last_stream_ordering)
     )

     get_throttle_params_by_room = (
-        DataStore.get_throttle_params_by_room.__func__
+        __func__(DataStore.get_throttle_params_by_room)
     )

     set_throttle_params = (
-        DataStore.set_throttle_params.__func__
+        __func__(DataStore.set_throttle_params)
     )

     get_time_of_last_push_action_before = (
-        DataStore.get_time_of_last_push_action_before.__func__
+        __func__(DataStore.get_time_of_last_push_action_before)
     )

     get_profile_displayname = (
-        DataStore.get_profile_displayname.__func__
+        __func__(DataStore.get_profile_displayname)
     )

@ -191,7 +192,7 @@ def start(config_options):
             "Synapse pusher", config_options
         )
     except ConfigError as e:
-        sys.stderr.write("\n" + e.message + "\n")
+        sys.stderr.write("\n" + str(e) + "\n")
         sys.exit(1)

     assert config.worker_app == "synapse.app.pusher"
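Here __func__ is a small compatibility helper, not the attribute itself: on Python 2, DataStore.method is an unbound method whose plain function lives in .__func__, while on Python 3 it is already a plain function with no such attribute. A compatible definition would look like this (a sketch; the real helper lives in synapse.replication.slave.storage._base)::

    import six


    def __func__(inp):
        # Python 3: Class.method is already a plain function.
        if six.PY3:
            return inp
        # Python 2: unwrap the unbound method.
        return inp.__func__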


@ -33,7 +33,7 @@ from synapse.http.server import JsonResource
 from synapse.http.site import SynapseSite
 from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
-from synapse.replication.slave.storage._base import BaseSlavedStore
+from synapse.replication.slave.storage._base import BaseSlavedStore, __func__
 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
 from synapse.replication.slave.storage.client_ips import SlavedClientIpStore

@ -147,7 +147,7 @@ class SynchrotronPresence(object):
         and haven't come back yet. If there are poke the master about them.
         """
         now = self.clock.time_msec()
-        for user_id, last_sync_ms in self.users_going_offline.items():
+        for user_id, last_sync_ms in list(self.users_going_offline.items()):
             if now - last_sync_ms > 10 * 1000:
                 self.users_going_offline.pop(user_id, None)
                 self.send_user_sync(user_id, False, last_sync_ms)

@ -156,9 +156,9 @@ class SynchrotronPresence(object):
         # TODO Hows this supposed to work?
         pass

-    get_states = PresenceHandler.get_states.__func__
-    get_state = PresenceHandler.get_state.__func__
-    current_state_for_users = PresenceHandler.current_state_for_users.__func__
+    get_states = __func__(PresenceHandler.get_states)
+    get_state = __func__(PresenceHandler.get_state)
+    current_state_for_users = __func__(PresenceHandler.current_state_for_users)

     def user_syncing(self, user_id, affect_presence):
         if affect_presence:

@ -208,7 +208,7 @@ class SynchrotronPresence(object):
             ) for row in rows]

             for state in states:
-                self.user_to_current_state[row.user_id] = state
+                self.user_to_current_state[state.user_id] = state

             stream_id = token
             yield self.notify_from_replication(states, stream_id)

@ -410,7 +410,7 @@ def start(config_options):
             "Synapse synchrotron", config_options
         )
     except ConfigError as e:
-        sys.stderr.write("\n" + e.message + "\n")
+        sys.stderr.write("\n" + str(e) + "\n")
         sys.exit(1)

     assert config.worker_app == "synapse.app.synchrotron"
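The list() copy around users_going_offline.items() is needed because the loop body pops entries: on Python 3, .items() is a live view, and mutating the dict while iterating it raises RuntimeError. A minimal reproduction::

    users_going_offline = {"@a:hs": 1000, "@b:hs": 2000}

    # Iterating over a copy permits deletion inside the loop on Python 2 and 3.
    for user_id, last_sync_ms in list(users_going_offline.items()):
        users_going_offline.pop(user_id, None)

    # Without list(), Python 3 raises:
    #     RuntimeError: dictionary changed size during iteration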


@ -188,7 +188,7 @@ def start(config_options):
"Synapse user directory", config_options "Synapse user directory", config_options
) )
except ConfigError as e: except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n") sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1) sys.exit(1)
assert config.worker_app == "synapse.app.user_dir" assert config.worker_app == "synapse.app.user_dir"


@ -25,7 +25,7 @@ if __name__ == "__main__":
     try:
         config = HomeServerConfig.load_config("", sys.argv[3:])
     except ConfigError as e:
-        sys.stderr.write("\n" + e.message + "\n")
+        sys.stderr.write("\n" + str(e) + "\n")
         sys.exit(1)

     print (getattr(config, key))


@ -98,9 +98,9 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
     creation_event = auth_events.get((EventTypes.Create, ""), None)

     if not creation_event:
-        raise SynapseError(
+        raise AuthError(
             403,
-            "Room %r does not exist" % (event.room_id,)
+            "No create event in auth events",
         )

     creating_domain = get_domain_from_id(event.room_id)

@ -155,10 +155,7 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
         if user_level < invite_level:
             raise AuthError(
-                403, (
-                    "You cannot issue a third party invite for %s." %
-                    (event.content.display_name,)
-                )
+                403, "You don't have permission to invite users",
             )
         else:
             logger.debug("Allowing! %s", event)

@ -305,7 +302,7 @@ def _is_membership_change_allowed(event, auth_events):
         if user_level < invite_level:
             raise AuthError(
-                403, "You cannot invite user %s." % target_user_id
+                403, "You don't have permission to invite users",
             )
     elif Membership.JOIN == membership:
         # Joins are valid iff caller == target and they were:


@ -13,15 +13,22 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import os
+from distutils.util import strtobool
+
 import six

 from synapse.util.caches import intern_dict
 from synapse.util.frozenutils import freeze

 # Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents
-# bugs where we accidentally share e.g. signature dicts. However, converting
-# a dict to frozen_dicts is expensive.
-USE_FROZEN_DICTS = True
+# bugs where we accidentally share e.g. signature dicts. However, converting a
+# dict to frozen_dicts is expensive.
+#
+# NOTE: This is overridden by the configuration by the Synapse worker apps, but
+# for the sake of tests, it is set here while it cannot be configured on the
+# homeserver object itself.
+USE_FROZEN_DICTS = strtobool(os.environ.get("SYNAPSE_USE_FROZEN_DICTS", "0"))

 class _EventInternalMetadata(object):
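strtobool accepts the usual spellings, so SYNAPSE_USE_FROZEN_DICTS can be set to 1/0, true/false, or yes/no; it returns an int and raises ValueError for anything else. For example::

    import os
    from distutils.util import strtobool

    os.environ["SYNAPSE_USE_FROZEN_DICTS"] = "yes"

    # 1 for "y"/"yes"/"true"/"on"/"1", 0 for their opposites.
    USE_FROZEN_DICTS = strtobool(os.environ.get("SYNAPSE_USE_FROZEN_DICTS", "0"))
    print(USE_FROZEN_DICTS)  # 1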


@ -209,8 +209,6 @@ class FederationClient(FederationBase):
         Will attempt to get the PDU from each destination in the list until
         one succeeds.

-        This will persist the PDU locally upon receipt.
-
         Args:
             destinations (list): Which home servers to query
             event_id (str): event to fetch

@ -289,8 +287,7 @@ class FederationClient(FederationBase):
     @defer.inlineCallbacks
     @log_function
     def get_state_for_room(self, destination, room_id, event_id):
-        """Requests all of the `current` state PDUs for a given room from
-        a remote home server.
+        """Requests all of the room state at a given event from a remote home server.

         Args:
             destination (str): The remote homeserver to query for the state.

@ -298,9 +295,10 @@ class FederationClient(FederationBase):
             event_id (str): The id of the event we want the state at.

         Returns:
-            Deferred: Results in a list of PDUs.
+            Deferred[Tuple[List[EventBase], List[EventBase]]]:
+                A list of events in the state, and a list of events in the auth chain
+                for the given event.
         """
         try:
             # First we try and ask for just the IDs, as thats far quicker if
             # we have most of the state and auth_chain already.


@ -46,6 +46,7 @@ from synapse.replication.http.federation import (
 from synapse.types import get_domain_from_id
 from synapse.util.async_helpers import Linearizer, concurrently_execute
 from synapse.util.caches.response_cache import ResponseCache
+from synapse.util.logcontext import nested_logging_context
 from synapse.util.logutils import log_function

 # when processing incoming transactions, we try to handle multiple rooms in

@ -187,21 +188,22 @@ class FederationServer(FederationBase):
             for pdu in pdus_by_room[room_id]:
                 event_id = pdu.event_id
-                try:
-                    yield self._handle_received_pdu(
-                        origin, pdu
-                    )
-                    pdu_results[event_id] = {}
-                except FederationError as e:
-                    logger.warn("Error handling PDU %s: %s", event_id, e)
-                    pdu_results[event_id] = {"error": str(e)}
-                except Exception as e:
-                    f = failure.Failure()
-                    pdu_results[event_id] = {"error": str(e)}
-                    logger.error(
-                        "Failed to handle PDU %s: %s",
-                        event_id, f.getTraceback().rstrip(),
-                    )
+                with nested_logging_context(event_id):
+                    try:
+                        yield self._handle_received_pdu(
+                            origin, pdu
+                        )
+                        pdu_results[event_id] = {}
+                    except FederationError as e:
+                        logger.warn("Error handling PDU %s: %s", event_id, e)
+                        pdu_results[event_id] = {"error": str(e)}
+                    except Exception as e:
+                        f = failure.Failure()
+                        pdu_results[event_id] = {"error": str(e)}
+                        logger.error(
+                            "Failed to handle PDU %s: %s",
+                            event_id, f.getTraceback().rstrip(),
+                        )

         yield concurrently_execute(
             process_pdus_for_room, pdus_by_room.keys(),


@@ -137,26 +137,6 @@ class TransactionQueue(object):
         self._processing_pending_presence = False

-    def can_send_to(self, destination):
-        """Can we send messages to the given server?
-
-        We can't send messages to ourselves. If we are running on localhost
-        then we can only federation with other servers running on localhost.
-        Otherwise we only federate with servers on a public domain.
-
-        Args:
-            destination(str): The server we are possibly trying to send to.
-        Returns:
-            bool: True if we can send to the server.
-        """
-
-        if destination == self.server_name:
-            return False
-        if self.server_name.startswith("localhost"):
-            return destination.startswith("localhost")
-        else:
-            return not destination.startswith("localhost")
-
     def notify_new_events(self, current_id):
         """This gets called when we have some new events we might want to
         send out to other servers.
@@ -279,10 +259,7 @@ class TransactionQueue(object):
         self._order += 1

         destinations = set(destinations)
-        destinations = set(
-            dest for dest in destinations if self.can_send_to(dest)
-        )
+        destinations.discard(self.server_name)

         logger.debug("Sending to: %s", str(destinations))

         if not destinations:
@@ -358,7 +335,7 @@ class TransactionQueue(object):
         for destinations, states in hosts_and_states:
             for destination in destinations:
-                if not self.can_send_to(destination):
+                if destination == self.server_name:
                     continue

                 self.pending_presence_by_dest.setdefault(
@@ -377,7 +354,8 @@ class TransactionQueue(object):
             content=content,
         )

-        if not self.can_send_to(destination):
+        if destination == self.server_name:
+            logger.info("Not sending EDU to ourselves")
             return

         sent_edus_counter.inc()
@@ -392,10 +370,8 @@ class TransactionQueue(object):
         self._attempt_new_transaction(destination)

     def send_device_messages(self, destination):
-        if destination == self.server_name or destination == "localhost":
-            return
-
-        if not self.can_send_to(destination):
+        if destination == self.server_name:
+            logger.info("Not sending device update to ourselves")
             return

         self._attempt_new_transaction(destination)
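
With the localhost special-casing gone, the only destination the transaction queue must never target is the local server itself, so can_send_to() collapses into a direct comparison, and the destination set can be pruned with set.discard(), which never raises if the element is absent. A toy illustration (not Synapse code):

    server_name = "example.org"  # illustrative value

    destinations = {"example.org", "matrix.org", "chat.example.com"}
    destinations.discard(server_name)  # drop ourselves, if present

    assert destinations == {"matrix.org", "chat.example.com"}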


@@ -28,6 +28,7 @@ from synapse.metrics import (
     event_processing_loop_room_count,
 )
 from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.util import log_failure
 from synapse.util.logcontext import make_deferred_yieldable, run_in_background
 from synapse.util.metrics import Measure

@@ -36,17 +37,6 @@ logger = logging.getLogger(__name__)

 events_processed_counter = Counter("synapse_handlers_appservice_events_processed", "")

-
-def log_failure(failure):
-    logger.error(
-        "Application Services Failure",
-        exc_info=(
-            failure.type,
-            failure.value,
-            failure.getTracebackObject()
-        )
-    )
-

 class ApplicationServicesHandler(object):

     def __init__(self, hs):
@@ -112,7 +102,10 @@ class ApplicationServicesHandler(object):

                 if not self.started_scheduler:
                     def start_scheduler():
-                        return self.scheduler.start().addErrback(log_failure)
+                        return self.scheduler.start().addErrback(
+                            log_failure, "Application Services Failure",
+                        )
+
                     run_as_background_process("as_scheduler", start_scheduler)
                     self.started_scheduler = True
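
The rewritten errback leans on Twisted's Deferred.addErrback(callback, *args): the Failure is passed first and any extra arguments follow it, which is what lets the shared helper accept a message. A runnable sketch; the body of log_failure below is an assumed shape, not the real synapse.util.log_failure:

    import logging

    from twisted.internet import defer
    from twisted.python import failure

    logger = logging.getLogger(__name__)

    def log_failure(f, msg):
        # Assumed simplification of the shared helper: log the failure under
        # the caller-supplied message and swallow it.
        logger.error("%s: %s", msg, f.getTraceback().rstrip())

    d = defer.Deferred()
    d.addErrback(log_failure, "Application Services Failure")
    d.errback(failure.Failure(RuntimeError("scheduler blew up")))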


@@ -341,7 +341,7 @@
 def _exception_to_failure(e):
     if isinstance(e, CodeMessageException):
         return {
-            "status": e.code, "message": e.message,
+            "status": e.code, "message": str(e),
         }

     if isinstance(e, NotRetryingDestination):
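
The motivation for str(e) here: BaseException.message was deprecated in Python 2.6 and removed entirely in Python 3, while str(e) yields the message on both. For example:

    try:
        raise ValueError("device not found")
    except ValueError as e:
        assert str(e) == "device not found"
        # On Python 3, accessing e.message here would raise AttributeError.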


@@ -18,7 +18,6 @@

 import itertools
 import logging
-import sys

 import six
 from six import iteritems, itervalues
@@ -106,7 +105,7 @@ class FederationHandler(BaseHandler):

         self.hs = hs

-        self.store = hs.get_datastore()
+        self.store = hs.get_datastore()  # type: synapse.storage.DataStore
         self.federation_client = hs.get_federation_client()
         self.state_handler = hs.get_state_handler()
         self.server_name = hs.hostname
@@ -323,14 +322,22 @@ class FederationHandler(BaseHandler):
                     affected=pdu.event_id,
                 )

-            # Calculate the state of the previous events, and
-            # de-conflict them to find the current state.
-            state_groups = []
+            # Calculate the state after each of the previous events, and
+            # resolve them to find the correct state at the current event.
             auth_chains = set()
+            event_map = {
+                event_id: pdu,
+            }
             try:
                 # Get the state of the events we know about
-                ours = yield self.store.get_state_groups(room_id, list(seen))
-                state_groups.append(ours)
+                ours = yield self.store.get_state_groups_ids(room_id, seen)
+
+                # state_maps is a list of mappings from (type, state_key) to event_id
+                # type: list[dict[tuple[str, str], str]]
+                state_maps = list(ours.values())
+
+                # we don't need this any more, let's delete it.
+                del ours

                 # Ask the remote server for the states we don't
                 # know about
@@ -339,27 +346,65 @@ class FederationHandler(BaseHandler):
                         "[%s %s] Requesting state at missing prev_event %s",
                         room_id, event_id, p,
                     )
-                    state, got_auth_chain = (
-                        yield self.federation_client.get_state_for_room(
-                            origin, room_id, p,
-                        )
-                    )
-                    auth_chains.update(got_auth_chain)
-                    state_group = {(x.type, x.state_key): x.event_id for x in state}
-                    state_groups.append(state_group)
+
+                    with logcontext.nested_logging_context(p):
+                        # note that if any of the missing prevs share missing state or
+                        # auth events, the requests to fetch those events are deduped
+                        # by the get_pdu_cache in federation_client.
+                        remote_state, got_auth_chain = (
+                            yield self.federation_client.get_state_for_room(
+                                origin, room_id, p,
+                            )
+                        )
+
+                        # we want the state *after* p; get_state_for_room returns the
+                        # state *before* p.
+                        remote_event = yield self.federation_client.get_pdu(
+                            [origin], p, outlier=True,
+                        )
+
+                        if remote_event is None:
+                            raise Exception(
+                                "Unable to get missing prev_event %s" % (p, )
+                            )
+
+                        if remote_event.is_state():
+                            remote_state.append(remote_event)
+
+                        # XXX hrm I'm not convinced that duplicate events will compare
+                        # for equality, so I'm not sure this does what the author
+                        # hoped.
+                        auth_chains.update(got_auth_chain)
+
+                        remote_state_map = {
+                            (x.type, x.state_key): x.event_id for x in remote_state
+                        }
+                        state_maps.append(remote_state_map)
+
+                        for x in remote_state:
+                            event_map[x.event_id] = x

                 # Resolve any conflicting state
+                @defer.inlineCallbacks
                 def fetch(ev_ids):
-                    return self.store.get_events(
-                        ev_ids, get_prev_content=False, check_redacted=False
+                    fetched = yield self.store.get_events(
+                        ev_ids, get_prev_content=False, check_redacted=False,
                     )
+                    # add any events we fetch here to the `event_map` so that we
+                    # can use them to build the state event list below.
+                    event_map.update(fetched)
+                    defer.returnValue(fetched)

                 room_version = yield self.store.get_room_version(room_id)
                 state_map = yield resolve_events_with_factory(
-                    room_version, state_groups, {event_id: pdu}, fetch
+                    room_version, state_maps, event_map, fetch,
                 )

-                state = (yield self.store.get_events(state_map.values())).values()
+                # we need to give _process_received_pdu the actual state events
+                # rather than event ids, so generate that now.
+                state = [
+                    event_map[e] for e in six.itervalues(state_map)
+                ]
+
                 auth_chain = list(auth_chains)
             except Exception:
                 logger.warn(
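
For orientation, a toy model of the data structures this hunk builds, with illustrative event IDs and a deliberately naive last-writer-wins "resolution" (the real algorithm in synapse.state is far subtler): each entry in state_maps maps (type, state_key) to an event_id for the state after one prev_event, and event_map turns resolved event_ids back into event objects without re-fetching.

    state_maps = [
        {("m.room.name", ""): "$name1", ("m.room.topic", ""): "$topic1"},
        {("m.room.name", ""): "$name2"},
    ]
    event_map = {
        "$name1": {"event_id": "$name1", "content": {"name": "Old name"}},
        "$name2": {"event_id": "$name2", "content": {"name": "New name"}},
        "$topic1": {"event_id": "$topic1", "content": {"topic": "A topic"}},
    }

    # Stand-in resolution: later maps win. Only the shape of the data matters here.
    resolved = {}
    for state_map in state_maps:
        resolved.update(state_map)

    # Turn resolved event_ids back into events, as the new code does.
    state = [event_map[event_id] for event_id in resolved.values()]
    assert {e["event_id"] for e in state} == {"$name2", "$topic1"}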
@@ -483,20 +528,21 @@ class FederationHandler(BaseHandler):
                     "[%s %s] Handling received prev_event %s",
                     room_id, event_id, ev.event_id,
                 )
-                try:
-                    yield self.on_receive_pdu(
-                        origin,
-                        ev,
-                        sent_to_us_directly=False,
-                    )
-                except FederationError as e:
-                    if e.code == 403:
-                        logger.warn(
-                            "[%s %s] Received prev_event %s failed history check.",
-                            room_id, event_id, ev.event_id,
-                        )
-                    else:
-                        raise
+                with logcontext.nested_logging_context(ev.event_id):
+                    try:
+                        yield self.on_receive_pdu(
+                            origin,
+                            ev,
+                            sent_to_us_directly=False,
+                        )
+                    except FederationError as e:
+                        if e.code == 403:
+                            logger.warn(
+                                "[%s %s] Received prev_event %s failed history check.",
+                                room_id, event_id, ev.event_id,
+                            )
+                        else:
+                            raise

     @defer.inlineCallbacks
     def _process_received_pdu(self, origin, event, state, auth_chain):
@@ -572,6 +618,10 @@ class FederationHandler(BaseHandler):
                 })
                 seen_ids.add(e.event_id)

+        logger.info(
+            "[%s %s] persisting newly-received auth/state events %s",
+            room_id, event_id, [e["event"].event_id for e in event_infos]
+        )
         yield self._handle_new_events(origin, event_infos)

         try:
@@ -1135,7 +1185,8 @@ class FederationHandler(BaseHandler):
             try:
                 logger.info("Processing queued PDU %s which was received "
                             "while we were joining %s", p.event_id, p.room_id)
-                yield self.on_receive_pdu(origin, p, sent_to_us_directly=True)
+                with logcontext.nested_logging_context(p.event_id):
+                    yield self.on_receive_pdu(origin, p, sent_to_us_directly=True)
             except Exception as e:
                 logger.warn(
                     "Error handling queued PDU %s from %s: %s",
@@ -1550,6 +1601,9 @@ class FederationHandler(BaseHandler):
             auth_events=auth_events,
         )

+        # reraise does not allow inlineCallbacks to preserve the stacktrace, so we
+        # hack around with a try/finally instead.
+        success = False
         try:
             if not event.internal_metadata.is_outlier() and not backfilled:
                 yield self.action_generator.handle_push_actions_for_event(
@@ -1560,15 +1614,13 @@ class FederationHandler(BaseHandler):
                 [(event, context)],
                 backfilled=backfilled,
             )
-        except:  # noqa: E722, as we reraise the exception this is fine.
-            tp, value, tb = sys.exc_info()
-
-            logcontext.run_in_background(
-                self.store.remove_push_actions_from_staging,
-                event.event_id,
-            )
-
-            six.reraise(tp, value, tb)
+            success = True
+        finally:
+            if not success:
+                logcontext.run_in_background(
+                    self.store.remove_push_actions_from_staging,
+                    event.event_id,
+                )

         defer.returnValue(context)
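
The success-flag idiom these hunks introduce, in isolation: a re-raise via sys.exc_info()/six.reraise inside an inlineCallbacks generator does not preserve the stack trace, so the cleanup is keyed off a flag in a finally block and the original exception propagates untouched. A minimal sketch:

    def persist_with_cleanup(persist, cleanup):
        success = False
        try:
            result = persist()
            success = True
            return result
        finally:
            if not success:
                # Runs only on the failure path; the in-flight exception
                # continues to propagate with its traceback intact.
                cleanup()

    assert persist_with_cleanup(lambda: "persisted", lambda: None) == "persisted"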
@@ -1581,15 +1633,22 @@ class FederationHandler(BaseHandler):

         Notifies about the events where appropriate.
         """
-        contexts = yield logcontext.make_deferred_yieldable(defer.gatherResults(
-            [
-                logcontext.run_in_background(
-                    self._prep_event,
+
+        @defer.inlineCallbacks
+        def prep(ev_info):
+            event = ev_info["event"]
+            with logcontext.nested_logging_context(suffix=event.event_id):
+                res = yield self._prep_event(
                     origin,
-                    ev_info["event"],
+                    event,
                     state=ev_info.get("state"),
                     auth_events=ev_info.get("auth_events"),
                 )
+            defer.returnValue(res)
+
+        contexts = yield logcontext.make_deferred_yieldable(defer.gatherResults(
+            [
+                logcontext.run_in_background(prep, ev_info)
                 for ev_info in event_infos
             ], consumeErrors=True,
         ))


@@ -14,9 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
-import sys

-import six
 from six import iteritems, itervalues, string_types

 from canonicaljson import encode_canonical_json, json
@@ -624,6 +622,9 @@ class EventCreationHandler(object):
             event, context
         )

+        # reraise does not allow inlineCallbacks to preserve the stacktrace, so we
+        # hack around with a try/finally instead.
+        success = False
         try:
             # If we're a worker we need to hit out to the master.
             if self.config.worker_app:
@@ -636,6 +637,7 @@ class EventCreationHandler(object):
                     ratelimit=ratelimit,
                     extra_users=extra_users,
                 )
+                success = True
                 return

             yield self.persist_and_notify_client_event(
@@ -645,17 +647,16 @@ class EventCreationHandler(object):
                 ratelimit=ratelimit,
                 extra_users=extra_users,
             )
-        except:  # noqa: E722, as we reraise the exception this is fine.
-            # Ensure that we actually remove the entries in the push actions
-            # staging area, if we calculated them.
-            tp, value, tb = sys.exc_info()
-
-            run_in_background(
-                self.store.remove_push_actions_from_staging,
-                event.event_id,
-            )
-
-            six.reraise(tp, value, tb)
+
+            success = True
+        finally:
+            if not success:
+                # Ensure that we actually remove the entries in the push actions
+                # staging area, if we calculated them.
+                run_in_background(
+                    self.store.remove_push_actions_from_staging,
+                    event.event_id,
+                )

     @defer.inlineCallbacks
     def persist_and_notify_client_event(


@@ -142,10 +142,8 @@ class BaseProfileHandler(BaseHandler):
                 if e.code != 404:
                     logger.exception("Failed to get displayname")
                 raise
-            except Exception:
-                logger.exception("Failed to get displayname")
-            else:
-                defer.returnValue(result["displayname"])
+
+            defer.returnValue(result["displayname"])

     @defer.inlineCallbacks
     def set_displayname(self, target_user, requester, new_displayname, by_admin=False):
@@ -199,8 +197,6 @@ class BaseProfileHandler(BaseHandler):
                 if e.code != 404:
                     logger.exception("Failed to get avatar_url")
                 raise
-            except Exception:
-                logger.exception("Failed to get avatar_url")

             defer.returnValue(result["avatar_url"])
@@ -278,7 +274,7 @@ class BaseProfileHandler(BaseHandler):
             except Exception as e:
                 logger.warn(
                     "Failed to update join event for room %s - %s",
-                    room_id, str(e.message)
+                    room_id, str(e)
                 )


@@ -20,6 +20,8 @@ import logging

 from six import iteritems, itervalues

+from prometheus_client import Counter
+
 from twisted.internet import defer

 from synapse.api.constants import EventTypes, Membership
@@ -36,6 +38,19 @@ from synapse.visibility import filter_events_for_client

 logger = logging.getLogger(__name__)

+# Counts the number of times we returned a non-empty sync. `type` is one of
+# "initial_sync", "full_state_sync" or "incremental_sync", `lazy_loaded` is
+# "true" or "false" depending on if the request asked for lazy loaded members or
+# not.
+non_empty_sync_counter = Counter(
+    "synapse_handlers_sync_nonempty_total",
+    "Count of non empty sync responses. type is initial_sync/full_state_sync"
+    "/incremental_sync. lazy_loaded indicates if lazy loaded members were "
+    "enabled for that request.",
+    ["type", "lazy_loaded"],
+)
+
 # Store the cache that tracks which lazy-loaded members have been sent to a given
 # client for no more than 30 minutes.
 LAZY_LOADED_MEMBERS_CACHE_MAX_AGE = 30 * 60 * 1000
@@ -227,14 +242,16 @@ class SyncHandler(object):
     @defer.inlineCallbacks
     def _wait_for_sync_for_user(self, sync_config, since_token, timeout,
                                 full_state):
+        if since_token is None:
+            sync_type = "initial_sync"
+        elif full_state:
+            sync_type = "full_state_sync"
+        else:
+            sync_type = "incremental_sync"
+
         context = LoggingContext.current_context()
         if context:
-            if since_token is None:
-                context.tag = "initial_sync"
-            elif full_state:
-                context.tag = "full_state_sync"
-            else:
-                context.tag = "incremental_sync"
+            context.tag = sync_type

         if timeout == 0 or since_token is None or full_state:
             # we are going to return immediately, so don't bother calling
@@ -242,7 +259,6 @@ class SyncHandler(object):
             result = yield self.current_sync_for_user(
                 sync_config, since_token, full_state=full_state,
             )
-            defer.returnValue(result)
         else:
             def current_sync_callback(before_token, after_token):
                 return self.current_sync_for_user(sync_config, since_token)
@@ -251,7 +267,15 @@ class SyncHandler(object):
                 sync_config.user.to_string(), timeout, current_sync_callback,
                 from_token=since_token,
             )
-            defer.returnValue(result)
+
+        if result:
+            if sync_config.filter_collection.lazy_load_members():
+                lazy_loaded = "true"
+            else:
+                lazy_loaded = "false"
+            non_empty_sync_counter.labels(sync_type, lazy_loaded).inc()
+
+        defer.returnValue(result)

     def current_sync_for_user(self, sync_config, since_token=None,
                               full_state=False):
@@ -567,13 +591,13 @@ class SyncHandler(object):
         # be a valid name or canonical_alias - i.e. we're checking that they
         # haven't been "deleted" by blatting {} over the top.
         if name_id:
-            name = yield self.store.get_event(name_id, allow_none=False)
+            name = yield self.store.get_event(name_id, allow_none=True)
             if name and name.content:
                 defer.returnValue(summary)

         if canonical_alias_id:
             canonical_alias = yield self.store.get_event(
-                canonical_alias_id, allow_none=False,
+                canonical_alias_id, allow_none=True,
             )
             if canonical_alias and canonical_alias.content:
                 defer.returnValue(summary)
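
The new counter uses prometheus_client's labelled-metric API: label names are fixed when the Counter is constructed, and concrete values are bound with .labels() at increment time. A runnable sketch:

    from prometheus_client import Counter

    non_empty_sync_counter = Counter(
        "synapse_handlers_sync_nonempty_total",
        "Count of non empty sync responses.",
        ["type", "lazy_loaded"],
    )

    # One time series per distinct (type, lazy_loaded) pair.
    non_empty_sync_counter.labels("initial_sync", "false").inc()
    non_empty_sync_counter.labels("incremental_sync", "true").inc()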


@@ -20,6 +20,7 @@ from twisted.internet import defer

 from synapse.api.errors import AuthError, SynapseError
 from synapse.types import UserID, get_domain_from_id
+from synapse.util.caches.stream_change_cache import StreamChangeCache
 from synapse.util.logcontext import run_in_background
 from synapse.util.metrics import Measure
 from synapse.util.wheel_timer import WheelTimer
@@ -68,6 +69,11 @@ class TypingHandler(object):
         # map room IDs to sets of users currently typing
         self._room_typing = {}

+        # caches which room_ids changed at which serials
+        self._typing_stream_change_cache = StreamChangeCache(
+            "TypingStreamChangeCache", self._latest_room_serial,
+        )
+
         self.clock.looping_call(
             self._handle_timeouts,
             5000,
@@ -218,6 +224,7 @@ class TypingHandler(object):
         for domain in set(get_domain_from_id(u) for u in users):
             if domain != self.server_name:
+                logger.debug("sending typing update to %s", domain)
                 self.federation.send_edu(
                     destination=domain,
                     edu_type="m.typing",
@@ -274,19 +281,29 @@ class TypingHandler(object):
         self._latest_room_serial += 1
         self._room_serials[member.room_id] = self._latest_room_serial
+        self._typing_stream_change_cache.entity_has_changed(
+            member.room_id, self._latest_room_serial,
+        )

         self.notifier.on_new_event(
             "typing_key", self._latest_room_serial, rooms=[member.room_id]
         )

     def get_all_typing_updates(self, last_id, current_id):
-        # TODO: Work out a way to do this without scanning the entire state.
         if last_id == current_id:
             return []

+        changed_rooms = self._typing_stream_change_cache.get_all_entities_changed(
+            last_id,
+        )
+
+        if changed_rooms is None:
+            changed_rooms = self._room_serials
+
         rows = []
-        for room_id, serial in self._room_serials.items():
-            if last_id < serial and serial <= current_id:
+        for room_id in changed_rooms:
+            serial = self._room_serials[room_id]
+            if last_id < serial <= current_id:
                 typing = self._room_typing[room_id]
                 rows.append((serial, room_id, list(typing)))

         rows.sort()
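
A minimal stand-in for the StreamChangeCache behaviour the new code relies on (not the real synapse.util.caches.stream_change_cache implementation): it remembers which entity changed at which stream position, and answers None for positions older than anything it has seen, which is the caller's cue to fall back to scanning every room.

    class MiniStreamChangeCache(object):
        def __init__(self, name, current_stream_pos):
            self._earliest_known = current_stream_pos
            self._changes = {}  # entity -> stream position of its last change

        def entity_has_changed(self, entity, stream_pos):
            self._changes[entity] = stream_pos

        def get_all_entities_changed(self, stream_pos):
            if stream_pos < self._earliest_known:
                return None  # can't answer; caller must scan everything
            return [e for e, pos in self._changes.items() if pos > stream_pos]

    cache = MiniStreamChangeCache("TypingStreamChangeCache", 10)
    cache.entity_has_changed("!room1:example.org", 11)
    cache.entity_has_changed("!room2:example.org", 12)
    assert cache.get_all_entities_changed(11) == ["!room2:example.org"]
    assert cache.get_all_entities_changed(5) is None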


@@ -75,14 +75,14 @@ class SynapseRequest(Request):
         return '<%s at 0x%x method=%r uri=%r clientproto=%r site=%r>' % (
             self.__class__.__name__,
             id(self),
-            self.method.decode('ascii', errors='replace'),
+            self.get_method(),
             self.get_redacted_uri(),
             self.clientproto.decode('ascii', errors='replace'),
             self.site.site_tag,
         )

     def get_request_id(self):
-        return "%s-%i" % (self.method.decode('ascii'), self.request_seq)
+        return "%s-%i" % (self.get_method(), self.request_seq)

     def get_redacted_uri(self):
         uri = self.uri
@@ -90,6 +90,21 @@ class SynapseRequest(Request):
             uri = self.uri.decode('ascii')
         return redact_uri(uri)

+    def get_method(self):
+        """Gets the method associated with the request (or placeholder if not
+        method has yet been received).
+
+        Note: This is necessary as the placeholder value in twisted is str
+        rather than bytes, so we need to sanitise `self.method`.
+
+        Returns:
+            str
+        """
+        method = self.method
+        if isinstance(method, bytes):
+            method = self.method.decode('ascii')
+        return method
+
     def get_user_agent(self):
         return self.requestHeaders.getRawHeaders(b"User-Agent", [None])[-1]

@@ -119,7 +134,7 @@ class SynapseRequest(Request):
             # dispatching to the handler, so that the handler
             # can update the servlet name in the request
             # metrics
-            requests_counter.labels(self.method.decode('ascii'),
+            requests_counter.labels(self.get_method(),
                                     self.request_metrics.name).inc()

     @contextlib.contextmanager
@@ -207,14 +222,14 @@ class SynapseRequest(Request):
         self.start_time = time.time()
         self.request_metrics = RequestMetrics()
         self.request_metrics.start(
-            self.start_time, name=servlet_name, method=self.method.decode('ascii'),
+            self.start_time, name=servlet_name, method=self.get_method(),
         )

         self.site.access_logger.info(
             "%s - %s - Received request: %s %s",
             self.getClientIP(),
             self.site.site_tag,
-            self.method.decode('ascii'),
+            self.get_method(),
             self.get_redacted_uri()
         )
@@ -280,7 +295,7 @@ class SynapseRequest(Request):
             int(usage.db_txn_count),
             self.sentLength,
             code,
-            self.method.decode('ascii'),
+            self.get_method(),
             self.get_redacted_uri(),
             self.clientproto.decode('ascii', errors='replace'),
             user_agent,
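
The sanitisation in isolation: once a request line has been parsed, Twisted fills Request.method with bytes on Python 3, but, per the note in the new docstring, the placeholder value used before that point is a native str, so unconditionally calling .decode() can blow up. Checking the type handles both cases:

    def get_method(method):
        # Normalise bytes to str; leave Twisted's str placeholder untouched.
        if isinstance(method, bytes):
            return method.decode("ascii")
        return method

    assert get_method(b"GET") == "GET"
    # The exact placeholder text is Twisted's business; the point is it is str.
    assert get_method("(no method yet)") == "(no method yet)"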


@@ -101,9 +101,13 @@ class _Collector(object):
             labels=["name"],
         )

-        # We copy the dict so that it doesn't change from underneath us
+        # We copy the dict so that it doesn't change from underneath us.
+        # We also copy the process lists as that can also change
         with _bg_metrics_lock:
-            _background_processes_copy = dict(_background_processes)
+            _background_processes_copy = {
+                k: list(v)
+                for k, v in six.iteritems(_background_processes)
+            }

         for desc, processes in six.iteritems(_background_processes_copy):
             background_process_in_flight_count.add_metric(
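
The race the deeper copy avoids, in miniature: copying only the outer dict still shares the inner collections, so another thread mutating one while the collector iterates could change it mid-iteration. Copying each list under the lock gives the iteration a stable snapshot (an illustrative sketch, not the Synapse module):

    import threading

    _lock = threading.Lock()
    _background_processes = {"purge_history": [object(), object()]}

    with _lock:
        snapshot = {
            name: list(procs)  # copy the inner lists too
            for name, procs in _background_processes.items()
        }

    # Mutations after the copy no longer affect what we iterate over.
    _background_processes["purge_history"].append(object())
    assert len(snapshot["purge_history"]) == 2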


@@ -24,9 +24,10 @@ from synapse.api.constants import EventTypes, Membership
 from synapse.api.errors import AuthError
 from synapse.handlers.presence import format_user_presence_state
 from synapse.metrics import LaterGauge
+from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.types import StreamToken
 from synapse.util.async_helpers import ObservableDeferred, timeout_deferred
-from synapse.util.logcontext import PreserveLoggingContext, run_in_background
+from synapse.util.logcontext import PreserveLoggingContext
 from synapse.util.logutils import log_function
 from synapse.util.metrics import Measure
 from synapse.visibility import filter_events_for_client
@@ -248,7 +249,10 @@ class Notifier(object):
     def _on_new_room_event(self, event, room_stream_id, extra_users=[]):
         """Notify any user streams that are interested in this room event"""
         # poke any interested application service.
-        run_in_background(self._notify_app_services, room_stream_id)
+        run_as_background_process(
+            "notify_app_services",
+            self._notify_app_services, room_stream_id,
+        )

         if self.federation_sender:
             self.federation_sender.notify_new_events(room_stream_id)
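
run_as_background_process differs from a bare run_in_background in that the work is tracked under a name (feeding the background-process metrics) and runs in its own log context. An illustrative stand-in only, assuming a much-simplified shape; the real Synapse version returns a Deferred and records resource usage:

    import logging

    logger = logging.getLogger(__name__)

    def run_as_background_process(desc, func, *args):
        # Simplified: tag the work with a name so it can be found in logs
        # and metrics, then run it.
        logger.info("Starting background process %s", desc)
        try:
            return func(*args)
        finally:
            logger.info("Background process %s finished", desc)

    run_as_background_process("notify_app_services", lambda stream_id: None, 42)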


@@ -33,31 +33,35 @@ logger = logging.getLogger(__name__)
 # [2] https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-dependencies
 REQUIREMENTS = {
     "jsonschema>=2.5.1": ["jsonschema>=2.5.1"],
-    "frozendict>=0.4": ["frozendict"],
+    "frozendict>=1": ["frozendict"],
     "unpaddedbase64>=1.1.0": ["unpaddedbase64>=1.1.0"],
     "canonicaljson>=1.1.3": ["canonicaljson>=1.1.3"],
     "signedjson>=1.0.0": ["signedjson>=1.0.0"],
     "pynacl>=1.2.1": ["nacl>=1.2.1", "nacl.bindings"],
-    "service_identity>=1.0.0": ["service_identity>=1.0.0"],
+    "service_identity>=16.0.0": ["service_identity>=16.0.0"],
     "Twisted>=17.1.0": ["twisted>=17.1.0"],
     "treq>=15.1": ["treq>=15.1"],

     # Twisted has required pyopenssl 16.0 since about Twisted 16.6.
     "pyopenssl>=16.0.0": ["OpenSSL>=16.0.0"],

-    "pyyaml": ["yaml"],
-    "pyasn1": ["pyasn1"],
-    "daemonize": ["daemonize"],
-    "bcrypt": ["bcrypt>=3.1.0"],
-    "pillow": ["PIL"],
-    "pydenticon": ["pydenticon"],
-    "sortedcontainers": ["sortedcontainers"],
-    "pysaml2>=3.0.0": ["saml2>=3.0.0"],
-    "pymacaroons-pynacl": ["pymacaroons"],
+    "pyyaml>=3.11": ["yaml"],
+    "pyasn1>=0.1.9": ["pyasn1"],
+    "pyasn1-modules>=0.0.7": ["pyasn1_modules"],
+    "daemonize>=2.3.1": ["daemonize"],
+    "bcrypt>=3.1.0": ["bcrypt>=3.1.0"],
+    "pillow>=3.1.2": ["PIL"],
+    "pydenticon>=0.2": ["pydenticon"],
+    "sortedcontainers>=1.4.4": ["sortedcontainers"],
+    "pysaml2>=3.0.0": ["saml2"],
+    "pymacaroons-pynacl>=0.9.3": ["pymacaroons"],
     "msgpack-python>=0.3.0": ["msgpack"],
     "phonenumbers>=8.2.0": ["phonenumbers"],
-    "six": ["six"],
-    "prometheus_client": ["prometheus_client"],
+    "six>=1.10": ["six"],
+
+    # prometheus_client 0.4.0 changed the format of counter metrics
+    # (cf https://github.com/matrix-org/synapse/issues/4001)
+    "prometheus_client>=0.0.18,<0.4.0": ["prometheus_client"],

     # we use attr.s(slots), which arrived in 16.0.0
     "attrs>=16.0.0": ["attr>=16.0.0"],
@@ -78,9 +82,6 @@ CONDITIONAL_REQUIREMENTS = {
     "psutil": {
         "psutil>=2.0.0": ["psutil>=2.0.0"],
     },
-    "affinity": {
-        "affinity": ["affinity"],
-    },
     "postgres": {
         "psycopg2>=2.6": ["psycopg2"]
     }
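
For context, each REQUIREMENTS key is a pip requirement string and each value lists the importable names used to verify the installation (some entries, such as "attr>=16.0.0", also embed a version constraint, which the real checker parses). A hedged sketch of the kind of import-based verification this mapping enables; this is not Synapse's actual checker:

    import importlib

    REQUIREMENTS = {
        "six>=1.10": ["six"],
        "pyyaml>=3.11": ["yaml"],
    }

    for requirement, modules in REQUIREMENTS.items():
        for module in modules:
            try:
                importlib.import_module(module)
            except ImportError:
                print("missing dependency: %s (install %r)" % (module, requirement))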


@@ -15,6 +15,8 @@

 import logging

+import six
+
 from synapse.storage._base import SQLBaseStore
 from synapse.storage.engines import PostgresEngine

@@ -23,6 +25,13 @@ from ._slaved_id_tracker import SlavedIdTracker
 logger = logging.getLogger(__name__)


+def __func__(inp):
+    if six.PY3:
+        return inp
+    else:
+        return inp.__func__
+
+
 class BaseSlavedStore(SQLBaseStore):
     def __init__(self, db_conn, hs):
         super(BaseSlavedStore, self).__init__(db_conn, hs)
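
Why the __func__ shim exists: on Python 2, pulling a method off a class yields an unbound method object whose underlying function lives at .__func__, while on Python 3 the very same attribute access already yields the plain function. A self-contained demonstration with an illustrative DataStore:

    import six

    class DataStore(object):
        def get_to_device_stream_token(self):
            return 7

    def __func__(inp):
        if six.PY3:
            return inp  # already a plain function on Python 3
        else:
            return inp.__func__  # unwrap the Python 2 unbound method

    class SlavedStore(object):
        get_to_device_stream_token = __func__(DataStore.get_to_device_stream_token)

    assert SlavedStore().get_to_device_stream_token() == 7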


@@ -17,7 +17,7 @@ from synapse.storage import DataStore
 from synapse.util.caches.expiringcache import ExpiringCache
 from synapse.util.caches.stream_change_cache import StreamChangeCache

-from ._base import BaseSlavedStore
+from ._base import BaseSlavedStore, __func__
 from ._slaved_id_tracker import SlavedIdTracker

@@ -43,11 +43,11 @@ class SlavedDeviceInboxStore(BaseSlavedStore):
             expiry_ms=30 * 60 * 1000,
         )

-    get_to_device_stream_token = DataStore.get_to_device_stream_token.__func__
-    get_new_messages_for_device = DataStore.get_new_messages_for_device.__func__
-    get_new_device_msgs_for_remote = DataStore.get_new_device_msgs_for_remote.__func__
-    delete_messages_for_device = DataStore.delete_messages_for_device.__func__
-    delete_device_msgs_for_remote = DataStore.delete_device_msgs_for_remote.__func__
+    get_to_device_stream_token = __func__(DataStore.get_to_device_stream_token)
+    get_new_messages_for_device = __func__(DataStore.get_new_messages_for_device)
+    get_new_device_msgs_for_remote = __func__(DataStore.get_new_device_msgs_for_remote)
+    delete_messages_for_device = __func__(DataStore.delete_messages_for_device)
+    delete_device_msgs_for_remote = __func__(DataStore.delete_device_msgs_for_remote)

     def stream_positions(self):
         result = super(SlavedDeviceInboxStore, self).stream_positions()


@@ -13,23 +13,14 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import six
-
 from synapse.storage import DataStore
 from synapse.storage.end_to_end_keys import EndToEndKeyStore
 from synapse.util.caches.stream_change_cache import StreamChangeCache

-from ._base import BaseSlavedStore
+from ._base import BaseSlavedStore, __func__
 from ._slaved_id_tracker import SlavedIdTracker


-def __func__(inp):
-    if six.PY3:
-        return inp
-    else:
-        return inp.__func__
-
-
 class SlavedDeviceStore(BaseSlavedStore):
     def __init__(self, db_conn, hs):
         super(SlavedDeviceStore, self).__init__(db_conn, hs)


@@ -16,7 +16,7 @@
 from synapse.storage import DataStore
 from synapse.util.caches.stream_change_cache import StreamChangeCache

-from ._base import BaseSlavedStore
+from ._base import BaseSlavedStore, __func__
 from ._slaved_id_tracker import SlavedIdTracker

@@ -33,9 +33,9 @@ class SlavedGroupServerStore(BaseSlavedStore):
             "_group_updates_stream_cache", self._group_updates_id_gen.get_current_token(),
         )

-    get_groups_changes_for_user = DataStore.get_groups_changes_for_user.__func__
-    get_group_stream_token = DataStore.get_group_stream_token.__func__
-    get_all_groups_for_user = DataStore.get_all_groups_for_user.__func__
+    get_groups_changes_for_user = __func__(DataStore.get_groups_changes_for_user)
+    get_group_stream_token = __func__(DataStore.get_group_stream_token)
+    get_all_groups_for_user = __func__(DataStore.get_all_groups_for_user)

     def stream_positions(self):
         result = super(SlavedGroupServerStore, self).stream_positions()


@@ -16,7 +16,7 @@
 from synapse.storage import DataStore
 from synapse.storage.keys import KeyStore

-from ._base import BaseSlavedStore
+from ._base import BaseSlavedStore, __func__


 class SlavedKeyStore(BaseSlavedStore):
@@ -24,11 +24,11 @@ class SlavedKeyStore(BaseSlavedStore):
         "_get_server_verify_key"
     ]

-    get_server_verify_keys = DataStore.get_server_verify_keys.__func__
-    store_server_verify_key = DataStore.store_server_verify_key.__func__
-    get_server_certificate = DataStore.get_server_certificate.__func__
-    store_server_certificate = DataStore.store_server_certificate.__func__
-    get_server_keys_json = DataStore.get_server_keys_json.__func__
-    store_server_keys_json = DataStore.store_server_keys_json.__func__
+    get_server_verify_keys = __func__(DataStore.get_server_verify_keys)
+    store_server_verify_key = __func__(DataStore.store_server_verify_key)
+    get_server_certificate = __func__(DataStore.get_server_certificate)
+    store_server_certificate = __func__(DataStore.store_server_certificate)
+    get_server_keys_json = __func__(DataStore.get_server_keys_json)
+    store_server_keys_json = __func__(DataStore.store_server_keys_json)


@@ -17,7 +17,7 @@ from synapse.storage import DataStore
 from synapse.storage.presence import PresenceStore
 from synapse.util.caches.stream_change_cache import StreamChangeCache

-from ._base import BaseSlavedStore
+from ._base import BaseSlavedStore, __func__
 from ._slaved_id_tracker import SlavedIdTracker

@@ -34,8 +34,8 @@ class SlavedPresenceStore(BaseSlavedStore):
         "PresenceStreamChangeCache", self._presence_id_gen.get_current_token()
     )

-    _get_active_presence = DataStore._get_active_presence.__func__
-    take_presence_startup_info = DataStore.take_presence_startup_info.__func__
+    _get_active_presence = __func__(DataStore._get_active_presence)
+    take_presence_startup_info = __func__(DataStore.take_presence_startup_info)
     _get_presence_for_user = PresenceStore.__dict__["_get_presence_for_user"]
     get_presence_for_users = PresenceStore.__dict__["get_presence_for_users"]


@@ -65,10 +65,15 @@ def resolve_events_with_factory(state_sets, event_map, state_map_factory):
         for event_ids in itervalues(conflicted_state)
         for event_id in event_ids
     )
+    needed_event_count = len(needed_events)
     if event_map is not None:
         needed_events -= set(iterkeys(event_map))

-    logger.info("Asking for %d conflicted events", len(needed_events))
+    logger.info(
+        "Asking for %d/%d conflicted events",
+        len(needed_events),
+        needed_event_count,
+    )

     # dict[str, FrozenEvent]: a map from state event id to event. Only includes
     # the state events which are in conflict (and those in event_map)
@@ -85,11 +90,16 @@ def resolve_events_with_factory(state_sets, event_map, state_map_factory):
     )

     new_needed_events = set(itervalues(auth_events))
+    new_needed_event_count = len(new_needed_events)
     new_needed_events -= needed_events
     if event_map is not None:
         new_needed_events -= set(iterkeys(event_map))

-    logger.info("Asking for %d auth events", len(new_needed_events))
+    logger.info(
+        "Asking for %d/%d auth events",
+        len(new_needed_events),
+        new_needed_event_count,
+    )

     state_map_new = yield state_map_factory(new_needed_events)
     state_map.update(state_map_new)

Some files were not shown because too many files have changed in this diff.