Merge branch 'develop' into matrix-org-hotfixes

Richard van der Hoff 2021-06-29 11:27:25 +01:00
commit 077d441d42
55 changed files with 3347 additions and 1858 deletions


@ -23,7 +23,12 @@ jobs:
mdbook-version: '0.4.9'
- name: Build the documentation
run: mdbook build
# mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.
# However, we're using docs/README.md for other purposes and need to pick a new page
# as the default. Let's opt for the welcome page instead.
run: |
mdbook build
cp book/welcome_and_overview.html book/index.html
# Deploy to the latest documentation directories
- name: Deploy latest documentation


@ -1,11 +1,13 @@
Synapse 1.37.0rc1 (2021-06-23)
Synapse 1.37.0 (2021-06-29)
===========================
This release deprecates the current spam checker interface. See the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#deprecation-of-the-current-spam-checker-interface) for more information on how to update to the new generic module interface.
This release also removes support for fetching and renewing TLS certificates using the ACME v1 protocol, which has been fully decommissioned by Let's Encrypt on June 1st 2021. Admins previously using this feature should use a [reverse proxy](https://matrix-org.github.io/synapse/develop/reverse_proxy.html) to handle TLS termination, or use an external ACME client (such as [certbot](https://certbot.eff.org/)) to retrieve a certificate and key and provide them to Synapse using the `tls_certificate_path` and `tls_private_key_path` configuration settings.
Synapse 1.37.0rc1 (2021-06-24)
==============================
This release deprecates the current spam checker interface. See the [upgrade notes](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#deprecation-of-the-current-spam-checker-interface) for more information on how to update to the new generic module interface.
This release also removes support for fetching and renewing TLS certificates using the ACME v1 protocol, which has been fully decommissioned by Let's Encrypt on June 1st 2021. Admins previously using this feature should use a [reverse proxy](https://matrix-org.github.io/synapse/develop/reverse_proxy.html) to handle TLS termination, or use an external ACME client (such as [certbot](https://certbot.eff.org/)) to retrieve a certificate and key and provide them to Synapse using the `tls_certificate_path` and `tls_private_key_path` configuration settings.
Features
--------
@ -41,7 +43,7 @@ Improved Documentation
Deprecations and Removals
-------------------------
- The current spam checker interface is deprecated in favour of a new generic modules system. See the [upgrade notes](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#deprecation-of-the-current-spam-checker-interface) for more information on how to update to the new system. ([\#10062](https://github.com/matrix-org/synapse/issues/10062), [\#10210](https://github.com/matrix-org/synapse/issues/10210), [\#10238](https://github.com/matrix-org/synapse/issues/10238))
- The current spam checker interface is deprecated in favour of a new generic modules system. See the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#deprecation-of-the-current-spam-checker-interface) for more information on how to update to the new system. ([\#10062](https://github.com/matrix-org/synapse/issues/10062), [\#10210](https://github.com/matrix-org/synapse/issues/10210), [\#10238](https://github.com/matrix-org/synapse/issues/10238))
- Stop supporting the unstable spaces prefixes from MSC1772. ([\#10161](https://github.com/matrix-org/synapse/issues/10161))
- Remove Synapse's support for automatically fetching and renewing certificates using the ACME v1 protocol. This protocol has been fully turned off by Let's Encrypt for existing installations on June 1st 2021. Admins previously using this feature should use a [reverse proxy](https://matrix-org.github.io/synapse/develop/reverse_proxy.html) to handle TLS termination, or use an external ACME client (such as [certbot](https://certbot.eff.org/)) to retrieve a certificate and key and provide them to Synapse using the `tls_certificate_path` and `tls_private_key_path` configuration settings. ([\#10194](https://github.com/matrix-org/synapse/issues/10194))
@ -756,7 +758,7 @@ Internal Changes
Synapse 1.29.0 (2021-03-08)
===========================
Note that synapse now expects an `X-Forwarded-Proto` header when used with a reverse proxy. Please see [UPGRADE.rst](UPGRADE.rst#upgrading-to-v1290) for more details on this change.
Note that synapse now expects an `X-Forwarded-Proto` header when used with a reverse proxy. Please see the [upgrade notes](docs/upgrade.md#upgrading-to-v1290) for more details on this change.
No significant changes.
@ -821,7 +823,7 @@ Synapse 1.28.0 (2021-02-25)
Note that this release drops support for ARMv7 in the official Docker images, due to repeated problems building for ARMv7 (and the associated maintenance burden this entails).
This release also fixes the documentation included in v1.27.0 around the callback URI for SAML2 identity providers. If your server is configured to use single sign-on via a SAML2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
This release also fixes the documentation included in v1.27.0 around the callback URI for SAML2 identity providers. If your server is configured to use single sign-on via a SAML2 IdP, you may need to make configuration changes. Please review the [upgrade notes](docs/upgrade.md) for more details on these changes.
Internal Changes
@ -920,9 +922,9 @@ Synapse 1.27.0 (2021-02-16)
Note that this release includes a change in Synapse to use Redis as a cache ─ as well as a pub/sub mechanism ─ if Redis support is enabled for workers. No action is needed by server administrators, and we do not expect resource usage of the Redis instance to change dramatically.
This release also changes the callback URI for OpenID Connect (OIDC) and SAML2 identity providers. If your server is configured to use single sign-on via an OIDC/OAuth2 or SAML2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
This release also changes the callback URI for OpenID Connect (OIDC) and SAML2 identity providers. If your server is configured to use single sign-on via an OIDC/OAuth2 or SAML2 IdP, you may need to make configuration changes. Please review the [upgrade notes](docs/upgrade.md) for more details on these changes.
This release also changes escaping of variables in the HTML templates for SSO or email notifications. If you have customised these templates, please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
This release also changes escaping of variables in the HTML templates for SSO or email notifications. If you have customised these templates, please review the [upgrade notes](docs/upgrade.md) for more details on these changes.
Bugfixes
@ -1026,7 +1028,7 @@ Synapse 1.26.0 (2021-01-27)
===========================
This release brings a new schema version for Synapse and rolling back to a previous
version is not trivial. Please review [UPGRADE.rst](UPGRADE.rst) for more details
version is not trivial. Please review the [upgrade notes](docs/upgrade.md) for more details
on these changes and for general upgrade guidance.
No significant changes since 1.26.0rc2.
@ -1053,7 +1055,7 @@ Synapse 1.26.0rc1 (2021-01-20)
==============================
This release brings a new schema version for Synapse and rolling back to a previous
version is not trivial. Please review [UPGRADE.rst](UPGRADE.rst) for more details
version is not trivial. Please review the [upgrade notes](docs/upgrade.md) for more details
on these changes and for general upgrade guidance.
Features
@ -1459,7 +1461,7 @@ Internal Changes
Synapse 1.23.0 (2020-11-18)
===========================
This release changes the way structured logging is configured. See the [upgrade notes](UPGRADE.rst#upgrading-to-v1230) for details.
This release changes the way structured logging is configured. See the [upgrade notes](docs/upgrade.md#upgrading-to-v1230) for details.
**Note**: We are aware of a trivially exploitable denial of service vulnerability in versions of Synapse prior to 1.20.0. Complete details will be disclosed on Monday, November 23rd. If you have not upgraded recently, please do so.
@ -2062,7 +2064,10 @@ No significant changes since 1.19.0rc1.
Removal warning
---------------
As outlined in the [previous release](https://github.com/matrix-org/synapse/releases/tag/v1.18.0), we are no longer publishing Docker images with the `-py3` tag suffix. On top of that, we have also removed the `latest-py3` tag. Please see [the announcement in the upgrade notes for 1.18.0](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#upgrading-to-v1180).
As outlined in the [previous release](https://github.com/matrix-org/synapse/releases/tag/v1.18.0),
we are no longer publishing Docker images with the `-py3` tag suffix. On top of that, we have also removed the
`latest-py3` tag. Please see
[the announcement in the upgrade notes for 1.18.0](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#upgrading-to-v1180).
Synapse 1.19.0rc1 (2020-08-13)
@ -2093,7 +2098,7 @@ Bugfixes
Updates to the Docker image
---------------------------
- We no longer publish Docker images with the `-py3` tag suffix, as [announced in the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#upgrading-to-v1180). ([\#8056](https://github.com/matrix-org/synapse/issues/8056))
- We no longer publish Docker images with the `-py3` tag suffix, as [announced in the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#upgrading-to-v1180). ([\#8056](https://github.com/matrix-org/synapse/issues/8056))
Improved Documentation
@ -2651,7 +2656,7 @@ configurations of Synapse:
to be incomplete or empty if Synapse was upgraded directly from v1.2.1 or
earlier, to versions between v1.4.0 and v1.12.x.
Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes
Please review the [upgrade notes](docs/upgrade.md) for more details on these changes
and for general upgrade guidance.
@ -2752,7 +2757,7 @@ Bugfixes
- Fix bad error handling that would cause Synapse to crash if it's provided with a YAML configuration file that's either empty or doesn't parse into a key-value map. ([\#7341](https://github.com/matrix-org/synapse/issues/7341))
- Fix incorrect metrics reporting for `renew_attestations` background task. ([\#7344](https://github.com/matrix-org/synapse/issues/7344))
- Prevent non-federating rooms from appearing in responses to federated `POST /publicRoom` requests when a filter was included. ([\#7367](https://github.com/matrix-org/synapse/issues/7367))
- Fix a bug which would cause the room directory to be incorrectly populated if Synapse was upgraded directly from v1.2.1 or earlier to v1.4.0 or later. Note that this fix does not apply retrospectively; see the [upgrade notes](UPGRADE.rst#upgrading-to-v1130) for more information. ([\#7387](https://github.com/matrix-org/synapse/issues/7387))
- Fix a bug which would cause the room directory to be incorrectly populated if Synapse was upgraded directly from v1.2.1 or earlier to v1.4.0 or later. Note that this fix does not apply retrospectively; see the [upgrade notes](docs/upgrade.md#upgrading-to-v1130) for more information. ([\#7387](https://github.com/matrix-org/synapse/issues/7387))
- Fix bug in `EventContext.deserialize`. ([\#7393](https://github.com/matrix-org/synapse/issues/7393))
@ -2902,7 +2907,7 @@ Synapse 1.12.0 includes a database update which is run as part of the upgrade,
and which may take some time (several hours in the case of a large
server). Synapse will not respond to HTTP requests while this update is taking
place. For information on seeing if you are affected, and a workaround if you
are, see the [upgrade notes](UPGRADE.rst#upgrading-to-v1120).
are, see the [upgrade notes](docs/upgrade.md#upgrading-to-v1120).
Security advisory
-----------------
@ -3455,7 +3460,7 @@ Bugfixes
Synapse 1.7.0 (2019-12-13)
==========================
This release changes the default settings so that only local authenticated users can query the server's room directory. See the [upgrade notes](UPGRADE.rst#upgrading-to-v170) for details.
This release changes the default settings so that only local authenticated users can query the server's room directory. See the [upgrade notes](docs/upgrade.md#upgrading-to-v170) for details.
Support for SQLite versions before 3.11 is now deprecated. A future release will refuse to start if used with an SQLite version before 3.11.
@ -3819,7 +3824,7 @@ Synapse 1.4.0rc1 (2019-09-26)
=============================
Note that this release includes significant changes around 3pid
verification. Administrators are reminded to review the [upgrade notes](UPGRADE.rst#upgrading-to-v140).
verification. Administrators are reminded to review the [upgrade notes](docs/upgrade.md#upgrading-to-v140).
Features
--------
@ -4195,7 +4200,7 @@ Synapse 1.1.0 (2019-07-04)
==========================
As of v1.1.0, Synapse no longer supports Python 2, nor Postgres version 9.4.
See the [upgrade notes](UPGRADE.rst#upgrading-to-v110) for more details.
See the [upgrade notes](docs/upgrade.md#upgrading-to-v110) for more details.
This release also deprecates the use of environment variables to configure the
docker image. See the [docker README](https://github.com/matrix-org/synapse/blob/release-v1.1.0/docker/README.md#legacy-dynamic-configuration-file-support)
@ -4225,7 +4230,7 @@ Synapse 1.1.0rc1 (2019-07-02)
=============================
As of v1.1.0, Synapse no longer supports Python 2, nor Postgres version 9.4.
See the [upgrade notes](UPGRADE.rst#upgrading-to-v110) for more details.
See the [upgrade notes](docs/upgrade.md#upgrading-to-v110) for more details.
Features
--------
@ -4997,7 +5002,7 @@ run on Python versions 3.5 or 3.6 (as well as 2.7). Support for Python 3.7
remains experimental.
We recommend upgrading to Python 3, but make sure to read the [upgrade
notes](UPGRADE.rst#upgrading-to-v0340) when doing so.
notes](docs/upgrade.md#upgrading-to-v0340) when doing so.
Features
--------


@ -25,7 +25,7 @@ The overall architecture is::
``#matrix:matrix.org`` is the official support room for Matrix, and can be
accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html or
via IRC bridge at irc://irc.freenode.net/matrix.
via IRC bridge at irc://irc.libera.chat/matrix.
Synapse is currently in rapid development, but as of version 0.5 we believe it
is sufficiently stable to be run as an internet-facing service for real usage!
@ -186,11 +186,11 @@ impact to other applications will be minimal.
Upgrading an existing Synapse
=============================
The instructions for upgrading synapse are in `UPGRADE.rst`_.
The instructions for upgrading synapse are in `the upgrade notes`_.
Please check these instructions as upgrading may require extra steps for some
versions of synapse.
.. _UPGRADE.rst: UPGRADE.rst
.. _the upgrade notes: https://matrix-org.github.io/synapse/develop/upgrade.html
.. _reverse-proxy:

File diff suppressed because it is too large.

changelog.d/10114.misc (new file)

@ -0,0 +1 @@
Drop Origin and Accept from the value of the Access-Control-Allow-Headers response header.

changelog.d/10166.doc (new file)

@ -0,0 +1 @@
Move the upgrade notes to [docs/upgrade.md](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md) and convert them to markdown.

changelog.d/10213.misc (new file)

@ -0,0 +1 @@
Add type hints to the federation servlets.


@ -0,0 +1 @@
Omit empty fields from the `/sync` response. Contributed by @deepbluev7.

changelog.d/10223.bugfix (new file)

@ -0,0 +1 @@
Fix a long-standing bug which meant that invite rejections and knocks were not sent out over federation in a timely manner.


@ -0,0 +1 @@
Improve validation on federation `send_{join,leave,knock}` endpoints.

changelog.d/10237.misc (new file)

@ -0,0 +1 @@
Improve the reliability of auto-joining remote rooms.

changelog.d/10239.misc (new file)

@ -0,0 +1 @@
Update the release script to use the semver terminology and determine the release branch based on the next version.

changelog.d/10242.doc (new file)

@ -0,0 +1 @@
Choose Welcome & Overview as the default page for the Synapse documentation website.


@ -0,0 +1 @@
Improve validation on federation `send_{join,leave,knock}` endpoints.

changelog.d/10258.doc (new file)

@ -0,0 +1 @@
Adjust the URL in the README.rst file to point to irc.libera.chat.


@ -0,0 +1 @@
Mark events received over federation which fail a spam check as "soft-failed".

changelog.d/10264.bugfix (new file)

@ -0,0 +1 @@
Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server.

changelog.d/9450.feature (new file)

@ -0,0 +1 @@
Implement refresh tokens as specified by [MSC2918](https://github.com/matrix-org/matrix-doc/pull/2918).

debian/changelog

@ -1,3 +1,9 @@
matrix-synapse-py3 (1.37.0) stable; urgency=medium
* New synapse release 1.37.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 29 Jun 2021 10:15:25 +0100
matrix-synapse-py3 (1.36.0) stable; urgency=medium
* New synapse release 1.36.0.


@ -11,7 +11,7 @@
- [Delegation](delegate.md)
# Upgrading
- [Upgrading between Synapse Versions](upgrading/README.md)
- [Upgrading between Synapse Versions](upgrade.md)
- [Upgrading from pre-Synapse 1.0](MSC1711_certificates_FAQ.md)
# Usage

docs/upgrade.md (new file, 1353 lines)

File diff suppressed because it is too large.


@ -1,7 +0,0 @@
<!--
Include the contents of UPGRADE.rst from the project root without moving it, which may
break links around the internet. Additionally, note that SUMMARY.md is unable to
directly link to content outside of the docs/ directory. So we use this file as a
redirection.
-->
{{#include ../../UPGRADE.rst}}


@ -65,4 +65,4 @@ if [[ -n "$1" ]]; then
fi
# Run the tests!
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2716 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2716,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests


@ -83,12 +83,6 @@ def run():
if current_version.pre:
# If the current version is an RC we don't need to bump any of the
# version numbers (other than the RC number).
base_version = "{}.{}.{}".format(
current_version.major,
current_version.minor,
current_version.micro,
)
if rc:
new_version = "{}.{}.{}rc{}".format(
current_version.major,
@ -97,49 +91,57 @@ def run():
current_version.pre[1] + 1,
)
else:
new_version = base_version
new_version = "{}.{}.{}".format(
current_version.major,
current_version.minor,
current_version.micro,
)
else:
# If this is a new release cycle then we need to know if it's a major
# version bump or a hotfix.
# If this is a new release cycle then we need to know if it's a minor
# or a patch version bump.
release_type = click.prompt(
"Release type",
type=click.Choice(("major", "hotfix")),
type=click.Choice(("minor", "patch")),
show_choices=True,
default="major",
default="minor",
)
if release_type == "major":
base_version = new_version = "{}.{}.{}".format(
current_version.major,
current_version.minor + 1,
0,
)
if release_type == "minor":
if rc:
new_version = "{}.{}.{}rc1".format(
current_version.major,
current_version.minor + 1,
0,
)
else:
new_version = "{}.{}.{}".format(
current_version.major,
current_version.minor + 1,
0,
)
else:
base_version = new_version = "{}.{}.{}".format(
current_version.major,
current_version.minor,
current_version.micro + 1,
)
if rc:
new_version = "{}.{}.{}rc1".format(
current_version.major,
current_version.minor,
current_version.micro + 1,
)
else:
new_version = "{}.{}.{}".format(
current_version.major,
current_version.minor,
current_version.micro + 1,
)
# Confirm the calculated version is OK.
if not click.confirm(f"Create new version: {new_version}?", default=True):
click.get_current_context().abort()
# Switch to the release branch.
release_branch_name = f"release-v{current_version.major}.{current_version.minor}"
parsed_new_version = version.parse(new_version)
release_branch_name = (
f"release-v{parsed_new_version.major}.{parsed_new_version.minor}"
)
release_branch = find_ref(repo, release_branch_name)
if release_branch:
if release_branch.is_remote():
@ -153,7 +155,7 @@ def run():
# release type.
if current_version.is_prerelease:
default = release_branch_name
elif release_type == "major":
elif release_type == "minor":
default = "develop"
else:
default = "master"


@ -93,6 +93,7 @@ BOOLEAN_COLUMNS = {
"local_media_repository": ["safe_from_quarantine"],
"users": ["shadow_banned"],
"e2e_fallback_keys_json": ["used"],
"access_tokens": ["used"],
}
@ -307,7 +308,8 @@ class Porter(object):
information_schema.table_constraints AS tc
INNER JOIN information_schema.constraint_column_usage AS ccu
USING (table_schema, constraint_name)
WHERE tc.constraint_type = 'FOREIGN KEY';
WHERE tc.constraint_type = 'FOREIGN KEY'
AND tc.table_name != ccu.table_name;
"""
txn.execute(sql)
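The added `tc.table_name != ccu.table_name` condition drops self-referential foreign keys. Assuming the constraint list feeds a dependency ordering of tables (a reading of the surrounding code, not shown in this hunk), a self-reference would otherwise create a one-node cycle. A toy sketch with invented table names, standard library only:

```python
# Toy illustration: order tables so that referenced tables come first.
# A self-referential FK (a table referencing itself) would raise
# graphlib.CycleError, hence filtering such rows out in the SQL above.
from graphlib import TopologicalSorter

# hypothetical (table -> tables it references) map built from the query
fk_edges = {
    "event_edges": {"events"},
    "events": set(),  # a self-edge here would make ordering impossible
}
print(list(TopologicalSorter(fk_edges).static_order()))
# -> ['events', 'event_edges']
```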


@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.37.0rc1"
__version__ = "1.37.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@ -245,6 +245,11 @@ class Auth:
errcode=Codes.GUEST_ACCESS_FORBIDDEN,
)
# Mark the token as used. This is used to invalidate old refresh
# tokens after some time.
if not user_info.token_used and token_id is not None:
await self.store.mark_access_token_as_used(token_id)
requester = create_requester(
user_info.user_id,
token_id,


@ -119,6 +119,27 @@ class RegistrationConfig(Config):
session_lifetime = self.parse_duration(session_lifetime)
self.session_lifetime = session_lifetime
# The `access_token_lifetime` applies for tokens that can be renewed
# using a refresh token, as per MSC2918. If it is `None`, the refresh
# token mechanism is disabled.
#
# Since it is incompatible with the `session_lifetime` mechanism, it is set to
# `None` by default if a `session_lifetime` is set.
access_token_lifetime = config.get(
"access_token_lifetime", "5m" if session_lifetime is None else None
)
if access_token_lifetime is not None:
access_token_lifetime = self.parse_duration(access_token_lifetime)
self.access_token_lifetime = access_token_lifetime
if session_lifetime is not None and access_token_lifetime is not None:
raise ConfigError(
"The refresh token mechanism is incompatible with the "
"`session_lifetime` option. Consider disabling the "
"`session_lifetime` option or disabling the refresh token "
"mechanism by removing the `access_token_lifetime` option."
)
# The success template used during fallback auth.
self.fallback_success_template = self.read_template("auth_success.html")
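For reference, `parse_duration` converts unit-suffixed strings to milliseconds, so the `"5m"` default above yields five-minute access tokens. A simplified stand-in for Synapse's parser (the real one lives in `synapse.config._base` and accepts more forms):

```python
# Simplified model of Config.parse_duration: "5m" -> 300_000 ms.
UNITS_MS = {
    "s": 1_000,
    "m": 60 * 1_000,
    "h": 60 * 60 * 1_000,
    "d": 24 * 60 * 60 * 1_000,
    "w": 7 * 24 * 60 * 60 * 1_000,
    "y": 365 * 24 * 60 * 60 * 1_000,
}

def parse_duration(value) -> int:
    if isinstance(value, int):
        return value  # already milliseconds
    return int(value[:-1]) * UNITS_MS[value[-1]]

assert parse_duration("5m") == 300_000  # the default access token lifetime
```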


@ -89,12 +89,12 @@ class FederationBase:
result = await self.spam_checker.check_event_for_spam(pdu)
if result:
logger.warning(
"Event contains spam, redacting %s: %s",
pdu.event_id,
pdu.get_pdu_json(),
)
return prune_event(pdu)
logger.warning("Event contains spam, soft-failing %s", pdu.event_id)
# we redact (to save disk space) as well as soft-failing (to stop
# using the event in prev_events).
redacted_event = prune_event(pdu)
redacted_event.internal_metadata.soft_failed = True
return redacted_event
return pdu
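The behavioural change is small but subtle: spammy federated events were already redacted, and are now additionally soft-failed so they stop being used as `prev_events`. A toy model with stand-in classes (not Synapse's real `EventBase`):

```python
# Toy model of the redact-and-soft-fail behaviour above: the event keeps its
# id but loses its content, and is flagged so nothing builds on top of it.
from dataclasses import dataclass, field

@dataclass
class InternalMetadata:
    soft_failed: bool = False

@dataclass
class Event:
    event_id: str
    content: dict
    internal_metadata: InternalMetadata = field(default_factory=InternalMetadata)

def prune_event(ev: Event) -> Event:
    # keep only the skeleton of the event, as redaction would
    return Event(ev.event_id, {}, InternalMetadata())

def handle_spam(ev: Event) -> Event:
    redacted = prune_event(ev)
    redacted.internal_metadata.soft_failed = True
    return redacted

spam = Event("$abc", {"body": "buy now"})
out = handle_spam(spam)
assert out.content == {} and out.internal_metadata.soft_failed
```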


@ -34,7 +34,7 @@ from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from twisted.python import failure
from synapse.api.constants import EduTypes, EventTypes
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.errors import (
AuthError,
Codes,
@ -46,6 +46,7 @@ from synapse.api.errors import (
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
@ -537,26 +538,21 @@ class FederationServer(FederationBase):
return {"event": ret_pdu.get_pdu_json(time_now)}
async def on_send_join_request(
self, origin: str, content: JsonDict
self, origin: str, content: JsonDict, room_id: str
) -> Dict[str, Any]:
logger.debug("on_send_join_request: content: %s", content)
context = await self._on_send_membership_event(
origin, content, Membership.JOIN, room_id
)
assert_params_in_dict(content, ["room_id"])
room_version = await self.store.get_room_version(content["room_id"])
pdu = event_from_pdu_json(content, room_version)
prev_state_ids = await context.get_prev_state_ids()
state_ids = list(prev_state_ids.values())
auth_chain = await self.store.get_auth_chain(room_id, state_ids)
state = await self.store.get_events(state_ids)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version, pdu)
res_pdus = await self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
return {
"state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
"auth_chain": [p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]],
"state": [p.get_pdu_json(time_now) for p in state.values()],
"auth_chain": [p.get_pdu_json(time_now) for p in auth_chain],
}
async def on_make_leave_request(
@ -571,21 +567,11 @@ class FederationServer(FederationBase):
time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
async def on_send_leave_request(self, origin: str, content: JsonDict) -> dict:
async def on_send_leave_request(
self, origin: str, content: JsonDict, room_id: str
) -> dict:
logger.debug("on_send_leave_request: content: %s", content)
assert_params_in_dict(content, ["room_id"])
room_version = await self.store.get_room_version(content["room_id"])
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version, pdu)
await self.handler.on_send_leave_request(origin, pdu)
await self._on_send_membership_event(origin, content, Membership.LEAVE, room_id)
return {}
async def on_make_knock_request(
@ -651,29 +637,9 @@ class FederationServer(FederationBase):
Returns:
The stripped room state.
"""
logger.debug("on_send_knock_request: content: %s", content)
room_version = await self.store.get_room_version(room_id)
# Check that this room supports knocking as defined by its room version
if not room_version.msc2403_knocking:
raise SynapseError(
403,
"This room version does not support knocking",
errcode=Codes.FORBIDDEN,
)
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_knock_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version, pdu)
# Handle the event, and retrieve the EventContext
event_context = await self.handler.on_send_knock_request(origin, pdu)
event_context = await self._on_send_membership_event(
origin, content, Membership.KNOCK, room_id
)
# Retrieve stripped state events from the room and send them back to the remote
# server. This will allow the remote server's clients to display information
@ -685,6 +651,63 @@ class FederationServer(FederationBase):
)
return {"knock_state_events": stripped_room_state}
async def _on_send_membership_event(
self, origin: str, content: JsonDict, membership_type: str, room_id: str
) -> EventContext:
"""Handle an on_send_{join,leave,knock} request
Does some preliminary validation before passing the request on to the
federation handler.
Args:
origin: The (authenticated) requesting server
content: The body of the send_* request - a complete membership event
membership_type: The expected membership type (join, leave or knock, depending
on the endpoint)
room_id: The room_id from the request, to be validated against the room_id
in the event
Returns:
The context of the event after inserting it into the room graph.
Raises:
SynapseError if there is a problem with the request, including things like
the room_id not matching or the event not being authorized.
"""
assert_params_in_dict(content, ["room_id"])
if content["room_id"] != room_id:
raise SynapseError(
400,
"Room ID in body does not match that in request path",
Codes.BAD_JSON,
)
room_version = await self.store.get_room_version(room_id)
if membership_type == Membership.KNOCK and not room_version.msc2403_knocking:
raise SynapseError(
403,
"This room version does not support knocking",
errcode=Codes.FORBIDDEN,
)
event = event_from_pdu_json(content, room_version)
if event.type != EventTypes.Member or not event.is_state():
raise SynapseError(400, "Not an m.room.member event", Codes.BAD_JSON)
if event.content.get("membership") != membership_type:
raise SynapseError(400, "Not a %s event" % membership_type, Codes.BAD_JSON)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, event.room_id)
logger.debug("_on_send_membership_event: pdu sigs: %s", event.signatures)
event = await self._check_sigs_and_hash(room_version, event)
return await self.handler.on_send_membership_event(origin, event)
async def on_event_auth(
self, origin: str, room_id: str, event_id: str
) -> Tuple[int, Dict[str, Any]]:
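Pulled out of context, the shared validation sequence that `_on_send_membership_event` now applies to all three endpoints looks roughly like this (a toy sketch over plain dicts; Synapse raises `SynapseError` with Matrix error codes instead):

```python
# Toy distillation of the preliminary checks, in the order they run above.
def validate_send_membership(content: dict, path_room_id: str,
                             membership_type: str,
                             supports_knocking: bool) -> None:
    if content.get("room_id") != path_room_id:
        raise ValueError("Room ID in body does not match that in request path")
    if membership_type == "knock" and not supports_knocking:
        raise PermissionError("This room version does not support knocking")
    if content.get("type") != "m.room.member":
        raise ValueError("Not an m.room.member event")
    if content.get("content", {}).get("membership") != membership_type:
        raise ValueError(f"Not a {membership_type} event")

validate_send_membership(
    {"room_id": "!r:a", "type": "m.room.member", "content": {"membership": "join"}},
    "!r:a", "join", supports_knocking=False,
)
```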

File diff suppressed because it is too large.


@ -30,6 +30,7 @@ from typing import (
Optional,
Tuple,
Union,
cast,
)
import attr
@ -72,6 +73,7 @@ from synapse.util.stringutils import base62_encode
from synapse.util.threepids import canonicalise_email
if TYPE_CHECKING:
from synapse.rest.client.v1.login import LoginResponse
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@ -777,6 +779,108 @@ class AuthHandler(BaseHandler):
"params": params,
}
async def refresh_token(
self,
refresh_token: str,
valid_until_ms: Optional[int],
) -> Tuple[str, str]:
"""
Consumes a refresh token and generates both a new access token and a new refresh token from it.
The consumed refresh token is considered invalid after the first use of the new access token or the new refresh token.
Args:
refresh_token: The token to consume.
valid_until_ms: The expiration timestamp of the new access token.
Returns:
A tuple containing the new access token and refresh token
"""
# Verify the token signature first before looking up the token
if not self._verify_refresh_token(refresh_token):
raise SynapseError(401, "invalid refresh token", Codes.UNKNOWN_TOKEN)
existing_token = await self.store.lookup_refresh_token(refresh_token)
if existing_token is None:
raise SynapseError(401, "refresh token does not exist", Codes.UNKNOWN_TOKEN)
if (
existing_token.has_next_access_token_been_used
or existing_token.has_next_refresh_token_been_refreshed
):
raise SynapseError(
403, "refresh token isn't valid anymore", Codes.FORBIDDEN
)
(
new_refresh_token,
new_refresh_token_id,
) = await self.get_refresh_token_for_user_id(
user_id=existing_token.user_id, device_id=existing_token.device_id
)
access_token = await self.get_access_token_for_user_id(
user_id=existing_token.user_id,
device_id=existing_token.device_id,
valid_until_ms=valid_until_ms,
refresh_token_id=new_refresh_token_id,
)
await self.store.replace_refresh_token(
existing_token.token_id, new_refresh_token_id
)
return access_token, new_refresh_token
def _verify_refresh_token(self, token: str) -> bool:
"""
Verifies the shape of a refresh token.
Args:
token: The refresh token to verify
Returns:
Whether the token has the right shape
"""
parts = token.split("_", maxsplit=4)
if len(parts) != 4:
return False
type, localpart, rand, crc = parts
# Refresh tokens are prefixed by "syr_", let's check that
if type != "syr":
return False
# Check the CRC
base = f"{type}_{localpart}_{rand}"
expected_crc = base62_encode(crc32(base.encode("ascii")), minwidth=6)
if crc != expected_crc:
return False
return True
async def get_refresh_token_for_user_id(
self,
user_id: str,
device_id: str,
) -> Tuple[str, int]:
"""
Creates a new refresh token for the user with the given user ID.
Args:
user_id: canonical user ID
device_id: the device ID to associate with the token.
Returns:
The newly created refresh token and its ID in the database
"""
refresh_token = self.generate_refresh_token(UserID.from_string(user_id))
refresh_token_id = await self.store.add_refresh_token_to_user(
user_id=user_id,
token=refresh_token,
device_id=device_id,
)
return refresh_token, refresh_token_id
async def get_access_token_for_user_id(
self,
user_id: str,
@ -784,6 +888,7 @@ class AuthHandler(BaseHandler):
valid_until_ms: Optional[int],
puppets_user_id: Optional[str] = None,
is_appservice_ghost: bool = False,
refresh_token_id: Optional[int] = None,
) -> str:
"""
Creates a new access token for the user with the given user ID.
@ -801,6 +906,8 @@ class AuthHandler(BaseHandler):
valid_until_ms: when the token is valid until. None for
no expiry.
is_appservice_ghost: Whether the user is an application ghost user
refresh_token_id: the refresh token ID that will be associated with
this access token.
Returns:
The access token for the user's session.
Raises:
@ -836,6 +943,7 @@ class AuthHandler(BaseHandler):
device_id=device_id,
valid_until_ms=valid_until_ms,
puppets_user_id=puppets_user_id,
refresh_token_id=refresh_token_id,
)
# the device *should* have been registered before we got here; however,
@ -928,7 +1036,7 @@ class AuthHandler(BaseHandler):
self,
login_submission: Dict[str, Any],
ratelimit: bool = False,
) -> Tuple[str, Optional[Callable[[Dict[str, str]], Awaitable[None]]]]:
) -> Tuple[str, Optional[Callable[["LoginResponse"], Awaitable[None]]]]:
"""Authenticates the user for the /login API
Also used by the user-interactive auth flow to validate auth types which don't
@ -1073,7 +1181,7 @@ class AuthHandler(BaseHandler):
self,
username: str,
login_submission: Dict[str, Any],
) -> Tuple[str, Optional[Callable[[Dict[str, str]], Awaitable[None]]]]:
) -> Tuple[str, Optional[Callable[["LoginResponse"], Awaitable[None]]]]:
"""Helper for validate_login
Handles login, once we've mapped 3pids onto userids
@ -1151,7 +1259,7 @@ class AuthHandler(BaseHandler):
async def check_password_provider_3pid(
self, medium: str, address: str, password: str
) -> Tuple[Optional[str], Optional[Callable[[Dict[str, str]], Awaitable[None]]]]:
) -> Tuple[Optional[str], Optional[Callable[["LoginResponse"], Awaitable[None]]]]:
"""Check if a password provider is able to validate a thirdparty login
Args:
@ -1215,6 +1323,19 @@ class AuthHandler(BaseHandler):
crc = base62_encode(crc32(base.encode("ascii")), minwidth=6)
return f"{base}_{crc}"
def generate_refresh_token(self, for_user: UserID) -> str:
"""Generates an opaque string, for use as a refresh token"""
# we use the following format for refresh tokens:
# syr_<base64 local part>_<random string>_<base62 crc check>
b64local = unpaddedbase64.encode_base64(for_user.localpart.encode("utf-8"))
random_string = stringutils.random_string(20)
base = f"syr_{b64local}_{random_string}"
crc = base62_encode(crc32(base.encode("ascii")), minwidth=6)
return f"{base}_{crc}"
async def validate_short_term_login_token(
self, login_token: str
) -> LoginTokenAttributes:
@ -1563,7 +1684,7 @@ class AuthHandler(BaseHandler):
)
respond_with_html(request, 200, html)
async def _sso_login_callback(self, login_result: JsonDict) -> None:
async def _sso_login_callback(self, login_result: "LoginResponse") -> None:
"""
A login callback which might add additional attributes to the login response.
@ -1577,7 +1698,8 @@ class AuthHandler(BaseHandler):
extra_attributes = self._extra_attributes.get(login_result["user_id"])
if extra_attributes:
login_result.update(extra_attributes.extra_attributes)
login_result_dict = cast(Dict[str, Any], login_result)
login_result_dict.update(extra_attributes.extra_attributes)
def _expire_sso_extra_attributes(self) -> None:
"""


@ -1711,80 +1711,6 @@ class FederationHandler(BaseHandler):
return event
async def on_send_join_request(self, origin: str, pdu: EventBase) -> JsonDict:
"""We have received a join event for a room. Fully process it and
respond with the current state and auth chains.
"""
event = pdu
logger.debug(
"on_send_join_request from %s: Got event: %s, signatures: %s",
origin,
event.event_id,
event.signatures,
)
if get_domain_from_id(event.sender) != origin:
logger.info(
"Got /send_join request for user %r from different origin %s",
event.sender,
origin,
)
raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)
event.internal_metadata.outlier = False
# Send this event on behalf of the origin server.
#
# The reasons we have the destination server rather than the origin
# server send it are slightly mysterious: the origin server should have
# all the necessary state once it gets the response to the send_join,
# so it could send the event itself if it wanted to. It may be that
# doing it this way reduces failure modes, or avoids certain attacks
# where a new server selectively tells a subset of the federation that
# it has joined.
#
# The fact is that, as of the current writing, Synapse doesn't send out
# the join event over federation after joining, and changing it now
# would introduce the danger of backwards-compatibility problems.
event.internal_metadata.send_on_behalf_of = origin
# Calculate the event context.
context = await self.state_handler.compute_event_context(event)
# Get the state before the new event.
prev_state_ids = await context.get_prev_state_ids()
# Check if the user is already in the room or invited to the room.
user_id = event.state_key
prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None)
prev_member_event = None
if prev_member_event_id:
prev_member_event = await self.store.get_event(prev_member_event_id)
# Check if the member should be allowed access via membership in a space.
await self._event_auth_handler.check_restricted_join_rules(
prev_state_ids,
event.room_version,
user_id,
prev_member_event,
)
# Persist the event.
await self._auth_and_persist_event(origin, event, context)
logger.debug(
"on_send_join_request: After _auth_and_persist_event: %s, sigs: %s",
event.event_id,
event.signatures,
)
state_ids = list(prev_state_ids.values())
auth_chain = await self.store.get_auth_chain(event.room_id, state_ids)
state = await self.store.get_events(list(prev_state_ids.values()))
return {"state": list(state.values()), "auth_chain": auth_chain}
async def on_invite_request(
self, origin: str, event: EventBase, room_version: RoomVersion
) -> EventBase:
@ -1960,37 +1886,6 @@ class FederationHandler(BaseHandler):
return event
async def on_send_leave_request(self, origin: str, pdu: EventBase) -> None:
"""We have received a leave event for a room. Fully process it."""
event = pdu
logger.debug(
"on_send_leave_request: Got event: %s, signatures: %s",
event.event_id,
event.signatures,
)
if get_domain_from_id(event.sender) != origin:
logger.info(
"Got /send_leave request for user %r from different origin %s",
event.sender,
origin,
)
raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)
event.internal_metadata.outlier = False
context = await self.state_handler.compute_event_context(event)
await self._auth_and_persist_event(origin, event, context)
logger.debug(
"on_send_leave_request: After _auth_and_persist_event: %s, sigs: %s",
event.event_id,
event.signatures,
)
return None
@log_function
async def on_make_knock_request(
self, origin: str, room_id: str, user_id: str
@ -2054,51 +1949,115 @@ class FederationHandler(BaseHandler):
return event
@log_function
async def on_send_knock_request(
async def on_send_membership_event(
self, origin: str, event: EventBase
) -> EventContext:
"""
We have received a knock event for a room. Verify that event and send it into the room
on the knocking homeserver's behalf.
We have received a join/leave/knock event for a room via send_join/leave/knock.
Verify that event and send it into the room on the remote homeserver's behalf.
This is quite similar to on_receive_pdu, with the following principal
differences:
* only membership events are permitted (and only events with
sender==state_key -- ie, no kicks or bans)
* *We* send out the event on behalf of the remote server.
* We enforce the membership restrictions of restricted rooms.
* Rejected events result in an exception rather than being stored.
There are also other differences, however it is not clear if these are by
design or omission. In particular, we do not attempt to backfill any missing
prev_events.
Args:
origin: The remote homeserver of the knocking user.
event: The knocking member event that has been signed by the remote homeserver.
origin: The homeserver of the remote (joining/invited/knocking) user.
event: The member event that has been signed by the remote homeserver.
Returns:
The context of the event after inserting it into the room graph.
Raises:
SynapseError if the event is not accepted into the room
"""
logger.debug(
"on_send_knock_request: Got event: %s, signatures: %s",
"on_send_membership_event: Got event: %s, signatures: %s",
event.event_id,
event.signatures,
)
if get_domain_from_id(event.sender) != origin:
logger.info(
"Got /send_knock request for user %r from different origin %s",
"Got send_membership request for user %r from different origin %s",
event.sender,
origin,
)
raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)
event.internal_metadata.outlier = False
if event.sender != event.state_key:
raise SynapseError(400, "state_key and sender must match", Codes.BAD_JSON)
assert not event.internal_metadata.outlier
# Send this event on behalf of the other server.
#
# The remote server isn't a full participant in the room at this point, so
# may not have an up-to-date list of the other homeservers participating in
# the room, so we send it on their behalf.
event.internal_metadata.send_on_behalf_of = origin
context = await self.state_handler.compute_event_context(event)
event_allowed = await self.third_party_event_rules.check_event_allowed(
event, context
)
if not event_allowed:
logger.info("Sending of knock %s forbidden by third-party rules", event)
context = await self._check_event_auth(origin, event, context)
if context.rejected:
raise SynapseError(
403, "This event is not allowed in this context", Codes.FORBIDDEN
403, f"{event.membership} event was rejected", Codes.FORBIDDEN
)
await self._auth_and_persist_event(origin, event, context)
# for joins, we need to check the restrictions of restricted rooms
if event.membership == Membership.JOIN:
await self._check_join_restrictions(context, event)
# for knock events, we run the third-party event rules. It's not entirely clear
# why we don't do this for other sorts of membership events.
if event.membership == Membership.KNOCK:
event_allowed = await self.third_party_event_rules.check_event_allowed(
event, context
)
if not event_allowed:
logger.info("Sending of knock %s forbidden by third-party rules", event)
raise SynapseError(
403, "This event is not allowed in this context", Codes.FORBIDDEN
)
# all looks good, we can persist the event.
await self._run_push_actions_and_persist_event(event, context)
return context
async def _check_join_restrictions(
self, context: EventContext, event: EventBase
) -> None:
"""Check that restrictions in restricted join rules are matched
Called when we receive a join event via send_join.
Raises an auth error if the restrictions are not matched.
"""
prev_state_ids = await context.get_prev_state_ids()
# Check if the user is already in the room or invited to the room.
user_id = event.state_key
prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None)
prev_member_event = None
if prev_member_event_id:
prev_member_event = await self.store.get_event(prev_member_event_id)
# Check if the member should be allowed access via membership in a space.
await self._event_auth_handler.check_restricted_join_rules(
prev_state_ids,
event.room_version,
user_id,
prev_member_event,
)
async def get_state_for_pdu(self, room_id: str, event_id: str) -> List[EventBase]:
"""Returns the state at the event. i.e. not including said event."""
@ -2240,6 +2199,18 @@ class FederationHandler(BaseHandler):
backfilled=backfilled,
)
await self._run_push_actions_and_persist_event(event, context, backfilled)
async def _run_push_actions_and_persist_event(
self, event: EventBase, context: EventContext, backfilled: bool = False
):
"""Run the push actions for a received event, and persist it.
Args:
event: The event itself.
context: The event context.
backfilled: True if the event was backfilled.
"""
try:
if (
not event.internal_metadata.is_outlier()
@ -2553,9 +2524,9 @@ class FederationHandler(BaseHandler):
origin: str,
event: EventBase,
context: EventContext,
state: Optional[Iterable[EventBase]],
auth_events: Optional[MutableStateMap[EventBase]],
backfilled: bool,
state: Optional[Iterable[EventBase]] = None,
auth_events: Optional[MutableStateMap[EventBase]] = None,
backfilled: bool = False,
) -> EventContext:
"""
Checks whether an event should be rejected (for failing auth checks).


@ -15,9 +15,10 @@
"""Contains functions for registering clients."""
import logging
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple
from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple
from prometheus_client import Counter
from typing_extensions import TypedDict
from synapse import types
from synapse.api.constants import MAX_USERID_LENGTH, EventTypes, JoinRules, LoginType
@ -54,6 +55,16 @@ login_counter = Counter(
["guest", "auth_provider"],
)
LoginDict = TypedDict(
"LoginDict",
{
"device_id": str,
"access_token": str,
"valid_until_ms": Optional[int],
"refresh_token": Optional[str],
},
)
class RegistrationHandler(BaseHandler):
def __init__(self, hs: "HomeServer"):
@ -85,6 +96,7 @@ class RegistrationHandler(BaseHandler):
self.pusher_pool = hs.get_pusherpool()
self.session_lifetime = hs.config.session_lifetime
self.access_token_lifetime = hs.config.access_token_lifetime
async def check_username(
self,
@ -386,11 +398,32 @@ class RegistrationHandler(BaseHandler):
room_alias = RoomAlias.from_string(r)
if self.hs.hostname != room_alias.domain:
logger.warning(
"Cannot create room alias %s, "
"it does not match server domain",
# If the alias is remote, try to join the room. This might fail
# because the room might be invite only, but we don't have any local
# user in the room to invite this one with, so at this point that's
# the best we can do.
logger.info(
"Cannot automatically create room with alias %s as it isn't"
" local, trying to join the room instead",
r,
)
(
room,
remote_room_hosts,
) = await room_member_handler.lookup_room_alias(room_alias)
room_id = room.to_string()
await room_member_handler.update_membership(
requester=create_requester(
user_id, authenticated_entity=self._server_name
),
target=UserID.from_string(user_id),
room_id=room_id,
remote_room_hosts=remote_room_hosts,
action="join",
ratelimit=False,
)
else:
# A shallow copy is OK here since the only key that is
# modified is room_alias_name.
@ -448,22 +481,32 @@ class RegistrationHandler(BaseHandler):
)
# Calculate whether the room requires an invite or can be
# joined directly. Note that unless a join rule of public exists,
# it is treated as requiring an invite.
requires_invite = True
state = await self.store.get_filtered_current_state_ids(
room_id, StateFilter.from_types([(EventTypes.JoinRules, "")])
# joined directly. By default, we consider the room as requiring an
# invite if the homeserver is in the room (unless told otherwise by the
# join rules). Otherwise we consider it as being joinable, at the risk of
# failing to join, but in this case there's little more we can do since
# we don't have a local user in the room to craft up an invite with.
requires_invite = await self.store.is_host_joined(
room_id,
self.server_name,
)
event_id = state.get((EventTypes.JoinRules, ""))
if event_id:
join_rules_event = await self.store.get_event(
event_id, allow_none=True
if requires_invite:
# If the server is in the room, check if the room is public.
state = await self.store.get_filtered_current_state_ids(
room_id, StateFilter.from_types([(EventTypes.JoinRules, "")])
)
if join_rules_event:
join_rule = join_rules_event.content.get("join_rule", None)
requires_invite = join_rule and join_rule != JoinRules.PUBLIC
event_id = state.get((EventTypes.JoinRules, ""))
if event_id:
join_rules_event = await self.store.get_event(
event_id, allow_none=True
)
if join_rules_event:
join_rule = join_rules_event.content.get("join_rule", None)
requires_invite = (
join_rule and join_rule != JoinRules.PUBLIC
)
# Send the invite, if necessary.
if requires_invite:
@ -665,7 +708,8 @@ class RegistrationHandler(BaseHandler):
is_guest: bool = False,
is_appservice_ghost: bool = False,
auth_provider_id: Optional[str] = None,
) -> Tuple[str, str]:
should_issue_refresh_token: bool = False,
) -> Tuple[str, str, Optional[int], Optional[str]]:
"""Register a device for a user and generate an access token.
The access token will be limited by the homeserver's session_lifetime config.
@ -677,8 +721,9 @@ class RegistrationHandler(BaseHandler):
is_guest: Whether this is a guest account
auth_provider_id: The SSO IdP the user used, if any (just used for the
prometheus metrics).
should_issue_refresh_token: Whether it should also issue a refresh token
Returns:
Tuple of device ID and access token
Tuple of device ID, access token, access token expiration time and refresh token
"""
res = await self._register_device_client(
user_id=user_id,
@ -686,6 +731,7 @@ class RegistrationHandler(BaseHandler):
initial_display_name=initial_display_name,
is_guest=is_guest,
is_appservice_ghost=is_appservice_ghost,
should_issue_refresh_token=should_issue_refresh_token,
)
login_counter.labels(
@ -693,7 +739,12 @@ class RegistrationHandler(BaseHandler):
auth_provider=(auth_provider_id or ""),
).inc()
return res["device_id"], res["access_token"]
return (
res["device_id"],
res["access_token"],
res["valid_until_ms"],
res["refresh_token"],
)
async def register_device_inner(
self,
@ -702,7 +753,8 @@ class RegistrationHandler(BaseHandler):
initial_display_name: Optional[str],
is_guest: bool = False,
is_appservice_ghost: bool = False,
) -> Dict[str, str]:
should_issue_refresh_token: bool = False,
) -> LoginDict:
"""Helper for register_device
Does the bits that need doing on the main process. Not for use outside this
@ -717,6 +769,9 @@ class RegistrationHandler(BaseHandler):
)
valid_until_ms = self.clock.time_msec() + self.session_lifetime
refresh_token = None
refresh_token_id = None
registered_device_id = await self.device_handler.check_device_registered(
user_id, device_id, initial_display_name
)
@ -724,14 +779,30 @@ class RegistrationHandler(BaseHandler):
assert valid_until_ms is None
access_token = self.macaroon_gen.generate_guest_access_token(user_id)
else:
if should_issue_refresh_token:
(
refresh_token,
refresh_token_id,
) = await self._auth_handler.get_refresh_token_for_user_id(
user_id,
device_id=registered_device_id,
)
valid_until_ms = self.clock.time_msec() + self.access_token_lifetime
access_token = await self._auth_handler.get_access_token_for_user_id(
user_id,
device_id=registered_device_id,
valid_until_ms=valid_until_ms,
is_appservice_ghost=is_appservice_ghost,
refresh_token_id=refresh_token_id,
)
return {"device_id": registered_device_id, "access_token": access_token}
return {
"device_id": registered_device_id,
"access_token": access_token,
"valid_until_ms": valid_until_ms,
"refresh_token": refresh_token,
}
async def post_registration_actions(
self, user_id: str, auth_result: dict, access_token: Optional[str]
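The net effect of the reworked auto-join logic is easier to see as a pure decision function. This toy distillation (not the real code, which consults `is_host_joined` and the current `m.room.join_rules` event) captures the new default:

```python
from typing import Optional

def requires_invite(host_in_room: bool, join_rule: Optional[str]) -> bool:
    """Toy distillation of the auto-join decision above."""
    if not host_in_room:
        # No local user in the room to craft an invite with, so just try a
        # plain join and accept that it may fail.
        return False
    # The server is in the room: invite unless the room is known to be public.
    return join_rule != "public"

assert requires_invite(False, None) is False   # remote-only room: plain join
assert requires_invite(True, "invite") is True
assert requires_invite(True, "public") is False
```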


@ -728,7 +728,7 @@ def set_cors_headers(request: Request):
)
request.setHeader(
b"Access-Control-Allow-Headers",
b"Origin, X-Requested-With, Content-Type, Accept, Authorization, Date",
b"X-Requested-With, Content-Type, Authorization, Date",
)


@ -109,12 +109,22 @@ def parse_boolean_from_args(args, name, default=None, required=False):
return default
@overload
def parse_bytes_from_args(
args: Dict[bytes, List[bytes]],
name: str,
default: Optional[bytes] = None,
) -> Optional[bytes]:
...
@overload
def parse_bytes_from_args(
args: Dict[bytes, List[bytes]],
name: str,
default: Literal[None] = None,
required: Literal[True] = True,
*,
required: Literal[True],
) -> bytes:
...
@ -197,7 +207,12 @@ def parse_string(
"""
args = request.args # type: Dict[bytes, List[bytes]] # type: ignore
return parse_string_from_args(
args, name, default, required, allowed_values, encoding
args,
name,
default,
required=required,
allowed_values=allowed_values,
encoding=encoding,
)
@ -227,7 +242,20 @@ def parse_strings_from_args(
args: Dict[bytes, List[bytes]],
name: str,
default: Optional[List[str]] = None,
required: Literal[True] = True,
*,
allowed_values: Optional[Iterable[str]] = None,
encoding: str = "ascii",
) -> Optional[List[str]]:
...
@overload
def parse_strings_from_args(
args: Dict[bytes, List[bytes]],
name: str,
default: Optional[List[str]] = None,
*,
required: Literal[True],
allowed_values: Optional[Iterable[str]] = None,
encoding: str = "ascii",
) -> List[str]:
@ -239,6 +267,7 @@ def parse_strings_from_args(
args: Dict[bytes, List[bytes]],
name: str,
default: Optional[List[str]] = None,
*,
required: bool = False,
allowed_values: Optional[Iterable[str]] = None,
encoding: str = "ascii",
@ -299,7 +328,20 @@ def parse_string_from_args(
args: Dict[bytes, List[bytes]],
name: str,
default: Optional[str] = None,
required: Literal[True] = True,
*,
allowed_values: Optional[Iterable[str]] = None,
encoding: str = "ascii",
) -> Optional[str]:
...
@overload
def parse_string_from_args(
args: Dict[bytes, List[bytes]],
name: str,
default: Optional[str] = None,
*,
required: Literal[True],
allowed_values: Optional[Iterable[str]] = None,
encoding: str = "ascii",
) -> str:
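The reshuffled overloads follow a standard typing pattern: moving `required` behind `*` with a `Literal[True]` annotation and no default removes the previously overlapping overloads and lets the checker narrow the return type. A self-contained illustration of the pattern (a simplified `get_arg`, not Synapse's function):

```python
from typing import Literal, Optional, overload

@overload
def get_arg(args: dict, name: str, default: Optional[str] = None) -> Optional[str]:
    ...

@overload
def get_arg(args: dict, name: str, default: Optional[str] = None, *,
            required: Literal[True]) -> str:
    ...

def get_arg(args, name, default=None, *, required: bool = False):
    if name in args:
        return args[name]
    if required:
        raise KeyError(name)
    return default

narrowed: str = get_arg({"limit": "10"}, "limit", required=True)  # typed as str
maybe: Optional[str] = get_arg({}, "limit")  # typed as Optional[str]
```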


@ -168,7 +168,7 @@ class ModuleApi:
"Using deprecated ModuleApi.register which creates a dummy user device."
)
user_id = yield self.register_user(localpart, displayname, emails or [])
_, access_token = yield self.register_device(user_id)
_, access_token, _, _ = yield self.register_device(user_id)
return user_id, access_token
def register_user(


@ -36,20 +36,29 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
@staticmethod
async def _serialize_payload(
user_id, device_id, initial_display_name, is_guest, is_appservice_ghost
user_id,
device_id,
initial_display_name,
is_guest,
is_appservice_ghost,
should_issue_refresh_token,
):
"""
Args:
user_id (str)
device_id (str|None): Device ID to use, if None a new one is
generated.
initial_display_name (str|None)
is_guest (bool)
is_appservice_ghost (bool)
should_issue_refresh_token (bool)
"""
return {
"device_id": device_id,
"initial_display_name": initial_display_name,
"is_guest": is_guest,
"is_appservice_ghost": is_appservice_ghost,
"should_issue_refresh_token": should_issue_refresh_token,
}
async def _handle_request(self, request, user_id):
@ -59,6 +68,7 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
initial_display_name = content["initial_display_name"]
is_guest = content["is_guest"]
is_appservice_ghost = content["is_appservice_ghost"]
should_issue_refresh_token = content["should_issue_refresh_token"]
res = await self.registration_handler.register_device_inner(
user_id,
@ -66,6 +76,7 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
initial_display_name,
is_guest,
is_appservice_ghost=is_appservice_ghost,
should_issue_refresh_token=should_issue_refresh_token,
)
return 200, res


@ -14,7 +14,9 @@
import logging
import re
from typing import TYPE_CHECKING, Awaitable, Callable, Dict, List, Optional
from typing import TYPE_CHECKING, Any, Awaitable, Callable, Dict, List, Optional
from typing_extensions import TypedDict
from synapse.api.errors import Codes, LoginError, SynapseError
from synapse.api.ratelimiting import Ratelimiter
@ -25,6 +27,8 @@ from synapse.http import get_request_uri
from synapse.http.server import HttpServer, finish_request
from synapse.http.servlet import (
RestServlet,
assert_params_in_dict,
parse_boolean,
parse_bytes_from_args,
parse_json_object_from_request,
parse_string,
@ -40,6 +44,21 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
LoginResponse = TypedDict(
"LoginResponse",
{
"user_id": str,
"access_token": str,
"home_server": str,
"expires_in_ms": Optional[int],
"refresh_token": Optional[str],
"device_id": str,
"well_known": Optional[Dict[str, Any]],
},
total=False,
)
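
`total=False` marks every key of the TypedDict as optional, which matches how the response is assembled below: the base keys are always set, while `expires_in_ms` and `refresh_token` are only added when applicable. A standalone sketch of the idea:

from typing import Optional

from typing_extensions import TypedDict

Response = TypedDict(
    "Response",
    {"user_id": str, "expires_in_ms": Optional[int]},
    total=False,  # all keys become optional
)

resp: Response = {"user_id": "@alice:example.org"}
resp["expires_in_ms"] = 60_000  # keys can be filled in piecemeal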
class LoginRestServlet(RestServlet):
PATTERNS = client_patterns("/login$", v1=True)
CAS_TYPE = "m.login.cas"
@@ -48,6 +67,7 @@ class LoginRestServlet(RestServlet):
JWT_TYPE = "org.matrix.login.jwt"
JWT_TYPE_DEPRECATED = "m.login.jwt"
APPSERVICE_TYPE = "uk.half-shot.msc2778.login.application_service"
REFRESH_TOKEN_PARAM = "org.matrix.msc2918.refresh_token"
def __init__(self, hs: "HomeServer"):
super().__init__()
@@ -65,9 +85,12 @@ class LoginRestServlet(RestServlet):
self.cas_enabled = hs.config.cas_enabled
self.oidc_enabled = hs.config.oidc_enabled
self._msc2858_enabled = hs.config.experimental.msc2858_enabled
self._msc2918_enabled = hs.config.access_token_lifetime is not None
self.auth = hs.get_auth()
self.clock = hs.get_clock()
self.auth_handler = self.hs.get_auth_handler()
self.registration_handler = hs.get_registration_handler()
self._sso_handler = hs.get_sso_handler()
@@ -138,6 +161,15 @@ class LoginRestServlet(RestServlet):
async def on_POST(self, request: SynapseRequest):
login_submission = parse_json_object_from_request(request)
if self._msc2918_enabled:
# Check if this login should also issue a refresh token, as per
# MSC2918
should_issue_refresh_token = parse_boolean(
request, name=LoginRestServlet.REFRESH_TOKEN_PARAM, default=False
)
else:
should_issue_refresh_token = False
try:
if login_submission["type"] == LoginRestServlet.APPSERVICE_TYPE:
appservice = self.auth.get_appservice_by_req(request)
@@ -147,19 +179,32 @@ class LoginRestServlet(RestServlet):
None, request.getClientIP()
)
result = await self._do_appservice_login(login_submission, appservice)
result = await self._do_appservice_login(
login_submission,
appservice,
should_issue_refresh_token=should_issue_refresh_token,
)
elif self.jwt_enabled and (
login_submission["type"] == LoginRestServlet.JWT_TYPE
or login_submission["type"] == LoginRestServlet.JWT_TYPE_DEPRECATED
):
await self._address_ratelimiter.ratelimit(None, request.getClientIP())
result = await self._do_jwt_login(login_submission)
result = await self._do_jwt_login(
login_submission,
should_issue_refresh_token=should_issue_refresh_token,
)
elif login_submission["type"] == LoginRestServlet.TOKEN_TYPE:
await self._address_ratelimiter.ratelimit(None, request.getClientIP())
result = await self._do_token_login(login_submission)
result = await self._do_token_login(
login_submission,
should_issue_refresh_token=should_issue_refresh_token,
)
else:
await self._address_ratelimiter.ratelimit(None, request.getClientIP())
result = await self._do_other_login(login_submission)
result = await self._do_other_login(
login_submission,
should_issue_refresh_token=should_issue_refresh_token,
)
except KeyError:
raise SynapseError(400, "Missing JSON keys.")
@@ -169,7 +214,10 @@ class LoginRestServlet(RestServlet):
return 200, result
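
From the client side, opting into MSC2918 refresh tokens is just the unstable query parameter checked above. A hedged sketch of such a login call using `requests` (homeserver URL and credentials invented; the endpoint and parameter names are the ones exercised by this commit's tests):

import requests

resp = requests.post(
    "https://example.org/_matrix/client/r0/login",
    params={"org.matrix.msc2918.refresh_token": "true"},
    json={"type": "m.login.password", "user": "alice", "password": "hunter2"},
)
body = resp.json()
# With the flag set, the response may additionally contain
# "refresh_token" and "expires_in_ms" alongside "access_token".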
async def _do_appservice_login(
self, login_submission: JsonDict, appservice: ApplicationService
self,
login_submission: JsonDict,
appservice: ApplicationService,
should_issue_refresh_token: bool = False,
):
identifier = login_submission.get("identifier")
logger.info("Got appservice login request with identifier: %r", identifier)
@@ -198,14 +246,21 @@ class LoginRestServlet(RestServlet):
raise LoginError(403, "Invalid access_token", errcode=Codes.FORBIDDEN)
return await self._complete_login(
qualified_user_id, login_submission, ratelimit=appservice.is_rate_limited()
qualified_user_id,
login_submission,
ratelimit=appservice.is_rate_limited(),
should_issue_refresh_token=should_issue_refresh_token,
)
async def _do_other_login(self, login_submission: JsonDict) -> Dict[str, str]:
async def _do_other_login(
self, login_submission: JsonDict, should_issue_refresh_token: bool = False
) -> LoginResponse:
"""Handle non-token/saml/jwt logins
Args:
login_submission:
should_issue_refresh_token: True if this login should issue
a refresh token alongside the access token.
Returns:
HTTP response
@@ -224,7 +279,10 @@ class LoginRestServlet(RestServlet):
login_submission, ratelimit=True
)
result = await self._complete_login(
canonical_user_id, login_submission, callback
canonical_user_id,
login_submission,
callback,
should_issue_refresh_token=should_issue_refresh_token,
)
return result
@@ -232,11 +290,12 @@ class LoginRestServlet(RestServlet):
self,
user_id: str,
login_submission: JsonDict,
callback: Optional[Callable[[Dict[str, str]], Awaitable[None]]] = None,
callback: Optional[Callable[[LoginResponse], Awaitable[None]]] = None,
create_non_existent_users: bool = False,
ratelimit: bool = True,
auth_provider_id: Optional[str] = None,
) -> Dict[str, str]:
should_issue_refresh_token: bool = False,
) -> LoginResponse:
"""Called when we've successfully authed the user and now need to
actually log them in (e.g. create devices). This gets called on
all successful logins.
@@ -253,6 +312,8 @@ class LoginRestServlet(RestServlet):
ratelimit: Whether to ratelimit the login request.
auth_provider_id: The SSO IdP the user used, if any (just used for the
prometheus metrics).
should_issue_refresh_token: True if this login should issue
a refresh token alongside the access token.
Returns:
result: Dictionary of account information after successful login.
@@ -274,28 +335,48 @@ class LoginRestServlet(RestServlet):
device_id = login_submission.get("device_id")
initial_display_name = login_submission.get("initial_device_display_name")
device_id, access_token = await self.registration_handler.register_device(
user_id, device_id, initial_display_name, auth_provider_id=auth_provider_id
(
device_id,
access_token,
valid_until_ms,
refresh_token,
) = await self.registration_handler.register_device(
user_id,
device_id,
initial_display_name,
auth_provider_id=auth_provider_id,
should_issue_refresh_token=should_issue_refresh_token,
)
result = {
"user_id": user_id,
"access_token": access_token,
"home_server": self.hs.hostname,
"device_id": device_id,
}
result = LoginResponse(
user_id=user_id,
access_token=access_token,
home_server=self.hs.hostname,
device_id=device_id,
)
if valid_until_ms is not None:
expires_in_ms = valid_until_ms - self.clock.time_msec()
result["expires_in_ms"] = expires_in_ms
if refresh_token is not None:
result["refresh_token"] = refresh_token
if callback is not None:
await callback(result)
return result
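
Note that `expires_in_ms` is relative, not absolute: the stored `valid_until_ms` timestamp is converted into a remaining lifetime at response time, so the value is only meaningful at the moment the response is generated. A toy illustration of the conversion:

import time


def expires_in_ms(valid_until_ms: int) -> int:
    # Mirror of the calculation above: absolute expiry minus "now".
    return valid_until_ms - int(time.time() * 1000)


# A token valid for one minute from now yields roughly 60000.
print(expires_in_ms(int(time.time() * 1000) + 60_000))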
async def _do_token_login(self, login_submission: JsonDict) -> Dict[str, str]:
async def _do_token_login(
self, login_submission: JsonDict, should_issue_refresh_token: bool = False
) -> LoginResponse:
"""
Handle the final stage of SSO login.
Args:
login_submission: The JSON request body.
login_submission: The JSON request body.
should_issue_refresh_token: True if this login should issue
a refresh token alongside the access token.
Returns:
The body of the JSON response.
@@ -309,9 +390,12 @@ class LoginRestServlet(RestServlet):
login_submission,
self.auth_handler._sso_login_callback,
auth_provider_id=res.auth_provider_id,
should_issue_refresh_token=should_issue_refresh_token,
)
async def _do_jwt_login(self, login_submission: JsonDict) -> Dict[str, str]:
async def _do_jwt_login(
self, login_submission: JsonDict, should_issue_refresh_token: bool = False
) -> LoginResponse:
token = login_submission.get("token", None)
if token is None:
raise LoginError(
@@ -342,7 +426,10 @@ class LoginRestServlet(RestServlet):
user_id = UserID(user, self.hs.hostname).to_string()
result = await self._complete_login(
user_id, login_submission, create_non_existent_users=True
user_id,
login_submission,
create_non_existent_users=True,
should_issue_refresh_token=should_issue_refresh_token,
)
return result
@@ -371,6 +458,42 @@ def _get_auth_flow_dict_for_idp(
return e
class RefreshTokenServlet(RestServlet):
PATTERNS = client_patterns(
"/org.matrix.msc2918.refresh_token/refresh$", releases=(), unstable=True
)
def __init__(self, hs: "HomeServer"):
self._auth_handler = hs.get_auth_handler()
self._clock = hs.get_clock()
self.access_token_lifetime = hs.config.access_token_lifetime
async def on_POST(
self,
request: SynapseRequest,
):
refresh_submission = parse_json_object_from_request(request)
assert_params_in_dict(refresh_submission, ["refresh_token"])
token = refresh_submission["refresh_token"]
if not isinstance(token, str):
raise SynapseError(400, "Invalid param: refresh_token", Codes.INVALID_PARAM)
valid_until_ms = self._clock.time_msec() + self.access_token_lifetime
access_token, refresh_token = await self._auth_handler.refresh_token(
token, valid_until_ms
)
expires_in_ms = valid_until_ms - self._clock.time_msec()
return (
200,
{
"access_token": access_token,
"refresh_token": refresh_token,
"expires_in_ms": expires_in_ms,
},
)
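
A sketch of the refresh round-trip this servlet implements, with an invented token value (the URL is the unstable one registered above):

import requests

resp = requests.post(
    "https://example.org/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
    json={"refresh_token": "syr_some_opaque_token"},
)
# Expected shape on success, per the handler above:
# {"access_token": "...", "refresh_token": "...", "expires_in_ms": 60000}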
class SsoRedirectServlet(RestServlet):
PATTERNS = list(client_patterns("/login/(cas|sso)/redirect$", v1=True)) + [
re.compile(
@@ -477,6 +600,8 @@ class CasTicketServlet(RestServlet):
def register_servlets(hs, http_server):
LoginRestServlet(hs).register(http_server)
if hs.config.access_token_lifetime is not None:
RefreshTokenServlet(hs).register(http_server)
SsoRedirectServlet(hs).register(http_server)
if hs.config.cas_enabled:
CasTicketServlet(hs).register(http_server)

View file

@@ -41,11 +41,13 @@ from synapse.http.server import finish_request, respond_with_html
from synapse.http.servlet import (
RestServlet,
assert_params_in_dict,
parse_boolean,
parse_json_object_from_request,
parse_string,
)
from synapse.metrics import threepid_send_requests
from synapse.push.mailer import Mailer
from synapse.types import JsonDict
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.stringutils import assert_valid_client_secret, random_string
@@ -399,6 +401,7 @@ class RegisterRestServlet(RestServlet):
self.password_policy_handler = hs.get_password_policy_handler()
self.clock = hs.get_clock()
self._registration_enabled = self.hs.config.enable_registration
self._msc2918_enabled = hs.config.access_token_lifetime is not None
self._registration_flows = _calculate_registration_flows(
hs.config, self.auth_handler
@@ -424,6 +427,15 @@ class RegisterRestServlet(RestServlet):
"Do not understand membership kind: %s" % (kind.decode("utf8"),)
)
if self._msc2918_enabled:
# Check if this registration should also issue a refresh token, as
# per MSC2918
should_issue_refresh_token = parse_boolean(
request, name="org.matrix.msc2918.refresh_token", default=False
)
else:
should_issue_refresh_token = False
# Pull out the provided username and do basic sanity checks early since
# the auth layer will store these in sessions.
desired_username = None
@@ -462,7 +474,10 @@ class RegisterRestServlet(RestServlet):
raise SynapseError(400, "Desired Username is missing or not a string")
result = await self._do_appservice_registration(
desired_username, access_token, body
desired_username,
access_token,
body,
should_issue_refresh_token=should_issue_refresh_token,
)
return 200, result
@@ -665,7 +680,9 @@ class RegisterRestServlet(RestServlet):
registered = True
return_dict = await self._create_registration_details(
registered_user_id, params
registered_user_id,
params,
should_issue_refresh_token=should_issue_refresh_token,
)
if registered:
@@ -677,7 +694,9 @@ class RegisterRestServlet(RestServlet):
return 200, return_dict
async def _do_appservice_registration(self, username, as_token, body):
async def _do_appservice_registration(
self, username, as_token, body, should_issue_refresh_token: bool = False
):
user_id = await self.registration_handler.appservice_register(
username, as_token
)
@@ -685,19 +704,27 @@ class RegisterRestServlet(RestServlet):
user_id,
body,
is_appservice_ghost=True,
should_issue_refresh_token=should_issue_refresh_token,
)
async def _create_registration_details(
self, user_id, params, is_appservice_ghost=False
self,
user_id: str,
params: JsonDict,
is_appservice_ghost: bool = False,
should_issue_refresh_token: bool = False,
):
"""Complete registration of newly-registered user
Allocates device_id if one was not given; also creates access_token.
Args:
(str) user_id: full canonical @user:id
(object) params: registration parameters, from which we pull
device_id, initial_device_name and inhibit_login
user_id: full canonical @user:id
params: registration parameters, from which we pull device_id,
initial_device_name and inhibit_login
is_appservice_ghost: True if the user being registered is an application service ghost user.
should_issue_refresh_token: True if this registration should issue
a refresh token alongside the access token.
Returns:
dictionary for response from /register
"""
@@ -705,15 +732,29 @@ class RegisterRestServlet(RestServlet):
if not params.get("inhibit_login", False):
device_id = params.get("device_id")
initial_display_name = params.get("initial_device_display_name")
device_id, access_token = await self.registration_handler.register_device(
(
device_id,
access_token,
valid_until_ms,
refresh_token,
) = await self.registration_handler.register_device(
user_id,
device_id,
initial_display_name,
is_guest=False,
is_appservice_ghost=is_appservice_ghost,
should_issue_refresh_token=should_issue_refresh_token,
)
result.update({"access_token": access_token, "device_id": device_id})
if valid_until_ms is not None:
expires_in_ms = valid_until_ms - self.clock.time_msec()
result["expires_in_ms"] = expires_in_ms
if refresh_token is not None:
result["refresh_token"] = refresh_token
return result
async def _do_guest_registration(self, params, address=None):
@@ -727,19 +768,30 @@ class RegisterRestServlet(RestServlet):
# we have nowhere to store it.
device_id = synapse.api.auth.GUEST_DEVICE_ID
initial_display_name = params.get("initial_device_display_name")
device_id, access_token = await self.registration_handler.register_device(
(
device_id,
access_token,
valid_until_ms,
refresh_token,
) = await self.registration_handler.register_device(
user_id, device_id, initial_display_name, is_guest=True
)
return (
200,
{
"user_id": user_id,
"device_id": device_id,
"access_token": access_token,
"home_server": self.hs.hostname,
},
)
result = {
"user_id": user_id,
"device_id": device_id,
"access_token": access_token,
"home_server": self.hs.hostname,
}
if valid_until_ms is not None:
expires_in_ms = valid_until_ms - self.clock.time_msec()
result["expires_in_ms"] = expires_in_ms
if refresh_token is not None:
result["refresh_token"] = refresh_token
return 200, result
def _calculate_registration_flows(

View file

@@ -13,6 +13,7 @@
# limitations under the License.
import itertools
import logging
from collections import defaultdict
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple
from synapse.api.constants import Membership, PresenceState
@@ -232,29 +233,51 @@ class SyncRestServlet(RestServlet):
)
logger.debug("building sync response dict")
return {
"account_data": {"events": sync_result.account_data},
"to_device": {"events": sync_result.to_device},
"device_lists": {
"changed": list(sync_result.device_lists.changed),
"left": list(sync_result.device_lists.left),
},
"presence": SyncRestServlet.encode_presence(sync_result.presence, time_now),
"rooms": {
Membership.JOIN: joined,
Membership.INVITE: invited,
Membership.KNOCK: knocked,
Membership.LEAVE: archived,
},
"groups": {
Membership.JOIN: sync_result.groups.join,
Membership.INVITE: sync_result.groups.invite,
Membership.LEAVE: sync_result.groups.leave,
},
"device_one_time_keys_count": sync_result.device_one_time_keys_count,
"org.matrix.msc2732.device_unused_fallback_key_types": sync_result.device_unused_fallback_key_types,
"next_batch": await sync_result.next_batch.to_string(self.store),
}
response: dict = defaultdict(dict)
response["next_batch"] = await sync_result.next_batch.to_string(self.store)
if sync_result.account_data:
response["account_data"] = {"events": sync_result.account_data}
if sync_result.presence:
response["presence"] = SyncRestServlet.encode_presence(
sync_result.presence, time_now
)
if sync_result.to_device:
response["to_device"] = {"events": sync_result.to_device}
if sync_result.device_lists.changed:
response["device_lists"]["changed"] = list(sync_result.device_lists.changed)
if sync_result.device_lists.left:
response["device_lists"]["left"] = list(sync_result.device_lists.left)
if sync_result.device_one_time_keys_count:
response[
"device_one_time_keys_count"
] = sync_result.device_one_time_keys_count
if sync_result.device_unused_fallback_key_types:
response[
"org.matrix.msc2732.device_unused_fallback_key_types"
] = sync_result.device_unused_fallback_key_types
if joined:
response["rooms"][Membership.JOIN] = joined
if invited:
response["rooms"][Membership.INVITE] = invited
if knocked:
response["rooms"][Membership.KNOCK] = knocked
if archived:
response["rooms"][Membership.LEAVE] = archived
if sync_result.groups.join:
response["groups"][Membership.JOIN] = sync_result.groups.join
if sync_result.groups.invite:
response["groups"][Membership.INVITE] = sync_result.groups.invite
if sync_result.groups.leave:
response["groups"][Membership.LEAVE] = sync_result.groups.leave
return response
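
Using `defaultdict(dict)` is what lets the nested assignments above (for example `response["rooms"][Membership.JOIN] = joined`) work without first creating the parent key; sections that are never assigned simply stay absent from the response. A minimal illustration:

from collections import defaultdict

response: dict = defaultdict(dict)
response["rooms"]["join"] = {"!room:example.org": {}}  # no KeyError
assert "invite" not in response["rooms"]  # untouched keys stay absent
assert "groups" not in response  # whole sections can be omitted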
@staticmethod
def encode_presence(events, time_now):

View file

@@ -29,6 +29,25 @@ from synapse.types import JsonDict
logger = logging.getLogger(__name__)
_REPLACE_STREAM_ORDERING_SQL_COMMANDS = (
# there should be no leftover rows without a stream_ordering2, but just in case...
"UPDATE events SET stream_ordering2 = stream_ordering WHERE stream_ordering2 IS NULL",
# finally, we can drop the rule and switch the columns
"DROP RULE populate_stream_ordering2 ON events",
"ALTER TABLE events DROP COLUMN stream_ordering",
"ALTER TABLE events RENAME COLUMN stream_ordering2 TO stream_ordering",
)
class _BackgroundUpdates:
EVENT_ORIGIN_SERVER_TS_NAME = "event_origin_server_ts"
EVENT_FIELDS_SENDER_URL_UPDATE_NAME = "event_fields_sender_url"
DELETE_SOFT_FAILED_EXTREMITIES = "delete_soft_failed_extremities"
POPULATE_STREAM_ORDERING2 = "populate_stream_ordering2"
INDEX_STREAM_ORDERING2 = "index_stream_ordering2"
REPLACE_STREAM_ORDERING_COLUMN = "replace_stream_ordering_column"
@attr.s(slots=True, frozen=True)
class _CalculateChainCover:
"""Return value for _calculate_chain_cover_txn."""
@@ -48,19 +67,15 @@ class _CalculateChainCover:
class EventsBackgroundUpdatesStore(SQLBaseStore):
EVENT_ORIGIN_SERVER_TS_NAME = "event_origin_server_ts"
EVENT_FIELDS_SENDER_URL_UPDATE_NAME = "event_fields_sender_url"
DELETE_SOFT_FAILED_EXTREMITIES = "delete_soft_failed_extremities"
def __init__(self, database: DatabasePool, db_conn, hs):
super().__init__(database, db_conn, hs)
self.db_pool.updates.register_background_update_handler(
self.EVENT_ORIGIN_SERVER_TS_NAME, self._background_reindex_origin_server_ts
_BackgroundUpdates.EVENT_ORIGIN_SERVER_TS_NAME,
self._background_reindex_origin_server_ts,
)
self.db_pool.updates.register_background_update_handler(
self.EVENT_FIELDS_SENDER_URL_UPDATE_NAME,
_BackgroundUpdates.EVENT_FIELDS_SENDER_URL_UPDATE_NAME,
self._background_reindex_fields_sender,
)
@@ -85,7 +100,8 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
)
self.db_pool.updates.register_background_update_handler(
self.DELETE_SOFT_FAILED_EXTREMITIES, self._cleanup_extremities_bg_update
_BackgroundUpdates.DELETE_SOFT_FAILED_EXTREMITIES,
self._cleanup_extremities_bg_update,
)
self.db_pool.updates.register_background_update_handler(
@@ -139,6 +155,24 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
self._purged_chain_cover_index,
)
# bg updates for replacing stream_ordering with a BIGINT
# (these only run on postgres.)
self.db_pool.updates.register_background_update_handler(
_BackgroundUpdates.POPULATE_STREAM_ORDERING2,
self._background_populate_stream_ordering2,
)
self.db_pool.updates.register_background_index_update(
_BackgroundUpdates.INDEX_STREAM_ORDERING2,
index_name="events_stream_ordering",
table="events",
columns=["stream_ordering2"],
unique=True,
)
self.db_pool.updates.register_background_update_handler(
_BackgroundUpdates.REPLACE_STREAM_ORDERING_COLUMN,
self._background_replace_stream_ordering_column,
)
async def _background_reindex_fields_sender(self, progress, batch_size):
target_min_stream_id = progress["target_min_stream_id_inclusive"]
max_stream_id = progress["max_stream_id_exclusive"]
@@ -190,18 +224,18 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
}
self.db_pool.updates._background_update_progress_txn(
txn, self.EVENT_FIELDS_SENDER_URL_UPDATE_NAME, progress
txn, _BackgroundUpdates.EVENT_FIELDS_SENDER_URL_UPDATE_NAME, progress
)
return len(rows)
result = await self.db_pool.runInteraction(
self.EVENT_FIELDS_SENDER_URL_UPDATE_NAME, reindex_txn
_BackgroundUpdates.EVENT_FIELDS_SENDER_URL_UPDATE_NAME, reindex_txn
)
if not result:
await self.db_pool.updates._end_background_update(
self.EVENT_FIELDS_SENDER_URL_UPDATE_NAME
_BackgroundUpdates.EVENT_FIELDS_SENDER_URL_UPDATE_NAME
)
return result
@@ -264,18 +298,18 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
}
self.db_pool.updates._background_update_progress_txn(
txn, self.EVENT_ORIGIN_SERVER_TS_NAME, progress
txn, _BackgroundUpdates.EVENT_ORIGIN_SERVER_TS_NAME, progress
)
return len(rows_to_update)
result = await self.db_pool.runInteraction(
self.EVENT_ORIGIN_SERVER_TS_NAME, reindex_search_txn
_BackgroundUpdates.EVENT_ORIGIN_SERVER_TS_NAME, reindex_search_txn
)
if not result:
await self.db_pool.updates._end_background_update(
self.EVENT_ORIGIN_SERVER_TS_NAME
_BackgroundUpdates.EVENT_ORIGIN_SERVER_TS_NAME
)
return result
@@ -454,7 +488,7 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
if not num_handled:
await self.db_pool.updates._end_background_update(
self.DELETE_SOFT_FAILED_EXTREMITIES
_BackgroundUpdates.DELETE_SOFT_FAILED_EXTREMITIES
)
def _drop_table_txn(txn):
@@ -1009,3 +1043,75 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
await self.db_pool.updates._end_background_update("purged_chain_cover")
return result
async def _background_populate_stream_ordering2(
self, progress: JsonDict, batch_size: int
) -> int:
"""Populate events.stream_ordering2, then replace stream_ordering
This is to deal with the fact that stream_ordering was initially created as a
32-bit integer field.
"""
batch_size = max(batch_size, 1)
def process(txn: Cursor) -> int:
# if this is the first pass, find the minimum stream ordering
last_stream = progress.get("last_stream")
if last_stream is None:
txn.execute(
"""
SELECT stream_ordering FROM events ORDER BY stream_ordering LIMIT 1
"""
)
rows = txn.fetchall()
if not rows:
return 0
last_stream = rows[0][0] - 1
txn.execute(
"""
UPDATE events SET stream_ordering2=stream_ordering
WHERE stream_ordering > ? AND stream_ordering <= ?
""",
(last_stream, last_stream + batch_size),
)
row_count = txn.rowcount
self.db_pool.updates._background_update_progress_txn(
txn,
_BackgroundUpdates.POPULATE_STREAM_ORDERING2,
{"last_stream": last_stream + batch_size},
)
return row_count
result = await self.db_pool.runInteraction(
"_background_populate_stream_ordering2", process
)
if result != 0:
return result
await self.db_pool.updates._end_background_update(
_BackgroundUpdates.POPULATE_STREAM_ORDERING2
)
return 0
async def _background_replace_stream_ordering_column(
self, progress: JsonDict, batch_size: int
) -> int:
"""Drop the old 'stream_ordering' column and rename 'stream_ordering2' into its place."""
def process(txn: Cursor) -> None:
for sql in _REPLACE_STREAM_ORDERING_SQL_COMMANDS:
logger.info("completing stream_ordering migration: %s", sql)
txn.execute(sql)
await self.db_pool.runInteraction(
"_background_replace_stream_ordering_column", process
)
await self.db_pool.updates._end_background_update(
_BackgroundUpdates.REPLACE_STREAM_ORDERING_COLUMN
)
return 0
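
The three registrations above form a small pipeline: a resumable batch copy, an index build that depends on it, and a one-shot column swap that runs last. A toy model of the resumable-batch shape (in reality the progress dict lives in the `background_updates` table):

def run_batch(progress: dict, rows: list, batch_size: int) -> int:
    """Handle one batch, record how far we got, and report the number of
    rows handled; returning 0 tells the scheduler the update is done."""
    start = progress.get("last", 0)
    batch = rows[start : start + batch_size]
    progress["last"] = start + batch_size  # persisted between runs in reality
    return len(batch)


progress: dict = {}
rows = list(range(10))
while run_batch(progress, rows, batch_size=3):
    pass  # each iteration stands in for one scheduled background run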

View file

@@ -53,6 +53,9 @@ class TokenLookupResult:
valid_until_ms: The timestamp the token expires, if any.
token_owner: The "owner" of the token. This is either the same as the
user, or a server admin who is logged in as the user.
token_used: True if this token was used at least once in a request.
This field can be out of date since `get_user_by_access_token` is
cached.
"""
user_id = attr.ib(type=str)
@@ -62,6 +65,7 @@ class TokenLookupResult:
device_id = attr.ib(type=Optional[str], default=None)
valid_until_ms = attr.ib(type=Optional[int], default=None)
token_owner = attr.ib(type=str)
token_used = attr.ib(type=bool, default=False)
# Make the token owner default to the user ID, which is the common case.
@token_owner.default
@@ -69,6 +73,29 @@ class TokenLookupResult:
return self.user_id
@attr.s(frozen=True, slots=True)
class RefreshTokenLookupResult:
"""Result of looking up a refresh token."""
user_id = attr.ib(type=str)
"""The user this token belongs to."""
device_id = attr.ib(type=str)
"""The device associated with this refresh token."""
token_id = attr.ib(type=int)
"""The ID of this refresh token."""
next_token_id = attr.ib(type=Optional[int])
"""The ID of the refresh token which replaced this one."""
has_next_refresh_token_been_refreshed = attr.ib(type=bool)
"""True if the next refresh token was used for another refresh."""
has_next_access_token_been_used = attr.ib(type=bool)
"""True if the next access token was already used at least once."""
class RegistrationWorkerStore(CacheInvalidationWorkerStore):
def __init__(
self,
@@ -441,7 +468,8 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
access_tokens.id as token_id,
access_tokens.device_id,
access_tokens.valid_until_ms,
access_tokens.user_id as token_owner
access_tokens.user_id as token_owner,
access_tokens.used as token_used
FROM users
INNER JOIN access_tokens on users.name = COALESCE(puppets_user_id, access_tokens.user_id)
WHERE token = ?
@@ -449,8 +477,15 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
txn.execute(sql, (token,))
rows = self.db_pool.cursor_to_dict(txn)
if rows:
return TokenLookupResult(**rows[0])
row = rows[0]
# This field is nullable, so make sure it comes out as a boolean
if row["token_used"] is None:
row["token_used"] = False
return TokenLookupResult(**row)
return None
@@ -1072,6 +1107,111 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
desc="update_access_token_last_validated",
)
@cached()
async def mark_access_token_as_used(self, token_id: int) -> None:
"""
Mark the access token as used, which invalidates the refresh token used
to obtain it.
Because get_user_by_access_token is cached, this function might be
called multiple times for the same token, effectively doing unnecessary
SQL updates. Because updating the `used` field only goes one way (from
False to True) it is safe to cache this function as well to avoid this
issue.
Args:
token_id: The ID of the access token to update.
Raises:
StoreError if there was a problem updating this.
"""
await self.db_pool.simple_update_one(
"access_tokens",
{"id": token_id},
{"used": True},
desc="mark_access_token_as_used",
)
async def lookup_refresh_token(
self, token: str
) -> Optional[RefreshTokenLookupResult]:
"""Lookup a refresh token with hints about its validity."""
def _lookup_refresh_token_txn(txn) -> Optional[RefreshTokenLookupResult]:
txn.execute(
"""
SELECT
rt.id token_id,
rt.user_id,
rt.device_id,
rt.next_token_id,
(nrt.next_token_id IS NOT NULL) has_next_refresh_token_been_refreshed,
at.used has_next_access_token_been_used
FROM refresh_tokens rt
LEFT JOIN refresh_tokens nrt ON rt.next_token_id = nrt.id
LEFT JOIN access_tokens at ON at.refresh_token_id = nrt.id
WHERE rt.token = ?
""",
(token,),
)
row = txn.fetchone()
if row is None:
return None
return RefreshTokenLookupResult(
token_id=row[0],
user_id=row[1],
device_id=row[2],
next_token_id=row[3],
has_next_refresh_token_been_refreshed=row[4],
# This column is nullable, so ensure it comes out as a boolean
has_next_access_token_been_used=(row[5] or False),
)
return await self.db_pool.runInteraction(
"lookup_refresh_token", _lookup_refresh_token_txn
)
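
The two `has_next_*` hints exist so the auth handler can decide whether a presented refresh token is stale. A hedged sketch of that decision, following the rule spelled out in this commit's tests (a used token becomes invalid once its successor refresh token or access token has been exercised):

def refresh_token_is_invalid(
    was_used: bool,  # i.e. next_token_id is not None
    has_next_refresh_token_been_refreshed: bool,
    has_next_access_token_been_used: bool,
) -> bool:
    # Replaying an unused token is fine; replaying one whose successor
    # has already been exercised suggests the token was stolen.
    return was_used and (
        has_next_refresh_token_been_refreshed or has_next_access_token_been_used
    )


assert not refresh_token_is_invalid(False, False, False)  # never used: OK
assert refresh_token_is_invalid(True, True, False)  # successor refreshed: reject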
async def replace_refresh_token(self, token_id: int, next_token_id: int) -> None:
"""
Set the successor of a refresh token, removing the existing successor
if any.
Args:
token_id: ID of the refresh token to update.
next_token_id: ID of its successor.
"""
def _replace_refresh_token_txn(txn) -> None:
# First check if the token already had a successor
old_next_token_id = self.db_pool.simple_select_one_onecol_txn(
txn,
"refresh_tokens",
{"id": token_id},
"next_token_id",
allow_none=True,
)
self.db_pool.simple_update_one_txn(
txn,
"refresh_tokens",
{"id": token_id},
{"next_token_id": next_token_id},
)
# Delete the old "next" token if it exists. This should cascade and
# delete the associated access_token
if old_next_token_id is not None:
self.db_pool.simple_delete_one_txn(
txn,
"refresh_tokens",
{"id": old_next_token_id},
)
await self.db_pool.runInteraction(
"replace_refresh_token", _replace_refresh_token_txn
)
class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):
def __init__(
@@ -1263,6 +1403,7 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
self._ignore_unknown_session_error = hs.config.request_token_inhibit_3pid_errors
self._access_tokens_id_gen = IdGenerator(db_conn, "access_tokens", "id")
self._refresh_tokens_id_gen = IdGenerator(db_conn, "refresh_tokens", "id")
async def add_access_token_to_user(
self,
@@ -1271,14 +1412,18 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
device_id: Optional[str],
valid_until_ms: Optional[int],
puppets_user_id: Optional[str] = None,
refresh_token_id: Optional[int] = None,
) -> int:
"""Adds an access token for the given user.
Args:
user_id: The user ID.
token: The new access token to add.
device_id: ID of the device to associate with the access token
device_id: ID of the device to associate with the access token.
valid_until_ms: The timestamp at which the token expires. None for no expiry.
puppets_user_id: The other user ID this token is allowed to act as, if any.
refresh_token_id: ID of the refresh token generated alongside this
access token.
Raises:
StoreError if there was a problem adding this.
Returns:
@@ -1297,12 +1442,47 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
"valid_until_ms": valid_until_ms,
"puppets_user_id": puppets_user_id,
"last_validated": now,
"refresh_token_id": refresh_token_id,
"used": False,
},
desc="add_access_token_to_user",
)
return next_id
async def add_refresh_token_to_user(
self,
user_id: str,
token: str,
device_id: Optional[str],
) -> int:
"""Adds a refresh token for the given user.
Args:
user_id: The user ID.
token: The new refresh token to add.
device_id: ID of the device to associate with the refresh token.
Raises:
StoreError if there was a problem adding this.
Returns:
The token ID
"""
next_id = self._refresh_tokens_id_gen.get_next()
await self.db_pool.simple_insert(
"refresh_tokens",
{
"id": next_id,
"user_id": user_id,
"device_id": device_id,
"token": token,
"next_token_id": None,
},
desc="add_refresh_token_to_user",
)
return next_id
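
The intended calling pattern, as used by the registration code elsewhere in this commit, is to create the refresh token first and then link the access token to it through `refresh_token_id`. A hedged sketch with invented token strings (`store` is the datastore):

async def issue_linked_tokens(store, user_id: str, device_id: str) -> None:
    refresh_token_id = await store.add_refresh_token_to_user(
        user_id=user_id,
        token="syr_refresh_abc",
        device_id=device_id,
    )
    await store.add_access_token_to_user(
        user_id=user_id,
        token="syt_access_def",
        device_id=device_id,
        valid_until_ms=None,
        refresh_token_id=refresh_token_id,  # links the pair for rotation
    )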
def _set_device_for_access_token_txn(self, txn, token: str, device_id: str) -> str:
old_device_id = self.db_pool.simple_select_one_onecol_txn(
txn, "access_tokens", {"token": token}, "device_id"
@@ -1545,7 +1725,7 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
device_id: Optional[str] = None,
) -> List[Tuple[str, int, Optional[str]]]:
"""
Invalidate access tokens belonging to a user
Invalidate access and refresh tokens belonging to a user
Args:
user_id: ID of user the tokens belong to
@@ -1565,7 +1745,13 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
items = keyvalues.items()
where_clause = " AND ".join(k + " = ?" for k, _ in items)
values = [v for _, v in items] # type: List[Union[str, int]]
# Conveniently, refresh_tokens and access_tokens both use the user_id and device_id
# fields, so the same WHERE clause and values can be reused. The one caveat is the
# `except_token_id` param, which is tricky to get right for refresh tokens, so it is
# not handled yet; it appears to be used only by the "set password" handler.
refresh_where_clause = where_clause
refresh_values = values.copy()
if except_token_id:
# TODO: support that for refresh tokens
where_clause += " AND id != ?"
values.append(except_token_id)
@@ -1583,6 +1769,11 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
txn.execute("DELETE FROM access_tokens WHERE %s" % where_clause, values)
txn.execute(
"DELETE FROM refresh_tokens WHERE %s" % refresh_where_clause,
refresh_values,
)
return tokens_and_devices
return await self.db_pool.runInteraction("user_delete_access_tokens", f)
@@ -1599,6 +1790,14 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
await self.db_pool.runInteraction("delete_access_token", f)
async def delete_refresh_token(self, refresh_token: str) -> None:
def f(txn):
self.db_pool.simple_delete_one_txn(
txn, table="refresh_tokens", keyvalues={"token": refresh_token}
)
await self.db_pool.runInteraction("delete_refresh_token", f)
async def add_user_pending_deactivation(self, user_id: str) -> None:
"""
Adds a user to the table of users who need to be parted from all the rooms they're

View file

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
SCHEMA_VERSION = 59
SCHEMA_VERSION = 60
"""Represents the expectations made by the codebase about the database schema
This should be incremented whenever the codebase changes its requirements on the

View file

@@ -0,0 +1,34 @@
/* Copyright 2021 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- Holds MSC2918 refresh tokens
CREATE TABLE refresh_tokens (
id BIGINT PRIMARY KEY,
user_id TEXT NOT NULL,
device_id TEXT NOT NULL,
token TEXT NOT NULL,
-- When consumed, a new refresh token is generated, which is tracked by
-- this foreign key
next_token_id BIGINT REFERENCES refresh_tokens (id) ON DELETE CASCADE,
UNIQUE(token)
);
-- Add a reference to the refresh token generated alongside each access token
ALTER TABLE "access_tokens"
ADD COLUMN refresh_token_id BIGINT REFERENCES refresh_tokens (id) ON DELETE CASCADE;
-- Add a flag recording whether the token has already been used
ALTER TABLE "access_tokens"
ADD COLUMN used BOOLEAN;
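
The `ON DELETE CASCADE` on `access_tokens.refresh_token_id` is what makes deleting a superseded refresh token also remove the access token issued with it (relied upon by `replace_refresh_token` above). A self-contained demonstration of that behaviour, sketched with SQLite and simplified column types:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this opt-in
conn.execute(
    "CREATE TABLE refresh_tokens (id INTEGER PRIMARY KEY, token TEXT UNIQUE)"
)
conn.execute(
    "CREATE TABLE access_tokens (id INTEGER PRIMARY KEY, token TEXT,"
    " refresh_token_id INTEGER REFERENCES refresh_tokens (id) ON DELETE CASCADE)"
)
conn.execute("INSERT INTO refresh_tokens VALUES (1, 'syr_old')")
conn.execute("INSERT INTO access_tokens VALUES (10, 'syt_old', 1)")
conn.execute("DELETE FROM refresh_tokens WHERE id = 1")
print(conn.execute("SELECT count(*) FROM access_tokens").fetchone())  # (0,)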

View file

@@ -0,0 +1,40 @@
/* Copyright 2021 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- This migration handles the process of changing the type of `stream_ordering` to
-- a BIGINT.
--
-- Note that this is only a problem on postgres as sqlite only has one "integer" type
-- which can cope with values up to 2^63.
-- First add a new column to contain the bigger stream_ordering
ALTER TABLE events ADD COLUMN stream_ordering2 BIGINT;
-- Create a rule which will populate it for new rows.
CREATE OR REPLACE RULE "populate_stream_ordering2" AS
ON INSERT TO events
DO UPDATE events SET stream_ordering2=NEW.stream_ordering WHERE stream_ordering=NEW.stream_ordering;
-- Start a bg process to populate it for old events
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(6001, 'populate_stream_ordering2', '{}');
-- ... and another to build an index on it
INSERT INTO background_updates (ordering, update_name, progress_json, depends_on) VALUES
(6001, 'index_stream_ordering2', '{}', 'populate_stream_ordering2');
-- ... and another to do the switcheroo
INSERT INTO background_updates (ordering, update_name, progress_json, depends_on) VALUES
(6001, 'replace_stream_ordering_column', '{}', 'index_stream_ordering2');

View file

@@ -58,6 +58,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
user_id=self.test_user, token_id=5, device_id="device"
)
self.store.get_user_by_access_token = simple_async_mock(user_info)
self.store.mark_access_token_as_used = simple_async_mock(None)
request = Mock(args={})
request.args[b"access_token"] = [self.test_token]

View file

@@ -205,9 +205,7 @@ class FederationKnockingTestCase(
# Have this homeserver skip event auth checks. This is necessary because
# event auth checks ensure that events were signed by the sender's homeserver.
async def _check_event_auth(
origin, event, context, state, auth_events, backfilled
):
async def _check_event_auth(origin, event, context, *args, **kwargs):
return context
homeserver.get_federation_handler()._check_event_auth = _check_event_auth

View file

@@ -257,7 +257,7 @@ class DehydrationTestCase(unittest.HomeserverTestCase):
self.assertEqual(device_data, {"device_data": {"foo": "bar"}})
# Create a new login for the user and dehydrated the device
device_id, access_token = self.get_success(
device_id, access_token, _expiration_time, _refresh_token = self.get_success(
self.registration.register_device(
user_id=user_id,
device_id=None,

View file

@@ -251,7 +251,7 @@ class FederationTestCase(unittest.HomeserverTestCase):
join_event.signatures[other_server] = {"x": "y"}
with LoggingContext("send_join"):
d = run_in_background(
self.handler.on_send_join_request, other_server, join_event
self.handler.on_send_membership_event, other_server, join_event
)
self.get_success(d)

View file

@@ -19,7 +19,7 @@ from synapse.api.constants import UserTypes
from synapse.api.errors import Codes, ResourceLimitError, SynapseError
from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.spam_checker_api import RegistrationBehaviour
from synapse.types import RoomAlias, UserID, create_requester
from synapse.types import RoomAlias, RoomID, UserID, create_requester
from tests.test_utils import make_awaitable
from tests.unittest import override_config
@@ -719,3 +719,50 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
)
return user_id, token
class RemoteAutoJoinTestCase(unittest.HomeserverTestCase):
"""Tests auto-join on remote rooms."""
def make_homeserver(self, reactor, clock):
self.room_id = "!roomid:remotetest"
async def update_membership(*args, **kwargs):
pass
async def lookup_room_alias(*args, **kwargs):
return RoomID.from_string(self.room_id), ["remotetest"]
self.room_member_handler = Mock(spec=["update_membership", "lookup_room_alias"])
self.room_member_handler.update_membership.side_effect = update_membership
self.room_member_handler.lookup_room_alias.side_effect = lookup_room_alias
hs = self.setup_test_homeserver(room_member_handler=self.room_member_handler)
return hs
def prepare(self, reactor, clock, hs):
self.handler = self.hs.get_registration_handler()
self.store = self.hs.get_datastore()
@override_config({"auto_join_rooms": ["#room:remotetest"]})
def test_auto_create_auto_join_remote_room(self):
"""Tests that we don't attempt to create remote rooms, and that we don't attempt
to invite ourselves to rooms we're not in."""
# Register a first user; this should call _create_and_join_rooms
self.get_success(self.handler.register_user(localpart="jeff"))
_, kwargs = self.room_member_handler.update_membership.call_args
self.assertEqual(kwargs["room_id"], self.room_id)
self.assertEqual(kwargs["action"], "join")
self.assertEqual(kwargs["remote_room_hosts"], ["remotetest"])
# Register a second user; this should call _join_rooms
self.get_success(self.handler.register_user(localpart="jeff2"))
_, kwargs = self.room_member_handler.update_membership.call_args
self.assertEqual(kwargs["room_id"], self.room_id)
self.assertEqual(kwargs["action"], "join")
self.assertEqual(kwargs["remote_room_hosts"], ["remotetest"])

View file

@@ -228,7 +228,7 @@ class FederationSenderTestCase(BaseMultiWorkerStreamTestCase):
builder.build(prev_event_ids=prev_event_ids, auth_event_ids=None)
)
self.get_success(federation.on_send_join_request(remote_server, join_event))
self.get_success(federation.on_send_membership_event(remote_server, join_event))
self.replicate()
return room

View file

@@ -20,7 +20,7 @@ import synapse.rest.admin
from synapse.api.constants import LoginType
from synapse.handlers.ui_auth.checkers import UserInteractiveAuthChecker
from synapse.rest.client.v1 import login
from synapse.rest.client.v2_alpha import auth, devices, register
from synapse.rest.client.v2_alpha import account, auth, devices, register
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.types import JsonDict, UserID
@@ -498,3 +498,221 @@ class UIAuthTests(unittest.HomeserverTestCase):
self.delete_device(
self.user_tok, self.device_id, 403, body={"auth": {"session": session_id}}
)
class RefreshAuthTests(unittest.HomeserverTestCase):
servlets = [
auth.register_servlets,
account.register_servlets,
login.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
register.register_servlets,
]
hijack_auth = False
def prepare(self, reactor, clock, hs):
self.user_pass = "pass"
self.user = self.register_user("test", self.user_pass)
def test_login_issue_refresh_token(self):
"""
A login response should include a refresh_token only if one was requested.
"""
# Test login
body = {"type": "m.login.password", "user": "test", "password": self.user_pass}
login_without_refresh = self.make_request(
"POST", "/_matrix/client/r0/login", body
)
self.assertEqual(login_without_refresh.code, 200, login_without_refresh.result)
self.assertNotIn("refresh_token", login_without_refresh.json_body)
login_with_refresh = self.make_request(
"POST",
"/_matrix/client/r0/login?org.matrix.msc2918.refresh_token=true",
body,
)
self.assertEqual(login_with_refresh.code, 200, login_with_refresh.result)
self.assertIn("refresh_token", login_with_refresh.json_body)
self.assertIn("expires_in_ms", login_with_refresh.json_body)
def test_register_issue_refresh_token(self):
"""
A register response should include a refresh_token only if one was requested.
"""
register_without_refresh = self.make_request(
"POST",
"/_matrix/client/r0/register",
{
"username": "test2",
"password": self.user_pass,
"auth": {"type": LoginType.DUMMY},
},
)
self.assertEqual(
register_without_refresh.code, 200, register_without_refresh.result
)
self.assertNotIn("refresh_token", register_without_refresh.json_body)
register_with_refresh = self.make_request(
"POST",
"/_matrix/client/r0/register?org.matrix.msc2918.refresh_token=true",
{
"username": "test3",
"password": self.user_pass,
"auth": {"type": LoginType.DUMMY},
},
)
self.assertEqual(register_with_refresh.code, 200, register_with_refresh.result)
self.assertIn("refresh_token", register_with_refresh.json_body)
self.assertIn("expires_in_ms", register_with_refresh.json_body)
def test_token_refresh(self):
"""
A refresh token can be used to issue a new access token.
"""
body = {"type": "m.login.password", "user": "test", "password": self.user_pass}
login_response = self.make_request(
"POST",
"/_matrix/client/r0/login?org.matrix.msc2918.refresh_token=true",
body,
)
self.assertEqual(login_response.code, 200, login_response.result)
refresh_response = self.make_request(
"POST",
"/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
{"refresh_token": login_response.json_body["refresh_token"]},
)
self.assertEqual(refresh_response.code, 200, refresh_response.result)
self.assertIn("access_token", refresh_response.json_body)
self.assertIn("refresh_token", refresh_response.json_body)
self.assertIn("expires_in_ms", refresh_response.json_body)
# The access and refresh tokens should be different from the original ones after refresh
self.assertNotEqual(
login_response.json_body["access_token"],
refresh_response.json_body["access_token"],
)
self.assertNotEqual(
login_response.json_body["refresh_token"],
refresh_response.json_body["refresh_token"],
)
@override_config({"access_token_lifetime": "1m"})
def test_refresh_token_expiration(self):
"""
The access token should expire after the lifetime specified in the config.
"""
body = {"type": "m.login.password", "user": "test", "password": self.user_pass}
login_response = self.make_request(
"POST",
"/_matrix/client/r0/login?org.matrix.msc2918.refresh_token=true",
body,
)
self.assertEqual(login_response.code, 200, login_response.result)
self.assertApproximates(
login_response.json_body["expires_in_ms"], 60 * 1000, 100
)
refresh_response = self.make_request(
"POST",
"/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
{"refresh_token": login_response.json_body["refresh_token"]},
)
self.assertEqual(refresh_response.code, 200, refresh_response.result)
self.assertApproximates(
refresh_response.json_body["expires_in_ms"], 60 * 1000, 100
)
def test_refresh_token_invalidation(self):
"""Refresh tokens are invalidated after first use of the next token.
A refresh token is considered invalid if:
- it was already used at least once
- and either
- the next access token was used
- the next refresh token was used
The chain of tokens goes like this:
login -|-> first_refresh -> third_refresh (fails)
|-> second_refresh -> fifth_refresh
|-> fourth_refresh (fails)
"""
body = {"type": "m.login.password", "user": "test", "password": self.user_pass}
login_response = self.make_request(
"POST",
"/_matrix/client/r0/login?org.matrix.msc2918.refresh_token=true",
body,
)
self.assertEqual(login_response.code, 200, login_response.result)
# This first refresh should work properly
first_refresh_response = self.make_request(
"POST",
"/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
{"refresh_token": login_response.json_body["refresh_token"]},
)
self.assertEqual(
first_refresh_response.code, 200, first_refresh_response.result
)
# This one as well, since the token in the first one was never used
second_refresh_response = self.make_request(
"POST",
"/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
{"refresh_token": login_response.json_body["refresh_token"]},
)
self.assertEqual(
second_refresh_response.code, 200, second_refresh_response.result
)
# This one should not, since the token from the first refresh is not valid anymore
third_refresh_response = self.make_request(
"POST",
"/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
{"refresh_token": first_refresh_response.json_body["refresh_token"]},
)
self.assertEqual(
third_refresh_response.code, 401, third_refresh_response.result
)
# The associated access token should also be invalid
whoami_response = self.make_request(
"GET",
"/_matrix/client/r0/account/whoami",
access_token=first_refresh_response.json_body["access_token"],
)
self.assertEqual(whoami_response.code, 401, whoami_response.result)
# But all other tokens should work (they will expire after some time)
for access_token in [
second_refresh_response.json_body["access_token"],
login_response.json_body["access_token"],
]:
whoami_response = self.make_request(
"GET", "/_matrix/client/r0/account/whoami", access_token=access_token
)
self.assertEqual(whoami_response.code, 200, whoami_response.result)
# Now that the access token from the last valid refresh was used once, refreshing with the N-1 token should fail
fourth_refresh_response = self.make_request(
"POST",
"/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
{"refresh_token": login_response.json_body["refresh_token"]},
)
self.assertEqual(
fourth_refresh_response.code, 403, fourth_refresh_response.result
)
# But refreshing from the last valid refresh token still works
fifth_refresh_response = self.make_request(
"POST",
"/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
{"refresh_token": second_refresh_response.json_body["refresh_token"]},
)
self.assertEqual(
fifth_refresh_response.code, 200, fifth_refresh_response.result
)

View file

@@ -41,35 +41,7 @@ class FilterTestCase(unittest.HomeserverTestCase):
channel = self.make_request("GET", "/sync")
self.assertEqual(channel.code, 200)
self.assertTrue(
{
"next_batch",
"rooms",
"presence",
"account_data",
"to_device",
"device_lists",
}.issubset(set(channel.json_body.keys()))
)
def test_sync_presence_disabled(self):
"""
When presence is disabled, the key does not appear in /sync.
"""
self.hs.config.use_presence = False
channel = self.make_request("GET", "/sync")
self.assertEqual(channel.code, 200)
self.assertTrue(
{
"next_batch",
"rooms",
"account_data",
"to_device",
"device_lists",
}.issubset(set(channel.json_body.keys()))
)
self.assertIn("next_batch", channel.json_body)
class SyncFilterTestCase(unittest.HomeserverTestCase):

View file

@@ -306,8 +306,9 @@ class TestResourceLimitsServerNoticesWithRealRooms(unittest.HomeserverTestCase):
channel = self.make_request("GET", "/sync?timeout=0", access_token=tok)
invites = channel.json_body["rooms"]["invite"]
self.assertEqual(len(invites), 0, invites)
self.assertNotIn(
"rooms", channel.json_body, "Got invites without server notice"
)
def test_invite_with_notice(self):
"""Tests that, if the MAU limit is hit, the server notices user invites each user
@@ -364,7 +365,8 @@ class TestResourceLimitsServerNoticesWithRealRooms(unittest.HomeserverTestCase):
# We could also pick another user and sync with it, which would return an
# invite to a system notices room, but it doesn't matter which user we're
# using so we use the last one because it saves us an extra sync.
invites = channel.json_body["rooms"]["invite"]
if "rooms" in channel.json_body:
invites = channel.json_body["rooms"]["invite"]
# Make sure we have an invite to process.
self.assertEqual(len(invites), 1, invites)