Merge branch 'release-v0.4.0' of github.com:matrix-org/synapse

commit 449739e6a3
88 changed files with 1645 additions and 6704 deletions

CHANGES.rst (11 changes)

@@ -1,3 +1,14 @@
+Changes in synapse 0.4.0 (2014-10-17)
+=====================================
+This server includes changes to the federation protocol that are not backwards
+compatible.
+
+The Matrix specification has been moved to a separate git repository.
+
+Homeserver:
+ * Sign federation transactions.
+ * Rename timestamp keys in PDUs.
+
 Changes in synapse 0.3.4 (2014-09-25)
 =====================================
 This version adds support for using a TURN server. See docs/turn-howto.rst on

README.rst (83 changes)

@@ -20,18 +20,21 @@ The overall architecture is::
 WARNING
 =======

-**Synapse is currently in a state of rapid development, and not all features are yet functional.
-Critically, some security features are still in development, which means Synapse can *not*
-be considered secure or reliable at this point.** For instance:
+**Synapse is currently in a state of rapid development, and not all features
+are yet functional. Critically, some security features are still in
+development, which means Synapse can *not* be considered secure or reliable at
+this point.** For instance:

 - **SSL Certificates used by server-server federation are not yet validated.**
 - **Room permissions are not yet enforced on traffic received via federation.**
-- **Homeservers do not yet cryptographically sign their events to avoid tampering**
+- **Homeservers do not yet cryptographically sign their events to avoid
+  tampering**
 - Default configuration provides open signup to the service from the internet

-Despite this, we believe Synapse is more than useful as a way for experimenting and
-exploring Synapse, and the missing features will land shortly. **Until then, please do *NOT*
-use Synapse for any remotely important or secure communication.**
+Despite this, we believe Synapse is more than useful as a way for experimenting
+and exploring Synapse, and the missing features will land shortly. **Until
+then, please do *NOT* use Synapse for any remotely important or secure
+communication.**


 Quick Start

@@ -46,17 +49,21 @@ To get up and running:
 - To simply play with an **existing** homeserver you can
   just go straight to http://matrix.org/alpha.

-- To run your own **private** homeserver on localhost:8008, install synapse with
-  ``python setup.py develop --user`` and then run ``./synctl start`` twice (once to
-  generate a config; once to actually run) - you will find a webclient running at
-  http://localhost:8008. Please use a recent Chrome, Safari or Firefox for now...
+- To run your own **private** homeserver on localhost:8008, generate a basic
+  config file: ``./synctl start`` will give you instructions on how to do this.
+  For this purpose, you can use 'localhost' or your hostname as a server name.
+  Once you've done so, running ``./synctl start`` again will start your private
+  homeserver. You will find a webclient running at http://localhost:8008.
+  Please use a recent Chrome or Firefox for now (or Safari if you don't need
+  VoIP support).

-- To run a **public** homeserver and let it exchange messages with other homeservers
-  and participate in the global Matrix federation, you must expose port 8448 to the
-  internet and edit homeserver.yaml to specify server_name (the public DNS entry for
-  this server) and then run ``synctl start``. If you changed the server_name, you may
-  need to move the old database (homeserver.db) out of the way first. Then come join
-  ``#matrix:matrix.org`` and say hi! :)
+- To run a **public** homeserver and let it exchange messages with other
+  homeservers and participate in the global Matrix federation, you must expose
+  port 8448 to the internet and edit homeserver.yaml to specify server_name
+  (the public DNS entry for this server) and then run ``synctl start``. If you
+  changed the server_name, you may need to move the old database
+  (homeserver.db) out of the way first. Then come join ``#matrix:matrix.org``
+  and say hi! :)

 For more detailed setup instructions, please see further down this document.

@@ -80,8 +87,8 @@ which handle:
 - Placing 1:1 VoIP and Video calls

 These APIs are intended to be implemented on a wide range of servers, services
-and clients, letting developers build messaging and VoIP functionality on top of
-the entirely open Matrix ecosystem rather than using closed or proprietary
+and clients, letting developers build messaging and VoIP functionality on top
+of the entirely open Matrix ecosystem rather than using closed or proprietary
 solutions. The hope is for Matrix to act as the building blocks for a new
 generation of fully open and interoperable messaging and VoIP apps for the
 internet.

@@ -96,17 +103,17 @@ In Matrix, every user runs one or more Matrix clients, which connect through to
 a Matrix homeserver which stores all their personal chat history and user
 account information - much as a mail client connects through to an IMAP/SMTP
 server. Just like email, you can either run your own Matrix homeserver and
-control and own your own communications and history or use one hosted by someone
-else (e.g. matrix.org) - there is no single point of control or mandatory
-service provider in Matrix, unlike WhatsApp, Facebook, Hangouts, etc.
+control and own your own communications and history or use one hosted by
+someone else (e.g. matrix.org) - there is no single point of control or
+mandatory service provider in Matrix, unlike WhatsApp, Facebook, Hangouts, etc.

 Synapse ships with two basic demo Matrix clients: webclient (a basic group chat
 web client demo implemented in AngularJS) and cmdclient (a basic Python
 command line utility which lets you easily see what the JSON APIs are up to).

 We'd like to invite you to take a look at the Matrix spec, try to run a
-homeserver, and join the existing Matrix chatrooms already out there, experiment
-with the APIs and the demo clients, and let us know your thoughts at
+homeserver, and join the existing Matrix chatrooms already out there,
+experiment with the APIs and the demo clients, and let us know your thoughts at
 https://github.com/matrix-org/synapse/issues or at matrix@matrix.org.

 Thanks for trying Matrix!

@@ -136,20 +143,20 @@ to install by making setup.py do so, in --user mode::
     $ python setup.py develop --user

 You'll need a version of setuptools new enough to know about git, so you
-may need to also run:
+may need to also run::

     $ sudo apt-get install python-pip
     $ sudo pip install --upgrade setuptools

 If you don't have access to github, then you may need to install ``syutil``
-manually by checking it out and running ``python setup.py develop --user`` on it
-too.
+manually by checking it out and running ``python setup.py develop --user`` on
+it too.

 If you get errors about ``sodium.h`` being missing, you may also need to
 manually install a newer PyNaCl via pip as setuptools installs an old one. Or
 you can check PyNaCl out of git directly (https://github.com/pyca/pynacl) and
-install it. Installing PyNaCl using pip may also work (remember to remove any
-other versions installed by setuptools in, for example, ~/.local/lib).
+install it. Installing PyNaCl using pip may also work (remember to remove
+any other versions installed by setuptools in, for example, ~/.local/lib).

 On OSX, if you encounter ``clang: error: unknown argument: '-mno-fused-madd'``
 you will need to ``export CFLAGS=-Qunused-arguments``.

@@ -185,9 +192,9 @@ be publicly visible on the internet, and they will need to know its host name.
 You have two choices here, which will influence the form of your Matrix user
 IDs:

-1) Use the machine's own hostname as available on public DNS in the form of its
-   A or AAAA records. This is easier to set up initially, perhaps for testing,
-   but lacks the flexibility of SRV.
+1) Use the machine's own hostname as available on public DNS in the form of
+   its A or AAAA records. This is easier to set up initially, perhaps for
+   testing, but lacks the flexibility of SRV.

 2) Set up a SRV record for your domain name. This requires you create a SRV
    record in DNS, but gives the flexibility to run the server on your own

@@ -247,7 +254,7 @@ http://localhost:8080. Simply run::
 Running The Demo Web Client
 ===========================

-The homeserver runs a web client by default at http://localhost:8080.
+The homeserver runs a web client by default at https://localhost:8448/.

 If this is the first time you have used the client from that browser (it uses
 HTML5 local storage to remember its config), you will need to log in to your

@@ -267,8 +274,8 @@ account. Your name will take the form of::

 Specify your desired localpart in the topmost box of the "Register for an
 account" form, and click the "Register" button. Hostnames can contain ports if
-required due to lack of SRV records (e.g. @matthew:localhost:8080 on an internal
-synapse sandbox running on localhost)
+required due to lack of SRV records (e.g. @matthew:localhost:8448 on an
+internal synapse sandbox running on localhost)


 Logging In To An Existing Account

@@ -283,9 +290,9 @@ Identity Servers

 The job of authenticating 3PIDs and tracking which 3PIDs are associated with a
 given Matrix user is very security-sensitive, as there is obvious risk of spam
-if it is too easy to sign up for Matrix accounts or harvest 3PID data. Meanwhile
-the job of publishing the end-to-end encryption public keys for Matrix users is
-also very security-sensitive for similar reasons.
+if it is too easy to sign up for Matrix accounts or harvest 3PID data.
+Meanwhile the job of publishing the end-to-end encryption public keys for
+Matrix users is also very security-sensitive for similar reasons.

 Therefore the role of managing trusted identity in the Matrix ecosystem is
 farmed out to a cluster of known trusted ecosystem partners, who run 'Matrix

VERSION (2 changes)

@@ -1 +1 @@
-0.3.4
+0.4.0

@@ -1,9 +0,0 @@
Broad-sweeping stuff which would be nice to have
================================================

- Additional SQL backends beyond sqlite
- homeserver implementation in go
- homeserver implementation in node.js
- client SDKs
- libpurple library
- irssi plugin?

docs/README.rst (new file, 6 additions)

@@ -0,0 +1,6 @@
All matrix-generic documentation now lives in its own project at

github.com/matrix-org/matrix-doc.git

Only Synapse implementation-specific documentation lives here now
(together with some older stuff which will shortly be migrated over to matrix-doc)

@@ -1,636 +0,0 @@
.. TODO kegan
   Room config (specifically: message history,
   public rooms). /register seems super simplistic compared to /login, maybe it
   would be better if /register used the same technique as /login? /register should
   be "user" not "user_id".
|
||||
|
||||
|
||||
How to use the client-server API
|
||||
================================
|
||||
|
||||
This guide focuses on how the client-server APIs *provided by the reference
|
||||
home server* can be used. Since this is specific to a home server
|
||||
implementation, there may be variations in relation to registering/logging in
|
||||
which are not covered in extensive detail in this guide.
|
||||
|
||||
If you haven't already, get a home server up and running on
|
||||
``http://localhost:8008``.
|
||||
|
||||
|
||||
Accounts
|
||||
========
|
||||
Before you can send and receive messages, you must **register** for an account.
If you already have an account, you must **log in** to it.
|
||||
|
||||
`Try out the fiddle`__
|
||||
|
||||
.. __: http://jsfiddle.net/gh/get/jquery/1.8.3/matrix-org/synapse/tree/master/jsfiddles/register_login
|
||||
|
||||
Registration
|
||||
------------
|
||||
The aim of registration is to get a user ID and access token which you will need
|
||||
when accessing other APIs::
|
||||
|
||||
curl -XPOST -d '{"user_id":"example", "password":"wordpass"}' "http://localhost:8008/_matrix/client/api/v1/register"
|
||||
|
||||
{
|
||||
"access_token": "QGV4YW1wbGU6bG9jYWxob3N0.AqdSzFmFYrLrTmteXc",
|
||||
"home_server": "localhost",
|
||||
"user_id": "@example:localhost"
|
||||
}
|
||||
|
||||
NB: If a ``user_id`` is not specified, one will be randomly generated for you.
If you do not specify a ``password``, you will be unable to log in to the account
if you forget the ``access_token``.
|
||||
|
||||
Implementation note: The Matrix specification does not enforce how users
register with a server. It just specifies the URL path and absolute minimum
keys. The reference home server uses a username/password to authenticate users,
but other home servers may use different methods.

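The same registration call can also be made programmatically. The following is
a minimal, illustrative Python sketch (it assumes the third-party ``requests``
library and the placeholder credentials from the curl example above; it is not
part of the reference home server itself)::

  import requests

  BASE = "http://localhost:8008/_matrix/client/api/v1"

  def register(user_id, password):
      # POST /register with the minimum keys used by the reference home server.
      resp = requests.post(BASE + "/register",
                           json={"user_id": user_id, "password": password})
      resp.raise_for_status()
      body = resp.json()
      # The response contains the fully-qualified user ID and an access token.
      return body["user_id"], body["access_token"]

  user_id, token = register("example", "wordpass")

Keep the returned ``access_token`` safe; every other call in this guide passes
it as the ``access_token`` query parameter.
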
Login
|
||||
-----
|
||||
The aim when logging in is to get an access token for your existing user ID::
|
||||
|
||||
curl -XGET "http://localhost:8008/_matrix/client/api/v1/login"
|
||||
|
||||
{
|
||||
"flows": [
|
||||
{
|
||||
"type": "m.login.password"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
curl -XPOST -d '{"type":"m.login.password", "user":"example", "password":"wordpass"}' "http://localhost:8008/_matrix/client/api/v1/login"
|
||||
|
||||
{
|
||||
"access_token": "QGV4YW1wbGU6bG9jYWxob3N0.vRDLTgxefmKWQEtgGd",
|
||||
"home_server": "localhost",
|
||||
"user_id": "@example:localhost"
|
||||
}
|
||||
|
||||
Implementation note: Different home servers may implement different methods for
logging in to an existing account. In order to check that you know how to log
in to this home server, you must perform a ``GET`` first and make sure you
recognise the login type. If you do not know how to log in, you can
``GET /login/fallback``, which will return a basic webpage which you can use to
log in. The reference home server implementation supports username/password
login, but other home servers may support different login methods (e.g. OAuth2).

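To make this GET-then-POST flow concrete, here is a minimal, illustrative
Python sketch (again assuming the third-party ``requests`` library and the
placeholder credentials used above). It checks that ``m.login.password`` is one
of the advertised flows before submitting the login::

  import requests

  BASE = "http://localhost:8008/_matrix/client/api/v1"

  def password_login(user, password):
      # Step 1: ask the home server which login flows it supports.
      flows = requests.get(BASE + "/login").json()["flows"]
      if not any(flow["type"] == "m.login.password" for flow in flows):
          raise RuntimeError("home server does not offer m.login.password")

      # Step 2: submit the username/password login.
      resp = requests.post(BASE + "/login", json={
          "type": "m.login.password",
          "user": user,
          "password": password,
      })
      resp.raise_for_status()
      return resp.json()["access_token"]

  access_token = password_login("example", "wordpass")
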
Communicating
|
||||
=============
|
||||
|
||||
In order to communicate with another user, you must **create a room** with that
|
||||
user and **send a message** to that room.
|
||||
|
||||
`Try out the fiddle`__
|
||||
|
||||
.. __: http://jsfiddle.net/gh/get/jquery/1.8.3/matrix-org/synapse/tree/master/jsfiddles/create_room_send_msg
|
||||
|
||||
Creating a room
|
||||
---------------
|
||||
If you want to send a message to someone, you have to be in a room with them. To
|
||||
create a room::
|
||||
|
||||
curl -XPOST -d '{"room_alias_name":"tutorial"}' "http://localhost:8008/_matrix/client/api/v1/createRoom?access_token=YOUR_ACCESS_TOKEN"
|
||||
|
||||
{
|
||||
"room_alias": "#tutorial:localhost",
|
||||
"room_id": "!CvcvRuDYDzTOzfKKgh:localhost"
|
||||
}
|
||||
|
||||
The "room alias" is a human-readable string which can be shared with other users
|
||||
so they can join a room, rather than the room ID which is a randomly generated
|
||||
string. You can have multiple room aliases per room.
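For illustration, the same call can be made from Python. This is a minimal
sketch assuming the third-party ``requests`` library and a placeholder access
token; the alias localpart matches the curl example above::

  import requests

  BASE = "http://localhost:8008/_matrix/client/api/v1"

  def create_room(access_token, alias_localpart):
      # Create a room and ask the home server to map the alias to it.
      resp = requests.post(BASE + "/createRoom",
                           params={"access_token": access_token},
                           json={"room_alias_name": alias_localpart})
      resp.raise_for_status()
      return resp.json()  # contains "room_id" and "room_alias"

  room = create_room("YOUR_ACCESS_TOKEN", "tutorial")
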
|
||||
|
||||
.. TODO(kegan)
|
||||
How to add/remove aliases from an existing room.
|
||||
|
||||
|
||||
Sending messages
|
||||
----------------
|
||||
You can now send messages to this room::
|
||||
|
||||
curl -XPOST -d '{"msgtype":"m.text", "body":"hello"}' "http://localhost:8008/_matrix/client/api/v1/rooms/%21CvcvRuDYDzTOzfKKgh%3Alocalhost/send/m.room.message?access_token=YOUR_ACCESS_TOKEN"
|
||||
|
||||
{
|
||||
"event_id": "YUwRidLecu"
|
||||
}
|
||||
|
||||
The event ID returned is a unique ID which identifies this message.
|
||||
|
||||
NB: There are no limitations on the types of messages which can be exchanged.
The only requirement is that ``"msgtype"`` is specified. The Matrix
specification outlines the following standard types: ``m.text``, ``m.image``,
``m.audio``, ``m.video``, ``m.location``, ``m.emote``. See the specification for
more information on these types.

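As a programmatic counterpart to the curl call above, a minimal, illustrative
Python sketch for sending a text message follows (``requests`` is assumed, and
the room ID and access token are the placeholder values used in this guide).
Note that the room ID must be URL-encoded before it is placed in the path::

  import urllib.parse

  import requests

  BASE = "http://localhost:8008/_matrix/client/api/v1"

  def send_text(room_id, access_token, text):
      # Room IDs contain '!' and ':', so escape them for use in the URL path.
      path = "/rooms/%s/send/m.room.message" % urllib.parse.quote(room_id, safe="")
      resp = requests.post(BASE + path,
                           params={"access_token": access_token},
                           json={"msgtype": "m.text", "body": text})
      resp.raise_for_status()
      # The returned event ID uniquely identifies this message.
      return resp.json()["event_id"]

  event_id = send_text("!CvcvRuDYDzTOzfKKgh:localhost", "YOUR_ACCESS_TOKEN", "hello")
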
Users and rooms
|
||||
===============
|
||||
|
||||
Each room can be configured to allow or disallow certain rules. In particular,
|
||||
these rules may specify if you require an **invitation** from someone already in
|
||||
the room in order to **join the room**. In addition, you may also be able to
|
||||
join a room **via a room alias** if one was set up.
|
||||
|
||||
`Try out the fiddle`__
|
||||
|
||||
.. __: http://jsfiddle.net/gh/get/jquery/1.8.3/matrix-org/synapse/tree/master/jsfiddles/room_memberships
|
||||
|
||||
Inviting a user to a room
|
||||
-------------------------
|
||||
You can directly invite a user to a room like so::
|
||||
|
||||
curl -XPOST -d '{"user_id":"@myfriend:localhost"}' "http://localhost:8008/_matrix/client/api/v1/rooms/%21CvcvRuDYDzTOzfKKgh%3Alocalhost/invite?access_token=YOUR_ACCESS_TOKEN"
|
||||
|
||||
This informs ``@myfriend:localhost`` of the room ID
|
||||
``!CvcvRuDYDzTOzfKKgh:localhost`` and allows them to join the room.
|
||||
|
||||
Joining a room via an invite
|
||||
----------------------------
|
||||
If you receive an invite, you can join the room::
|
||||
|
||||
curl -XPOST -d '{}' "http://localhost:8008/_matrix/client/api/v1/rooms/%21CvcvRuDYDzTOzfKKgh%3Alocalhost/join?access_token=YOUR_ACCESS_TOKEN"
|
||||
|
||||
NB: Only the person invited (``@myfriend:localhost``) can change the membership
|
||||
state to ``"join"``. Repeatedly joining a room does nothing.
|
||||
|
||||
Joining a room via an alias
|
||||
---------------------------
|
||||
Alternatively, if you know the room alias for this room and the room config
|
||||
allows it, you can directly join a room via the alias::
|
||||
|
||||
curl -XPOST -d '{}' "http://localhost:8008/_matrix/client/api/v1/join/%23tutorial%3Alocalhost?access_token=YOUR_ACCESS_TOKEN"
|
||||
|
||||
{
|
||||
"room_id": "!CvcvRuDYDzTOzfKKgh:localhost"
|
||||
}
|
||||
|
||||
You will need to use the room ID when sending messages, not the room alias.
|
||||
|
||||
NB: If the room is configured to be an invite-only room, you will still require
an invite in order to join the room even though you know the room alias. As a
result, it is more common to see room aliases used with public rooms, which do
not require invitations.
|
||||
|
||||
Getting events
|
||||
==============
|
||||
An event is a piece of data that a client may be interested in.
|
||||
It can be a message in a room, a room invite, etc. There are many different ways
|
||||
of getting events, depending on what the client already knows.
|
||||
|
||||
`Try out the fiddle`__
|
||||
|
||||
.. __: http://jsfiddle.net/gh/get/jquery/1.8.3/matrix-org/synapse/tree/master/jsfiddles/event_stream
|
||||
|
||||
Getting all state
|
||||
-----------------
|
||||
If the client doesn't have any information about the rooms the user has been
invited to or has joined, it can get all of the user's state for all rooms::
|
||||
|
||||
curl -XGET "http://localhost:8008/_matrix/client/api/v1/initialSync?access_token=YOUR_ACCESS_TOKEN"
|
||||
|
||||
{
|
||||
"end": "s39_18_0",
|
||||
"presence": [
|
||||
{
|
||||
"content": {
|
||||
"last_active_ago": 1061436,
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
"type": "m.presence"
|
||||
}
|
||||
],
|
||||
"rooms": [
|
||||
{
|
||||
"membership": "join",
|
||||
"messages": {
|
||||
"chunk": [
|
||||
{
|
||||
"content": {
|
||||
"@example:localhost": 10,
|
||||
"default": 0
|
||||
},
|
||||
"event_id": "wAumPSTsWF",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.power_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"join_rule": "public"
|
||||
},
|
||||
"event_id": "jrLVqKHKiI",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.join_rules",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 10
|
||||
},
|
||||
"event_id": "WpmTgsNWUZ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.add_state_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 0
|
||||
},
|
||||
"event_id": "qUMBJyKsTQ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.send_event_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"ban_level": 5,
|
||||
"kick_level": 5
|
||||
},
|
||||
"event_id": "YAaDmKvoUW",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.ops_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join"
|
||||
},
|
||||
"event_id": "RJbPMtCutf",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409665586730,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"body": "hello",
|
||||
"hsob_ts": 1409665660439,
|
||||
"msgtype": "m.text"
|
||||
},
|
||||
"event_id": "YUwRidLecu",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"ts": 1409665660439,
|
||||
"type": "m.room.message",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"membership": "invite"
|
||||
},
|
||||
"event_id": "YjNuBKnPsb",
|
||||
"membership": "invite",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@myfriend:localhost",
|
||||
"ts": 1409666426819,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join",
|
||||
"prev": "join"
|
||||
},
|
||||
"event_id": "KWwdDjNZnm",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409666551582,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join"
|
||||
},
|
||||
"event_id": "JFLVteSvQc",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409666587265,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
}
|
||||
],
|
||||
"end": "s39_18_0",
|
||||
"start": "t1-11_18_0"
|
||||
},
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state": [
|
||||
{
|
||||
"content": {
|
||||
"creator": "@example:localhost"
|
||||
},
|
||||
"event_id": "dMUoqVTZca",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.create",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"@example:localhost": 10,
|
||||
"default": 0
|
||||
},
|
||||
"event_id": "wAumPSTsWF",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.power_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"join_rule": "public"
|
||||
},
|
||||
"event_id": "jrLVqKHKiI",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.join_rules",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 10
|
||||
},
|
||||
"event_id": "WpmTgsNWUZ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.add_state_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 0
|
||||
},
|
||||
"event_id": "qUMBJyKsTQ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.send_event_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"ban_level": 5,
|
||||
"kick_level": 5
|
||||
},
|
||||
"event_id": "YAaDmKvoUW",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.ops_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"membership": "invite"
|
||||
},
|
||||
"event_id": "YjNuBKnPsb",
|
||||
"membership": "invite",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@myfriend:localhost",
|
||||
"ts": 1409666426819,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join"
|
||||
},
|
||||
"event_id": "JFLVteSvQc",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409666587265,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
This returns all the room information for the rooms the user has been invited
to or has joined, as well as all of the presence information relevant to these
rooms. This can be a LOT of data. You may just want the most recent event for
each room, which can be achieved by applying the ``limit`` query parameter to
this request::
|
||||
|
||||
curl -XGET "http://localhost:8008/_matrix/client/api/v1/initialSync?limit=1&access_token=YOUR_ACCESS_TOKEN"
|
||||
|
||||
{
|
||||
"end": "s39_18_0",
|
||||
"presence": [
|
||||
{
|
||||
"content": {
|
||||
"last_active_ago": 1279484,
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
"type": "m.presence"
|
||||
}
|
||||
],
|
||||
"rooms": [
|
||||
{
|
||||
"membership": "join",
|
||||
"messages": {
|
||||
"chunk": [
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join"
|
||||
},
|
||||
"event_id": "JFLVteSvQc",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409666587265,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
}
|
||||
],
|
||||
"end": "s39_18_0",
|
||||
"start": "t10-30_18_0"
|
||||
},
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state": [
|
||||
{
|
||||
"content": {
|
||||
"creator": "@example:localhost"
|
||||
},
|
||||
"event_id": "dMUoqVTZca",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.create",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"@example:localhost": 10,
|
||||
"default": 0
|
||||
},
|
||||
"event_id": "wAumPSTsWF",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.power_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"join_rule": "public"
|
||||
},
|
||||
"event_id": "jrLVqKHKiI",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.join_rules",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 10
|
||||
},
|
||||
"event_id": "WpmTgsNWUZ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.add_state_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 0
|
||||
},
|
||||
"event_id": "qUMBJyKsTQ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.send_event_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"ban_level": 5,
|
||||
"kick_level": 5
|
||||
},
|
||||
"event_id": "YAaDmKvoUW",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.ops_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"membership": "invite"
|
||||
},
|
||||
"event_id": "YjNuBKnPsb",
|
||||
"membership": "invite",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@myfriend:localhost",
|
||||
"ts": 1409666426819,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join"
|
||||
},
|
||||
"event_id": "JFLVteSvQc",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409666587265,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
Getting live state
|
||||
------------------
|
||||
Once you know which rooms the client has previously interacted with, you need to
|
||||
listen for incoming events. This can be done like so::
|
||||
|
||||
curl -XGET "http://localhost:8008/_matrix/client/api/v1/events?access_token=YOUR_ACCESS_TOKEN"
|
||||
|
||||
{
|
||||
"chunk": [],
|
||||
"end": "s39_18_0",
|
||||
"start": "s39_18_0"
|
||||
}
|
||||
|
||||
This will block waiting for an incoming event, timing out after several seconds.
Even if there are no new events (as in the example above), there will be some
pagination stream response keys. The client should make subsequent requests
using the value of the ``"end"`` key (in this case ``s39_18_0``) as the ``from``
query parameter, e.g.
``http://localhost:8008/_matrix/client/api/v1/events?access_token=YOUR_ACCESS_TOKEN&from=s39_18_0``.
This value should be stored so that when your app is reopened after a period of
inactivity, you can resume from where you got up to in the event stream. If it
has been a long period of inactivity, there may be LOTS of events waiting for
the user. In this case, you may wish to get all state instead and then resume
getting live state from a newer end token.

NB: The timeout can be changed by adding a ``timeout`` query parameter, which is
in milliseconds. A timeout of 0 will not block.

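Putting this together, an event-stream consumer can be written as a simple
long-polling loop. The following is a minimal, illustrative Python sketch
(``requests`` is assumed; error handling and persisting the ``end`` token
across restarts are left out)::

  import requests

  BASE = "http://localhost:8008/_matrix/client/api/v1"
  ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

  def stream_events(from_token=None):
      # Long-poll /events, feeding each "end" token back in as "from".
      while True:
          params = {"access_token": ACCESS_TOKEN, "timeout": 30000}
          if from_token is not None:
              params["from"] = from_token
          chunk = requests.get(BASE + "/events", params=params).json()
          for event in chunk["chunk"]:
              print(event["type"], event.get("room_id"))
          # Store this token so the client can resume after a restart.
          from_token = chunk["end"]

  stream_events()
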
Example application
|
||||
-------------------
|
||||
The following example demonstrates registration and login, live event streaming,
|
||||
creating and joining rooms, sending messages, getting member lists and getting
|
||||
historical messages for a room. This covers most functionality of a messaging
|
||||
application.
|
||||
|
||||
`Try out the fiddle`__
|
||||
|
||||
.. __: http://jsfiddle.net/gh/get/jquery/1.8.3/matrix-org/synapse/tree/master/jsfiddles/example_app
|
|
@@ -1,103 +1,3 @@
|
|||
========
|
||||
Presence
|
||||
========
|
||||
|
||||
A description of presence information and visibility between users.
|
||||
|
||||
Overview
|
||||
========
|
||||
|
||||
Each user has the concept of Presence information. This encodes a sense of the
"availability" of that user, suitable for display on other users' clients.
|
||||
|
||||
|
||||
Presence Information
|
||||
====================
|
||||
|
||||
The basic piece of presence information is an enumeration of a small set of
states, such as "free to chat", "online", "busy", or "offline". The default
state, unless the user changes it, is "online". Lower states suggest some amount of
|
||||
decreased availability from normal, which might have some client-side effect
|
||||
like muting notification sounds and suggests to other users not to bother them
|
||||
unless it is urgent. Equally, the "free to chat" state exists to let the user
announce their general willingness to receive messages, more so than the default.
|
||||
|
||||
Home servers should also allow a user to set their state as "hidden" - a state
|
||||
which behaves as offline, but allows the user to see the client state anyway and
|
||||
generally interact with client features such as reading message history or
|
||||
accessing contacts in the address book.
|
||||
|
||||
This basic state field applies to the user as a whole, regardless of how many
|
||||
client devices they have connected. The home server should synchronise this
|
||||
status choice among multiple devices to ensure the user gets a consistent
|
||||
experience.
|
||||
|
||||
Idle Time
|
||||
---------
|
||||
|
||||
As well as the basic state field, the presence information can also show a sense
|
||||
of an "idle timer". This should be maintained individually by the user's
|
||||
clients, and the homeserver can take the highest reported time as that to
|
||||
report. Likely this should be presented in fairly coarse granularity; possibly
|
||||
being limited to letting the home server automatically switch from a "free to
|
||||
chat" or "online" mode into "idle".
|
||||
|
||||
When a user is offline, the Home Server can still report when the user was last
|
||||
seen online, again perhaps in a somewhat coarse manner.
|
||||
|
||||
Device Type
|
||||
-----------
|
||||
|
||||
Client devices that may limit the user experience somewhat (such as "mobile"
|
||||
devices with limited ability to type on a real keyboard or read large amounts of
|
||||
text) should report this to the home server, as this is also useful information
|
||||
to report as "presence" if the user cannot be expected to provide a good typed
|
||||
response to messages.
|
||||
|
||||
|
||||
Presence List
|
||||
=============
|
||||
|
||||
Each user's home server stores a "presence list" for that user. This stores a
|
||||
list of other user IDs the user has chosen to add to it (remembering any ACL
|
||||
Pointer if appropriate).
|
||||
|
||||
To be added to a contact list, the user being added must grant permission. Once
granted, both users' HS(es) store this information, as it allows the user who
has added the contact some more abilities; see below. Since such subscriptions
are likely to be bidirectional, HSes may wish to automatically accept requests
when a reverse subscription already exists.
|
||||
|
||||
As a convenience, presence lists should support the ability to collect users
|
||||
into groups, which could allow things like inviting the entire group to a new
|
||||
("ad-hoc") chat room, or easy interaction with the profile information ACL
|
||||
implementation of the HS.
|
||||
|
||||
|
||||
Presence and Permissions
|
||||
========================
|
||||
|
||||
For a viewing user to be allowed to see the presence information of a target
|
||||
user, either
|
||||
|
||||
* The target user has allowed the viewing user to add them to their presence
|
||||
list, or
|
||||
|
||||
* The two users share at least one room in common
|
||||
|
||||
In the latter case, this allows for clients to display some minimal sense of
|
||||
presence information in a user list for a room.
|
||||
|
||||
Home servers can also use the user's choice of presence state as a signal for
|
||||
how to handle new private one-to-one chat message requests. For example, it
|
||||
might decide:
|
||||
|
||||
"free to chat": accept anything
|
||||
"online": accept from anyone in my addres book list
|
||||
"busy": accept from anyone in this "important people" group in my address
|
||||
book list
|
||||
|
||||
|
||||
API Efficiency
|
||||
==============
|
||||
|
||||
|
|
|
@@ -1,46 +0,0 @@
|
|||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"apis": [
|
||||
{
|
||||
"path": "-login",
|
||||
"description": "Login operations"
|
||||
},
|
||||
{
|
||||
"path": "-registration",
|
||||
"description": "Registration operations"
|
||||
},
|
||||
{
|
||||
"path": "-rooms",
|
||||
"description": "Room operations"
|
||||
},
|
||||
{
|
||||
"path": "-profile",
|
||||
"description": "Profile operations"
|
||||
},
|
||||
{
|
||||
"path": "-presence",
|
||||
"description": "Presence operations"
|
||||
},
|
||||
{
|
||||
"path": "-events",
|
||||
"description": "Event operations"
|
||||
},
|
||||
{
|
||||
"path": "-directory",
|
||||
"description": "Directory operations"
|
||||
}
|
||||
],
|
||||
"authorizations": {
|
||||
"token": {
|
||||
"scopes": []
|
||||
}
|
||||
},
|
||||
"info": {
|
||||
"title": "Matrix Client-Server API Reference",
|
||||
"description": "This contains the client-server API for the reference implementation of the home server",
|
||||
"termsOfServiceUrl": "http://matrix.org",
|
||||
"license": "Apache 2.0",
|
||||
"licenseUrl": "http://www.apache.org/licenses/LICENSE-2.0.html"
|
||||
}
|
||||
}
|
|
@@ -1,85 +0,0 @@
|
|||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"resourcePath": "/directory",
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"apis": [
|
||||
{
|
||||
"path": "/directory/room/{roomAlias}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get the room ID corresponding to this room alias.",
|
||||
"notes": "Volatile: This API is likely to change.",
|
||||
"type": "DirectoryResponse",
|
||||
"nickname": "get_room_id_for_alias",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomAlias",
|
||||
"description": "The room alias.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Create a new mapping from room alias to room ID.",
|
||||
"notes": "Volatile: This API is likely to change.",
|
||||
"type": "void",
|
||||
"nickname": "add_room_alias",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomAlias",
|
||||
"description": "The room alias to set.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The room ID to set.",
|
||||
"required": true,
|
||||
"type": "RoomAliasRequest",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"models": {
|
||||
"DirectoryResponse": {
|
||||
"id": "DirectoryResponse",
|
||||
"properties": {
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The fully-qualified room ID.",
|
||||
"required": true
|
||||
},
|
||||
"servers": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"$ref": "string"
|
||||
},
|
||||
"description": "A list of servers that know about this room.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"RoomAliasRequest": {
|
||||
"id": "RoomAliasRequest",
|
||||
"properties": {
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The room ID to map the alias to.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@@ -1,247 +0,0 @@
|
|||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"resourcePath": "/events",
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"apis": [
|
||||
{
|
||||
"path": "/events",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Listen on the event stream",
|
||||
"notes": "This can only be done by the logged in user. This will block until an event is received, or until the timeout is reached.",
|
||||
"type": "PaginationChunk",
|
||||
"nickname": "get_event_stream",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "from",
|
||||
"description": "The token to stream from.",
|
||||
"required": false,
|
||||
"type": "string",
|
||||
"paramType": "query"
|
||||
},
|
||||
{
|
||||
"name": "timeout",
|
||||
"description": "The maximum time in milliseconds to wait for an event.",
|
||||
"required": false,
|
||||
"type": "integer",
|
||||
"paramType": "query"
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Bad pagination token."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/events/{eventId}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get information about a single event.",
|
||||
"notes": "Get information about a single event.",
|
||||
"type": "Event",
|
||||
"nickname": "get_event",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "eventId",
|
||||
"description": "The event ID to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 404,
|
||||
"message": "Event not found."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/initialSync",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get this user's current state.",
|
||||
"notes": "Get this user's current state.",
|
||||
"type": "InitialSyncResponse",
|
||||
"nickname": "initial_sync",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "limit",
|
||||
"description": "The maximum number of messages to return for each room.",
|
||||
"type": "integer",
|
||||
"paramType": "query",
|
||||
"required": false
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/publicRooms",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get a list of publicly visible rooms.",
|
||||
"type": "PublicRoomsPaginationChunk",
|
||||
"nickname": "get_public_room_list"
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"models": {
|
||||
"PaginationChunk": {
|
||||
"id": "PaginationChunk",
|
||||
"properties": {
|
||||
"start": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the first value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
},
|
||||
"end": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the last value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
},
|
||||
"chunk": {
|
||||
"type": "array",
|
||||
"description": "An array of events.",
|
||||
"required": true,
|
||||
"items": {
|
||||
"$ref": "Event"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"Event": {
|
||||
"id": "Event",
|
||||
"properties": {
|
||||
"event_id": {
|
||||
"type": "string",
|
||||
"description": "An ID which uniquely identifies this event.",
|
||||
"required": true
|
||||
},
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The room in which this event occurred.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"PublicRoomInfo": {
|
||||
"id": "PublicRoomInfo",
|
||||
"properties": {
|
||||
"aliases": {
|
||||
"type": "array",
|
||||
"description": "A list of room aliases for this room.",
|
||||
"items": {
|
||||
"$ref": "string"
|
||||
}
|
||||
},
|
||||
"name": {
|
||||
"type": "string",
|
||||
"description": "The name of the room, as given by the m.room.name state event."
|
||||
},
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The room ID for this public room.",
|
||||
"required": true
|
||||
},
|
||||
"topic": {
|
||||
"type": "string",
|
||||
"description": "The topic of this room, as given by the m.room.topic state event."
|
||||
}
|
||||
}
|
||||
},
|
||||
"PublicRoomsPaginationChunk": {
|
||||
"id": "PublicRoomsPaginationChunk",
|
||||
"properties": {
|
||||
"start": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the first value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
},
|
||||
"end": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the last value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
},
|
||||
"chunk": {
|
||||
"type": "array",
|
||||
"description": "A list of public room data.",
|
||||
"required": true,
|
||||
"items": {
|
||||
"$ref": "PublicRoomInfo"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"InitialSyncResponse": {
|
||||
"id": "InitialSyncResponse",
|
||||
"properties": {
|
||||
"end": {
|
||||
"type": "string",
|
||||
"description": "A streaming token which can be used with /events to continue from this snapshot of data.",
|
||||
"required": true
|
||||
},
|
||||
"presence": {
|
||||
"type": "array",
|
||||
"description": "A list of presence events.",
|
||||
"items": {
|
||||
"$ref": "Event"
|
||||
},
|
||||
"required": false
|
||||
},
|
||||
"rooms": {
|
||||
"type": "array",
|
||||
"description": "A list of initial sync room data.",
|
||||
"required": false,
|
||||
"items": {
|
||||
"$ref": "InitialSyncRoomData"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"InitialSyncRoomData": {
|
||||
"id": "InitialSyncRoomData",
|
||||
"properties": {
|
||||
"membership": {
|
||||
"type": "string",
|
||||
"description": "This user's membership state in this room.",
|
||||
"required": true
|
||||
},
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The ID of this room.",
|
||||
"required": true
|
||||
},
|
||||
"messages": {
|
||||
"type": "PaginationChunk",
|
||||
"description": "The most recent messages for this room, governed by the limit parameter.",
|
||||
"required": false
|
||||
},
|
||||
"state": {
|
||||
"type": "array",
|
||||
"description": "A list of state events representing the current state of the room.",
|
||||
"required": false,
|
||||
"items": {
|
||||
"$ref": "Event"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@@ -1,120 +0,0 @@
|
|||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"apis": [
|
||||
{
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"nickname": "get_login_info",
|
||||
"notes": "All login stages MUST be mentioned if there is >1 login type.",
|
||||
"summary": "Get the login mechanism to use when logging in.",
|
||||
"type": "LoginFlows"
|
||||
},
|
||||
{
|
||||
"method": "POST",
|
||||
"nickname": "submit_login",
|
||||
"notes": "If this is part of a multi-stage login, there MUST be a 'session' key.",
|
||||
"parameters": [
|
||||
{
|
||||
"description": "A login submission",
|
||||
"name": "body",
|
||||
"paramType": "body",
|
||||
"required": true,
|
||||
"type": "LoginSubmission"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Bad login type"
|
||||
},
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Missing JSON keys"
|
||||
}
|
||||
],
|
||||
"summary": "Submit a login action.",
|
||||
"type": "LoginResult"
|
||||
}
|
||||
],
|
||||
"path": "/login"
|
||||
}
|
||||
],
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"models": {
|
||||
"LoginFlows": {
|
||||
"id": "LoginFlows",
|
||||
"properties": {
|
||||
"flows": {
|
||||
"description": "A list of valid login flows.",
|
||||
"type": "array",
|
||||
"items": {
|
||||
"$ref": "LoginInfo"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"LoginInfo": {
|
||||
"id": "LoginInfo",
|
||||
"properties": {
|
||||
"stages": {
|
||||
"description": "Multi-stage login only: An array of all the login types required to login.",
|
||||
"items": {
|
||||
"$ref": "string"
|
||||
},
|
||||
"type": "array"
|
||||
},
|
||||
"type": {
|
||||
"description": "The login type that must be used when logging in.",
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
},
|
||||
"LoginResult": {
|
||||
"id": "LoginResult",
|
||||
"properties": {
|
||||
"access_token": {
|
||||
"description": "The access token for this user's login if this is the final stage of the login process.",
|
||||
"type": "string"
|
||||
},
|
||||
"user_id": {
|
||||
"description": "The user's fully-qualified user ID.",
|
||||
"type": "string"
|
||||
},
|
||||
"next": {
|
||||
"description": "Multi-stage login only: The next login type to submit.",
|
||||
"type": "string"
|
||||
},
|
||||
"session": {
|
||||
"description": "Multi-stage login only: The session token to send when submitting the next login type.",
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
},
|
||||
"LoginSubmission": {
|
||||
"id": "LoginSubmission",
|
||||
"properties": {
|
||||
"type": {
|
||||
"description": "The type of login being submitted.",
|
||||
"type": "string"
|
||||
},
|
||||
"session": {
|
||||
"description": "Multi-stage login only: The session token from an earlier login stage.",
|
||||
"type": "string"
|
||||
},
|
||||
"_login_type_defined_keys_": {
|
||||
"description": "Keys as defined by the specified login type, e.g. \"user\", \"password\""
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"resourcePath": "/login",
|
||||
"swaggerVersion": "1.2"
|
||||
}
|
||||
|
|
@@ -1,164 +0,0 @@
|
|||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"resourcePath": "/presence",
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"apis": [
|
||||
{
|
||||
"path": "/presence/{userId}/status",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Update this user's presence state.",
|
||||
"notes": "This can only be done by the logged in user.",
|
||||
"type": "void",
|
||||
"nickname": "update_presence",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The new presence state",
|
||||
"required": true,
|
||||
"type": "PresenceUpdate",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose presence to set.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get this user's presence state.",
|
||||
"notes": "Get this user's presence state.",
|
||||
"type": "PresenceUpdate",
|
||||
"nickname": "get_presence",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose presence to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/presence/list/{userId}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Retrieve a list of presences for all of this user's friends.",
|
||||
"notes": "",
|
||||
"type": "array",
|
||||
"items": {
|
||||
"$ref": "Presence"
|
||||
},
|
||||
"nickname": "get_presence_list",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose presence list to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Add or remove users from this presence list.",
|
||||
"notes": "Add or remove users from this presence list.",
|
||||
"type": "void",
|
||||
"nickname": "modify_presence_list",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose presence list is being modified.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The modifications to make to this presence list.",
|
||||
"required": true,
|
||||
"type": "PresenceListModifications",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"models": {
|
||||
"PresenceUpdate": {
|
||||
"id": "PresenceUpdate",
|
||||
"properties": {
|
||||
"presence": {
|
||||
"type": "string",
|
||||
"description": "Enum: The presence state.",
|
||||
"enum": [
|
||||
"offline",
|
||||
"unavailable",
|
||||
"online",
|
||||
"free_for_chat"
|
||||
]
|
||||
},
|
||||
"status_msg": {
|
||||
"type": "string",
|
||||
"description": "The user-defined message associated with this presence state."
|
||||
}
|
||||
},
|
||||
"subTypes": [
|
||||
"Presence"
|
||||
]
|
||||
},
|
||||
"Presence": {
|
||||
"id": "Presence",
|
||||
"properties": {
|
||||
"last_active_ago": {
|
||||
"type": "integer",
|
||||
"format": "int64",
|
||||
"description": "The last time this user performed an action on their home server."
|
||||
},
|
||||
"user_id": {
|
||||
"type": "string",
|
||||
"description": "The fully qualified user ID"
|
||||
}
|
||||
}
|
||||
},
|
||||
"PresenceListModifications": {
|
||||
"id": "PresenceListModifications",
|
||||
"properties": {
|
||||
"invite": {
|
||||
"type": "array",
|
||||
"description": "A list of user IDs to add to the list.",
|
||||
"items": {
|
||||
"type": "string",
|
||||
"description": "A fully qualified user ID."
|
||||
}
|
||||
},
|
||||
"drop": {
|
||||
"type": "array",
|
||||
"description": "A list of user IDs to remove from the list.",
|
||||
"items": {
|
||||
"type": "string",
|
||||
"description": "A fully qualified user ID."
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@@ -1,122 +0,0 @@
|
|||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"resourcePath": "/profile",
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"apis": [
|
||||
{
|
||||
"path": "/profile/{userId}/displayname",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Set a display name.",
|
||||
"notes": "This can only be done by the logged in user.",
|
||||
"type": "void",
|
||||
"nickname": "set_display_name",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The new display name for this user.",
|
||||
"required": true,
|
||||
"type": "DisplayName",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose display name to set.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get a display name.",
|
||||
"notes": "This can be done by anyone.",
|
||||
"type": "DisplayName",
|
||||
"nickname": "get_display_name",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose display name to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/profile/{userId}/avatar_url",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Set an avatar URL.",
|
||||
"notes": "This can only be done by the logged in user.",
|
||||
"type": "void",
|
||||
"nickname": "set_avatar_url",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The new avatar url for this user.",
|
||||
"required": true,
|
||||
"type": "AvatarUrl",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose avatar url to set.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get an avatar url.",
|
||||
"notes": "This can be done by anyone.",
|
||||
"type": "AvatarUrl",
|
||||
"nickname": "get_avatar_url",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose avatar url to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"models": {
|
||||
"DisplayName": {
|
||||
"id": "DisplayName",
|
||||
"properties": {
|
||||
"displayname": {
|
||||
"type": "string",
|
||||
"description": "The textual display name"
|
||||
}
|
||||
}
|
||||
},
|
||||
"AvatarUrl": {
|
||||
"id": "AvatarUrl",
|
||||
"properties": {
|
||||
"avatar_url": {
|
||||
"type": "string",
|
||||
"description": "A url to an image representing an avatar."
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@ -1,120 +0,0 @@
|
|||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"apis": [
|
||||
{
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"nickname": "get_registration_info",
|
||||
"notes": "All login stages MUST be mentioned if there is >1 login type.",
|
||||
"summary": "Get the login mechanism to use when registering.",
|
||||
"type": "RegistrationFlows"
|
||||
},
|
||||
{
|
||||
"method": "POST",
|
||||
"nickname": "submit_registration",
|
||||
"notes": "If this is part of a multi-stage registration, there MUST be a 'session' key.",
|
||||
"parameters": [
|
||||
{
|
||||
"description": "A registration submission",
|
||||
"name": "body",
|
||||
"paramType": "body",
|
||||
"required": true,
|
||||
"type": "RegistrationSubmission"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Bad login type"
|
||||
},
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Missing JSON keys"
|
||||
}
|
||||
],
|
||||
"summary": "Submit a registration action.",
|
||||
"type": "RegistrationResult"
|
||||
}
|
||||
],
|
||||
"path": "/register"
|
||||
}
|
||||
],
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"models": {
|
||||
"RegistrationFlows": {
|
||||
"id": "RegistrationFlows",
|
||||
"properties": {
|
||||
"flows": {
|
||||
"description": "A list of valid registration flows.",
|
||||
"type": "array",
|
||||
"items": {
|
||||
"$ref": "RegistrationInfo"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"RegistrationInfo": {
|
||||
"id": "RegistrationInfo",
|
||||
"properties": {
|
||||
"stages": {
|
||||
"description": "Multi-stage registration only: An array of all the login types required to registration.",
|
||||
"items": {
|
||||
"$ref": "string"
|
||||
},
|
||||
"type": "array"
|
||||
},
|
||||
"type": {
|
||||
"description": "The first login type that must be used when logging in.",
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
},
|
||||
"RegistrationResult": {
|
||||
"id": "RegistrationResult",
|
||||
"properties": {
|
||||
"access_token": {
|
||||
"description": "The access token for this user's registration if this is the final stage of the registration process.",
|
||||
"type": "string"
|
||||
},
|
||||
"user_id": {
|
||||
"description": "The user's fully-qualified user ID.",
|
||||
"type": "string"
|
||||
},
|
||||
"next": {
|
||||
"description": "Multi-stage registration only: The next registration type to submit.",
|
||||
"type": "string"
|
||||
},
|
||||
"session": {
|
||||
"description": "Multi-stage registration only: The session token to send when submitting the next registration type.",
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
},
|
||||
"RegistrationSubmission": {
|
||||
"id": "RegistrationSubmission",
|
||||
"properties": {
|
||||
"type": {
|
||||
"description": "The type of registration being submitted.",
|
||||
"type": "string"
|
||||
},
|
||||
"session": {
|
||||
"description": "Multi-stage registration only: The session token from an earlier registration stage.",
|
||||
"type": "string"
|
||||
},
|
||||
"_registration_type_defined_keys_": {
|
||||
"description": "Keys as defined by the specified registration type, e.g. \"user\", \"password\""
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"resourcePath": "/register",
|
||||
"swaggerVersion": "1.2"
|
||||
}
|
||||
|
|
@ -1,977 +0,0 @@
|
|||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"resourcePath": "/rooms",
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"authorizations": {
|
||||
"token": []
|
||||
},
|
||||
"apis": [
|
||||
{
|
||||
"path": "/rooms/{roomId}/send/{eventType}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Send a generic non-state event to this room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "EventId",
|
||||
"nickname": "send_non_state_event",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The event contents",
|
||||
"required": true,
|
||||
"type": "EventContent",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to send the message in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "eventType",
|
||||
"description": "The type of event to send.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/state/{eventType}/{stateKey}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Send a generic state event to this room.",
|
||||
"notes": "The state key can be omitted, such that you can PUT to /rooms/{roomId}/state/{eventType}. The state key defaults to a 0 length string in this case.",
|
||||
"type": "void",
|
||||
"nickname": "send_state_event",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The event contents",
|
||||
"required": true,
|
||||
"type": "EventContent",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to send the message in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "eventType",
|
||||
"description": "The type of event to send.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "stateKey",
|
||||
"description": "An identifier used to specify clobbering semantics. State events with the same (roomId, eventType, stateKey) will be replaced.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/send/m.room.message",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Send a message in this room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "EventId",
|
||||
"nickname": "send_message",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The message contents",
|
||||
"required": true,
|
||||
"type": "Message",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to send the message in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/state/m.room.topic",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Set the topic for this room.",
|
||||
"notes": "Set the topic for this room.",
|
||||
"type": "void",
|
||||
"nickname": "set_topic",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The topic contents",
|
||||
"required": true,
|
||||
"type": "Topic",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to set the topic in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get the topic for this room.",
|
||||
"notes": "Get the topic for this room.",
|
||||
"type": "Topic",
|
||||
"nickname": "get_topic",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get topic in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 404,
|
||||
"message": "Topic not found."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/state/m.room.name",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Set the name of this room.",
|
||||
"notes": "Set the name of this room.",
|
||||
"type": "void",
|
||||
"nickname": "set_room_name",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The name contents",
|
||||
"required": true,
|
||||
"type": "RoomName",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to set the name of.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get the room's name.",
|
||||
"notes": "",
|
||||
"type": "RoomName",
|
||||
"nickname": "get_room_name",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get the name of.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 404,
|
||||
"message": "Name not found."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/send/m.room.message.feedback",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Send feedback to a message.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "EventId",
|
||||
"nickname": "send_feedback",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The feedback contents",
|
||||
"required": true,
|
||||
"type": "Feedback",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to send the feedback in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Bad feedback type."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/invite",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Invite a user to this room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "void",
|
||||
"nickname": "invite",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room which has this user.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The user to invite.",
|
||||
"required": true,
|
||||
"type": "InviteRequest",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/join",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Join this room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "void",
|
||||
"nickname": "join_room",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to join.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"required": true,
|
||||
"type": "JoinRequest",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/leave",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Leave this room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "void",
|
||||
"nickname": "leave",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to leave.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"required": true,
|
||||
"type": "LeaveRequest",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/ban",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Ban a user in the room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}. The caller must have the required power level to do this operation.",
|
||||
"type": "void",
|
||||
"nickname": "ban",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room which has the user to ban.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The user to ban.",
|
||||
"required": true,
|
||||
"type": "BanRequest",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/state/m.room.member/{userId}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Change the membership state for a user in a room.",
|
||||
"notes": "Change the membership state for a user in a room.",
|
||||
"type": "void",
|
||||
"nickname": "set_membership",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The new membership state",
|
||||
"required": true,
|
||||
"type": "Member",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose membership is being changed.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room which has this user.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "No membership key."
|
||||
},
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Bad membership value."
|
||||
},
|
||||
{
|
||||
"code": 403,
|
||||
"message": "When inviting: You are not in the room."
|
||||
},
|
||||
{
|
||||
"code": 403,
|
||||
"message": "When inviting: <target> is already in the room."
|
||||
},
|
||||
{
|
||||
"code": 403,
|
||||
"message": "When joining: Cannot force another user to join."
|
||||
},
|
||||
{
|
||||
"code": 403,
|
||||
"message": "When joining: You are not invited to this room."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get the membership state of a user in a room.",
|
||||
"notes": "Get the membership state of a user in a room.",
|
||||
"type": "Member",
|
||||
"nickname": "get_membership",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose membership state you want to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room which has this user.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 404,
|
||||
"message": "Member not found."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/join/{roomAliasOrId}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Join a room via a room alias or room ID.",
|
||||
"notes": "Join a room via a room alias or room ID.",
|
||||
"type": "JoinRoomInfo",
|
||||
"nickname": "join",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomAliasOrId",
|
||||
"description": "The room alias or room ID to join.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Bad room alias."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/createRoom",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Create a room.",
|
||||
"notes": "Create a room.",
|
||||
"type": "RoomInfo",
|
||||
"nickname": "create_room",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The desired configuration for the room. This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"required": true,
|
||||
"type": "RoomConfig",
|
||||
"paramType": "body"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Body must be JSON."
|
||||
},
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Room alias already taken."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/messages",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get a list of messages for this room.",
|
||||
"notes": "Get a list of messages for this room.",
|
||||
"type": "MessagePaginationChunk",
|
||||
"nickname": "get_messages",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get messages in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "from",
|
||||
"description": "The token to start getting results from.",
|
||||
"required": false,
|
||||
"type": "string",
|
||||
"paramType": "query"
|
||||
},
|
||||
{
|
||||
"name": "to",
|
||||
"description": "The token to stop getting results at.",
|
||||
"required": false,
|
||||
"type": "string",
|
||||
"paramType": "query"
|
||||
},
|
||||
{
|
||||
"name": "limit",
|
||||
"description": "The maximum number of messages to return.",
|
||||
"required": false,
|
||||
"type": "integer",
|
||||
"paramType": "query"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/members",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get a list of members for this room.",
|
||||
"notes": "Get a list of members for this room.",
|
||||
"type": "MemberPaginationChunk",
|
||||
"nickname": "get_members",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get a list of members from.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "from",
|
||||
"description": "The token to start getting results from.",
|
||||
"required": false,
|
||||
"type": "string",
|
||||
"paramType": "query"
|
||||
},
|
||||
{
|
||||
"name": "to",
|
||||
"description": "The token to stop getting results at.",
|
||||
"required": false,
|
||||
"type": "string",
|
||||
"paramType": "query"
|
||||
},
|
||||
{
|
||||
"name": "limit",
|
||||
"description": "The maximum number of members to return.",
|
||||
"required": false,
|
||||
"type": "integer",
|
||||
"paramType": "query"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/state",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get a list of all the current state events for this room.",
|
||||
"notes": "This is equivalent to the events returned under the 'state' key for this room in /initialSync.",
|
||||
"type": "array",
|
||||
"items": {
|
||||
"$ref": "Event"
|
||||
},
|
||||
"nickname": "get_state_events",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get a list of current state events from.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/initialSync",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get all the current information for this room, including messages and state events.",
|
||||
"notes": "NOT YET IMPLEMENTED.",
|
||||
"type": "InitialSyncRoomData",
|
||||
"nickname": "get_room_sync_data",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get information for.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"models": {
|
||||
"Topic": {
|
||||
"id": "Topic",
|
||||
"properties": {
|
||||
"topic": {
|
||||
"type": "string",
|
||||
"description": "The topic text"
|
||||
}
|
||||
}
|
||||
},
|
||||
"RoomName": {
|
||||
"id": "RoomName",
|
||||
"properties": {
|
||||
"name": {
|
||||
"type": "string",
|
||||
"description": "The human-readable name for the room. Can contain spaces."
|
||||
}
|
||||
}
|
||||
},
|
||||
"Message": {
|
||||
"id": "Message",
|
||||
"properties": {
|
||||
"msgtype": {
|
||||
"type": "string",
|
||||
"description": "The type of message being sent, e.g. \"m.text\"",
|
||||
"required": true
|
||||
},
|
||||
"_msgtype_defined_keys_": {
|
||||
"description": "Additional keys as defined by the msgtype, e.g. \"body\""
|
||||
}
|
||||
}
|
||||
},
|
||||
"Feedback": {
|
||||
"id": "Feedback",
|
||||
"properties": {
|
||||
"target_event_id": {
|
||||
"type": "string",
|
||||
"description": "The event ID being acknowledged.",
|
||||
"required": true
|
||||
},
|
||||
"type": {
|
||||
"type": "string",
|
||||
"description": "The type of feedback. Either 'delivered' or 'read'.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"Member": {
|
||||
"id": "Member",
|
||||
"properties": {
|
||||
"membership": {
|
||||
"type": "string",
|
||||
"description": "Enum: The membership state of this member.",
|
||||
"enum": [
|
||||
"invite",
|
||||
"join",
|
||||
"leave",
|
||||
"ban"
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"RoomInfo": {
|
||||
"id": "RoomInfo",
|
||||
"properties": {
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The allocated room ID.",
|
||||
"required": true
|
||||
},
|
||||
"room_alias": {
|
||||
"type": "string",
|
||||
"description": "The alias for the room.",
|
||||
"required": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"JoinRoomInfo": {
|
||||
"id": "JoinRoomInfo",
|
||||
"properties": {
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The room ID joined, if joined via a room alias only.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"RoomConfig": {
|
||||
"id": "RoomConfig",
|
||||
"properties": {
|
||||
"visibility": {
|
||||
"type": "string",
|
||||
"description": "Enum: The room visibility.",
|
||||
"required": false,
|
||||
"enum": [
|
||||
"public",
|
||||
"private"
|
||||
]
|
||||
},
|
||||
"room_alias_name": {
|
||||
"type": "string",
|
||||
"description": "The alias to give the new room.",
|
||||
"required": false
|
||||
},
|
||||
"name": {
|
||||
"type": "string",
|
||||
"description": "Sets the name of the room. Send a m.room.name event after creating the room with the 'name' key specified.",
|
||||
"required": false
|
||||
},
|
||||
"topic": {
|
||||
"type": "string",
|
||||
"description": "Sets the topic for the room. Send a m.room.topic event after creating the room with the 'topic' key specified.",
|
||||
"required": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"PaginationRequest": {
|
||||
"id": "PaginationRequest",
|
||||
"properties": {
|
||||
"from": {
|
||||
"type": "string",
|
||||
"description": "The token to start getting results from."
|
||||
},
|
||||
"to": {
|
||||
"type": "string",
|
||||
"description": "The token to stop getting results at."
|
||||
},
|
||||
"limit": {
|
||||
"type": "integer",
|
||||
"description": "The maximum number of entries to return."
|
||||
}
|
||||
}
|
||||
},
|
||||
"PaginationChunk": {
|
||||
"id": "PaginationChunk",
|
||||
"properties": {
|
||||
"start": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the first value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
},
|
||||
"end": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the last value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
}
|
||||
},
|
||||
"subTypes": [
|
||||
"MessagePaginationChunk"
|
||||
]
|
||||
},
|
||||
"MessagePaginationChunk": {
|
||||
"id": "MessagePaginationChunk",
|
||||
"properties": {
|
||||
"chunk": {
|
||||
"type": "array",
|
||||
"description": "A list of message events.",
|
||||
"items": {
|
||||
"$ref": "MessageEvent"
|
||||
},
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"MemberPaginationChunk": {
|
||||
"id": "MemberPaginationChunk",
|
||||
"properties": {
|
||||
"chunk": {
|
||||
"type": "array",
|
||||
"description": "A list of member events.",
|
||||
"items": {
|
||||
"$ref": "MemberEvent"
|
||||
},
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"Event": {
|
||||
"id": "Event",
|
||||
"properties": {
|
||||
"event_id": {
|
||||
"type": "string",
|
||||
"description": "An ID which uniquely identifies this event. This is automatically set by the server.",
|
||||
"required": true
|
||||
},
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The room in which this event occurred. This is automatically set by the server.",
|
||||
"required": true
|
||||
},
|
||||
"type": {
|
||||
"type": "string",
|
||||
"description": "The event type.",
|
||||
"required": true
|
||||
}
|
||||
},
|
||||
"subTypes": [
|
||||
"MessageEvent"
|
||||
]
|
||||
},
|
||||
"EventId": {
|
||||
"id": "EventId",
|
||||
"properties": {
|
||||
"event_id": {
|
||||
"type": "string",
|
||||
"description": "The allocated event ID for this event.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"EventContent": {
|
||||
"id": "EventContent",
|
||||
"properties": {
|
||||
"__event_content_keys__": {
|
||||
"type": "string",
|
||||
"description": "Event-specific content keys and values.",
|
||||
"required": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"MessageEvent": {
|
||||
"id": "MessageEvent",
|
||||
"properties": {
|
||||
"content": {
|
||||
"type": "Message"
|
||||
}
|
||||
}
|
||||
},
|
||||
"MemberEvent": {
|
||||
"id": "MemberEvent",
|
||||
"properties": {
|
||||
"content": {
|
||||
"type": "Member"
|
||||
}
|
||||
}
|
||||
},
|
||||
"InviteRequest": {
|
||||
"id": "InviteRequest",
|
||||
"properties": {
|
||||
"user_id": {
|
||||
"type": "string",
|
||||
"description": "The fully-qualified user ID."
|
||||
}
|
||||
}
|
||||
},
|
||||
"JoinRequest": {
|
||||
"id": "JoinRequest",
|
||||
"properties": {}
|
||||
},
|
||||
"LeaveRequest": {
|
||||
"id": "LeaveRequest",
|
||||
"properties": {}
|
||||
},
|
||||
"BanRequest": {
|
||||
"id": "BanRequest",
|
||||
"properties": {
|
||||
"user_id": {
|
||||
"type": "string",
|
||||
"description": "The fully-qualified user ID."
|
||||
},
|
||||
"reason": {
|
||||
"type": "string",
|
||||
"description": "The reason for the ban."
|
||||
}
|
||||
}
|
||||
},
|
||||
"InitialSyncRoomData": {
|
||||
"id": "InitialSyncRoomData",
|
||||
"properties": {
|
||||
"membership": {
|
||||
"type": "string",
|
||||
"description": "This user's membership state in this room.",
|
||||
"required": true
|
||||
},
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The ID of this room.",
|
||||
"required": true
|
||||
},
|
||||
"messages": {
|
||||
"type": "MessagePaginationChunk",
|
||||
"description": "The most recent messages for this room, governed by the limit parameter.",
|
||||
"required": false
|
||||
},
|
||||
"state": {
|
||||
"type": "array",
|
||||
"description": "A list of state events representing the current state of the room.",
|
||||
"required": false,
|
||||
"items": {
|
||||
"$ref": "Event"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@ -1 +0,0 @@
|
|||
NCjcRSEG
|
|
@ -1,79 +0,0 @@
|
|||
This document outlines the format for human-readable IDs within matrix.
|
||||
|
||||
Overview
|
||||
--------
|
||||
UTF-8 is quickly becoming the standard character encoding set on the web. As
|
||||
such, Matrix requires that all strings MUST be encoded as UTF-8. However,
|
||||
using Unicode as the character set for human-readable IDs is troublesome. There
|
||||
are many different characters which appear identical to each other, but would
|
||||
identify different users. In addition, there are non-printable characters which
|
||||
cannot be rendered by the end-user. This opens up a security vulnerability with
|
||||
phishing/spoofing of IDs, commonly known as a homograph attack.
|
||||
|
||||
Web browsers encountered this problem when International Domain Names were
|
||||
introduced. A variety of checks were put in place in order to protect users. If
|
||||
an address failed the check, the raw punycode would be displayed to disambiguate
|
||||
the address. Similar checks are performed by home servers in Matrix. However,
|
||||
Matrix does not use punycode representations, and so does not show raw punycode
|
||||
on a failed check. Instead, home servers must outright reject these misleading
|
||||
IDs.
|
||||
|
||||
Types of human-readable IDs
|
||||
---------------------------
|
||||
There are two main human-readable IDs in question:
|
||||
|
||||
- Room aliases
|
||||
- User IDs
|
||||
|
||||
Room aliases look like ``#localpart:domain``. These aliases point to opaque
|
||||
non human-readable room IDs. These pointers can change, so there is already an
|
||||
issue present with the same ID pointing to a different destination at a later
|
||||
date.
|
||||
|
||||
User IDs look like ``@localpart:domain``. These represent actual end-users, and
|
||||
unlike room aliases, there is no layer of indirection. This presents a much
|
||||
greater concern with homograph attacks.
|
||||
|
||||
Checks
|
||||
------
|
||||
- Similar to web browsers.
|
||||
- blacklisted chars (e.g. non-printable characters)
|
||||
- mix of language sets from 'preferred' language not allowed.
|
||||
- Language sets from CLDR dataset.
|
||||
- Treated in segments (localpart, domain)
|
||||
- Additional restrictions for ease of processing IDs.
|
||||
- Room alias localparts MUST NOT have ``#`` or ``:``.
|
||||
- User ID localparts MUST NOT have ``@`` or ``:``.
|
||||
|
||||
Rejecting
|
||||
---------
|
||||
- Home servers MUST reject room aliases which do not pass the check, both on
|
||||
GETs and PUTs.
|
||||
- Home servers MUST reject user ID localparts which do not pass the check, both
|
||||
on creation and on events.
|
||||
- Any home server whose domain does not pass this check MUST use its punycode
  domain name instead of the IDN, to prevent other home servers rejecting it.
|
||||
- Error code is ``M_FAILED_HUMAN_ID_CHECK``. (generic enough for both failing
|
||||
due to homograph attacks, and failing due to including ``:`` s, etc)
|
||||
- Error message MAY go into further information about which characters were
|
||||
rejected and why.
|
||||
- Error message SHOULD contain a ``failed_keys`` key which contains an array
|
||||
of strings which represent the keys which failed the check e.g::
|
||||
|
||||
failed_keys: [ user_id, room_alias ]
|
||||
|
||||
Other considerations
|
||||
--------------------
|
||||
- Basic security: Informational key on the event attached by HS to say "unsafe
|
||||
ID". Problem: clients can just ignore it, and since it will appear only very
|
||||
rarely, easy to forget when implementing clients.
|
||||
- Moderate security: Requires client handshake. Forces clients to implement
|
||||
a check, else they cannot communicate with the misleading ID. However, this is
|
||||
extra overhead in both client implementations and round-trips.
|
||||
- High security: Outright rejection of the ID at the point of creation /
|
||||
receiving event. Point of creation rejection is preferable to avoid the ID
|
||||
entering the system in the first place. However, malicious HSes can just allow
|
||||
the ID. Hence, other home servers must reject them if they see them in events.
|
||||
Client never sees the problem ID, provided the HS is correctly implemented.
|
||||
- High security decided; client doesn't need to worry about it, no additional
|
||||
protocol complexity aside from rejection of an event.
|
|
@ -1,43 +0,0 @@
|
|||
===================
|
||||
Documentation Style
|
||||
===================
|
||||
|
||||
A brief single sentence to describe what this file contains; in this case a
|
||||
description of the style to write documentation in.
|
||||
|
||||
|
||||
Sections
|
||||
========
|
||||
|
||||
Each section should be separated from the others by two blank lines. Headings
|
||||
should be underlined using a row of equals signs (===). Paragraphs should be
|
||||
separated by a single blank line, and wrap to no further than 80 columns.
|
||||
|
||||
[[TODO(username): if you want to leave some unanswered questions, notes for
|
||||
further consideration, or other kinds of comment, use a TODO section. Make sure
|
||||
to notate it with your name so we know who to ask about it!]]
|
||||
|
||||
Subsections
|
||||
-----------
|
||||
|
||||
If required, subsections can use a row of dashes to underline their header. A
|
||||
single blank line between subsections of a single section.
|
||||
|
||||
|
||||
Bullet Lists
|
||||
============
|
||||
|
||||
* Bullet lists can use asterisks with a single space either side.
|
||||
|
||||
* Another blank line between list elements.
|
||||
|
||||
|
||||
Definition Lists
|
||||
================
|
||||
|
||||
Terms:
|
||||
Start in the first column, ending with a colon
|
||||
|
||||
Definitions:
|
||||
Take a two space indent, following immediately from the term without a blank
|
||||
line before it, but having a blank line afterwards.
|
151
docs/server-server/signing.rst
Normal file
151
docs/server-server/signing.rst
Normal file
|
@ -0,0 +1,151 @@
|
|||
Signing JSON
|
||||
============
|
||||
|
||||
JSON is signed by encoding the JSON object without ``signatures`` or ``meta``
|
||||
keys using a canonical encoding. The JSON bytes are then signed using the
|
||||
signature algorithm and the signature encoded using base64 with the padding
|
||||
stripped. The resulting base64 signature is added to an object under the
*signing key identifier*, which in turn is added to the ``signatures`` object
under the name of the signing server; the ``signatures`` object is then added
back to the original JSON object along with the ``meta`` object.
|
||||
|
||||
The *signing key identifier* is the concatenation of the *signing algorithm*
|
||||
and a *key version*. The *signing algorithm* identifies the algorithm used to
|
||||
sign the JSON. The currently supported value for *signing algorithm* is
|
||||
``ed25519`` as implemented by NACL (http://nacl.cr.yp.to/). The *key version*
|
||||
is used to distinguish between different signing keys used by the same entity.
|
||||
|
||||
The ``meta`` object and the ``signatures`` object are not covered by the
|
||||
signature. Therefore, intermediate servers can add metadata such as timestamps
|
||||
and additional signatures.
|
||||
|
||||
|
||||
::
|
||||
|
||||
  {
      "name": "example.org",
      "signing_keys": {
          "ed25519:1": "XSl0kuyvrXNj6A+7/tkrB9sxSbRi08Of5uRhxOqZtEQ"
      },
      "meta": {
          "retrieved_ts_ms": 922834800000
      },
      "signatures": {
          "example.org": {
              "ed25519:1": "s76RUgajp8w172am0zQb/iPTHsRnb4SkrzGoeCOSFfcBY2V/1c8QfrmdXHpvnc2jK5BD1WiJIxiMW95fMjK7Bw"
          }
      }
  }
|
||||
|
||||
::
|
||||
|
||||
  def sign_json(json_object, signing_key, signing_name):
      signatures = json_object.pop("signatures", {})
      meta = json_object.pop("meta", None)

      signed = signing_key.sign(encode_canonical_json(json_object))
      signature_base64 = encode_base64(signed.signature)

      key_id = "%s:%s" % (signing_key.alg, signing_key.version)
      signatures.setdefault(signing_name, {})[key_id] = signature_base64

      json_object["signatures"] = signatures
      if meta is not None:
          json_object["meta"] = meta

      return json_object
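
For illustration, a short sketch of how ``sign_json`` might be used, assuming a
hypothetical ``signing_key`` object that exposes ``alg``, ``version`` and
``sign()`` (for example an ed25519 key from a NaCl binding)::

  # Non-normative usage sketch; "signing_key" is a hypothetical ed25519
  # key object providing .alg, .version and .sign() as used above.
  server_keys = {
      "name": "example.org",
      "signing_keys": {
          "ed25519:1": "<base64 encoded verify key>"
      }
  }
  signed_keys = sign_json(server_keys, signing_key, "example.org")
  # signed_keys now carries a "signatures" entry of the form:
  # {"example.org": {"ed25519:1": "<unpadded base64 signature>"}}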
|
||||
|
||||
Checking for a Signature
|
||||
------------------------
|
||||
|
||||
To check if an entity has signed a JSON object, a server does the following:
|
||||
|
||||
1. Checks if the ``signatures`` object contains an entry with the name of the
|
||||
entity. If the entry is missing then the check fails.
|
||||
2. Removes any *signing key identifiers* from the entry with algorithms it
|
||||
doesn't understand. If there are no *signing key identifiers* left then the
|
||||
check fails.
|
||||
3. Looks up *verification keys* for the remaining *signing key identifiers*
|
||||
either from a local cache or by consulting a trusted key server. If it
|
||||
cannot find a *verification key* then the check fails.
|
||||
4. Decodes the base64 encoded signature bytes. If base64 decoding fails then
|
||||
the check fails.
|
||||
5. Checks the signature bytes using the *verification key*. If this fails then
|
||||
the check fails. Otherwise the check succeeds.
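
A minimal, non-normative sketch of these checks, reusing
``encode_canonical_json`` and assuming hypothetical ``decode_base64`` and
``verify_keys`` helpers (the latter mapping *signing key identifiers* to
verification key objects with a ``verify()`` method, as an ed25519/NaCl
binding would provide)::

  SUPPORTED_ALGORITHMS = ["ed25519"]

  def check_signature(json_object, signer_name, verify_keys):
      # 1. There must be an entry for the signing entity.
      signatures = json_object.get("signatures", {})
      if signer_name not in signatures:
          raise ValueError("Missing signature for %s" % signer_name)

      # 2. Drop key identifiers with algorithms we don't understand.
      key_ids = [
          key_id for key_id in signatures[signer_name]
          if key_id.split(":")[0] in SUPPORTED_ALGORITHMS
      ]
      if not key_ids:
          raise ValueError("No signatures with supported algorithms")

      # The signatures and meta objects are not covered by the signature.
      message = dict(
          (k, v) for k, v in json_object.items()
          if k not in ("signatures", "meta")
      )
      message_bytes = encode_canonical_json(message)

      for key_id in key_ids:
          # 3. Look up a verification key (local cache or trusted key server).
          verify_key = verify_keys.get(key_id)
          if verify_key is None:
              raise ValueError("No verification key for %s" % key_id)
          # 4. Decode the base64 signature; 5. check it against the bytes.
          signature = decode_base64(signatures[signer_name][key_id])
          verify_key.verify(message_bytes, signature)  # raises on failure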
|
||||
|
||||
Canonical JSON
|
||||
--------------
|
||||
|
||||
The canonical JSON encoding for a value is the shortest UTF-8 JSON encoding
|
||||
with dictionary keys lexicographically sorted by unicode codepoint. Numbers in
|
||||
the JSON value must be integers in the range [-(2**53)+1, (2**53)-1].
|
||||
|
||||
::
|
||||
|
||||
  import json

  def canonical_json(value):
      return json.dumps(
          value,
          ensure_ascii=False,
          separators=(',',':'),
          sort_keys=True,
      ).encode("UTF-8")
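
As a quick illustration of the canonical form (sorted keys, no insignificant
whitespace, compact separators)::

  canonical_json({"b": 2, "a": 1, "nested": {"y": False, "x": None}})
  # -> b'{"a":1,"b":2,"nested":{"x":null,"y":false}}'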
|
||||
|
||||
Grammar
|
||||
+++++++
|
||||
|
||||
Adapted from the grammar in http://tools.ietf.org/html/rfc7159 removing
|
||||
insignificant whitespace, fractions, exponents and redundant character escapes
|
||||
|
||||
::
|
||||
|
||||
  value     = false / null / true / object / array / number / string
  false     = %x66.61.6c.73.65
  null      = %x6e.75.6c.6c
  true      = %x74.72.75.65
  object    = %x7B [ member *( %x2C member ) ] %x7D
  member    = string %x3A value
  array     = %x5B [ value *( %x2C value ) ] %x5D
  number    = [ %x2D ] int
  int       = %x30 / ( %x31-39 *digit )
  digit     = %x30-39
  string    = %x22 *char %x22
  char      = unescaped / %x5C escaped
  unescaped = %x20-21 / %x23-5B / %x5D-10FFFF
  escaped   = %x22 ; " quotation mark U+0022
            / %x5C ; \ reverse solidus U+005C
            / %x62 ; b backspace U+0008
            / %x66 ; f form feed U+000C
            / %x6E ; n line feed U+000A
            / %x72 ; r carriage return U+000D
            / %x74 ; t tab U+0009
            / %x75.30.30.30 (%x30-37 / %x62 / %x65-66) ; u000X
            / %x75.30.30.31 (%x30-39 / %x61-66) ; u001X
|
||||
|
||||
Signing Events
|
||||
==============
|
||||
|
||||
Signing events is a more complicated process since servers can choose to redact
|
||||
non-essential event contents. Before signing the event it is encoded as
|
||||
Canonical JSON and hashed using SHA-256. The resulting hash is then stored
|
||||
in the event JSON in a ``hash`` object under a ``sha256`` key. Then all
|
||||
non-essential keys are stripped from the event object, and the resulting object
|
||||
which includes the ``hash`` key is signed using the JSON signing algorithm.
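
A rough, non-normative sketch of that flow, reusing ``encode_canonical_json``,
``encode_base64`` and ``sign_json`` from above and assuming a hypothetical
``strip_non_essential_keys`` helper (the exact set of essential keys is not
defined in this document)::

  import hashlib

  def hash_and_sign_event(event, signing_key, signing_name):
      # Hash the event content, excluding any existing hash/signature keys.
      content = dict(
          (k, v) for k, v in event.items()
          if k not in ("hash", "signatures", "meta")
      )
      sha256 = hashlib.sha256(encode_canonical_json(content)).digest()
      event["hash"] = {"sha256": encode_base64(sha256)}

      # Strip non-essential keys; the stripped object still includes "hash",
      # and is signed using the JSON signing algorithm described above.
      stripped = strip_non_essential_keys(event)
      stripped = sign_json(stripped, signing_key, signing_name)

      # Attach the resulting signatures to the full event.
      event["signatures"] = stripped["signatures"]
      return event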
|
||||
|
||||
Servers can then transmit the entire event or the event with the non-essential
|
||||
keys removed. Receiving servers can then check the entire event if it is
|
||||
present by computing the SHA-256 of the event excluding the ``hash`` object, or
|
||||
by using the ``hash`` object included in the event if keys have been redacted.
|
||||
|
||||
New hash functions can be introduced by adding additional keys to the ``hash``
|
||||
object. Since the ``hash`` object cannot be redacted, a server shouldn't allow
too many hashes to be listed, otherwise a server might embed illicit data within
|
||||
the ``hash`` object. For similar reasons a server shouldn't allow hash values
|
||||
that are too long.
|
||||
|
||||
[[TODO(markjh): We might want to specify a maximum number of keys for the
|
||||
``hash`` and we might want to specify the maximum output size of a hash]]
|
||||
|
||||
[[TODO(markjh) We might want to allow the server to omit the output of well
|
||||
known hash functions like SHA-256 when none of the keys have been redacted]]
|
File diff suppressed because it is too large
Load diff
|
@ -110,7 +110,7 @@ $('.register').live('click', function() {
|
|||
url: "http://localhost:8008/_matrix/client/api/v1/register",
|
||||
type: "POST",
|
||||
contentType: "application/json; charset=utf-8",
|
||||
data: JSON.stringify({ user_id: user, password: password }),
|
||||
data: JSON.stringify({ user: user, password: password, type: "m.login.password" }),
|
||||
dataType: "json",
|
||||
success: function(data) {
|
||||
onLoggedIn(data);
|
||||
|
|
|
@ -14,7 +14,7 @@ $('.register').live('click', function() {
|
|||
url: "http://localhost:8008/_matrix/client/api/v1/register",
|
||||
type: "POST",
|
||||
contentType: "application/json; charset=utf-8",
|
||||
data: JSON.stringify({ user_id: user, password: password }),
|
||||
data: JSON.stringify({ user: user, password: password, type: "m.login.password" }),
|
||||
dataType: "json",
|
||||
success: function(data) {
|
||||
showLoggedIn(data);
|
||||
|
|
|
@ -1,510 +0,0 @@
|
|||
/*
|
||||
* basic.css
|
||||
* ~~~~~~~~~
|
||||
*
|
||||
* Sphinx stylesheet -- basic theme.
|
||||
*
|
||||
* :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
|
||||
* :license: BSD, see LICENSE for details.
|
||||
*
|
||||
*/
|
||||
|
||||
/* -- main layout ----------------------------------------------------------- */
|
||||
|
||||
div.clearer {
|
||||
clear: both;
|
||||
}
|
||||
|
||||
/* -- relbar ---------------------------------------------------------------- */
|
||||
|
||||
div.related {
|
||||
width: 100%;
|
||||
font-size: 90%;
|
||||
}
|
||||
|
||||
div.related h3 {
|
||||
display: none;
|
||||
}
|
||||
|
||||
div.related ul {
|
||||
margin: 0;
|
||||
padding: 0 0 0 10px;
|
||||
list-style: none;
|
||||
}
|
||||
|
||||
div.related li {
|
||||
display: inline;
|
||||
}
|
||||
|
||||
div.related li.right {
|
||||
float: right;
|
||||
margin-right: 5px;
|
||||
}
|
||||
|
||||
/* -- sidebar --------------------------------------------------------------- */
|
||||
|
||||
div.sphinxsidebarwrapper {
|
||||
padding: 10px 5px 0 10px;
|
||||
}
|
||||
|
||||
div.sphinxsidebar {
|
||||
float: left;
|
||||
width: 230px;
|
||||
margin-left: -100%;
|
||||
font-size: 90%;
|
||||
}
|
||||
|
||||
div.sphinxsidebar ul {
|
||||
list-style: none;
|
||||
}
|
||||
|
||||
div.sphinxsidebar ul ul,
|
||||
div.sphinxsidebar ul.want-points {
|
||||
margin-left: 20px;
|
||||
list-style: square;
|
||||
}
|
||||
|
||||
div.sphinxsidebar ul ul {
|
||||
margin-top: 0;
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
div.sphinxsidebar form {
|
||||
margin-top: 10px;
|
||||
}
|
||||
|
||||
div.sphinxsidebar input {
|
||||
border: 1px solid #98dbcc;
|
||||
font-family: sans-serif;
|
||||
font-size: 1em;
|
||||
}
|
||||
|
||||
img {
|
||||
border: 0;
|
||||
}
|
||||
|
||||
/* -- search page ----------------------------------------------------------- */
|
||||
|
||||
ul.search {
|
||||
margin: 10px 0 0 20px;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
ul.search li {
|
||||
padding: 5px 0 5px 20px;
|
||||
background-image: url(file.png);
|
||||
background-repeat: no-repeat;
|
||||
background-position: 0 7px;
|
||||
}
|
||||
|
||||
ul.search li a {
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
ul.search li div.context {
|
||||
color: #888;
|
||||
margin: 2px 0 0 30px;
|
||||
text-align: left;
|
||||
}
|
||||
|
||||
ul.keywordmatches li.goodmatch a {
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
/* -- index page ------------------------------------------------------------ */
|
||||
|
||||
table.contentstable {
|
||||
width: 90%;
|
||||
}
|
||||
|
||||
table.contentstable p.biglink {
|
||||
line-height: 150%;
|
||||
}
|
||||
|
||||
a.biglink {
|
||||
font-size: 1.3em;
|
||||
}
|
||||
|
||||
span.linkdescr {
|
||||
font-style: italic;
|
||||
padding-top: 5px;
|
||||
font-size: 90%;
|
||||
}
|
||||
|
||||
/* -- general index --------------------------------------------------------- */
|
||||
|
||||
table.indextable {
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
table.indextable td {
|
||||
text-align: left;
|
||||
vertical-align: top;
|
||||
}
|
||||
|
||||
table.indextable dl, table.indextable dd {
|
||||
margin-top: 0;
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
table.indextable tr.pcap {
|
||||
height: 10px;
|
||||
}
|
||||
|
||||
table.indextable tr.cap {
|
||||
margin-top: 10px;
|
||||
background-color: #f2f2f2;
|
||||
}
|
||||
|
||||
img.toggler {
|
||||
margin-right: 3px;
|
||||
margin-top: 3px;
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
div.modindex-jumpbox {
|
||||
border-top: 1px solid #ddd;
|
||||
border-bottom: 1px solid #ddd;
|
||||
margin: 1em 0 1em 0;
|
||||
padding: 0.4em;
|
||||
}
|
||||
|
||||
div.genindex-jumpbox {
|
||||
border-top: 1px solid #ddd;
|
||||
border-bottom: 1px solid #ddd;
|
||||
margin: 1em 0 1em 0;
|
||||
padding: 0.4em;
|
||||
}
|
||||
|
||||
/* -- general body styles --------------------------------------------------- */
|
||||
|
||||
a.headerlink {
|
||||
visibility: hidden;
|
||||
}
|
||||
|
||||
h1:hover > a.headerlink,
|
||||
h2:hover > a.headerlink,
|
||||
h3:hover > a.headerlink,
|
||||
h4:hover > a.headerlink,
|
||||
h5:hover > a.headerlink,
|
||||
h6:hover > a.headerlink,
|
||||
dt:hover > a.headerlink {
|
||||
visibility: visible;
|
||||
}
|
||||
|
||||
div.document p.caption {
|
||||
text-align: inherit;
|
||||
}
|
||||
|
||||
div.document td {
|
||||
text-align: left;
|
||||
}
|
||||
|
||||
.field-list ul {
|
||||
padding-left: 1em;
|
||||
}
|
||||
|
||||
.first {
|
||||
margin-top: 0 !important;
|
||||
}
|
||||
|
||||
p.rubric {
|
||||
margin-top: 30px;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.align-left {
|
||||
text-align: left;
|
||||
}
|
||||
|
||||
.align-center {
|
||||
clear: both;
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.align-right {
|
||||
text-align: right;
|
||||
}
|
||||
|
||||
/* -- sidebars -------------------------------------------------------------- */
|
||||
|
||||
div.sidebar {
|
||||
margin: 0 0 0.5em 1em;
|
||||
border: 1px solid #ddb;
|
||||
padding: 7px 7px 0 7px;
|
||||
background-color: #ffe;
|
||||
width: 40%;
|
||||
float: right;
|
||||
}
|
||||
|
||||
p.sidebar-title {
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
/* -- topics ---------------------------------------------------------------- */
|
||||
|
||||
div.topic {
|
||||
border: 1px solid #ccc;
|
||||
padding: 7px 7px 0 7px;
|
||||
margin: 10px 0 10px 0;
|
||||
}
|
||||
|
||||
p.topic-title {
|
||||
font-size: 1.1em;
|
||||
font-weight: bold;
|
||||
margin-top: 10px;
|
||||
}
|
||||
|
||||
/* -- admonitions ----------------------------------------------------------- */
|
||||
|
||||
div.admonition {
|
||||
margin-top: 10px;
|
||||
margin-bottom: 10px;
|
||||
padding: 7px;
|
||||
}
|
||||
|
||||
div.admonition dt {
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
div.admonition dl {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
p.admonition-title {
|
||||
margin: 0px 10px 5px 0px;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
div.document p.centered {
|
||||
text-align: center;
|
||||
margin-top: 25px;
|
||||
}
|
||||
|
||||
/* -- tables ---------------------------------------------------------------- */
|
||||
|
||||
table.docutils {
|
||||
border: 0;
|
||||
border-collapse: collapse;
|
||||
}
|
||||
|
||||
table.docutils td, table.docutils th {
|
||||
padding: 1px 8px 1px 5px;
|
||||
border-top: 0;
|
||||
border-left: 0;
|
||||
border-right: 0;
|
||||
border-bottom: 1px solid #aaa;
|
||||
}
|
||||
|
||||
table.field-list td, table.field-list th {
|
||||
border: 0 !important;
|
||||
}
|
||||
|
||||
table.footnote td, table.footnote th {
|
||||
border: 0 !important;
|
||||
}
|
||||
|
||||
th {
|
||||
text-align: left;
|
||||
padding-right: 5px;
|
||||
}
|
||||
|
||||
table.citation {
|
||||
border-left: solid 1px gray;
|
||||
margin-left: 1px;
|
||||
}
|
||||
|
||||
table.citation td {
|
||||
border-bottom: none;
|
||||
}
|
||||
|
||||
/* -- other body styles ----------------------------------------------------- */
|
||||
|
||||
ol.arabic {
|
||||
list-style: decimal;
|
||||
}
|
||||
|
||||
ol.loweralpha {
|
||||
list-style: lower-alpha;
|
||||
}
|
||||
|
||||
ol.upperalpha {
|
||||
list-style: upper-alpha;
|
||||
}
|
||||
|
||||
ol.lowerroman {
|
||||
list-style: lower-roman;
|
||||
}
|
||||
|
||||
ol.upperroman {
|
||||
list-style: upper-roman;
|
||||
}
|
||||
|
||||
dl {
|
||||
margin-bottom: 15px;
|
||||
}
|
||||
|
||||
dd p {
|
||||
margin-top: 0px;
|
||||
}
|
||||
|
||||
dd ul, dd table {
|
||||
margin-bottom: 10px;
|
||||
}
|
||||
|
||||
dd {
|
||||
margin-top: 3px;
|
||||
margin-bottom: 10px;
|
||||
margin-left: 30px;
|
||||
}
|
||||
|
||||
dt:target, .highlighted {
|
||||
background-color: #fbe54e;
|
||||
}
|
||||
|
||||
dl.glossary dt {
|
||||
font-weight: bold;
|
||||
font-size: 1.1em;
|
||||
}
|
||||
|
||||
.field-list ul {
|
||||
margin: 0;
|
||||
padding-left: 1em;
|
||||
}
|
||||
|
||||
.field-list p {
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.refcount {
|
||||
color: #060;
|
||||
}
|
||||
|
||||
.optional {
|
||||
font-size: 1.3em;
|
||||
}
|
||||
|
||||
.versionmodified {
|
||||
font-style: italic;
|
||||
}
|
||||
|
||||
.system-message {
|
||||
background-color: #fda;
|
||||
padding: 5px;
|
||||
border: 3px solid red;
|
||||
}
|
||||
|
||||
.footnote:target {
|
||||
background-color: #ffa
|
||||
}
|
||||
|
||||
.line-block {
|
||||
display: block;
|
||||
margin-top: 1em;
|
||||
margin-bottom: 1em;
|
||||
}
|
||||
|
||||
.line-block .line-block {
|
||||
margin-top: 0;
|
||||
margin-bottom: 0;
|
||||
margin-left: 1.5em;
|
||||
}
|
||||
|
||||
.guilabel, .menuselection {
|
||||
font-family: sans-serif;
|
||||
}
|
||||
|
||||
.accelerator {
|
||||
text-decoration: underline;
|
||||
}
|
||||
|
||||
.classifier {
|
||||
font-style: oblique;
|
||||
}
|
||||
|
||||
/* -- code displays --------------------------------------------------------- */
|
||||
|
||||
pre {
|
||||
overflow: auto;
|
||||
}
|
||||
|
||||
td.linenos pre {
|
||||
padding: 5px 0px;
|
||||
border: 0;
|
||||
background-color: transparent;
|
||||
color: #aaa;
|
||||
}
|
||||
|
||||
table.highlighttable {
|
||||
margin-left: 0.5em;
|
||||
}
|
||||
|
||||
table.highlighttable td {
|
||||
padding: 0 0.5em 0 0.5em;
|
||||
}
|
||||
|
||||
tt.descname {
|
||||
background-color: transparent;
|
||||
font-weight: bold;
|
||||
font-size: 1.2em;
|
||||
}
|
||||
|
||||
tt.descclassname {
|
||||
background-color: transparent;
|
||||
}
|
||||
|
||||
tt.xref, a tt {
|
||||
background-color: transparent;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt {
|
||||
background-color: transparent;
|
||||
}
|
||||
|
||||
.viewcode-link {
|
||||
float: right;
|
||||
}
|
||||
|
||||
.viewcode-back {
|
||||
float: right;
|
||||
font-family: sans-serif;
|
||||
}
|
||||
|
||||
div.viewcode-block:target {
|
||||
margin: -1px -10px;
|
||||
padding: 0 10px;
|
||||
}
|
||||
|
||||
/* -- math display ---------------------------------------------------------- */
|
||||
|
||||
img.math {
|
||||
vertical-align: middle;
|
||||
}
|
||||
|
||||
div.document div.math p {
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
span.eqno {
|
||||
float: right;
|
||||
}
|
||||
|
||||
/* -- printout stylesheet --------------------------------------------------- */
|
||||
|
||||
@media print {
|
||||
div.document,
|
||||
div.documentwrapper,
|
||||
div.bodywrapper {
|
||||
margin: 0 !important;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
div.sphinxsidebar,
|
||||
div.related,
|
||||
div.footer,
|
||||
#top-link {
|
||||
display: none;
|
||||
}
|
||||
}
|
||||
|
|
@ -1,14 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
MATRIXDOTORG=$HOME/workspace/matrix.org
|
||||
|
||||
rst2html-2.7.py --stylesheet=basic.css,nature.css ../docs/specification.rst > $MATRIXDOTORG/docs/spec/index.html
|
||||
rst2html-2.7.py --stylesheet=basic.css,nature.css ../docs/client-server/howto.rst > $MATRIXDOTORG/docs/howtos/client-server.html
|
||||
|
||||
perl -pi -e 's#<head>#<head><link rel="stylesheet" href="/site.css">#' $MATRIXDOTORG/docs/spec/index.html $MATRIXDOTORG/docs/howtos/client-server.html
|
||||
|
||||
perl -pi -e 's#<body>#<body><div id="header"><div id="headerContent"> </div></div><div id="page"><div id="wrapper"><div style="text-align: center; padding: 40px;"><a href="/"><img src="/matrix.png" width="305" height="130" alt="[matrix]"/></a></div>#' $MATRIXDOTORG/docs/spec/index.html $MATRIXDOTORG/docs/howtos/client-server.html
|
||||
|
||||
perl -pi -e 's#</body>#</div></div><div id="footer"><div id="footerContent">© 2014 Matrix.org</div></div></body>#' $MATRIXDOTORG/docs/spec/index.html $MATRIXDOTORG/docs/howtos/client-server.html
|
||||
|
||||
scp -r $MATRIXDOTORG/docs matrix@ldc-prd-matrix-001:/sites/matrix
|
|
@ -1,270 +0,0 @@
|
|||
/*
|
||||
* nature.css_t
|
||||
* ~~~~~~~~~~~~
|
||||
*
|
||||
* Sphinx stylesheet -- nature theme.
|
||||
*
|
||||
* :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
|
||||
* :license: BSD, see LICENSE for details.
|
||||
*
|
||||
*/
|
||||
|
||||
/* -- page layout ----------------------------------------------------------- */
|
||||
|
||||
body {
|
||||
font-family: Arial, sans-serif;
|
||||
font-size: 100%;
|
||||
/*background-color: #111;*/
|
||||
color: #555;
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
div.documentwrapper {
|
||||
float: left;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
div.bodywrapper {
|
||||
margin: 0 0 0 230px;
|
||||
}
|
||||
|
||||
hr {
|
||||
border: 1px solid #B1B4B6;
|
||||
}
|
||||
|
||||
/*
|
||||
div.document {
|
||||
background-color: #eee;
|
||||
}
|
||||
*/
|
||||
|
||||
div.document {
|
||||
background-color: #ffffff;
|
||||
color: #3E4349;
|
||||
padding: 0 30px 30px 30px;
|
||||
font-size: 0.9em;
|
||||
}
|
||||
|
||||
div.footer {
|
||||
color: #555;
|
||||
width: 100%;
|
||||
padding: 13px 0;
|
||||
text-align: center;
|
||||
font-size: 75%;
|
||||
}
|
||||
|
||||
div.footer a {
|
||||
color: #444;
|
||||
text-decoration: underline;
|
||||
}
|
||||
|
||||
div.related {
|
||||
background-color: #6BA81E;
|
||||
line-height: 32px;
|
||||
color: #fff;
|
||||
text-shadow: 0px 1px 0 #444;
|
||||
font-size: 0.9em;
|
||||
}
|
||||
|
||||
div.related a {
|
||||
color: #E2F3CC;
|
||||
}
|
||||
|
||||
div.sphinxsidebar {
|
||||
font-size: 0.75em;
|
||||
line-height: 1.5em;
|
||||
}
|
||||
|
||||
div.sphinxsidebarwrapper{
|
||||
padding: 20px 0;
|
||||
}
|
||||
|
||||
div.sphinxsidebar h3,
|
||||
div.sphinxsidebar h4 {
|
||||
font-family: Arial, sans-serif;
|
||||
color: #222;
|
||||
font-size: 1.2em;
|
||||
font-weight: normal;
|
||||
margin: 0;
|
||||
padding: 5px 10px;
|
||||
background-color: #ddd;
|
||||
text-shadow: 1px 1px 0 white
|
||||
}
|
||||
|
||||
div.sphinxsidebar h4{
|
||||
font-size: 1.1em;
|
||||
}
|
||||
|
||||
div.sphinxsidebar h3 a {
|
||||
color: #444;
|
||||
}
|
||||
|
||||
|
||||
div.sphinxsidebar p {
|
||||
color: #888;
|
||||
padding: 5px 20px;
|
||||
}
|
||||
|
||||
div.sphinxsidebar p.topless {
|
||||
}
|
||||
|
||||
div.sphinxsidebar ul {
|
||||
margin: 10px 20px;
|
||||
padding: 0;
|
||||
color: #000;
|
||||
}
|
||||
|
||||
div.sphinxsidebar a {
|
||||
color: #444;
|
||||
}
|
||||
|
||||
div.sphinxsidebar input {
|
||||
border: 1px solid #ccc;
|
||||
font-family: sans-serif;
|
||||
font-size: 1em;
|
||||
}
|
||||
|
||||
div.sphinxsidebar input[type=text]{
|
||||
margin-left: 20px;
|
||||
}
|
||||
|
||||
/* -- body styles ----------------------------------------------------------- */
|
||||
|
||||
a {
|
||||
color: #005B81;
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
a:hover {
|
||||
color: #E32E00;
|
||||
text-decoration: underline;
|
||||
}
|
||||
|
||||
div.document h1,
|
||||
div.document h2,
|
||||
div.document h3,
|
||||
div.document h4,
|
||||
div.document h5,
|
||||
div.document h6 {
|
||||
font-family: Arial, sans-serif;
|
||||
background-color: #BED4EB;
|
||||
font-weight: normal;
|
||||
color: #212224;
|
||||
margin: 30px 0px 10px 0px;
|
||||
padding: 5px 0 5px 10px;
|
||||
text-shadow: 0px 1px 0 white
|
||||
}
|
||||
|
||||
div.document h1 { border-top: 20px solid white; margin-top: 0; font-size: 200%; }
|
||||
div.document h2 { font-size: 150%; background-color: #C8D5E3; }
|
||||
div.document h3 { font-size: 120%; background-color: #D8DEE3; }
|
||||
div.document h4 { font-size: 110%; background-color: #D8DEE3; }
|
||||
div.document h5 { font-size: 100%; background-color: #D8DEE3; }
|
||||
div.document h6 { font-size: 100%; background-color: #D8DEE3; }
|
||||
|
||||
a.headerlink {
|
||||
color: #c60f0f;
|
||||
font-size: 0.8em;
|
||||
padding: 0 4px 0 4px;
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
a.headerlink:hover {
|
||||
background-color: #c60f0f;
|
||||
color: white;
|
||||
}
|
||||
|
||||
div.document p, div.document dd, div.document li {
|
||||
line-height: 1.5em;
|
||||
}
|
||||
|
||||
div.admonition p.admonition-title + p {
|
||||
display: inline;
|
||||
}
|
||||
|
||||
div.highlight{
|
||||
background-color: white;
|
||||
}
|
||||
|
||||
div.note {
|
||||
background-color: #eee;
|
||||
border: 1px solid #ccc;
|
||||
}
|
||||
|
||||
div.seealso {
|
||||
background-color: #ffc;
|
||||
border: 1px solid #ff6;
|
||||
}
|
||||
|
||||
div.topic {
|
||||
background-color: #eee;
|
||||
}
|
||||
|
||||
div.warning {
|
||||
background-color: #ffe4e4;
|
||||
border: 1px solid #f66;
|
||||
}
|
||||
|
||||
p.admonition-title {
|
||||
display: inline;
|
||||
}
|
||||
|
||||
p.admonition-title:after {
|
||||
content: ":";
|
||||
}
|
||||
|
||||
pre {
|
||||
padding: 10px;
|
||||
background-color: White;
|
||||
color: #222;
|
||||
line-height: 1.2em;
|
||||
border: 1px solid #C6C9CB;
|
||||
font-size: 1.1em;
|
||||
margin: 1.5em 0 1.5em 0;
|
||||
-webkit-box-shadow: 1px 1px 1px #d8d8d8;
|
||||
-moz-box-shadow: 1px 1px 1px #d8d8d8;
|
||||
}
|
||||
|
||||
tt {
|
||||
background-color: #ecf0f3;
|
||||
color: #222;
|
||||
/* padding: 1px 2px; */
|
||||
font-size: 1.1em;
|
||||
font-family: monospace;
|
||||
}
|
||||
|
||||
.viewcode-back {
|
||||
font-family: Arial, sans-serif;
|
||||
}
|
||||
|
||||
div.viewcode-block:target {
|
||||
background-color: #f4debf;
|
||||
border-top: 1px solid #ac9;
|
||||
border-bottom: 1px solid #ac9;
|
||||
}
|
||||
|
||||
p {
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
ul li dd {
|
||||
margin-top: 0;
|
||||
}
|
||||
|
||||
ul li dl {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
li dl dd {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
dd ul {
|
||||
padding-left: 0;
|
||||
}
|
||||
|
||||
li dd ul {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
setup.py (4 changes)

@@ -31,7 +31,7 @@ setup(
     packages=find_packages(exclude=["tests"]),
     description="Reference Synapse Home Server",
     install_requires=[
-        "syutil==0.0.1",
+        "syutil==0.0.2",
         "Twisted>=14.0.0",
         "service_identity>=1.0.0",
         "pyyaml",

@@ -41,7 +41,7 @@ setup(
         "py-bcrypt",
     ],
     dependency_links=[
-        "git+ssh://git@github.com/matrix-org/syutil.git#egg=syutil-0.0.1",
+        "git+ssh://git@github.com/matrix-org/syutil.git#egg=syutil-0.0.2",
     ],
     setup_requires=[
         "setuptools_trial",
@@ -16,4 +16,4 @@
 """ This is a reference implementation of a synapse home server.
 """

-__version__ = "0.3.4"
+__version__ = "0.4.0"
@@ -206,6 +206,7 @@ class Auth(object):

         defer.returnValue(True)

+    @defer.inlineCallbacks
     def get_user_by_req(self, request):
         """ Get a registered user's ID.

@@ -218,7 +219,25 @@ class Auth(object):
         """
         # Can optionally look elsewhere in the request (e.g. headers)
         try:
-            return self.get_user_by_token(request.args["access_token"][0])
+            access_token = request.args["access_token"][0]
+            user_info = yield self.get_user_by_token(access_token)
+            user = user_info["user"]
+
+            ip_addr = self.hs.get_ip_from_request(request)
+            user_agent = request.requestHeaders.getRawHeaders(
+                "User-Agent",
+                default=[""]
+            )[0]
+            if user and access_token and ip_addr:
+                self.store.insert_client_ip(
+                    user=user,
+                    access_token=access_token,
+                    device_id=user_info["device_id"],
+                    ip=ip_addr,
+                    user_agent=user_agent
+                )
+
+            defer.returnValue(user)
         except KeyError:
             raise AuthError(403, "Missing access token.")

@@ -227,21 +246,32 @@ class Auth(object):
         """ Get a registered user's ID.

         Args:
-            token (str)- The access token to get the user by.
+            token (str): The access token to get the user by.
         Returns:
-            UserID : User ID object of the user who has that access token.
+            dict : dict that includes the user, device_id, and whether the
+                user is a server admin.
         Raises:
             AuthError if no user by that token exists or the token is invalid.
         """
         try:
-            user_id = yield self.store.get_user_by_token(token=token)
-            if not user_id:
+            ret = yield self.store.get_user_by_token(token=token)
+            if not ret:
                 raise StoreError()
-            defer.returnValue(self.hs.parse_userid(user_id))
+
+            user_info = {
+                "admin": bool(ret.get("admin", False)),
+                "device_id": ret.get("device_id"),
+                "user": self.hs.parse_userid(ret.get("name")),
+            }
+
+            defer.returnValue(user_info)
         except StoreError:
             raise AuthError(403, "Unrecognised access token.",
                             errcode=Codes.UNKNOWN_TOKEN)

     def is_server_admin(self, user):
         return self.store.is_server_admin(user)

     @defer.inlineCallbacks
     @log_function
     def _can_send_event(self, event):
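Since get_user_by_token now resolves to a dict rather than a bare UserID, callers unpack the result. A minimal sketch of a consumer, assuming an Auth instance and a Twisted request; the wrapper function itself is hypothetical, only the "user"/"device_id"/"admin" keys come from the change above:

from twisted.internet import defer

@defer.inlineCallbacks
def who_is_calling(auth, request):
    # Token handling mirrors get_user_by_req above.
    token = request.args["access_token"][0]
    user_info = yield auth.get_user_by_token(token)
    defer.returnValue({
        "user_id": user_info["user"].to_string(),   # UserID -> string form
        "device_id": user_info["device_id"],
        "is_admin": user_info["admin"],
    })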
@@ -19,6 +19,7 @@ import logging


 class Codes(object):
+    UNAUTHORIZED = "M_UNAUTHORIZED"
     FORBIDDEN = "M_FORBIDDEN"
     BAD_JSON = "M_BAD_JSON"
     NOT_JSON = "M_NOT_JSON"
@@ -58,8 +58,8 @@ class EventFactory(object):
             random_string(10), self.hs.hostname
         )

-        if "ts" not in kwargs:
-            kwargs["ts"] = int(self.clock.time_msec())
+        if "origin_server_ts" not in kwargs:
+            kwargs["origin_server_ts"] = int(self.clock.time_msec())

         # The "age" key is a delta timestamp that should be converted into an
         # absolute timestamp the minute we see it.
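The hunk above renames the auto-filled timestamp key from "ts" to "origin_server_ts". A minimal sketch of what this looks like from a caller's side, reusing the keyword-argument style the diff uses elsewhere; the event type, room ID and user ID are illustrative, not taken from the commit:

event = event_factory.create_event(
    etype="m.room.message",                  # illustrative event type
    room_id="!abc123:localhost",             # illustrative room ID
    user_id="@alice:localhost",              # illustrative user ID
    content={"msgtype": "m.text", "body": "hello"},
)
# The factory now stamps origin_server_ts (milliseconds since the epoch,
# from clock.time_msec()) on the event instead of ts.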
@@ -19,3 +19,4 @@ CLIENT_PREFIX = "/_matrix/client/api/v1"
 FEDERATION_PREFIX = "/_matrix/federation/v1"
 WEB_CLIENT_PREFIX = "/_matrix/client"
 CONTENT_REPO_PREFIX = "/_matrix/content"
+SERVER_KEY_PREFIX = "/_matrix/key/v1"
@ -25,9 +25,11 @@ from twisted.web.static import File
|
|||
from twisted.web.server import Site
|
||||
from synapse.http.server import JsonResource, RootRedirect
|
||||
from synapse.http.content_repository import ContentRepoResource
|
||||
from synapse.http.client import TwistedHttpClient
|
||||
from synapse.http.server_key_resource import LocalKey
|
||||
from synapse.http.client import MatrixHttpClient
|
||||
from synapse.api.urls import (
|
||||
CLIENT_PREFIX, FEDERATION_PREFIX, WEB_CLIENT_PREFIX, CONTENT_REPO_PREFIX
|
||||
CLIENT_PREFIX, FEDERATION_PREFIX, WEB_CLIENT_PREFIX, CONTENT_REPO_PREFIX,
|
||||
SERVER_KEY_PREFIX,
|
||||
)
|
||||
from synapse.config.homeserver import HomeServerConfig
|
||||
from synapse.crypto import context_factory
|
||||
|
@ -47,7 +49,7 @@ logger = logging.getLogger(__name__)
|
|||
class SynapseHomeServer(HomeServer):
|
||||
|
||||
def build_http_client(self):
|
||||
return TwistedHttpClient(self)
|
||||
return MatrixHttpClient(self)
|
||||
|
||||
def build_resource_for_client(self):
|
||||
return JsonResource()
|
||||
|
@ -63,6 +65,9 @@ class SynapseHomeServer(HomeServer):
|
|||
self, self.upload_dir, self.auth, self.content_addr
|
||||
)
|
||||
|
||||
def build_resource_for_server_key(self):
|
||||
return LocalKey(self)
|
||||
|
||||
def build_db_pool(self):
|
||||
return adbapi.ConnectionPool(
|
||||
"sqlite3", self.get_db_name(),
|
||||
|
@ -88,7 +93,8 @@ class SynapseHomeServer(HomeServer):
|
|||
desired_tree = [
|
||||
(CLIENT_PREFIX, self.get_resource_for_client()),
|
||||
(FEDERATION_PREFIX, self.get_resource_for_federation()),
|
||||
(CONTENT_REPO_PREFIX, self.get_resource_for_content_repo())
|
||||
(CONTENT_REPO_PREFIX, self.get_resource_for_content_repo()),
|
||||
(SERVER_KEY_PREFIX, self.get_resource_for_server_key()),
|
||||
]
|
||||
if web_client:
|
||||
logger.info("Adding the web client.")
|
||||
|
|
|
@ -123,6 +123,8 @@ class Config(object):
|
|||
# style mode markers into the file, to hint to people that
|
||||
# this is a YAML file.
|
||||
yaml.dump(config, config_file, default_flow_style=False)
|
||||
print "A config file has been generated in %s for server name '%s') with corresponding SSL keys and self-signed certificates. Please review this file and customise it to your needs." % (config_args.config_path, config['server_name'])
|
||||
print "If this server name is incorrect, you will need to regenerate the SSL certificates"
|
||||
sys.exit(0)
|
||||
|
||||
return cls(args)
|
||||
|
|
|
@ -14,7 +14,6 @@
|
|||
# limitations under the License.
|
||||
|
||||
from ._base import Config
|
||||
import os
|
||||
|
||||
class ContentRepositoryConfig(Config):
|
||||
def __init__(self, args):
|
||||
|
|
|
@ -13,10 +13,9 @@
|
|||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import nacl.signing
|
||||
import os
|
||||
from ._base import Config
|
||||
from syutil.base64util import encode_base64, decode_base64
|
||||
from ._base import Config, ConfigError
|
||||
import syutil.crypto.signing_key
|
||||
|
||||
|
||||
class ServerConfig(Config):
|
||||
|
@ -70,9 +69,16 @@ class ServerConfig(Config):
|
|||
"content repository")
|
||||
|
||||
def read_signing_key(self, signing_key_path):
|
||||
signing_key_base64 = self.read_file(signing_key_path, "signing_key")
|
||||
signing_key_bytes = decode_base64(signing_key_base64)
|
||||
return nacl.signing.SigningKey(signing_key_bytes)
|
||||
signing_keys = self.read_file(signing_key_path, "signing_key")
|
||||
try:
|
||||
return syutil.crypto.signing_key.read_signing_keys(
|
||||
signing_keys.splitlines(True)
|
||||
)
|
||||
except Exception as e:
|
||||
raise ConfigError(
|
||||
"Error reading signing_key."
|
||||
" Try running again with --generate-config"
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def generate_config(cls, args, config_dir_path):
|
||||
|
@ -86,6 +92,21 @@ class ServerConfig(Config):
|
|||
|
||||
if not os.path.exists(args.signing_key_path):
|
||||
with open(args.signing_key_path, "w") as signing_key_file:
|
||||
key = nacl.signing.SigningKey.generate()
|
||||
signing_key_file.write(encode_base64(key.encode()))
|
||||
|
||||
syutil.crypto.signing_key.write_signing_keys(
|
||||
signing_key_file,
|
||||
(syutil.crypto.SigningKey.generate("auto"),),
|
||||
)
|
||||
else:
|
||||
signing_keys = cls.read_file(args.signing_key_path, "signing_key")
|
||||
if len(signing_keys.split("\n")[0].split()) == 1:
|
||||
# handle keys in the old format.
|
||||
key = syutil.crypto.signing_key.decode_signing_key_base64(
|
||||
syutil.crypto.signing_key.NACL_ED25519,
|
||||
"auto",
|
||||
signing_keys.split("\n")[0]
|
||||
)
|
||||
with open(args.signing_key_path, "w") as signing_key_file:
|
||||
syutil.crypto.signing_key.write_signing_keys(
|
||||
signing_key_file,
|
||||
(key,),
|
||||
)
|
||||
|
|
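Signing keys now go through syutil's reader/writer rather than a single raw base64 line. A hedged sketch of upgrading one old-format key file by hand, using only the syutil calls that appear in the hunk above; the file path is illustrative and in practice Synapse performs this conversion itself during --generate-config:

import syutil.crypto.signing_key

path = "/etc/synapse/homeserver.signing.key"   # illustrative path

with open(path) as f:
    content = f.read()

if len(content.split("\n")[0].split()) == 1:
    # Old format: a bare base64-encoded ed25519 seed on a single line.
    key = syutil.crypto.signing_key.decode_signing_key_base64(
        syutil.crypto.signing_key.NACL_ED25519,
        "auto",
        content.split("\n")[0],
    )
    with open(path, "w") as f:
        # New format: algorithm, version and key material per line.
        syutil.crypto.signing_key.write_signing_keys(f, (key,))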
|
@ -15,9 +15,10 @@
|
|||
|
||||
|
||||
from twisted.web.http import HTTPClient
|
||||
from twisted.internet.protocol import Factory
|
||||
from twisted.internet import defer, reactor
|
||||
from twisted.internet.protocol import ClientFactory
|
||||
from twisted.names.srvconnect import SRVConnector
|
||||
from twisted.internet.endpoints import connectProtocol
|
||||
from synapse.http.endpoint import matrix_endpoint
|
||||
import json
|
||||
import logging
|
||||
|
||||
|
@ -30,15 +31,19 @@ def fetch_server_key(server_name, ssl_context_factory):
|
|||
"""Fetch the keys for a remote server."""
|
||||
|
||||
factory = SynapseKeyClientFactory()
|
||||
endpoint = matrix_endpoint(
|
||||
reactor, server_name, ssl_context_factory, timeout=30
|
||||
)
|
||||
|
||||
SRVConnector(
|
||||
reactor, "matrix", server_name, factory,
|
||||
protocol="tcp", connectFuncName="connectSSL", defaultPort=443,
|
||||
connectFuncKwArgs=dict(contextFactory=ssl_context_factory)).connect()
|
||||
|
||||
server_key, server_certificate = yield factory.remote_key
|
||||
|
||||
defer.returnValue((server_key, server_certificate))
|
||||
for i in range(5):
|
||||
try:
|
||||
protocol = yield endpoint.connect(factory)
|
||||
server_response, server_certificate = yield protocol.remote_key
|
||||
defer.returnValue((server_response, server_certificate))
|
||||
return
|
||||
except Exception as e:
|
||||
logger.exception(e)
|
||||
raise IOError("Cannot get key for %s" % server_name)
|
||||
|
||||
|
||||
class SynapseKeyClientError(Exception):
|
||||
|
@ -51,69 +56,47 @@ class SynapseKeyClientProtocol(HTTPClient):
|
|||
the server and extracts the X.509 certificate for the remote peer from the
|
||||
SSL connection."""
|
||||
|
||||
timeout = 30
|
||||
|
||||
def __init__(self):
|
||||
self.remote_key = defer.Deferred()
|
||||
|
||||
def connectionMade(self):
|
||||
logger.debug("Connected to %s", self.transport.getHost())
|
||||
self.sendCommand(b"GET", b"/key")
|
||||
self.sendCommand(b"GET", b"/_matrix/key/v1/")
|
||||
self.endHeaders()
|
||||
self.timer = reactor.callLater(
|
||||
self.factory.timeout_seconds,
|
||||
self.timeout,
|
||||
self.on_timeout
|
||||
)
|
||||
|
||||
def handleStatus(self, version, status, message):
|
||||
if status != b"200":
|
||||
logger.info("Non-200 response from %s: %s %s",
|
||||
self.transport.getHost(), status, message)
|
||||
#logger.info("Non-200 response from %s: %s %s",
|
||||
# self.transport.getHost(), status, message)
|
||||
self.transport.abortConnection()
|
||||
|
||||
def handleResponse(self, response_body_bytes):
|
||||
try:
|
||||
json_response = json.loads(response_body_bytes)
|
||||
except ValueError:
|
||||
logger.info("Invalid JSON response from %s",
|
||||
self.transport.getHost())
|
||||
#logger.info("Invalid JSON response from %s",
|
||||
# self.transport.getHost())
|
||||
self.transport.abortConnection()
|
||||
return
|
||||
|
||||
certificate = self.transport.getPeerCertificate()
|
||||
self.factory.on_remote_key((json_response, certificate))
|
||||
self.remote_key.callback((json_response, certificate))
|
||||
self.transport.abortConnection()
|
||||
self.timer.cancel()
|
||||
|
||||
def on_timeout(self):
|
||||
logger.debug("Timeout waiting for response from %s",
|
||||
self.transport.getHost())
|
||||
self.remote_key.errback(IOError("Timeout waiting for response"))
|
||||
self.transport.abortConnection()
|
||||
|
||||
|
||||
class SynapseKeyClientFactory(ClientFactory):
|
||||
class SynapseKeyClientFactory(Factory):
|
||||
protocol = SynapseKeyClientProtocol
|
||||
max_retries = 5
|
||||
timeout_seconds = 30
|
||||
|
||||
def __init__(self):
|
||||
self.succeeded = False
|
||||
self.retries = 0
|
||||
self.remote_key = defer.Deferred()
|
||||
|
||||
def on_remote_key(self, key):
|
||||
self.succeeded = True
|
||||
self.remote_key.callback(key)
|
||||
|
||||
def retry_connection(self, connector):
|
||||
self.retries += 1
|
||||
if self.retries < self.max_retries:
|
||||
connector.connector = None
|
||||
connector.connect()
|
||||
else:
|
||||
self.remote_key.errback(
|
||||
SynapseKeyClientError("Max retries exceeded"))
|
||||
|
||||
def clientConnectionFailed(self, connector, reason):
|
||||
logger.info("Connection failed %s", reason)
|
||||
self.retry_connection(connector)
|
||||
|
||||
def clientConnectionLost(self, connector, reason):
|
||||
logger.info("Connection lost %s", reason)
|
||||
if not self.succeeded:
|
||||
self.retry_connection(connector)
|
||||
|
|
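The rewritten fetch_server_key drops the SRVConnector plumbing in favour of a matrix_endpoint plus an explicit retry loop. A stripped-down sketch of that retry pattern for any Twisted endpoint and protocol factory; the helper name and argument defaults are assumptions, only the loop shape mirrors the code above:

from twisted.internet import defer

@defer.inlineCallbacks
def connect_with_retries(endpoint, factory, attempts=5):
    # Try the endpoint a fixed number of times before giving up, as
    # fetch_server_key now does.
    last_error = None
    for _ in range(attempts):
        try:
            protocol = yield endpoint.connect(factory)
            defer.returnValue(protocol)
        except Exception as e:
            last_error = e
    raise IOError("Could not connect after %d attempts: %s"
                  % (attempts, last_error))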
synapse/crypto/keyring.py (new file, 155 lines)
@ -0,0 +1,155 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from synapse.crypto.keyclient import fetch_server_key
|
||||
from twisted.internet import defer
|
||||
from syutil.crypto.jsonsign import verify_signed_json, signature_ids
|
||||
from syutil.crypto.signing_key import (
|
||||
is_signing_algorithm_supported, decode_verify_key_bytes
|
||||
)
|
||||
from syutil.base64util import decode_base64, encode_base64
|
||||
from synapse.api.errors import SynapseError, Codes
|
||||
|
||||
from OpenSSL import crypto
|
||||
|
||||
import logging
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class Keyring(object):
|
||||
def __init__(self, hs):
|
||||
self.store = hs.get_datastore()
|
||||
self.clock = hs.get_clock()
|
||||
self.hs = hs
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def verify_json_for_server(self, server_name, json_object):
|
||||
logger.debug("Verifying for %s", server_name)
|
||||
key_ids = signature_ids(json_object, server_name)
|
||||
if not key_ids:
|
||||
raise SynapseError(
|
||||
400,
|
||||
"Not signed with a supported algorithm",
|
||||
Codes.UNAUTHORIZED,
|
||||
)
|
||||
try:
|
||||
verify_key = yield self.get_server_verify_key(server_name, key_ids)
|
||||
except IOError:
|
||||
raise SynapseError(
|
||||
502,
|
||||
"Error downloading keys for %s" % (server_name,),
|
||||
Codes.UNAUTHORIZED,
|
||||
)
|
||||
except:
|
||||
raise SynapseError(
|
||||
401,
|
||||
"No key for %s with id %s" % (server_name, key_ids),
|
||||
Codes.UNAUTHORIZED,
|
||||
)
|
||||
try:
|
||||
verify_signed_json(json_object, server_name, verify_key)
|
||||
except:
|
||||
raise SynapseError(
|
||||
401,
|
||||
"Invalid signature for server %s with key %s:%s" % (
|
||||
server_name, verify_key.alg, verify_key.version
|
||||
),
|
||||
Codes.UNAUTHORIZED,
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_server_verify_key(self, server_name, key_ids):
|
||||
"""Finds a verification key for the server with one of the key ids.
|
||||
Args:
|
||||
server_name (str): The name of the server to fetch a key for.
|
||||
keys_ids (list of str): The key_ids to check for.
|
||||
"""
|
||||
|
||||
# Check the datastore to see if we have one cached.
|
||||
cached = yield self.store.get_server_verify_keys(server_name, key_ids)
|
||||
|
||||
if cached:
|
||||
defer.returnValue(cached[0])
|
||||
return
|
||||
|
||||
# Try to fetch the key from the remote server.
|
||||
# TODO(markjh): Ratelimit requests to a given server.
|
||||
|
||||
(response, tls_certificate) = yield fetch_server_key(
|
||||
server_name, self.hs.tls_context_factory
|
||||
)
|
||||
|
||||
# Check the response.
|
||||
|
||||
x509_certificate_bytes = crypto.dump_certificate(
|
||||
crypto.FILETYPE_ASN1, tls_certificate
|
||||
)
|
||||
|
||||
if ("signatures" not in response
|
||||
or server_name not in response["signatures"]):
|
||||
raise ValueError("Key response not signed by remote server")
|
||||
|
||||
if "tls_certificate" not in response:
|
||||
raise ValueError("Key response missing TLS certificate")
|
||||
|
||||
tls_certificate_b64 = response["tls_certificate"]
|
||||
|
||||
if encode_base64(x509_certificate_bytes) != tls_certificate_b64:
|
||||
raise ValueError("TLS certificate doesn't match")
|
||||
|
||||
verify_keys = {}
|
||||
for key_id, key_base64 in response["verify_keys"].items():
|
||||
if is_signing_algorithm_supported(key_id):
|
||||
key_bytes = decode_base64(key_base64)
|
||||
verify_key = decode_verify_key_bytes(key_id, key_bytes)
|
||||
verify_keys[key_id] = verify_key
|
||||
|
||||
for key_id in response["signatures"][server_name]:
|
||||
if key_id not in response["verify_keys"]:
|
||||
raise ValueError(
|
||||
"Key response must include verification keys for all"
|
||||
" signatures"
|
||||
)
|
||||
if key_id in verify_keys:
|
||||
verify_signed_json(
|
||||
response,
|
||||
server_name,
|
||||
verify_keys[key_id]
|
||||
)
|
||||
|
||||
# Cache the result in the datastore.
|
||||
|
||||
time_now_ms = self.clock.time_msec()
|
||||
|
||||
self.store.store_server_certificate(
|
||||
server_name,
|
||||
server_name,
|
||||
time_now_ms,
|
||||
tls_certificate,
|
||||
)
|
||||
|
||||
for key_id, key in verify_keys.items():
|
||||
self.store.store_server_verify_key(
|
||||
server_name, server_name, time_now_ms, key
|
||||
)
|
||||
|
||||
for key_id in key_ids:
|
||||
if key_id in verify_keys:
|
||||
defer.returnValue(verify_keys[key_id])
|
||||
return
|
||||
|
||||
raise ValueError("No verification key found for given key ids")
|
|
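verify_json_for_server is the entry point the federation transport uses later in this commit (see _authenticate_request). A minimal usage sketch, assuming a homeserver object exposing get_keyring() as the transport layer does, and an already-parsed signed request dict:

from twisted.internet import defer
from synapse.api.errors import SynapseError

@defer.inlineCallbacks
def check_request_signature(hs, origin, signed_request):
    keyring = hs.get_keyring()
    try:
        # Raises SynapseError (401/502) if the signature cannot be verified.
        yield keyring.verify_json_for_server(origin, signed_request)
    except SynapseError:
        # Reject the federation request here.
        raise
    defer.returnValue(True)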
@ -1,111 +0,0 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
from twisted.internet import reactor, ssl
|
||||
from twisted.web import server
|
||||
from twisted.web.resource import Resource
|
||||
from twisted.python.log import PythonLoggingObserver
|
||||
|
||||
from synapse.crypto.resource.key import LocalKey
|
||||
from synapse.crypto.config import load_config
|
||||
|
||||
from syutil.base64util import decode_base64
|
||||
|
||||
from OpenSSL import crypto, SSL
|
||||
|
||||
import logging
|
||||
import nacl.signing
|
||||
import sys
|
||||
|
||||
|
||||
class KeyServerSSLContextFactory(ssl.ContextFactory):
|
||||
"""Factory for PyOpenSSL SSL contexts that are used to handle incoming
|
||||
connections and to make connections to remote servers."""
|
||||
|
||||
def __init__(self, key_server):
|
||||
self._context = SSL.Context(SSL.SSLv23_METHOD)
|
||||
self.configure_context(self._context, key_server)
|
||||
|
||||
@staticmethod
|
||||
def configure_context(context, key_server):
|
||||
context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
|
||||
context.use_certificate(key_server.tls_certificate)
|
||||
context.use_privatekey(key_server.tls_private_key)
|
||||
context.load_tmp_dh(key_server.tls_dh_params_path)
|
||||
context.set_cipher_list("!ADH:HIGH+kEDH:!AECDH:HIGH+kEECDH")
|
||||
|
||||
def getContext(self):
|
||||
return self._context
|
||||
|
||||
|
||||
class KeyServer(object):
|
||||
"""An HTTPS server serving LocalKey and RemoteKey resources."""
|
||||
|
||||
def __init__(self, server_name, tls_certificate_path, tls_private_key_path,
|
||||
tls_dh_params_path, signing_key_path, bind_host, bind_port):
|
||||
self.server_name = server_name
|
||||
self.tls_certificate = self.read_tls_certificate(tls_certificate_path)
|
||||
self.tls_private_key = self.read_tls_private_key(tls_private_key_path)
|
||||
self.tls_dh_params_path = tls_dh_params_path
|
||||
self.signing_key = self.read_signing_key(signing_key_path)
|
||||
self.bind_host = bind_host
|
||||
self.bind_port = int(bind_port)
|
||||
self.ssl_context_factory = KeyServerSSLContextFactory(self)
|
||||
|
||||
@staticmethod
|
||||
def read_tls_certificate(cert_path):
|
||||
with open(cert_path) as cert_file:
|
||||
cert_pem = cert_file.read()
|
||||
return crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
|
||||
|
||||
@staticmethod
|
||||
def read_tls_private_key(private_key_path):
|
||||
with open(private_key_path) as private_key_file:
|
||||
private_key_pem = private_key_file.read()
|
||||
return crypto.load_privatekey(crypto.FILETYPE_PEM, private_key_pem)
|
||||
|
||||
@staticmethod
|
||||
def read_signing_key(signing_key_path):
|
||||
with open(signing_key_path) as signing_key_file:
|
||||
signing_key_b64 = signing_key_file.read()
|
||||
signing_key_bytes = decode_base64(signing_key_b64)
|
||||
return nacl.signing.SigningKey(signing_key_bytes)
|
||||
|
||||
def run(self):
|
||||
root = Resource()
|
||||
root.putChild("key", LocalKey(self))
|
||||
site = server.Site(root)
|
||||
reactor.listenSSL(
|
||||
self.bind_port,
|
||||
site,
|
||||
self.ssl_context_factory,
|
||||
interface=self.bind_host
|
||||
)
|
||||
|
||||
logging.basicConfig(level=logging.DEBUG)
|
||||
observer = PythonLoggingObserver()
|
||||
observer.start()
|
||||
|
||||
reactor.run()
|
||||
|
||||
|
||||
def main():
|
||||
key_server = KeyServer(**load_config(__doc__, sys.argv[1:]))
|
||||
key_server.run()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
|
@ -1,15 +0,0 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
|
@ -1,161 +0,0 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
from twisted.web.resource import Resource
|
||||
from twisted.web.server import NOT_DONE_YET
|
||||
from twisted.internet import defer
|
||||
from synapse.http.server import respond_with_json_bytes
|
||||
from synapse.crypto.keyclient import fetch_server_key
|
||||
from syutil.crypto.jsonsign import sign_json, verify_signed_json
|
||||
from syutil.base64util import encode_base64, decode_base64
|
||||
from syutil.jsonutil import encode_canonical_json
|
||||
from OpenSSL import crypto
|
||||
from nacl.signing import VerifyKey
|
||||
import logging
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class LocalKey(Resource):
|
||||
"""HTTP resource containing encoding the TLS X.509 certificate and NACL
|
||||
signature verification keys for this server::
|
||||
|
||||
GET /key HTTP/1.1
|
||||
|
||||
HTTP/1.1 200 OK
|
||||
Content-Type: application/json
|
||||
{
|
||||
"server_name": "this.server.example.com"
|
||||
"signature_verify_key": # base64 encoded NACL verification key.
|
||||
"tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert.
|
||||
"signatures": {
|
||||
"this.server.example.com": # NACL signature for this server.
|
||||
}
|
||||
}
|
||||
"""
|
||||
|
||||
def __init__(self, key_server):
|
||||
self.key_server = key_server
|
||||
self.response_body = encode_canonical_json(
|
||||
self.response_json_object(key_server)
|
||||
)
|
||||
Resource.__init__(self)
|
||||
|
||||
@staticmethod
|
||||
def response_json_object(key_server):
|
||||
verify_key_bytes = key_server.signing_key.verify_key.encode()
|
||||
x509_certificate_bytes = crypto.dump_certificate(
|
||||
crypto.FILETYPE_ASN1,
|
||||
key_server.tls_certificate
|
||||
)
|
||||
json_object = {
|
||||
u"server_name": key_server.server_name,
|
||||
u"signature_verify_key": encode_base64(verify_key_bytes),
|
||||
u"tls_certificate": encode_base64(x509_certificate_bytes)
|
||||
}
|
||||
signed_json = sign_json(
|
||||
json_object,
|
||||
key_server.server_name,
|
||||
key_server.signing_key
|
||||
)
|
||||
return signed_json
|
||||
|
||||
def getChild(self, name, request):
|
||||
logger.info("getChild %s %s", name, request)
|
||||
if name == '':
|
||||
return self
|
||||
else:
|
||||
return RemoteKey(name, self.key_server)
|
||||
|
||||
def render_GET(self, request):
|
||||
return respond_with_json_bytes(request, 200, self.response_body)
|
||||
|
||||
|
||||
class RemoteKey(Resource):
|
||||
"""HTTP resource for retreiving the TLS certificate and NACL signature
|
||||
verification keys for a another server. Checks that the reported X.509 TLS
|
||||
certificate matches the one used in the HTTPS connection. Checks that the
|
||||
NACL signature for the remote server is valid. Returns JSON signed by both
|
||||
the remote server and by this server.
|
||||
|
||||
GET /key/remote.server.example.com HTTP/1.1
|
||||
|
||||
HTTP/1.1 200 OK
|
||||
Content-Type: application/json
|
||||
{
|
||||
"server_name": "remote.server.example.com"
|
||||
"signature_verify_key": # base64 encoded NACL verification key.
|
||||
"tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert.
|
||||
"signatures": {
|
||||
"remote.server.example.com": # NACL signature for remote server.
|
||||
"this.server.example.com": # NACL signature for this server.
|
||||
}
|
||||
}
|
||||
"""
|
||||
|
||||
isLeaf = True
|
||||
|
||||
def __init__(self, server_name, key_server):
|
||||
self.server_name = server_name
|
||||
self.key_server = key_server
|
||||
Resource.__init__(self)
|
||||
|
||||
def render_GET(self, request):
|
||||
self._async_render_GET(request)
|
||||
return NOT_DONE_YET
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _async_render_GET(self, request):
|
||||
try:
|
||||
server_keys, certificate = yield fetch_server_key(
|
||||
self.server_name,
|
||||
self.key_server.ssl_context_factory
|
||||
)
|
||||
|
||||
resp_server_name = server_keys[u"server_name"]
|
||||
verify_key_b64 = server_keys[u"signature_verify_key"]
|
||||
tls_certificate_b64 = server_keys[u"tls_certificate"]
|
||||
verify_key = VerifyKey(decode_base64(verify_key_b64))
|
||||
|
||||
if resp_server_name != self.server_name:
|
||||
raise ValueError("Wrong server name '%s' != '%s'" %
|
||||
(resp_server_name, self.server_name))
|
||||
|
||||
x509_certificate_bytes = crypto.dump_certificate(
|
||||
crypto.FILETYPE_ASN1,
|
||||
certificate
|
||||
)
|
||||
|
||||
if encode_base64(x509_certificate_bytes) != tls_certificate_b64:
|
||||
raise ValueError("TLS certificate doesn't match")
|
||||
|
||||
verify_signed_json(server_keys, self.server_name, verify_key)
|
||||
|
||||
signed_json = sign_json(
|
||||
server_keys,
|
||||
self.key_server.server_name,
|
||||
self.key_server.signing_key
|
||||
)
|
||||
|
||||
json_bytes = encode_canonical_json(signed_json)
|
||||
respond_with_json_bytes(request, 200, json_bytes)
|
||||
|
||||
except Exception as e:
|
||||
json_bytes = encode_canonical_json({
|
||||
u"error": {u"code": 502, u"message": e.message}
|
||||
})
|
||||
respond_with_json_bytes(request, 502, json_bytes)
|
|
@ -22,6 +22,7 @@ from .transport import TransportLayer
|
|||
|
||||
def initialize_http_replication(homeserver):
|
||||
transport = TransportLayer(
|
||||
homeserver,
|
||||
homeserver.hostname,
|
||||
server=homeserver.get_resource_for_federation(),
|
||||
client=homeserver.get_http_client()
|
||||
|
|
|
@ -96,7 +96,7 @@ class PduCodec(object):
|
|||
if k not in ["event_id", "room_id", "type", "prev_events"]
|
||||
})
|
||||
|
||||
if "ts" not in kwargs:
|
||||
kwargs["ts"] = int(self.clock.time_msec())
|
||||
if "origin_server_ts" not in kwargs:
|
||||
kwargs["origin_server_ts"] = int(self.clock.time_msec())
|
||||
|
||||
return Pdu(**kwargs)
|
||||
|
|
|
@ -157,7 +157,7 @@ class TransactionActions(object):
|
|||
transaction.prev_ids = yield self.store.prep_send_transaction(
|
||||
transaction.transaction_id,
|
||||
transaction.destination,
|
||||
transaction.ts,
|
||||
transaction.origin_server_ts,
|
||||
[(p["pdu_id"], p["origin"]) for p in transaction.pdus]
|
||||
)
|
||||
|
||||
|
|
|
@ -159,7 +159,8 @@ class ReplicationLayer(object):
|
|||
return defer.succeed(None)
|
||||
|
||||
@log_function
|
||||
def make_query(self, destination, query_type, args):
|
||||
def make_query(self, destination, query_type, args,
|
||||
retry_on_dns_fail=True):
|
||||
"""Sends a federation Query to a remote homeserver of the given type
|
||||
and arguments.
|
||||
|
||||
|
@ -174,7 +175,9 @@ class ReplicationLayer(object):
|
|||
a Deferred which will eventually yield a JSON object from the
|
||||
response
|
||||
"""
|
||||
return self.transport_layer.make_query(destination, query_type, args)
|
||||
return self.transport_layer.make_query(
|
||||
destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
@log_function
|
||||
|
@ -316,7 +319,7 @@ class ReplicationLayer(object):
|
|||
|
||||
if hasattr(transaction, "edus"):
|
||||
for edu in [Edu(**x) for x in transaction.edus]:
|
||||
self.received_edu(edu.origin, edu.edu_type, edu.content)
|
||||
self.received_edu(transaction.origin, edu.edu_type, edu.content)
|
||||
|
||||
results = yield defer.DeferredList(dl)
|
||||
|
||||
|
@ -418,7 +421,7 @@ class ReplicationLayer(object):
|
|||
return Transaction(
|
||||
origin=self.server_name,
|
||||
pdus=pdus,
|
||||
ts=int(self._clock.time_msec()),
|
||||
origin_server_ts=int(self._clock.time_msec()),
|
||||
destination=None,
|
||||
)
|
||||
|
||||
|
@ -489,7 +492,6 @@ class _TransactionQueue(object):
|
|||
"""
|
||||
|
||||
def __init__(self, hs, transaction_actions, transport_layer):
|
||||
|
||||
self.server_name = hs.hostname
|
||||
self.transaction_actions = transaction_actions
|
||||
self.transport_layer = transport_layer
|
||||
|
@ -587,8 +589,8 @@ class _TransactionQueue(object):
|
|||
logger.debug("TX [%s] Persisting transaction...", destination)
|
||||
|
||||
transaction = Transaction.create_new(
|
||||
ts=self._clock.time_msec(),
|
||||
transaction_id=self._next_txn_id,
|
||||
origin_server_ts=self._clock.time_msec(),
|
||||
transaction_id=str(self._next_txn_id),
|
||||
origin=self.server_name,
|
||||
destination=destination,
|
||||
pdus=pdus,
|
||||
|
@ -606,18 +608,17 @@ class _TransactionQueue(object):
|
|||
|
||||
# FIXME (erikj): This is a bit of a hack to make the Pdu age
|
||||
# keys work
|
||||
def cb(transaction):
|
||||
def json_data_cb():
|
||||
data = transaction.get_dict()
|
||||
now = int(self._clock.time_msec())
|
||||
if "pdus" in transaction:
|
||||
for p in transaction["pdus"]:
|
||||
if "pdus" in data:
|
||||
for p in data["pdus"]:
|
||||
if "age_ts" in p:
|
||||
p["age"] = now - int(p["age_ts"])
|
||||
|
||||
return transaction
|
||||
return data
|
||||
|
||||
code, response = yield self.transport_layer.send_transaction(
|
||||
transaction,
|
||||
on_send_callback=cb,
|
||||
transaction, json_data_cb
|
||||
)
|
||||
|
||||
logger.debug("TX [%s] Sent transaction", destination)
|
||||
|
|
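The json_data_cb above converts the stored absolute "age_ts" into a relative "age" at the moment the transaction is serialised. A small worked example; the timestamps are illustrative (the PDU values echo the Pdu docstring later in this diff):

now_ms = 1404835433000                      # assumed send time, in ms
pdu = {"pdu_id": "78c", "age_ts": 1404835423000}

# Same computation as json_data_cb: the delta between "now" and the origin
# timestamp, so the receiver knows how stale the PDU already is.
pdu["age"] = now_ms - int(pdu["age_ts"])    # -> 10000 (10 seconds)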
|
@ -24,6 +24,7 @@ over a different (albeit still reliable) protocol.
|
|||
from twisted.internet import defer
|
||||
|
||||
from synapse.api.urls import FEDERATION_PREFIX as PREFIX
|
||||
from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.util.logutils import log_function
|
||||
|
||||
import logging
|
||||
|
@ -54,7 +55,7 @@ class TransportLayer(object):
|
|||
we receive data.
|
||||
"""
|
||||
|
||||
def __init__(self, server_name, server, client):
|
||||
def __init__(self, homeserver, server_name, server, client):
|
||||
"""
|
||||
Args:
|
||||
server_name (str): Local home server host
|
||||
|
@ -63,6 +64,7 @@ class TransportLayer(object):
|
|||
client (synapse.protocol.http.HttpClient): the http client used to
|
||||
send requests
|
||||
"""
|
||||
self.keyring = homeserver.get_keyring()
|
||||
self.server_name = server_name
|
||||
self.server = server
|
||||
self.client = client
|
||||
|
@ -144,7 +146,7 @@ class TransportLayer(object):
|
|||
|
||||
@defer.inlineCallbacks
|
||||
@log_function
|
||||
def send_transaction(self, transaction, on_send_callback=None):
|
||||
def send_transaction(self, transaction, json_data_callback=None):
|
||||
""" Sends the given Transaction to it's destination
|
||||
|
||||
Args:
|
||||
|
@ -163,25 +165,15 @@ class TransportLayer(object):
|
|||
if transaction.destination == self.server_name:
|
||||
raise RuntimeError("Transport layer cannot send to itself!")
|
||||
|
||||
data = transaction.get_dict()
|
||||
|
||||
# FIXME (erikj): This is a bit of a hack to make the Pdu age
|
||||
# keys work
|
||||
def cb(destination, method, path_bytes, producer):
|
||||
if not on_send_callback:
|
||||
return
|
||||
|
||||
transaction = json.loads(producer.body)
|
||||
|
||||
new_transaction = on_send_callback(transaction)
|
||||
|
||||
producer.reset(new_transaction)
|
||||
# FIXME: This is only used by the tests. The actual json sent is
|
||||
# generated by the json_data_callback.
|
||||
json_data = transaction.get_dict()
|
||||
|
||||
code, response = yield self.client.put_json(
|
||||
transaction.destination,
|
||||
path=PREFIX + "/send/%s/" % transaction.transaction_id,
|
||||
data=data,
|
||||
on_send_callback=cb,
|
||||
data=json_data,
|
||||
json_data_callback=json_data_callback,
|
||||
)
|
||||
|
||||
logger.debug(
|
||||
|
@ -193,17 +185,93 @@ class TransportLayer(object):
|
|||
|
||||
@defer.inlineCallbacks
|
||||
@log_function
|
||||
def make_query(self, destination, query_type, args):
|
||||
def make_query(self, destination, query_type, args, retry_on_dns_fail):
|
||||
path = PREFIX + "/query/%s" % query_type
|
||||
|
||||
response = yield self.client.get_json(
|
||||
destination=destination,
|
||||
path=path,
|
||||
args=args
|
||||
args=args,
|
||||
retry_on_dns_fail=retry_on_dns_fail,
|
||||
)
|
||||
|
||||
defer.returnValue(response)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _authenticate_request(self, request):
|
||||
json_request = {
|
||||
"method": request.method,
|
||||
"uri": request.uri,
|
||||
"destination": self.server_name,
|
||||
"signatures": {},
|
||||
}
|
||||
|
||||
content = None
|
||||
origin = None
|
||||
|
||||
if request.method == "PUT":
|
||||
#TODO: Handle other method types? other content types?
|
||||
try:
|
||||
content_bytes = request.content.read()
|
||||
content = json.loads(content_bytes)
|
||||
json_request["content"] = content
|
||||
except:
|
||||
raise SynapseError(400, "Unable to parse JSON", Codes.BAD_JSON)
|
||||
|
||||
def parse_auth_header(header_str):
|
||||
try:
|
||||
params = auth.split(" ")[1].split(",")
|
||||
param_dict = dict(kv.split("=") for kv in params)
|
||||
def strip_quotes(value):
|
||||
if value.startswith("\""):
|
||||
return value[1:-1]
|
||||
else:
|
||||
return value
|
||||
origin = strip_quotes(param_dict["origin"])
|
||||
key = strip_quotes(param_dict["key"])
|
||||
sig = strip_quotes(param_dict["sig"])
|
||||
return (origin, key, sig)
|
||||
except:
|
||||
raise SynapseError(
|
||||
400, "Malformed Authorization header", Codes.UNAUTHORIZED
|
||||
)
|
||||
|
||||
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
|
||||
|
||||
if not auth_headers:
|
||||
raise SynapseError(
|
||||
401, "Missing Authorization headers", Codes.UNAUTHORIZED,
|
||||
)
|
||||
|
||||
for auth in auth_headers:
|
||||
if auth.startswith("X-Matrix"):
|
||||
(origin, key, sig) = parse_auth_header(auth)
|
||||
json_request["origin"] = origin
|
||||
json_request["signatures"].setdefault(origin,{})[key] = sig
|
||||
|
||||
if not json_request["signatures"]:
|
||||
raise SynapseError(
|
||||
401, "Missing Authorization headers", Codes.UNAUTHORIZED,
|
||||
)
|
||||
|
||||
yield self.keyring.verify_json_for_server(origin, json_request)
|
||||
|
||||
defer.returnValue((origin, content))
|
||||
|
||||
def _with_authentication(self, handler):
|
||||
@defer.inlineCallbacks
|
||||
def new_handler(request, *args, **kwargs):
|
||||
try:
|
||||
(origin, content) = yield self._authenticate_request(request)
|
||||
response = yield handler(
|
||||
origin, content, request.args, *args, **kwargs
|
||||
)
|
||||
except:
|
||||
logger.exception("_authenticate_request failed")
|
||||
raise
|
||||
defer.returnValue(response)
|
||||
return new_handler
|
||||
|
||||
@log_function
|
||||
def register_received_handler(self, handler):
|
||||
""" Register a handler that will be fired when we receive data.
|
||||
|
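_authenticate_request above expects an "Authorization: X-Matrix origin=...,key=...,sig=..." header whose signature covers the method, URI, origin, destination and, for PUTs, the JSON body. A hedged sketch of how a sending server might build such a header with syutil's sign_json; the helper name is an assumption, while the signed fields and header format come from the parsing code above:

from syutil.crypto.jsonsign import sign_json

def build_x_matrix_header(origin, signing_key, method, uri, destination,
                          content=None):
    request = {
        "method": method,
        "uri": uri,
        "origin": origin,
        "destination": destination,
    }
    if content is not None:
        request["content"] = content

    signed = sign_json(request, origin, signing_key)
    # sign_json adds {"signatures": {origin: {key_id: sig_base64}}}.
    key_id, sig = list(signed["signatures"][origin].items())[0]
    return 'X-Matrix origin=%s,key="%s",sig="%s"' % (origin, key_id, sig)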
@ -217,7 +285,7 @@ class TransportLayer(object):
|
|||
self.server.register_path(
|
||||
"PUT",
|
||||
re.compile("^" + PREFIX + "/send/([^/]*)/$"),
|
||||
self._on_send_request
|
||||
self._with_authentication(self._on_send_request)
|
||||
)
|
||||
|
||||
@log_function
|
||||
|
@ -235,9 +303,9 @@ class TransportLayer(object):
|
|||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/pull/$"),
|
||||
lambda request: handler.on_pull_request(
|
||||
request.args["origin"][0],
|
||||
request.args["v"]
|
||||
self._with_authentication(
|
||||
lambda origin, content, query:
|
||||
handler.on_pull_request(query["origin"][0], query["v"])
|
||||
)
|
||||
)
|
||||
|
||||
|
@ -246,8 +314,9 @@ class TransportLayer(object):
|
|||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/pdu/([^/]*)/([^/]*)/$"),
|
||||
lambda request, pdu_origin, pdu_id: handler.on_pdu_request(
|
||||
pdu_origin, pdu_id
|
||||
self._with_authentication(
|
||||
lambda origin, content, query, pdu_origin, pdu_id:
|
||||
handler.on_pdu_request(pdu_origin, pdu_id)
|
||||
)
|
||||
)
|
||||
|
||||
|
@ -255,38 +324,47 @@ class TransportLayer(object):
|
|||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/state/([^/]*)/$"),
|
||||
lambda request, context: handler.on_context_state_request(
|
||||
context
|
||||
self._with_authentication(
|
||||
lambda origin, content, query, context:
|
||||
handler.on_context_state_request(context)
|
||||
)
|
||||
)
|
||||
|
||||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/backfill/([^/]*)/$"),
|
||||
lambda request, context: self._on_backfill_request(
|
||||
context, request.args["v"],
|
||||
request.args["limit"]
|
||||
self._with_authentication(
|
||||
lambda origin, content, query, context:
|
||||
self._on_backfill_request(
|
||||
context, query["v"], query["limit"]
|
||||
)
|
||||
)
|
||||
)
|
||||
|
||||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/context/([^/]*)/$"),
|
||||
lambda request, context: handler.on_context_pdus_request(context)
|
||||
self._with_authentication(
|
||||
lambda origin, content, query, context:
|
||||
handler.on_context_pdus_request(context)
|
||||
)
|
||||
)
|
||||
|
||||
# This is when we receive a server-server Query
|
||||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/query/([^/]*)$"),
|
||||
lambda request, query_type: handler.on_query_request(
|
||||
query_type, {k: v[0] for k, v in request.args.items()}
|
||||
self._with_authentication(
|
||||
lambda origin, content, query, query_type:
|
||||
handler.on_query_request(
|
||||
query_type, {k: v[0] for k, v in query.items()}
|
||||
)
|
||||
)
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
@log_function
|
||||
def _on_send_request(self, request, transaction_id):
|
||||
def _on_send_request(self, origin, content, query, transaction_id):
|
||||
""" Called on PUT /send/<transaction_id>/
|
||||
|
||||
Args:
|
||||
|
@ -301,12 +379,7 @@ class TransportLayer(object):
|
|||
"""
|
||||
# Parse the request
|
||||
try:
|
||||
data = request.content.read()
|
||||
|
||||
l = data[:20].encode("string_escape")
|
||||
logger.debug("Got data: \"%s\"", l)
|
||||
|
||||
transaction_data = json.loads(data)
|
||||
transaction_data = content
|
||||
|
||||
logger.debug(
|
||||
"Decoded %s: %s",
|
||||
|
@ -328,9 +401,13 @@ class TransportLayer(object):
|
|||
defer.returnValue((400, {"error": "Invalid transaction"}))
|
||||
return
|
||||
|
||||
try:
|
||||
code, response = yield self.received_handler.on_incoming_transaction(
|
||||
transaction_data
|
||||
)
|
||||
except:
|
||||
logger.exception("on_incoming_transaction failed")
|
||||
raise
|
||||
|
||||
defer.returnValue((code, response))
|
||||
|
||||
|
|
|
@ -40,7 +40,7 @@ class Pdu(JsonEncodedObject):
|
|||
|
||||
{
|
||||
"pdu_id": "78c",
|
||||
"ts": 1404835423000,
|
||||
"origin_server_ts": 1404835423000,
|
||||
"origin": "bar",
|
||||
"prev_ids": [
|
||||
["23b", "foo"],
|
||||
|
@ -55,7 +55,7 @@ class Pdu(JsonEncodedObject):
|
|||
"pdu_id",
|
||||
"context",
|
||||
"origin",
|
||||
"ts",
|
||||
"origin_server_ts",
|
||||
"pdu_type",
|
||||
"destinations",
|
||||
"transaction_id",
|
||||
|
@ -82,7 +82,7 @@ class Pdu(JsonEncodedObject):
|
|||
"pdu_id",
|
||||
"context",
|
||||
"origin",
|
||||
"ts",
|
||||
"origin_server_ts",
|
||||
"pdu_type",
|
||||
"content",
|
||||
]
|
||||
|
@ -118,6 +118,7 @@ class Pdu(JsonEncodedObject):
|
|||
"""
|
||||
if pdu_tuple:
|
||||
d = copy.copy(pdu_tuple.pdu_entry._asdict())
|
||||
d["origin_server_ts"] = d.pop("ts")
|
||||
|
||||
d["content"] = json.loads(d["content_json"])
|
||||
del d["content_json"]
|
||||
|
@ -156,11 +157,15 @@ class Edu(JsonEncodedObject):
|
|||
]
|
||||
|
||||
required_keys = [
|
||||
"origin",
|
||||
"destination",
|
||||
"edu_type",
|
||||
]
|
||||
|
||||
# TODO: SYN-103: Remove "origin" and "destination" keys.
|
||||
# internal_keys = [
|
||||
# "origin",
|
||||
# "destination",
|
||||
# ]
|
||||
|
||||
|
||||
class Transaction(JsonEncodedObject):
|
||||
""" A transaction is a list of Pdus and Edus to be sent to a remote home
|
||||
|
@ -182,10 +187,12 @@ class Transaction(JsonEncodedObject):
|
|||
"transaction_id",
|
||||
"origin",
|
||||
"destination",
|
||||
"ts",
|
||||
"origin_server_ts",
|
||||
"previous_ids",
|
||||
"pdus",
|
||||
"edus",
|
||||
"transaction_id",
|
||||
"destination",
|
||||
]
|
||||
|
||||
internal_keys = [
|
||||
|
@ -197,7 +204,7 @@ class Transaction(JsonEncodedObject):
|
|||
"transaction_id",
|
||||
"origin",
|
||||
"destination",
|
||||
"ts",
|
||||
"origin_server_ts",
|
||||
"pdus",
|
||||
]
|
||||
|
||||
|
@@ -219,10 +226,10 @@ class Transaction(JsonEncodedObject):
     @staticmethod
     def create_new(pdus, **kwargs):
         """ Used to create a new transaction. Will auto fill out
-        transaction_id and ts keys.
+        transaction_id and origin_server_ts keys.
         """
-        if "ts" not in kwargs:
-            raise KeyError("Require 'ts' to construct a Transaction")
+        if "origin_server_ts" not in kwargs:
+            raise KeyError("Require 'origin_server_ts' to construct a Transaction")
         if "transaction_id" not in kwargs:
             raise KeyError(
                 "Require 'transaction_id' to construct a Transaction"
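For reference, this matches how _TransactionQueue now builds transactions earlier in this commit; a minimal sketch with illustrative values (the clock, txn id counter and hostnames are assumptions):

transaction = Transaction.create_new(
    origin_server_ts=int(clock.time_msec()),   # renamed from ts=
    transaction_id=str(next_txn_id),
    origin="this.server.example.com",
    destination="remote.server.example.com",
    pdus=pdus,
)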
@ -25,6 +25,7 @@ from .profile import ProfileHandler
|
|||
from .presence import PresenceHandler
|
||||
from .directory import DirectoryHandler
|
||||
from .typing import TypingNotificationHandler
|
||||
from .admin import AdminHandler
|
||||
|
||||
|
||||
class Handlers(object):
|
||||
|
@ -49,3 +50,4 @@ class Handlers(object):
|
|||
self.login_handler = LoginHandler(hs)
|
||||
self.directory_handler = DirectoryHandler(hs)
|
||||
self.typing_notification_handler = TypingNotificationHandler(hs)
|
||||
self.admin_handler = AdminHandler(hs)
|
||||
|
|
synapse/handlers/admin.py (new file, 62 lines)
@ -0,0 +1,62 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from ._base import BaseHandler
|
||||
|
||||
import logging
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class AdminHandler(BaseHandler):
|
||||
|
||||
def __init__(self, hs):
|
||||
super(AdminHandler, self).__init__(hs)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_whois(self, user):
|
||||
res = yield self.store.get_user_ip_and_agents(user)
|
||||
|
||||
d = {}
|
||||
for r in res:
|
||||
device = d.setdefault(r["device_id"], {})
|
||||
session = device.setdefault(r["access_token"], [])
|
||||
session.append({
|
||||
"ip": r["ip"],
|
||||
"user_agent": r["user_agent"],
|
||||
"last_seen": r["last_seen"],
|
||||
})
|
||||
|
||||
ret = {
|
||||
"user_id": user.to_string(),
|
||||
"devices": [
|
||||
{
|
||||
"device_id": k,
|
||||
"sessions": [
|
||||
{
|
||||
# "access_token": x, TODO (erikj)
|
||||
"connections": y,
|
||||
}
|
||||
for x, y in v.items()
|
||||
]
|
||||
}
|
||||
for k, v in d.items()
|
||||
],
|
||||
}
|
||||
|
||||
defer.returnValue(ret)
|
|
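The dict returned by get_whois above nests connections under per-token sessions, per device. An illustrative example of the resulting shape; all values are invented for the sketch:

{
    "user_id": "@alice:example.com",
    "devices": [
        {
            "device_id": "ABCDEF",
            "sessions": [
                {
                    # "access_token" is deliberately omitted (see the TODO).
                    "connections": [
                        {
                            "ip": "192.168.1.10",
                            "user_agent": "Mozilla/5.0",
                            "last_seen": 1404835423000,
                        },
                    ],
                },
            ],
        },
    ],
}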
@ -18,7 +18,6 @@ from twisted.internet import defer
|
|||
from ._base import BaseHandler
|
||||
|
||||
from synapse.api.errors import SynapseError
|
||||
from synapse.http.client import HttpClient
|
||||
from synapse.api.events.room import RoomAliasesEvent
|
||||
|
||||
import logging
|
||||
|
@ -57,7 +56,6 @@ class DirectoryHandler(BaseHandler):
|
|||
if not servers:
|
||||
raise SynapseError(400, "Failed to get server list")
|
||||
|
||||
|
||||
try:
|
||||
yield self.store.create_room_alias_association(
|
||||
room_alias,
|
||||
|
@ -68,25 +66,19 @@ class DirectoryHandler(BaseHandler):
|
|||
defer.returnValue("Already exists")
|
||||
|
||||
# TODO: Send the room event.
|
||||
yield self._update_room_alias_events(user_id, room_id)
|
||||
|
||||
aliases = yield self.store.get_aliases_for_room(room_id)
|
||||
@defer.inlineCallbacks
|
||||
def delete_association(self, user_id, room_alias):
|
||||
# TODO Check if server admin
|
||||
|
||||
event = self.event_factory.create_event(
|
||||
etype=RoomAliasesEvent.TYPE,
|
||||
state_key=self.hs.hostname,
|
||||
room_id=room_id,
|
||||
user_id=user_id,
|
||||
content={"aliases": aliases},
|
||||
)
|
||||
if not room_alias.is_mine:
|
||||
raise SynapseError(400, "Room alias must be local")
|
||||
|
||||
snapshot = yield self.store.snapshot_room(
|
||||
room_id=room_id,
|
||||
user_id=user_id,
|
||||
)
|
||||
|
||||
yield self.state_handler.handle_new_event(event, snapshot)
|
||||
yield self._on_new_room_event(event, snapshot, extra_users=[user_id])
|
||||
room_id = yield self.store.delete_room_alias(room_alias)
|
||||
|
||||
if room_id:
|
||||
yield self._update_room_alias_events(user_id, room_id)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_association(self, room_alias):
|
||||
|
@ -105,8 +97,8 @@ class DirectoryHandler(BaseHandler):
|
|||
query_type="directory",
|
||||
args={
|
||||
"room_alias": room_alias.to_string(),
|
||||
HttpClient.RETRY_DNS_LOOKUP_FAILURES: False
|
||||
}
|
||||
},
|
||||
retry_on_dns_fail=False,
|
||||
)
|
||||
|
||||
if result and "room_id" in result and "servers" in result:
|
||||
|
@ -142,3 +134,23 @@ class DirectoryHandler(BaseHandler):
|
|||
"room_id": result.room_id,
|
||||
"servers": result.servers,
|
||||
})
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _update_room_alias_events(self, user_id, room_id):
|
||||
aliases = yield self.store.get_aliases_for_room(room_id)
|
||||
|
||||
event = self.event_factory.create_event(
|
||||
etype=RoomAliasesEvent.TYPE,
|
||||
state_key=self.hs.hostname,
|
||||
room_id=room_id,
|
||||
user_id=user_id,
|
||||
content={"aliases": aliases},
|
||||
)
|
||||
|
||||
snapshot = yield self.store.snapshot_room(
|
||||
room_id=room_id,
|
||||
user_id=user_id,
|
||||
)
|
||||
|
||||
yield self.state_handler.handle_new_event(event, snapshot)
|
||||
yield self._on_new_room_event(event, snapshot, extra_users=[user_id])
|
||||
|
|
|
@ -17,7 +17,7 @@ from twisted.internet import defer
|
|||
|
||||
from ._base import BaseHandler
|
||||
from synapse.api.errors import LoginError, Codes
|
||||
from synapse.http.client import PlainHttpClient
|
||||
from synapse.http.client import IdentityServerHttpClient
|
||||
from synapse.util.emailutils import EmailException
|
||||
import synapse.util.emailutils as emailutils
|
||||
|
||||
|
@ -97,7 +97,7 @@ class LoginHandler(BaseHandler):
|
|||
|
||||
@defer.inlineCallbacks
|
||||
def _query_email(self, email):
|
||||
httpCli = PlainHttpClient(self.hs)
|
||||
httpCli = IdentityServerHttpClient(self.hs)
|
||||
data = yield httpCli.get_json(
|
||||
'matrix.org:8090', # TODO FIXME This should be configurable.
|
||||
"/_matrix/identity/api/v1/lookup?medium=email&address=" +
|
||||
|
|
|
@ -64,7 +64,7 @@ class MessageHandler(BaseHandler):
|
|||
defer.returnValue(None)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def send_message(self, event=None, suppress_auth=False, stamp_event=True):
|
||||
def send_message(self, event=None, suppress_auth=False):
|
||||
""" Send a message.
|
||||
|
||||
Args:
|
||||
|
@ -72,7 +72,6 @@ class MessageHandler(BaseHandler):
|
|||
suppress_auth (bool) : True to suppress auth for this message. This
|
||||
is primarily so the home server can inject messages into rooms at
|
||||
will.
|
||||
stamp_event (bool) : True to stamp event content with server keys.
|
||||
Raises:
|
||||
SynapseError if something went wrong.
|
||||
"""
|
||||
|
@ -82,9 +81,6 @@ class MessageHandler(BaseHandler):
|
|||
user = self.hs.parse_userid(event.user_id)
|
||||
assert user.is_mine, "User must be our own: %s" % (user,)
|
||||
|
||||
if stamp_event:
|
||||
event.content["hsob_ts"] = int(self.clock.time_msec())
|
||||
|
||||
snapshot = yield self.store.snapshot_room(event.room_id, event.user_id)
|
||||
|
||||
if not suppress_auth:
|
||||
|
@ -132,7 +128,7 @@ class MessageHandler(BaseHandler):
|
|||
defer.returnValue(chunk)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def store_room_data(self, event=None, stamp_event=True):
|
||||
def store_room_data(self, event=None):
|
||||
""" Stores data for a room.
|
||||
|
||||
Args:
|
||||
|
@ -151,9 +147,6 @@ class MessageHandler(BaseHandler):
|
|||
|
||||
yield self.auth.check(event, snapshot, raises=True)
|
||||
|
||||
if stamp_event:
|
||||
event.content["hsob_ts"] = int(self.clock.time_msec())
|
||||
|
||||
yield self.state_handler.handle_new_event(event, snapshot)
|
||||
|
||||
yield self._on_new_room_event(event, snapshot)
|
||||
|
@ -221,10 +214,7 @@ class MessageHandler(BaseHandler):
|
|||
defer.returnValue(None)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def send_feedback(self, event, stamp_event=True):
|
||||
if stamp_event:
|
||||
event.content["hsob_ts"] = int(self.clock.time_msec())
|
||||
|
||||
def send_feedback(self, event):
|
||||
snapshot = yield self.store.snapshot_room(event.room_id, event.user_id)
|
||||
|
||||
yield self.auth.check(event, snapshot, raises=True)
|
||||
|
|
|
@@ -22,7 +22,8 @@ from synapse.api.errors import (
 )
 from ._base import BaseHandler
 import synapse.util.stringutils as stringutils
-from synapse.http.client import PlainHttpClient
+from synapse.http.client import IdentityServerHttpClient
+from synapse.http.client import CaptchaServerHttpClient
 
 import base64
 import bcrypt

@@ -154,7 +155,9 @@ class RegistrationHandler(BaseHandler):
 
     @defer.inlineCallbacks
     def _threepid_from_creds(self, creds):
-        httpCli = PlainHttpClient(self.hs)
+        # TODO: get this from the homeserver rather than creating a new one for
+        # each request
+        httpCli = IdentityServerHttpClient(self.hs)
         # XXX: make this configurable!
         trustedIdServers = ['matrix.org:8090']
         if not creds['idServer'] in trustedIdServers:

@@ -173,7 +176,7 @@ class RegistrationHandler(BaseHandler):
 
     @defer.inlineCallbacks
     def _bind_threepid(self, creds, mxid):
-        httpCli = PlainHttpClient(self.hs)
+        httpCli = IdentityServerHttpClient(self.hs)
         data = yield httpCli.post_urlencoded_get_json(
             creds['idServer'],
             "/_matrix/identity/api/v1/3pid/bind",

@@ -203,7 +206,9 @@ class RegistrationHandler(BaseHandler):
 
     @defer.inlineCallbacks
     def _submit_captcha(self, ip_addr, private_key, challenge, response):
-        client = PlainHttpClient(self.hs)
+        # TODO: get this from the homeserver rather than creating a new one for
+        # each request
+        client = CaptchaServerHttpClient(self.hs)
        data = yield client.post_urlencoded_get_raw(
            "www.google.com:80",
            "/recaptcha/api/verify",

@@ -26,65 +26,18 @@ from syutil.jsonutil import encode_canonical_json
 
 from synapse.api.errors import CodeMessageException, SynapseError
 
+from syutil.crypto.jsonsign import sign_json
+
 from StringIO import StringIO
 
 import json
 import logging
 import urllib
 import urlparse
 
 
 logger = logging.getLogger(__name__)
 
-# FIXME: SURELY these should be killed?!
-_destination_mappings = {
-    "red": "localhost:8080",
-    "blue": "localhost:8081",
-    "green": "localhost:8082",
-}
-
-
-class HttpClient(object):
-    """ Interface for talking json over http
-    """
-    RETRY_DNS_LOOKUP_FAILURES = "__retry_dns"
-
-    def put_json(self, destination, path, data):
-        """ Sends the specifed json data using PUT
-
-        Args:
-            destination (str): The remote server to send the HTTP request
-            to.
-            path (str): The HTTP path.
-            data (dict): A dict containing the data that will be used as
-            the request body. This will be encoded as JSON.
-
-        Returns:
-            Deferred: Succeeds when we get a 2xx HTTP response. The result
-            will be the decoded JSON body. On a 4xx or 5xx error response a
-            CodeMessageException is raised.
-        """
-        pass
-
-    def get_json(self, destination, path, args=None):
-        """ Get's some json from the given host homeserver and path
-
-        Args:
-            destination (str): The remote server to send the HTTP request
-            to.
-            path (str): The HTTP path.
-            args (dict): A dictionary used to create query strings, defaults to
-            None.
-            **Note**: The value of each key is assumed to be an iterable
-            and *not* a string.
-
-        Returns:
-            Deferred: Succeeds when we get *any* HTTP response.
-
-            The result of the deferred is a tuple of `(code, response)`,
-            where `response` is a dict representing the decoded JSON body.
-        """
-        pass
 
 
 class MatrixHttpAgent(_AgentBase):
 
@@ -109,12 +62,8 @@ class MatrixHttpAgent(_AgentBase):
                                          parsed_URI.originForm)
 
 
-class TwistedHttpClient(HttpClient):
-    """ Wrapper around the twisted HTTP client api.
-
-    Attributes:
-        agent (twisted.web.client.Agent): The twisted Agent used to send the
-            requests.
+class BaseHttpClient(object):
+    """Base class for HTTP clients using twisted.
     """
 
     def __init__(self, hs):
@@ -122,111 +71,20 @@ class TwistedHttpClient(HttpClient):
         self.hs = hs
 
-    @defer.inlineCallbacks
-    def put_json(self, destination, path, data, on_send_callback=None):
-        if destination in _destination_mappings:
-            destination = _destination_mappings[destination]
-
-        response = yield self._create_request(
-            destination.encode("ascii"),
-            "PUT",
-            path.encode("ascii"),
-            producer=_JsonProducer(data),
-            headers_dict={"Content-Type": ["application/json"]},
-            on_send_callback=on_send_callback,
-        )
-
-        logger.debug("Getting resp body")
-        body = yield readBody(response)
-        logger.debug("Got resp body")
-
-        defer.returnValue((response.code, body))
-
-    @defer.inlineCallbacks
-    def get_json(self, destination, path, args={}):
-        if destination in _destination_mappings:
-            destination = _destination_mappings[destination]
-
-        logger.debug("get_json args: %s", args)
-
-        retry_on_dns_fail = True
-        if HttpClient.RETRY_DNS_LOOKUP_FAILURES in args:
-            # FIXME: This isn't ideal, but the interface exposed in get_json
-            # isn't comprehensive enough to give caller's any control over
-            # their connection mechanics.
-            retry_on_dns_fail = args.pop(HttpClient.RETRY_DNS_LOOKUP_FAILURES)
-
-        query_bytes = urllib.urlencode(args, True)
-        logger.debug("Query bytes: %s Retry DNS: %s", args, retry_on_dns_fail)
-
-        response = yield self._create_request(
-            destination.encode("ascii"),
-            "GET",
-            path.encode("ascii"),
-            query_bytes=query_bytes,
-            retry_on_dns_fail=retry_on_dns_fail
-        )
-
-        body = yield readBody(response)
-
-        defer.returnValue(json.loads(body))
-
-    @defer.inlineCallbacks
-    def post_urlencoded_get_json(self, destination, path, args={}):
-        if destination in _destination_mappings:
-            destination = _destination_mappings[destination]
-
-        logger.debug("post_urlencoded_get_json args: %s", args)
-        query_bytes = urllib.urlencode(args, True)
-
-        response = yield self._create_request(
-            destination.encode("ascii"),
-            "POST",
-            path.encode("ascii"),
-            producer=FileBodyProducer(StringIO(urllib.urlencode(args))),
-            headers_dict={"Content-Type": ["application/x-www-form-urlencoded"]}
-        )
-
-        body = yield readBody(response)
-
-        defer.returnValue(json.loads(body))
-
-    # XXX FIXME : I'm so sorry.
-    @defer.inlineCallbacks
-    def post_urlencoded_get_raw(self, destination, path, accept_partial=False, args={}):
-        if destination in _destination_mappings:
-            destination = _destination_mappings[destination]
-
-        query_bytes = urllib.urlencode(args, True)
-
-        response = yield self._create_request(
-            destination.encode("ascii"),
-            "POST",
-            path.encode("ascii"),
-            producer=FileBodyProducer(StringIO(urllib.urlencode(args))),
-            headers_dict={"Content-Type": ["application/x-www-form-urlencoded"]}
-        )
-
-        try:
-            body = yield readBody(response)
-            defer.returnValue(body)
-        except PartialDownloadError as e:
-            if accept_partial:
-                defer.returnValue(e.response)
-            else:
-                raise e
-
     @defer.inlineCallbacks
-    def _create_request(self, destination, method, path_bytes, param_bytes=b"",
-                        query_bytes=b"", producer=None, headers_dict={},
-                        retry_on_dns_fail=True, on_send_callback=None):
+    def _create_request(self, destination, method, path_bytes,
+                        body_callback, headers_dict={}, param_bytes=b"",
+                        query_bytes=b"", retry_on_dns_fail=True):
         """ Creates and sends a request to the given url
         """
         headers_dict[b"User-Agent"] = [b"Synapse"]
         headers_dict[b"Host"] = [destination]
 
-        logger.debug("Sending request to %s: %s %s;%s?%s",
-                     destination, method, path_bytes, param_bytes, query_bytes)
+        url_bytes = urlparse.urlunparse(
+            ("", "", path_bytes, param_bytes, query_bytes, "",)
+        )
+
+        logger.debug("Sending request to %s: %s %s",
+                     destination, method, url_bytes)
 
         logger.debug(
             "Types: %s",
@@ -239,12 +97,11 @@ class TwistedHttpClient(HttpClient):
 
         retries_left = 5
 
         # TODO: setup and pass in an ssl_context to enable TLS
         endpoint = self._getEndpoint(reactor, destination);
 
         while True:
-            if on_send_callback:
-                on_send_callback(destination, method, path_bytes, producer)
+            producer = body_callback(method, url_bytes, headers_dict)
 
             try:
                 response = yield self.agent.request(
@@ -290,6 +147,134 @@ class TwistedHttpClient(HttpClient):
 
         defer.returnValue(response)
 
 
+class MatrixHttpClient(BaseHttpClient):
+    """ Wrapper around the twisted HTTP client api. Implements
+
+    Attributes:
+        agent (twisted.web.client.Agent): The twisted Agent used to send the
+            requests.
+    """
+
+    RETRY_DNS_LOOKUP_FAILURES = "__retry_dns"
+
+    def __init__(self, hs):
+        self.signing_key = hs.config.signing_key[0]
+        self.server_name = hs.hostname
+        BaseHttpClient.__init__(self, hs)
+
+    def sign_request(self, destination, method, url_bytes, headers_dict,
+                     content=None):
+        request = {
+            "method": method,
+            "uri": url_bytes,
+            "origin": self.server_name,
+            "destination": destination,
+        }
+
+        if content is not None:
+            request["content"] = content
+
+        request = sign_json(request, self.server_name, self.signing_key)
+
+        auth_headers = []
+
+        for key,sig in request["signatures"][self.server_name].items():
+            auth_headers.append(bytes(
+                "X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % (
+                    self.server_name, key, sig,
+                )
+            ))
+
+        headers_dict[b"Authorization"] = auth_headers
+
+    @defer.inlineCallbacks
+    def put_json(self, destination, path, data={}, json_data_callback=None):
+        """ Sends the specifed json data using PUT
+
+        Args:
+            destination (str): The remote server to send the HTTP request
+                to.
+            path (str): The HTTP path.
+            data (dict): A dict containing the data that will be used as
+                the request body. This will be encoded as JSON.
+            json_data_callback (callable): A callable returning the dict to
+                use as the request body.
+
+        Returns:
+            Deferred: Succeeds when we get a 2xx HTTP response. The result
+            will be the decoded JSON body. On a 4xx or 5xx error response a
+            CodeMessageException is raised.
+        """
+
+        if not json_data_callback:
+            def json_data_callback():
+                return data
+
+        def body_callback(method, url_bytes, headers_dict):
+            json_data = json_data_callback()
+            self.sign_request(
+                destination, method, url_bytes, headers_dict, json_data
+            )
+            producer = _JsonProducer(json_data)
+            return producer
+
+        response = yield self._create_request(
+            destination.encode("ascii"),
+            "PUT",
+            path.encode("ascii"),
+            body_callback=body_callback,
+            headers_dict={"Content-Type": ["application/json"]},
+        )
+
+        logger.debug("Getting resp body")
+        body = yield readBody(response)
+        logger.debug("Got resp body")
+
+        defer.returnValue((response.code, body))
+
+    @defer.inlineCallbacks
+    def get_json(self, destination, path, args={}, retry_on_dns_fail=True):
+        """ Get's some json from the given host homeserver and path
+
+        Args:
+            destination (str): The remote server to send the HTTP request
+                to.
+            path (str): The HTTP path.
+            args (dict): A dictionary used to create query strings, defaults to
+                None.
+                **Note**: The value of each key is assumed to be an iterable
+                and *not* a string.
+
+        Returns:
+            Deferred: Succeeds when we get *any* HTTP response.
+
+            The result of the deferred is a tuple of `(code, response)`,
+            where `response` is a dict representing the decoded JSON body.
+        """
+        logger.debug("get_json args: %s", args)
+
+        query_bytes = urllib.urlencode(args, True)
+        logger.debug("Query bytes: %s Retry DNS: %s", args, retry_on_dns_fail)
+
+        def body_callback(method, url_bytes, headers_dict):
+            self.sign_request(destination, method, url_bytes, headers_dict)
+            return None
+
+        response = yield self._create_request(
+            destination.encode("ascii"),
+            "GET",
+            path.encode("ascii"),
+            query_bytes=query_bytes,
+            body_callback=body_callback,
+            retry_on_dns_fail=retry_on_dns_fail
+        )
+
+        body = yield readBody(response)
+
+        defer.returnValue(json.loads(body))
+
+
     def _getEndpoint(self, reactor, destination):
         return matrix_endpoint(
             reactor, destination, timeout=10,
@@ -297,10 +282,69 @@ class TwistedHttpClient(HttpClient):
         )
 
 
-class PlainHttpClient(TwistedHttpClient):
+class IdentityServerHttpClient(BaseHttpClient):
+    """Separate HTTP client for talking to the Identity servers since they
+    don't use SRV records and talk x-www-form-urlencoded rather than JSON.
+    """
     def _getEndpoint(self, reactor, destination):
         #TODO: This should be talking TLS
         return matrix_endpoint(reactor, destination, timeout=10)
 
+    @defer.inlineCallbacks
+    def post_urlencoded_get_json(self, destination, path, args={}):
+        logger.debug("post_urlencoded_get_json args: %s", args)
+        query_bytes = urllib.urlencode(args, True)
+
+        def body_callback(method, url_bytes, headers_dict):
+            return FileBodyProducer(StringIO(query_bytes))
+
+        response = yield self._create_request(
+            destination.encode("ascii"),
+            "POST",
+            path.encode("ascii"),
+            body_callback=body_callback,
+            headers_dict={
+                "Content-Type": ["application/x-www-form-urlencoded"]
+            }
+        )
+
+        body = yield readBody(response)
+
+        defer.returnValue(json.loads(body))
+
+
+class CaptchaServerHttpClient(MatrixHttpClient):
+    """Separate HTTP client for talking to google's captcha servers"""
+
+    def _getEndpoint(self, reactor, destination):
+        return matrix_endpoint(reactor, destination, timeout=10)
+
+    @defer.inlineCallbacks
+    def post_urlencoded_get_raw(self, destination, path, accept_partial=False,
+                                args={}):
+        query_bytes = urllib.urlencode(args, True)
+
+        def body_callback(method, url_bytes, headers_dict):
+            return FileBodyProducer(StringIO(query_bytes))
+
+        response = yield self._create_request(
+            destination.encode("ascii"),
+            "POST",
+            path.encode("ascii"),
+            body_callback=body_callback,
+            headers_dict={
+                "Content-Type": ["application/x-www-form-urlencoded"]
+            }
+        )
+
+        try:
+            body = yield readBody(response)
+            defer.returnValue(body)
+        except PartialDownloadError as e:
+            if accept_partial:
+                defer.returnValue(e.response)
+            else:
+                raise e
+
 
 def _print_ex(e):
     if hasattr(e, "reasons") and e.reasons:

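The sign_request() method added above is what implements the "Sign federation transactions" changelog item: each outgoing request is described as a small JSON object, signed with the homeserver's signing key, and the resulting signatures are attached as X-Matrix Authorization headers. A minimal standalone sketch of that flow, assuming syutil also exposes generate_signing_key(); in the server itself the key comes from hs.config.signing_key[0], and the host names below are invented for illustration:

    from syutil.crypto.jsonsign import sign_json
    from syutil.crypto.signing_key import generate_signing_key

    # Hypothetical standalone key; the homeserver loads its key from config.
    signing_key = generate_signing_key("1")

    request = {
        "method": "PUT",
        "uri": "/_matrix/federation/v1/send/1000000/",
        "origin": "origin.example.com",
        "destination": "remote.example.com",
        "content": {"origin_server_ts": 1000000, "pdus": [], "edus": []},
    }

    # sign_json() adds a "signatures" -> {server_name: {key_id: sig}} mapping.
    signed = sign_json(request, "origin.example.com", signing_key)

    for key_id, sig in signed["signatures"]["origin.example.com"].items():
        # This is the string that ends up in the Authorization header.
        print "X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % (
            "origin.example.com", key_id, sig,
        )
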
synapse/http/server_key_resource.py (new file, 89 lines):

# -*- coding: utf-8 -*-
# Copyright 2014 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


from twisted.web.resource import Resource
from synapse.http.server import respond_with_json_bytes
from syutil.crypto.jsonsign import sign_json
from syutil.base64util import encode_base64
from syutil.jsonutil import encode_canonical_json
from OpenSSL import crypto
import logging


logger = logging.getLogger(__name__)


class LocalKey(Resource):
    """HTTP resource containing encoding the TLS X.509 certificate and NACL
    signature verification keys for this server::

        GET /key HTTP/1.1

        HTTP/1.1 200 OK
        Content-Type: application/json
        {
            "server_name": "this.server.example.com"
            "verify_keys": {
                "algorithm:version": # base64 encoded NACL verification key.
            },
            "tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert.
            "signatures": {
                "this.server.example.com": {
                   "algorithm:version": # NACL signature for this server.
                }
            }
        }
    """

    def __init__(self, hs):
        self.hs = hs
        self.response_body = encode_canonical_json(
            self.response_json_object(hs.config)
        )
        Resource.__init__(self)

    @staticmethod
    def response_json_object(server_config):
        verify_keys = {}
        for key in server_config.signing_key:
            verify_key_bytes = key.verify_key.encode()
            key_id = "%s:%s" % (key.alg, key.version)
            verify_keys[key_id] = encode_base64(verify_key_bytes)

        x509_certificate_bytes = crypto.dump_certificate(
            crypto.FILETYPE_ASN1,
            server_config.tls_certificate
        )
        json_object = {
            u"server_name": server_config.server_name,
            u"verify_keys": verify_keys,
            u"tls_certificate": encode_base64(x509_certificate_bytes)
        }
        for key in server_config.signing_key:
            json_object = sign_json(
                json_object,
                server_config.server_name,
                key,
            )

        return json_object

    def render_GET(self, request):
        return respond_with_json_bytes(request, 200, self.response_body)

    def getChild(self, name, request):
        if name == '':
            return self

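LocalKey serves a canonical-JSON document containing the server's verification keys and TLS certificate, signed by those same keys. A hedged sketch of how a fetching server might check such a document, assuming syutil also provides decode_base64() and verify_signed_json() alongside the helpers imported above:

    from syutil.base64util import decode_base64
    from syutil.crypto.jsonsign import verify_signed_json
    from syutil.crypto.signing_key import decode_verify_key_bytes

    def check_key_response(response):
        """`response` is the decoded JSON body served by LocalKey above."""
        server_name = response["server_name"]
        for key_id, key_b64 in response["verify_keys"].items():
            verify_key = decode_verify_key_bytes(key_id, decode_base64(key_b64))
            # Raises if the signature made with (server_name, key_id) is
            # missing or does not verify.
            verify_signed_json(response, server_name, verify_key)
        return True
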
@@ -15,7 +15,8 @@
 
 
 from . import (
-    room, events, register, login, profile, presence, initial_sync, directory, voip
+    room, events, register, login, profile, presence, initial_sync, directory,
+    voip, admin,
 )
 
 

@@ -43,3 +44,4 @@ class RestServletFactory(object):
         initial_sync.register_servlets(hs, client_resource)
         directory.register_servlets(hs, client_resource)
         voip.register_servlets(hs, client_resource)
+        admin.register_servlets(hs, client_resource)

synapse/rest/admin.py (new file, 47 lines):

# -*- coding: utf-8 -*-
# Copyright 2014 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from twisted.internet import defer

from synapse.api.errors import AuthError, SynapseError
from base import RestServlet, client_path_pattern

import logging

logger = logging.getLogger(__name__)


class WhoisRestServlet(RestServlet):
    PATTERN = client_path_pattern("/admin/whois/(?P<user_id>[^/]*)")

    @defer.inlineCallbacks
    def on_GET(self, request, user_id):
        target_user = self.hs.parse_userid(user_id)
        auth_user = yield self.auth.get_user_by_req(request)
        is_admin = yield self.auth.is_server_admin(auth_user)

        if not is_admin and target_user != auth_user:
            raise AuthError(403, "You are not a server admin")

        if not target_user.is_mine:
            raise SynapseError(400, "Can only whois a local user")

        ret = yield self.handlers.admin_handler.get_whois(target_user)

        defer.returnValue((200, ret))


def register_servlets(hs, http_server):
    WhoisRestServlet(hs).register(http_server)

@@ -16,7 +16,7 @@
 
 from twisted.internet import defer
 
-from synapse.api.errors import SynapseError, Codes
+from synapse.api.errors import AuthError, SynapseError, Codes
 from base import RestServlet, client_path_pattern
 
 import json

@@ -81,6 +81,24 @@ class ClientDirectoryServer(RestServlet):
 
         defer.returnValue((200, {}))
 
+    @defer.inlineCallbacks
+    def on_DELETE(self, request, room_alias):
+        user = yield self.auth.get_user_by_req(request)
+
+        is_admin = yield self.auth.is_server_admin(user)
+        if not is_admin:
+            raise AuthError(403, "You need to be a server admin")
+
+        dir_handler = self.handlers.directory_handler
+
+        room_alias = self.hs.parse_roomalias(urllib.unquote(room_alias))
+
+        yield dir_handler.delete_association(
+            user.to_string(), room_alias
+        )
+
+        defer.returnValue((200, {}))
+
 
 def _parse_json(request):
     try:

@@ -68,7 +68,7 @@ class PresenceStatusRestServlet(RestServlet):
         yield self.handlers.presence_handler.set_state(
             target_user=user, auth_user=auth_user, state=state)
 
-        defer.returnValue((200, ""))
+        defer.returnValue((200, {}))
 
     def on_OPTIONS(self, request):
         return (200, {})

@@ -141,7 +141,7 @@ class PresenceListRestServlet(RestServlet):
 
         yield defer.DeferredList(deferreds)
 
-        defer.returnValue((200, ""))
+        defer.returnValue((200, {}))
 
     def on_OPTIONS(self, request):
         return (200, {})

@@ -195,13 +195,7 @@ class RegisterRestServlet(RestServlet):
             raise SynapseError(400, "Captcha response is required",
                                errcode=Codes.CAPTCHA_NEEDED)
 
-        # May be an X-Forwarding-For header depending on config
-        ip_addr = request.getClientIP()
-        if self.hs.config.captcha_ip_origin_is_x_forwarded:
-            # use the header
-            if request.requestHeaders.hasHeader("X-Forwarded-For"):
-                ip_addr = request.requestHeaders.getRawHeaders(
-                    "X-Forwarded-For")[0]
+        ip_addr = self.hs.get_ip_from_request(request)
 
         handler = self.handlers.registration_handler
         yield handler.check_recaptcha(

@@ -34,6 +34,7 @@ from synapse.util.distributor import Distributor
 from synapse.util.lockutils import LockManager
 from synapse.streams.events import EventSources
 from synapse.api.ratelimiting import Ratelimiter
+from synapse.crypto.keyring import Keyring
 
 
 class BaseHomeServer(object):

@@ -75,8 +76,10 @@ class BaseHomeServer(object):
         'resource_for_federation',
         'resource_for_web_client',
         'resource_for_content_repo',
+        'resource_for_server_key',
         'event_sources',
         'ratelimiter',
+        'keyring',
     ]
 
     def __init__(self, hostname, **kwargs):

@@ -143,6 +146,18 @@ class BaseHomeServer(object):
     def serialize_event(self, e):
         return serialize_event(self, e)
 
+    def get_ip_from_request(self, request):
+        # May be an X-Forwarding-For header depending on config
+        ip_addr = request.getClientIP()
+        if self.config.captcha_ip_origin_is_x_forwarded:
+            # use the header
+            if request.requestHeaders.hasHeader("X-Forwarded-For"):
+                ip_addr = request.requestHeaders.getRawHeaders(
+                    "X-Forwarded-For"
+                )[0]
+
+        return ip_addr
+
 # Build magic accessors for every dependency
 for depname in BaseHomeServer.DEPENDENCIES:
     BaseHomeServer._make_dependency_method(depname)

@@ -200,6 +215,9 @@ class HomeServer(BaseHomeServer):
     def build_ratelimiter(self):
         return Ratelimiter()
 
+    def build_keyring(self):
+        return Keyring(self)
+
     def register_servlets(self):
         """ Register all servlets associated with this HomeServer.
         """

@@ -57,13 +57,14 @@ SCHEMAS = [
     "presence",
     "im",
     "room_aliases",
+    "keys",
     "redactions",
 ]
 
 
 # Remember to update this number every time an incompatible change is made to
 # database schema files, so the users will be informed on server restarts.
-SCHEMA_VERSION = 4
+SCHEMA_VERSION = 6
 
 
 class _RollbackButIsFineException(Exception):

@@ -105,7 +106,7 @@ class DataStore(RoomMemberStore, RoomStore,
                 stream_ordering=stream_ordering,
                 is_new_state=is_new_state,
             )
-        except _RollbackButIsFineException as e:
+        except _RollbackButIsFineException:
             pass
 
     @defer.inlineCallbacks

@@ -154,6 +155,8 @@ class DataStore(RoomMemberStore, RoomStore,
 
         cols["unrecognized_keys"] = json.dumps(unrec_keys)
 
+        cols["ts"] = cols.pop("origin_server_ts")
+
         logger.debug("Persisting: %s", repr(cols))
 
         if pdu.is_state:

@@ -294,6 +297,28 @@ class DataStore(RoomMemberStore, RoomStore,
 
         defer.returnValue(self.min_token)
 
+    def insert_client_ip(self, user, access_token, device_id, ip, user_agent):
+        return self._simple_insert(
+            "user_ips",
+            {
+                "user": user.to_string(),
+                "access_token": access_token,
+                "device_id": device_id,
+                "ip": ip,
+                "user_agent": user_agent,
+                "last_seen": int(self._clock.time_msec()),
+            }
+        )
+
+    def get_user_ip_and_agents(self, user):
+        return self._simple_select_list(
+            table="user_ips",
+            keyvalues={"user": user.to_string()},
+            retcols=[
+                "device_id", "access_token", "ip", "user_agent", "last_seen"
+            ],
+        )
+
     def snapshot_room(self, room_id, user_id, state_type=None, state_key=None):
         """Snapshot the room for an update by a user
         Args:

@@ -121,7 +121,7 @@ class SQLBaseStore(object):
     # "Simple" SQL API methods that operate on a single table with no JOINs,
     # no complex WHERE clauses, just a dict of values for columns.
 
-    def _simple_insert(self, table, values, or_replace=False):
+    def _simple_insert(self, table, values, or_replace=False, or_ignore=False):
         """Executes an INSERT query on the named table.
 
         Args:

@@ -130,13 +130,16 @@ class SQLBaseStore(object):
             or_replace : bool; if True performs an INSERT OR REPLACE
         """
         return self.runInteraction(
-            self._simple_insert_txn, table, values, or_replace=or_replace
+            self._simple_insert_txn, table, values, or_replace=or_replace,
+            or_ignore=or_ignore,
         )
 
     @log_function
-    def _simple_insert_txn(self, txn, table, values, or_replace=False):
+    def _simple_insert_txn(self, txn, table, values, or_replace=False,
+                           or_ignore=False):
         sql = "%s INTO %s (%s) VALUES(%s)" % (
-            ("INSERT OR REPLACE" if or_replace else "INSERT"),
+            ("INSERT OR REPLACE" if or_replace else
+             "INSERT OR IGNORE" if or_ignore else "INSERT"),
             table,
             ", ".join(k for k in values),
             ", ".join("?" for k in values)

@@ -351,6 +354,7 @@ class SQLBaseStore(object):
         d.pop("stream_ordering", None)
         d.pop("topological_ordering", None)
         d.pop("processed", None)
+        d["origin_server_ts"] = d.pop("ts", 0)
 
         d.update(json.loads(row_dict["unrecognized_keys"]))
         d["content"] = json.loads(d["content"])

@@ -358,7 +362,7 @@ class SQLBaseStore(object):
 
         if "age_ts" not in d:
             # For compatibility
-            d["age_ts"] = d["ts"] if "ts" in d else 0
+            d["age_ts"] = d.get("origin_server_ts", 0)
 
         return self.event_factory.create_event(
             etype=d["type"],

@@ -93,6 +93,36 @@ class DirectoryStore(SQLBaseStore):
             }
         )
 
+    def delete_room_alias(self, room_alias):
+        return self.runInteraction(
+            self._delete_room_alias_txn,
+            room_alias,
+        )
+
+    def _delete_room_alias_txn(self, txn, room_alias):
+        cursor = txn.execute(
+            "SELECT room_id FROM room_aliases WHERE room_alias = ?",
+            (room_alias.to_string(),)
+        )
+
+        res = cursor.fetchone()
+        if res:
+            room_id = res[0]
+        else:
+            return None
+
+        txn.execute(
+            "DELETE FROM room_aliases WHERE room_alias = ?",
+            (room_alias.to_string(),)
+        )
+
+        txn.execute(
+            "DELETE FROM room_alias_servers WHERE room_alias = ?",
+            (room_alias.to_string(),)
+        )
+
+        return room_id
+
     def get_aliases_for_room(self, room_id):
         return self._simple_select_onecol(
             "room_aliases",

@@ -18,7 +18,8 @@ from _base import SQLBaseStore
 from twisted.internet import defer
 
 import OpenSSL
-import nacl.signing
+from syutil.crypto.signing_key import decode_verify_key_bytes
+import hashlib
 
 class KeyStore(SQLBaseStore):
     """Persistence for signature verification keys and tls X.509 certificates

@@ -42,62 +43,76 @@ class KeyStore(SQLBaseStore):
         )
         defer.returnValue(tls_certificate)
 
-    def store_server_certificate(self, server_name, key_server, ts_now_ms,
+    def store_server_certificate(self, server_name, from_server, time_now_ms,
                                  tls_certificate):
         """Stores the TLS X.509 certificate for the given server
         Args:
-            server_name (bytes): The name of the server.
-            key_server (bytes): Where the certificate was looked up
-            ts_now_ms (int): The time now in milliseconds
+            server_name (str): The name of the server.
+            from_server (str): Where the certificate was looked up
+            time_now_ms (int): The time now in milliseconds
             tls_certificate (OpenSSL.crypto.X509): The X.509 certificate.
         """
         tls_certificate_bytes = OpenSSL.crypto.dump_certificate(
             OpenSSL.crypto.FILETYPE_ASN1, tls_certificate
         )
+        fingerprint = hashlib.sha256(tls_certificate_bytes).hexdigest()
         return self._simple_insert(
             table="server_tls_certificates",
-            keyvalues={
+            values={
                 "server_name": server_name,
-                "key_server": key_server,
-                "ts_added_ms": ts_now_ms,
-                "tls_certificate": tls_certificate_bytes,
+                "fingerprint": fingerprint,
+                "from_server": from_server,
+                "ts_added_ms": time_now_ms,
+                "tls_certificate": buffer(tls_certificate_bytes),
             },
+            or_ignore=True,
         )
 
     @defer.inlineCallbacks
-    def get_server_verification_key(self, server_name):
-        """Retrieve the NACL verification key for a given server
+    def get_server_verify_keys(self, server_name, key_ids):
+        """Retrieve the NACL verification key for a given server for the given
+        key_ids
         Args:
-            server_name (bytes): The name of the server.
+            server_name (str): The name of the server.
+            key_ids (list of str): List of key_ids to try and look up.
         Returns:
-            (nacl.signing.VerifyKey): The verification key.
+            (list of VerifyKey): The verification keys.
         """
-        verification_key_bytes, = yield self._simple_select_one(
-            table="server_signature_keys",
-            key_values={"server_name": server_name},
-            retcols=("tls_certificate",),
+        sql = (
+            "SELECT key_id, verify_key FROM server_signature_keys"
+            " WHERE server_name = ?"
+            " AND key_id in (" + ",".join("?" for key_id in key_ids) + ")"
         )
-        verification_key = nacl.signing.VerifyKey(verification_key_bytes)
-        defer.returnValue(verification_key)
-
-    def store_server_verification_key(self, server_name, key_version,
-                                      key_server, ts_now_ms, verification_key):
+
+        rows = yield self._execute_and_decode(sql, server_name, *key_ids)
+
+        keys = []
+        for row in rows:
+            key_id = row["key_id"]
+            key_bytes = row["verify_key"]
+            key = decode_verify_key_bytes(key_id, str(key_bytes))
+            keys.append(key)
+        defer.returnValue(keys)
+
+    def store_server_verify_key(self, server_name, from_server, time_now_ms,
+                                verify_key):
         """Stores a NACL verification key for the given server.
         Args:
-            server_name (bytes): The name of the server.
-            key_version (bytes): The version of the key for the server.
-            key_server (bytes): Where the verification key was looked up
+            server_name (str): The name of the server.
+            key_id (str): The version of the key for the server.
+            from_server (str): Where the verification key was looked up
             ts_now_ms (int): The time now in milliseconds
-            verification_key (nacl.signing.VerifyKey): The NACL verify key.
+            verification_key (VerifyKey): The NACL verify key.
         """
-        verification_key_bytes = verification_key.encode()
+        verify_key_bytes = verify_key.encode()
         return self._simple_insert(
             table="server_signature_keys",
-            key_values={
+            values={
                 "server_name": server_name,
-                "key_version": key_version,
-                "key_server": key_server,
-                "ts_added_ms": ts_now_ms,
-                "verification_key": verification_key_bytes,
+                "key_id": "%s:%s" % (verify_key.alg, verify_key.version),
+                "from_server": from_server,
+                "ts_added_ms": time_now_ms,
+                "verify_key": buffer(verify_key.encode()),
             },
+            or_ignore=True,
         )

@@ -88,27 +88,40 @@ class RegistrationStore(SQLBaseStore):
             query, user_id
         )
 
-    @defer.inlineCallbacks
     def get_user_by_token(self, token):
         """Get a user from the given access token.
 
         Args:
             token (str): The access token of a user.
         Returns:
-            str: The user ID of the user.
+            dict: Including the name (user_id), device_id and whether they are
+                an admin.
         Raises:
             StoreError if no user was found.
         """
-        user_id = yield self.runInteraction(self._query_for_auth,
-                                            token)
-        defer.returnValue(user_id)
+        return self.runInteraction(
+            self._query_for_auth,
+            token
+        )
+
+    def is_server_admin(self, user):
+        return self._simple_select_one_onecol(
+            table="users",
+            keyvalues={"name": user.to_string()},
+            retcol="admin",
+        )
 
     def _query_for_auth(self, txn, token):
-        txn.execute("SELECT users.name FROM access_tokens LEFT JOIN users" +
-                    " ON users.id = access_tokens.user_id WHERE token = ?",
-                    [token])
-        row = txn.fetchone()
-        if row:
-            return row[0]
+        sql = (
+            "SELECT users.name, users.admin, access_tokens.device_id "
+            "FROM users "
+            "INNER JOIN access_tokens on users.id = access_tokens.user_id "
+            "WHERE token = ?"
+        )
+
+        cursor = txn.execute(sql, (token,))
+        rows = self.cursor_to_dict(cursor)
+        if rows:
+            return rows[0]
 
         raise StoreError(404, "Token not found.")

@@ -18,7 +18,6 @@ from twisted.internet import defer
 from ._base import SQLBaseStore
 
 from synapse.api.constants import Membership
-from synapse.util.logutils import log_function
 
 import logging
 

synapse/storage/schema/delta/v5.sql (new file, 16 lines):

CREATE TABLE IF NOT EXISTS user_ips (
    user TEXT NOT NULL,
    access_token TEXT NOT NULL,
    device_id TEXT,
    ip TEXT NOT NULL,
    user_agent TEXT NOT NULL,
    last_seen INTEGER NOT NULL,
    CONSTRAINT user_ip UNIQUE (user, access_token, ip, user_agent) ON CONFLICT REPLACE
);

CREATE INDEX IF NOT EXISTS user_ips_user ON user_ips(user);

ALTER TABLE users ADD COLUMN admin BOOL DEFAULT 0 NOT NULL;

PRAGMA user_version = 5;
|
synapse/storage/schema/delta/v6.sql (new file, 31 lines):

/* Copyright 2014 OpenMarket Ltd
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
CREATE TABLE IF NOT EXISTS server_tls_certificates(
  server_name TEXT, -- Server name.
  fingerprint TEXT, -- Certificate fingerprint.
  from_server TEXT, -- Which key server the certificate was fetched from.
  ts_added_ms INTEGER, -- When the certifcate was added.
  tls_certificate BLOB, -- DER encoded x509 certificate.
  CONSTRAINT uniqueness UNIQUE (server_name, fingerprint)
);

CREATE TABLE IF NOT EXISTS server_signature_keys(
  server_name TEXT, -- Server name.
  key_id TEXT, -- Key version.
  from_server TEXT, -- Which key server the key was fetched form.
  ts_added_ms INTEGER, -- When the key was added.
  verify_key BLOB, -- NACL verification key.
  CONSTRAINT uniqueness UNIQUE (server_name, key_id)
);
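Together with the bump of SCHEMA_VERSION to 6 above, these delta files are what bring an existing database up to the new schema. A rough, hand-run illustration with sqlite3 (file paths as added in this commit; the server presumably performs the equivalent upgrade on startup):

    import sqlite3

    conn = sqlite3.connect("homeserver.db")
    for delta in ("synapse/storage/schema/delta/v5.sql",
                  "synapse/storage/schema/delta/v6.sql"):
        with open(delta) as f:
            # executescript() runs every statement in the file, including
            # the trailing PRAGMA user_version update in v5.sql.
            conn.executescript(f.read())
    conn.commit()
    conn.close()
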
@@ -14,17 +14,18 @@
 */
 CREATE TABLE IF NOT EXISTS server_tls_certificates(
   server_name TEXT, -- Server name.
-  key_server TEXT, -- Which key server the certificate was fetched from.
+  fingerprint TEXT, -- Certificate fingerprint.
+  from_server TEXT, -- Which key server the certificate was fetched from.
   ts_added_ms INTEGER, -- When the certifcate was added.
   tls_certificate BLOB, -- DER encoded x509 certificate.
-  CONSTRAINT uniqueness UNIQUE (server_name)
+  CONSTRAINT uniqueness UNIQUE (server_name, fingerprint)
 );
 
 CREATE TABLE IF NOT EXISTS server_signature_keys(
   server_name TEXT, -- Server name.
-  key_version TEXT, -- Key version.
-  key_server TEXT, -- Which key server the key was fetched form.
+  key_id TEXT, -- Key version.
+  from_server TEXT, -- Which key server the key was fetched form.
   ts_added_ms INTEGER, -- When the key was added.
-  verification_key BLOB, -- NACL verification key.
-  CONSTRAINT uniqueness UNIQUE (server_name, key_version)
+  verify_key BLOB, -- NACL verification key.
+  CONSTRAINT uniqueness UNIQUE (server_name, key_id)
 );

@@ -17,6 +17,7 @@ CREATE TABLE IF NOT EXISTS users(
     name TEXT,
     password_hash TEXT,
     creation_ts INTEGER,
+    admin BOOL DEFAULT 0 NOT NULL,
     UNIQUE(name) ON CONFLICT ROLLBACK
 );
 

@@ -29,3 +30,16 @@ CREATE TABLE IF NOT EXISTS access_tokens(
     FOREIGN KEY(user_id) REFERENCES users(id),
     UNIQUE(token) ON CONFLICT ROLLBACK
 );
+
+CREATE TABLE IF NOT EXISTS user_ips (
+    user TEXT NOT NULL,
+    access_token TEXT NOT NULL,
+    device_id TEXT,
+    ip TEXT NOT NULL,
+    user_agent TEXT NOT NULL,
+    last_seen INTEGER NOT NULL,
+    CONSTRAINT user_ip UNIQUE (user, access_token, ip, user_agent) ON CONFLICT REPLACE
+);
+
+CREATE INDEX IF NOT EXISTS user_ips_user ON user_ips(user);

@@ -87,7 +87,8 @@ class TransactionStore(SQLBaseStore):
 
         txn.execute(query, (code, response_json, transaction_id, origin))
 
-    def prep_send_transaction(self, transaction_id, destination, ts, pdu_list):
+    def prep_send_transaction(self, transaction_id, destination,
+                              origin_server_ts, pdu_list):
         """Persists an outgoing transaction and calculates the values for the
         previous transaction id list.
 

@@ -97,7 +98,7 @@ class TransactionStore(SQLBaseStore):
         Args:
             transaction_id (str)
             destination (str)
-            ts (int)
+            origin_server_ts (int)
             pdu_list (list)
 
         Returns:

@@ -106,11 +107,11 @@ class TransactionStore(SQLBaseStore):
 
         return self.runInteraction(
             self._prep_send_transaction,
-            transaction_id, destination, ts, pdu_list
+            transaction_id, destination, origin_server_ts, pdu_list
         )
 
-    def _prep_send_transaction(self, txn, transaction_id, destination, ts,
-                               pdu_list):
+    def _prep_send_transaction(self, txn, transaction_id, destination,
+                               origin_server_ts, pdu_list):
 
         # First we find out what the prev_txs should be.
         # Since we know that we are only sending one transaction at a time,

@@ -131,7 +132,7 @@ class TransactionStore(SQLBaseStore):
             None,
             transaction_id=transaction_id,
             destination=destination,
-            ts=ts,
+            ts=origin_server_ts,
             response_code=0,
             response_json=None
         ))

synctl:

@@ -1,6 +1,6 @@
 #!/bin/bash
 
-SYNAPSE="synapse/app/homeserver.py"
+SYNAPSE="python -m synapse.app.homeserver"
 
 CONFIGFILE="homeserver.yaml"
 PIDFILE="homeserver.pid"

@@ -14,7 +14,7 @@ case "$1" in
   start)
     if [ ! -f "$CONFIGFILE" ]; then
       echo "No config file found"
-      echo "To generate a config file, run 'python --generate-config'"
+      echo "To generate a config file, run '$SYNAPSE -c $CONFIGFILE --generate-config --server-name=<server name>'"
       exit 1
     fi
 

@ -19,7 +19,7 @@ from tests import unittest
|
|||
# python imports
|
||||
from mock import Mock, ANY
|
||||
|
||||
from ..utils import MockHttpResource, MockClock
|
||||
from ..utils import MockHttpResource, MockClock, MockKey
|
||||
|
||||
from synapse.server import HomeServer
|
||||
from synapse.federation import initialize_http_replication
|
||||
|
@ -64,6 +64,8 @@ class FederationTestCase(unittest.TestCase):
|
|||
self.mock_persistence.get_received_txn_response.return_value = (
|
||||
defer.succeed(None)
|
||||
)
|
||||
self.mock_config = Mock()
|
||||
self.mock_config.signing_key = [MockKey()]
|
||||
self.clock = MockClock()
|
||||
hs = HomeServer("test",
|
||||
resource_for_federation=self.mock_resource,
|
||||
|
@ -71,6 +73,8 @@ class FederationTestCase(unittest.TestCase):
|
|||
db_pool=None,
|
||||
datastore=self.mock_persistence,
|
||||
clock=self.clock,
|
||||
config=self.mock_config,
|
||||
keyring=Mock(),
|
||||
)
|
||||
self.federation = initialize_http_replication(hs)
|
||||
self.distributor = hs.get_distributor()
|
||||
|
@ -154,7 +158,7 @@ class FederationTestCase(unittest.TestCase):
|
|||
origin="red",
|
||||
destinations=["remote"],
|
||||
context="my-context",
|
||||
ts=123456789002,
|
||||
origin_server_ts=123456789002,
|
||||
pdu_type="m.test",
|
||||
content={"testing": "content here"},
|
||||
depth=1,
|
||||
|
@ -166,14 +170,14 @@ class FederationTestCase(unittest.TestCase):
|
|||
"remote",
|
||||
path="/_matrix/federation/v1/send/1000000/",
|
||||
data={
|
||||
"ts": 1000000,
|
||||
"origin_server_ts": 1000000,
|
||||
"origin": "test",
|
||||
"pdus": [
|
||||
{
|
||||
"origin": "red",
|
||||
"pdu_id": "abc123def456",
|
||||
"prev_pdus": [],
|
||||
"ts": 123456789002,
|
||||
"origin_server_ts": 123456789002,
|
||||
"context": "my-context",
|
||||
"pdu_type": "m.test",
|
||||
"is_state": False,
|
||||
|
@ -182,7 +186,7 @@ class FederationTestCase(unittest.TestCase):
|
|||
},
|
||||
]
|
||||
},
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
|
@ -203,10 +207,11 @@ class FederationTestCase(unittest.TestCase):
|
|||
path="/_matrix/federation/v1/send/1000000/",
|
||||
data={
|
||||
"origin": "test",
|
||||
"ts": 1000000,
|
||||
"origin_server_ts": 1000000,
|
||||
"pdus": [],
|
||||
"edus": [
|
||||
{
|
||||
# TODO: SYN-103: Remove "origin" and "destination"
|
||||
"origin": "test",
|
||||
"destination": "remote",
|
||||
"edu_type": "m.test",
|
||||
|
@ -214,9 +219,10 @@ class FederationTestCase(unittest.TestCase):
|
|||
}
|
||||
],
|
||||
},
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
)
|
||||
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def test_recv_edu(self):
|
||||
recv_observer = Mock()
|
||||
|
@ -228,7 +234,7 @@ class FederationTestCase(unittest.TestCase):
|
|||
"/_matrix/federation/v1/send/1001000/",
|
||||
"""{
|
||||
"origin": "remote",
|
||||
"ts": 1001000,
|
||||
"origin_server_ts": 1001000,
|
||||
"pdus": [],
|
||||
"edus": [
|
||||
{
|
||||
|
@ -253,7 +259,7 @@ class FederationTestCase(unittest.TestCase):
|
|||
response = yield self.federation.make_query(
|
||||
destination="remote",
|
||||
query_type="a-question",
|
||||
args={"one": "1", "two": "2"}
|
||||
args={"one": "1", "two": "2"},
|
||||
)
|
||||
|
||||
self.assertEquals({"your": "response"}, response)
|
||||
|
@ -261,7 +267,8 @@ class FederationTestCase(unittest.TestCase):
|
|||
self.mock_http_client.get_json.assert_called_with(
|
||||
destination="remote",
|
||||
path="/_matrix/federation/v1/query/a-question",
|
||||
args={"one": "1", "two": "2"}
|
||||
args={"one": "1", "two": "2"},
|
||||
retry_on_dns_fail=True,
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
|
|
|
@ -68,7 +68,7 @@ class PduCodecTestCase(unittest.TestCase):
|
|||
context="rooooom",
|
||||
pdu_type="m.room.message",
|
||||
origin="bar.com",
|
||||
ts=12345,
|
||||
origin_server_ts=12345,
|
||||
depth=5,
|
||||
prev_pdus=[("alice", "bob.com")],
|
||||
is_state=False,
|
||||
|
@ -123,7 +123,7 @@ class PduCodecTestCase(unittest.TestCase):
|
|||
context="rooooom",
|
||||
pdu_type="m.room.topic",
|
||||
origin="bar.com",
|
||||
ts=12345,
|
||||
origin_server_ts=12345,
|
||||
depth=5,
|
||||
prev_pdus=[("alice", "bob.com")],
|
||||
is_state=True,
|
||||
|
|
|
@ -20,7 +20,6 @@ from twisted.internet import defer
|
|||
from mock import Mock
|
||||
|
||||
from synapse.server import HomeServer
|
||||
from synapse.http.client import HttpClient
|
||||
from synapse.handlers.directory import DirectoryHandler
|
||||
from synapse.storage.directory import RoomAliasMapping
|
||||
|
||||
|
@ -95,8 +94,8 @@ class DirectoryTestCase(unittest.TestCase):
|
|||
query_type="directory",
|
||||
args={
|
||||
"room_alias": "#another:remote",
|
||||
HttpClient.RETRY_DNS_LOOKUP_FAILURES: False
|
||||
}
|
||||
},
|
||||
retry_on_dns_fail=False,
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
|
|
|
@ -26,12 +26,16 @@ from synapse.federation.units import Pdu
|
|||
|
||||
from mock import NonCallableMock, ANY
|
||||
|
||||
from ..utils import get_mock_call_args
|
||||
from ..utils import get_mock_call_args, MockKey
|
||||
|
||||
|
||||
class FederationTestCase(unittest.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
|
||||
self.mock_config = NonCallableMock()
|
||||
self.mock_config.signing_key = [MockKey()]
|
||||
|
||||
self.hostname = "test"
|
||||
hs = HomeServer(
|
||||
self.hostname,
|
||||
|
@ -48,6 +52,7 @@ class FederationTestCase(unittest.TestCase):
|
|||
"room_member_handler",
|
||||
"federation_handler",
|
||||
]),
|
||||
config=self.mock_config,
|
||||
)
|
||||
|
||||
self.datastore = hs.get_datastore()
|
||||
|
@ -63,7 +68,7 @@ class FederationTestCase(unittest.TestCase):
|
|||
pdu_type=MessageEvent.TYPE,
|
||||
context="foo",
|
||||
content={"msgtype": u"fooo"},
|
||||
ts=0,
|
||||
origin_server_ts=0,
|
||||
pdu_id="a",
|
||||
origin="b",
|
||||
)
|
||||
|
@ -90,7 +95,7 @@ class FederationTestCase(unittest.TestCase):
|
|||
target_host=self.hostname,
|
||||
context=room_id,
|
||||
content={},
|
||||
ts=0,
|
||||
origin_server_ts=0,
|
||||
pdu_id="a",
|
||||
origin="b",
|
||||
)
|
||||
|
@ -122,7 +127,7 @@ class FederationTestCase(unittest.TestCase):
|
|||
state_key="@red:not%s" % self.hostname,
|
||||
context=room_id,
|
||||
content={},
|
||||
ts=0,
|
||||
origin_server_ts=0,
|
||||
pdu_id="a",
|
||||
origin="b",
|
||||
)
|
||||
|
|
|
@ -17,11 +17,12 @@
|
|||
from tests import unittest
|
||||
from twisted.internet import defer, reactor
|
||||
|
||||
from mock import Mock, call, ANY
|
||||
from mock import Mock, call, ANY, NonCallableMock, patch
|
||||
import json
|
||||
|
||||
from tests.utils import (
|
||||
MockHttpResource, MockClock, DeferredMockCallable, SQLiteMemoryDbPool
|
||||
MockHttpResource, MockClock, DeferredMockCallable, SQLiteMemoryDbPool,
|
||||
MockKey
|
||||
)
|
||||
|
||||
from synapse.server import HomeServer
|
||||
|
@ -38,10 +39,11 @@ ONLINE = PresenceState.ONLINE
|
|||
def _expect_edu(destination, edu_type, content, origin="test"):
|
||||
return {
|
||||
"origin": origin,
|
||||
"ts": 1000000,
|
||||
"origin_server_ts": 1000000,
|
||||
"pdus": [],
|
||||
"edus": [
|
||||
{
|
||||
# TODO: SYN-103: Remove "origin" and "destination" keys.
|
||||
"origin": origin,
|
||||
"destination": destination,
|
||||
"edu_type": edu_type,
|
||||
|
@ -58,7 +60,6 @@ class JustPresenceHandlers(object):
|
|||
def __init__(self, hs):
|
||||
self.presence_handler = PresenceHandler(hs)
|
||||
|
||||
|
||||
class PresenceStateTestCase(unittest.TestCase):
|
||||
""" Tests presence management. """
|
||||
|
||||
|
@ -67,12 +68,17 @@ class PresenceStateTestCase(unittest.TestCase):
|
|||
db_pool = SQLiteMemoryDbPool()
|
||||
yield db_pool.prepare()
|
||||
|
||||
self.mock_config = NonCallableMock()
|
||||
self.mock_config.signing_key = [MockKey()]
|
||||
|
||||
hs = HomeServer("test",
|
||||
clock=MockClock(),
|
||||
db_pool=db_pool,
|
||||
handlers=None,
|
||||
resource_for_federation=Mock(),
|
||||
http_client=None,
|
||||
config=self.mock_config,
|
||||
keyring=Mock(),
|
||||
)
|
||||
hs.handlers = JustPresenceHandlers(hs)
|
||||
|
||||
|
@ -214,6 +220,9 @@ class PresenceInvitesTestCase(unittest.TestCase):
|
|||
db_pool = SQLiteMemoryDbPool()
|
||||
yield db_pool.prepare()
|
||||
|
||||
self.mock_config = NonCallableMock()
|
||||
self.mock_config.signing_key = [MockKey()]
|
||||
|
||||
hs = HomeServer("test",
|
||||
clock=MockClock(),
|
||||
db_pool=db_pool,
|
||||
|
@ -221,6 +230,8 @@ class PresenceInvitesTestCase(unittest.TestCase):
|
|||
resource_for_client=Mock(),
|
||||
resource_for_federation=self.mock_federation_resource,
|
||||
http_client=self.mock_http_client,
|
||||
config=self.mock_config,
|
||||
keyring=Mock(),
|
||||
)
|
||||
hs.handlers = JustPresenceHandlers(hs)
|
||||
|
||||
|
@ -290,7 +301,7 @@ class PresenceInvitesTestCase(unittest.TestCase):
|
|||
"observed_user": "@cabbage:elsewhere",
|
||||
}
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -319,7 +330,7 @@ class PresenceInvitesTestCase(unittest.TestCase):
|
|||
"observed_user": "@apple:test",
|
||||
}
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -355,7 +366,7 @@ class PresenceInvitesTestCase(unittest.TestCase):
|
|||
"observed_user": "@durian:test",
|
||||
}
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -503,6 +514,9 @@ class PresencePushTestCase(unittest.TestCase):
|
|||
|
||||
self.mock_federation_resource = MockHttpResource()
|
||||
|
||||
self.mock_config = NonCallableMock()
|
||||
self.mock_config.signing_key = [MockKey()]
|
||||
|
||||
hs = HomeServer("test",
|
||||
clock=self.clock,
|
||||
db_pool=None,
|
||||
|
@ -520,6 +534,8 @@ class PresencePushTestCase(unittest.TestCase):
|
|||
resource_for_client=Mock(),
|
||||
resource_for_federation=self.mock_federation_resource,
|
||||
http_client=self.mock_http_client,
|
||||
config=self.mock_config,
|
||||
keyring=Mock(),
|
||||
)
|
||||
hs.handlers = JustPresenceHandlers(hs)
|
||||
|
||||
|
@ -771,7 +787,7 @@ class PresencePushTestCase(unittest.TestCase):
|
|||
],
|
||||
}
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -787,7 +803,7 @@ class PresencePushTestCase(unittest.TestCase):
|
|||
],
|
||||
}
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -913,7 +929,7 @@ class PresencePushTestCase(unittest.TestCase):
|
|||
],
|
||||
}
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -928,7 +944,7 @@ class PresencePushTestCase(unittest.TestCase):
|
|||
],
|
||||
}
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -958,7 +974,7 @@ class PresencePushTestCase(unittest.TestCase):
|
|||
],
|
||||
}
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -995,6 +1011,9 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||
|
||||
self.mock_federation_resource = MockHttpResource()
|
||||
|
||||
self.mock_config = NonCallableMock()
|
||||
self.mock_config.signing_key = [MockKey()]
|
||||
|
||||
hs = HomeServer("test",
|
||||
clock=MockClock(),
|
||||
db_pool=None,
|
||||
|
@ -1009,6 +1028,8 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||
resource_for_client=Mock(),
|
||||
resource_for_federation=self.mock_federation_resource,
|
||||
http_client=self.mock_http_client,
|
||||
config=self.mock_config,
|
||||
keyring=Mock(),
|
||||
)
|
||||
hs.handlers = JustPresenceHandlers(hs)
|
||||
|
||||
|
@ -1155,7 +1176,7 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||
"poll": [ "@potato:remote" ],
|
||||
},
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -1168,7 +1189,7 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||
"push": [ {"user_id": "@clementine:test" }],
|
||||
},
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -1197,7 +1218,7 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||
"push": [ {"user_id": "@fig:test" }],
|
||||
},
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -1230,7 +1251,7 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||
"unpoll": [ "@potato:remote" ],
|
||||
},
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
@ -1262,7 +1283,7 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||
],
|
||||
},
|
||||
),
|
||||
on_send_callback=ANY,
|
||||
json_data_callback=ANY,
|
||||
),
|
||||
defer.succeed((200, "OK"))
|
||||
)
|
||||
|
|
|
@ -24,6 +24,7 @@ from synapse.api.constants import Membership
|
|||
from synapse.handlers.room import RoomMemberHandler, RoomCreationHandler
|
||||
from synapse.handlers.profile import ProfileHandler
|
||||
from synapse.server import HomeServer
|
||||
from ..utils import MockKey
|
||||
|
||||
from mock import Mock, NonCallableMock
|
||||
|
||||
|
@ -31,6 +32,8 @@ from mock import Mock, NonCallableMock
|
|||
class RoomMemberHandlerTestCase(unittest.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
self.mock_config = NonCallableMock()
|
||||
self.mock_config.signing_key = [MockKey()]
|
||||
self.hostname = "red"
|
||||
hs = HomeServer(
|
||||
self.hostname,
|
||||
|
@ -38,7 +41,6 @@ class RoomMemberHandlerTestCase(unittest.TestCase):
|
|||
ratelimiter=NonCallableMock(spec_set=[
|
||||
"send_message",
|
||||
]),
|
||||
config=NonCallableMock(),
|
||||
datastore=NonCallableMock(spec_set=[
|
||||
"persist_event",
|
||||
"get_joined_hosts_for_room",
|
||||
|
@ -57,6 +59,7 @@ class RoomMemberHandlerTestCase(unittest.TestCase):
|
|||
]),
|
||||
auth=NonCallableMock(spec_set=["check"]),
|
||||
state_handler=NonCallableMock(spec_set=["handle_new_event"]),
|
||||
config=self.mock_config,
|
||||
)
|
||||
|
||||
self.federation = NonCallableMock(spec_set=[
|
||||
|
|
|
@@ -20,7 +20,7 @@ from twisted.internet import defer
from mock import Mock, call, ANY
import json

from ..utils import MockHttpResource, MockClock, DeferredMockCallable
from ..utils import MockHttpResource, MockClock, DeferredMockCallable, MockKey

from synapse.server import HomeServer
from synapse.handlers.typing import TypingNotificationHandler

@@ -29,10 +29,11 @@ from synapse.handlers.typing import TypingNotificationHandler
def _expect_edu(destination, edu_type, content, origin="test"):
return {
"origin": origin,
"ts": 1000000,
"origin_server_ts": 1000000,
"pdus": [],
"edus": [
{
# TODO: SYN-103: Remove "origin" and "destination" keys.
"origin": origin,
"destination": destination,
"edu_type": edu_type,

@@ -61,6 +62,9 @@ class TypingNotificationsTestCase(unittest.TestCase):

self.mock_federation_resource = MockHttpResource()

self.mock_config = Mock()
self.mock_config.signing_key = [MockKey()]

hs = HomeServer("test",
clock=self.clock,
db_pool=None,

@@ -75,6 +79,8 @@ class TypingNotificationsTestCase(unittest.TestCase):
resource_for_client=Mock(),
resource_for_federation=self.mock_federation_resource,
http_client=self.mock_http_client,
config=self.mock_config,
keyring=Mock(),
)
hs.handlers = JustTypingNotificationHandlers(hs)

@@ -170,7 +176,7 @@ class TypingNotificationsTestCase(unittest.TestCase):
"typing": True,
}
),
on_send_callback=ANY,
json_data_callback=ANY,
),
defer.succeed((200, "OK"))
)

@@ -221,7 +227,7 @@ class TypingNotificationsTestCase(unittest.TestCase):
"typing": False,
}
),
on_send_callback=ANY,
json_data_callback=ANY,
),
defer.succeed((200, "OK"))
)
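
The ``_expect_edu`` change above reflects the renamed timestamp key in federation
transactions: these tests now expect ``origin_server_ts`` where they previously
expected ``ts``. A sketch of the transaction body the helper builds after the
rename; the ``content`` key is implied by the function signature rather than
visible in the hunk, so treat it as an assumption::

    def _expect_edu(destination, edu_type, content, origin="test"):
        return {
            "origin": origin,
            "origin_server_ts": 1000000,   # was "ts" before this change
            "pdus": [],
            "edus": [
                {
                    # TODO: SYN-103: Remove "origin" and "destination" keys.
                    "origin": origin,
                    "destination": destination,
                    "edu_type": edu_type,
                    "content": content,    # assumed; not shown in the hunk
                }
            ],
        }
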
@@ -20,7 +20,7 @@ from twisted.internet import defer

from mock import Mock

from ..utils import MockHttpResource
from ..utils import MockHttpResource, MockKey

from synapse.api.constants import PresenceState
from synapse.handlers.presence import PresenceHandler

@@ -45,16 +45,19 @@ class PresenceStateTestCase(unittest.TestCase):

def setUp(self):
self.mock_resource = MockHttpResource(prefix=PATH_PREFIX)

self.mock_config = Mock()
self.mock_config.signing_key = [MockKey()]
hs = HomeServer("test",
db_pool=None,
datastore=Mock(spec=[
"get_presence_state",
"set_presence_state",
"insert_client_ip",
]),
http_client=None,
resource_for_client=self.mock_resource,
resource_for_federation=self.mock_resource,
config=self.mock_config,
)
hs.handlers = JustPresenceHandlers(hs)

@@ -65,7 +68,11 @@ class PresenceStateTestCase(unittest.TestCase):
self.datastore.get_presence_list = get_presence_list

def _get_user_by_token(token=None):
return hs.parse_userid(myid)
return {
"user": hs.parse_userid(myid),
"admin": False,
"device_id": None,
}

hs.get_auth().get_user_by_token = _get_user_by_token

@@ -119,6 +126,8 @@ class PresenceListTestCase(unittest.TestCase):

def setUp(self):
self.mock_resource = MockHttpResource(prefix=PATH_PREFIX)
self.mock_config = Mock()
self.mock_config.signing_key = [MockKey()]

hs = HomeServer("test",
db_pool=None,

@@ -131,10 +140,12 @@ class PresenceListTestCase(unittest.TestCase):
"set_presence_list_accepted",
"del_presence_list",
"get_presence_list",
"insert_client_ip",
]),
http_client=None,
resource_for_client=self.mock_resource,
resource_for_federation=self.mock_resource
resource_for_federation=self.mock_resource,
config=self.mock_config,
)
hs.handlers = JustPresenceHandlers(hs)

@@ -147,7 +158,11 @@ class PresenceListTestCase(unittest.TestCase):
self.datastore.has_presence_state = has_presence_state

def _get_user_by_token(token=None):
return hs.parse_userid(myid)
return {
"user": hs.parse_userid(myid),
"admin": False,
"device_id": None,
}

room_member_handler = hs.handlers.room_member_handler = Mock(
spec=[

@@ -225,6 +240,9 @@ class PresenceEventStreamTestCase(unittest.TestCase):
def setUp(self):
self.mock_resource = MockHttpResource(prefix=PATH_PREFIX)

self.mock_config = Mock()
self.mock_config.signing_key = [MockKey()]

# HIDEOUS HACKERY
# TODO(paul): This should be injected in via the HomeServer DI system
from synapse.streams.events import (

@@ -255,6 +273,7 @@ class PresenceEventStreamTestCase(unittest.TestCase):
"cancel_call_later",
"time_msec",
]),
config=self.mock_config,
)

hs.get_clock().time_msec.return_value = 1000000
@@ -50,10 +50,10 @@ class ProfileTestCase(unittest.TestCase):
datastore=None,
)

def _get_user_by_token(token=None):
def _get_user_by_req(request=None):
return hs.parse_userid(myid)

hs.get_auth().get_user_by_token = _get_user_by_token
hs.get_auth().get_user_by_req = _get_user_by_req

hs.get_handlers().profile_handler = self.mock_handler
@@ -69,7 +69,11 @@ class RoomPermissionsTestCase(RestTestCase):
hs.get_handlers().federation_handler = Mock()

def _get_user_by_token(token=None):
return hs.parse_userid(self.auth_user_id)
return {
"user": hs.parse_userid(self.auth_user_id),
"admin": False,
"device_id": None,
}
hs.get_auth().get_user_by_token = _get_user_by_token

self.auth_user_id = self.rmcreator_id

@@ -425,7 +429,11 @@ class RoomsMemberListTestCase(RestTestCase):
self.auth_user_id = self.user_id

def _get_user_by_token(token=None):
return hs.parse_userid(self.auth_user_id)
return {
"user": hs.parse_userid(self.auth_user_id),
"admin": False,
"device_id": None,
}
hs.get_auth().get_user_by_token = _get_user_by_token

synapse.rest.room.register_servlets(hs, self.mock_resource)

@@ -508,7 +516,11 @@ class RoomsCreateTestCase(RestTestCase):
hs.get_handlers().federation_handler = Mock()

def _get_user_by_token(token=None):
return hs.parse_userid(self.auth_user_id)
return {
"user": hs.parse_userid(self.auth_user_id),
"admin": False,
"device_id": None,
}
hs.get_auth().get_user_by_token = _get_user_by_token

synapse.rest.room.register_servlets(hs, self.mock_resource)

@@ -605,7 +617,11 @@ class RoomTopicTestCase(RestTestCase):
hs.get_handlers().federation_handler = Mock()

def _get_user_by_token(token=None):
return hs.parse_userid(self.auth_user_id)
return {
"user": hs.parse_userid(self.auth_user_id),
"admin": False,
"device_id": None,
}
hs.get_auth().get_user_by_token = _get_user_by_token

synapse.rest.room.register_servlets(hs, self.mock_resource)

@@ -715,7 +731,16 @@ class RoomMemberStateTestCase(RestTestCase):
hs.get_handlers().federation_handler = Mock()

def _get_user_by_token(token=None):
return hs.parse_userid(self.auth_user_id)
return {
"user": hs.parse_userid(self.auth_user_id),
"admin": False,
"device_id": None,
}
return {
"user": hs.parse_userid(self.auth_user_id),
"admin": False,
"device_id": None,
}
hs.get_auth().get_user_by_token = _get_user_by_token

synapse.rest.room.register_servlets(hs, self.mock_resource)

@@ -847,7 +872,11 @@ class RoomMessagesTestCase(RestTestCase):
hs.get_handlers().federation_handler = Mock()

def _get_user_by_token(token=None):
return hs.parse_userid(self.auth_user_id)
return {
"user": hs.parse_userid(self.auth_user_id),
"admin": False,
"device_id": None,
}
hs.get_auth().get_user_by_token = _get_user_by_token

synapse.rest.room.register_servlets(hs, self.mock_resource)
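
Every REST test case above swaps its ``_get_user_by_token`` stub from returning a
bare user id to returning a dict with ``user``, ``admin`` and ``device_id`` keys,
matching the shape the auth layer now expects. A small helper capturing that
pattern; the ``stub_auth`` name is illustrative only, not part of the diff::

    def stub_auth(hs, user_id):
        """Patch hs.get_auth().get_user_by_token to yield the new dict shape."""
        def _get_user_by_token(token=None):
            return {
                "user": hs.parse_userid(user_id),
                "admin": False,
                "device_id": None,
            }
        hs.get_auth().get_user_by_token = _get_user_by_token

Note that the ``RoomMemberStateTestCase`` hunk ends up with the replacement block
duplicated; the second ``return`` is unreachable and harmless, but worth tidying
in a follow-up.
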
@@ -30,7 +30,8 @@ class DirectoryStoreTestCase(unittest.TestCase):
db_pool = SQLiteMemoryDbPool()
yield db_pool.prepare()

hs = HomeServer("test",
hs = HomeServer(
"test",
db_pool=db_pool,
)

@@ -60,9 +61,25 @@ class DirectoryStoreTestCase(unittest.TestCase):
servers=["test"],
)

self.assertObjectHasAttributes(
{"room_id": self.room.to_string(),
"servers": ["test"]},
{
"room_id": self.room.to_string(),
"servers": ["test"],
},
(yield self.store.get_association_from_room_alias(self.alias))
)

@defer.inlineCallbacks
def test_delete_alias(self):
yield self.store.create_room_alias_association(
room_alias=self.alias,
room_id=self.room.to_string(),
servers=["test"],
)

room_id = yield self.store.delete_room_alias(self.alias)
self.assertEqual(self.room.to_string(), room_id)

self.assertIsNone(
(yield self.store.get_association_from_room_alias(self.alias))
)
@@ -53,7 +53,7 @@ class RegistrationStoreTestCase(unittest.TestCase):
)

self.assertEquals(
self.user_id,
{"admin": 0, "device_id": None, "name": self.user_id},
(yield self.store.get_user_by_token(self.tokens[0]))
)

@@ -63,7 +63,7 @@ class RegistrationStoreTestCase(unittest.TestCase):
yield self.store.add_access_token_to_user(self.user_id, self.tokens[1])

self.assertEquals(
self.user_id,
{"admin": 0, "device_id": None, "name": self.user_id},
(yield self.store.get_user_by_token(self.tokens[1]))
)
@@ -599,7 +599,7 @@ def new_fake_pdu(pdu_id, context, pdu_type, state_key, prev_state_id,
prev_state_id=prev_state_id,
origin="example.com",
context="context",
ts=1405353060021,
origin_server_ts=1405353060021,
depth=depth,
content_json="{}",
unrecognized_keys="{}",
@@ -76,6 +76,13 @@ class MockHttpResource(HttpServer):
mock_content.configure_mock(**config)
mock_request.content = mock_content

mock_request.method = http_method
mock_request.uri = path

mock_request.requestHeaders.getRawHeaders.return_value=[
"X-Matrix origin=test,key=,sig="
]

# return the right path if the event requires it
mock_request.path = path

@@ -108,6 +115,21 @@ class MockHttpResource(HttpServer):
self.callbacks.append((method, path_pattern, callback))


class MockKey(object):
alg = "mock_alg"
version = "mock_version"

@property
def verify_key(self):
return self

def sign(self, message):
return b"\x9a\x87$"

def verify(self, message, sig):
assert sig == b"\x9a\x87$"


class MockClock(object):
now = 1000

@@ -167,7 +189,11 @@ class MemoryDataStore(object):

def get_user_by_token(self, token):
try:
return self.tokens_to_users[token]
return {
"name": self.tokens_to_users[token],
"admin": 0,
"device_id": None,
}
except:
raise StoreError(400, "User does not exist.")

@@ -264,6 +290,9 @@ class MemoryDataStore(object):
def get_ops_levels(self, room_id):
return defer.succeed((5, 5, 5))

def insert_client_ip(self, user, device_id, access_token, ip, user_agent):
return defer.succeed(None)


def _format_call(args, kwargs):
return ", ".join(
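
The new ``MockKey`` helper above stands in for a real signing key in these tests:
it reports a fixed algorithm and version, acts as its own ``verify_key``, always
produces the same signature bytes, and only accepts that signature back. A quick
usage sketch, assuming the same import path as the other test helpers::

    from tests.utils import MockKey  # assumed path for the helper above

    key = MockKey()
    sig = key.sign(b"some federation transaction")         # always b"\x9a\x87$"
    key.verify_key.verify(b"some federation transaction", sig)  # passes
    print(key.alg, key.version)                             # mock_alg mock_version
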
@@ -80,4 +80,53 @@ angular.module('matrixWebClient')
return function(text) {
return $sce.trustAsHtml(text);
};
}])
// Exactly the same as ngSanitize's linky but instead of pushing sanitized
// text in the addText function, we just push the raw text.
.filter('unsanitizedLinky', ['$sanitize', function($sanitize) {
var LINKY_URL_REGEXP =
/((ftp|https?):\/\/|(mailto:)?[A-Za-z0-9._%+-]+@)\S*[^\s.;,(){}<>"]/,
MAILTO_REGEXP = /^mailto:/;

return function(text, target) {
if (!text) return text;
var match;
var raw = text;
var html = [];
var url;
var i;
while ((match = raw.match(LINKY_URL_REGEXP))) {
// We can not end in these as they are sometimes found at the end of the sentence
url = match[0];
// if we did not match ftp/http/mailto then assume mailto
if (match[2] == match[3]) url = 'mailto:' + url;
i = match.index;
addText(raw.substr(0, i));
addLink(url, match[0].replace(MAILTO_REGEXP, ''));
raw = raw.substring(i + match[0].length);
}
addText(raw);
return $sanitize(html.join(''));

function addText(text) {
if (!text) {
return;
}
html.push(text);
}

function addLink(url, text) {
html.push('<a ');
if (angular.isDefined(target)) {
html.push('target="');
html.push(target);
html.push('" ');
}
html.push('href="');
html.push(url);
html.push('">');
addText(text);
html.push('</a>');
}
};
}]);
@@ -121,7 +121,9 @@
<span ng-show='msg.content.msgtype === "m.text"'
class="message"
ng-class="containsBingWord(msg.content.body) && msg.user_id != state.user_id ? msg.echo_msg_state + ' messageBing' : msg.echo_msg_state"
ng-bind-html="((msg.content.msgtype === 'm.text') ? msg.content.body : '') | linky:'_blank'"/>
ng-bind-html="(msg.content.msgtype === 'm.text' && msg.type === 'm.room.message' && msg.content.format === 'org.matrix.custom.html') ?
(msg.content.formatted_body | unsanitizedLinky) :
(msg.content.msgtype === 'm.text' && msg.type === 'm.room.message') ? (msg.content.body | linky:'_blank') : '' "/>

<span ng-show='msg.type === "m.call.invite" && msg.user_id == state.user_id'>Outgoing Call{{ isWebRTCSupported ? '' : ' (But your browser does not support VoIP)' }}</span>
<span ng-show='msg.type === "m.call.invite" && msg.user_id != state.user_id'>Incoming Call{{ isWebRTCSupported ? '' : ' (But your browser does not support VoIP)' }}</span>