mirror of https://mau.dev/maunium/synapse.git synced 2024-11-13 21:41:30 +01:00

Merge branch 'master' of github.com:matrix-org/synapse into sql_refactor

Conflicts:
	synapse/storage/_base.py
This commit is contained in:
Erik Johnston 2014-08-14 10:01:04 +01:00
commit 10294b6082
35 changed files with 412 additions and 188 deletions


@@ -10,7 +10,7 @@ VoIP[1]. The basics you need to know to get up and running are:
 - Matrix user IDs look like ``@matthew:matrix.org`` (although in the future
   you will normally refer to yourself and others using a 3PID: email
-  address, phone number, etc rather than manipulating matrix user IDs)
+  address, phone number, etc rather than manipulating Matrix user IDs)

 The overall architecture is::
@@ -40,8 +40,8 @@ To get up and running:
 About Matrix
 ============

-Matrix specifies a set of pragmatic RESTful HTTP JSON APIs for VoIP and IM as an
-open standard, providing:
+Matrix specifies a set of pragmatic RESTful HTTP JSON APIs as an open standard,
+which handle:

 - Creating and managing fully distributed chat rooms with no
   single points of control or failure
@@ -147,7 +147,7 @@ Setting up Federation
 In order for other homeservers to send messages to your server, it will need to
 be publicly visible on the internet, and they will need to know its host name.
-You have two choices here, which will influence the form of your matrix user
+You have two choices here, which will influence the form of your Matrix user
 IDs:

 1) Use the machine's own hostname as available on public DNS in the form of its
@@ -231,14 +231,15 @@ synapse sandbox running on localhost)
 Logging In To An Existing Account
 ---------------------------------

-Just enter the ``@localpart:my.domain.here`` matrix user ID and password into the form and click the Login button.
+Just enter the ``@localpart:my.domain.here`` Matrix user ID and password into
+the form and click the Login button.

 Identity Servers
 ================

 The job of authenticating 3PIDs and tracking which 3PIDs are associated with a
-given matrix user is very security-sensitive, as there is obvious risk of spam
+given Matrix user is very security-sensitive, as there is obvious risk of spam
 if it is too easy to sign up for Matrix accounts or harvest 3PID data. Meanwhile
 the job of publishing the end-to-end encryption public keys for Matrix users is
 also very security-sensitive for similar reasons.


@@ -24,7 +24,7 @@ Where the bottom (the transport layer) is what talks to the internet via HTTP, a
 * duplicate pdu_id's - i.e., it makes sure we ignore them.
 * responding to requests for a given pdu_id
 * responding to requests for all metadata for a given context (i.e. room)
-* handling incoming pagination requests
+* handling incoming backfill requests

 So it has to parse incoming messages to discover which are metadata and which aren't, and has to correctly clobber existing metadata where appropriate.


@@ -155,9 +155,9 @@ To fetch all the state of a given context:
   PDUs that encode the state.

-To paginate events on a given context:
+To backfill events on a given context:

-  GET .../paginate/:context/
+  GET .../backfill/:context/
   Query args: v, limit

   Response: JSON encoding of a single Transaction containing multiple PDUs
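The renamed request shape can be sketched as a URL builder. This is a minimal illustration of the query args described above; the path prefix, hostname, and helper name are placeholder assumptions, not taken from the spec:

```python
def backfill_url(prefix, context, versions, limit):
    """Build the backfill GET URL sketched above.

    `versions` is a list of (pdu_id, origin) pairs, each passed as a `v`
    query arg; `limit` caps how many PDUs the remote should return.
    """
    v_args = "&".join("v=%s,%s" % (pdu_id, origin)
                      for pdu_id, origin in versions)
    return "%s/backfill/%s/?%s&limit=%d" % (prefix, context, v_args, limit)

# e.g. ask a (hypothetical) remote for 5 PDUs before two known extremities:
url = backfill_url("https://hs.example/prefix", "room1",
                   [("abc", "hs2"), ("def", "hs3")], 5)
```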


@@ -11,6 +11,11 @@ medium-term goal we should encourage the unification of this terminology.
 Terms
 =====

+Backfilling:
+  The process of synchronising historic state from one home server to another,
+  to backfill the event storage so that scrollback can be presented to the
+  client(s). (Formerly, and confusingly, called 'pagination')
+
 Context:
   A single human-level entity of interest (currently, a chat room)
@@ -28,11 +33,6 @@ Event:
   [[NOTE(paul): The current server-server implementation calls these simply
   "messages" but the term is too ambiguous here; I've called them Events]]

-Pagination:
-  The process of synchronising historic state from one home server to another,
-  to backfill the event storage so that scrollback can be presented to the
-  client(s).
-
 PDU (Persistent Data Unit):
   A message that relates to a single context, irrespective of the server that
   is communicating it. PDUs either encode a single Event, or a single State


@@ -1,11 +1,11 @@
-Versioning is, like, hard for paginating backwards because of the number of Home Servers involved.
+Versioning is, like, hard for backfilling backwards because of the number of Home Servers involved.

-The way we solve this is by doing versioning as an acyclic directed graph of PDUs. For pagination purposes, this is done on a per context basis.
+The way we solve this is by doing versioning as an acyclic directed graph of PDUs. For backfilling purposes, this is done on a per context basis.
 When we send a PDU we include all PDUs that have been received for that context that hasn't been subsequently listed in a later PDU. The trivial case is a simple list of PDUs, e.g. A <- B <- C. However, if two servers send out a PDU at the same to, both B and C would point at A - a later PDU would then list both B and C.

 Problems with opaque version strings:
  - How do you do clustering without mandating that a cluster can only have one transaction in flight to a given remote home server at a time.
-   If you have multiple transactions sent at once, then you might drop one transaction, receive anotherwith a version that is later than the dropped transaction and which point ARGH WE LOST A TRANSACTION.
+   If you have multiple transactions sent at once, then you might drop one transaction, receive another with a version that is later than the dropped transaction and which point ARGH WE LOST A TRANSACTION.
- - How do you do pagination? A version string defines a point in a stream w.r.t. a single home server, not a point in the context.
+ - How do you do backfilling? A version string defines a point in a stream w.r.t. a single home server, not a point in the context.

 We only need to store the ends of the directed graph, we DO NOT need to do the whole one table of nodes and one of edges.
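The "ends of the directed graph" idea above can be sketched in a few lines: each new PDU references the current unreferenced ends, and only those ends need tracking. The class and method names here are illustrative, not the Synapse implementation:

```python
# Minimal sketch of per-context extremity tracking as described above:
# an "end" is a PDU not yet listed as a previous PDU by any later PDU.
class ContextGraph:
    def __init__(self):
        self.extremities = set()  # ids of PDUs with no later PDU pointing at them

    def receive_pdu(self, pdu_id, prev_pdu_ids):
        # Any end referenced by the incoming PDU stops being an end;
        # the incoming PDU becomes a new end.
        self.extremities -= set(prev_pdu_ids)
        self.extremities.add(pdu_id)

    def send_pdu(self, pdu_id):
        # A locally sent PDU lists all current ends and replaces them.
        prev = sorted(self.extremities)
        self.extremities = {pdu_id}
        return prev

g = ContextGraph()
g.receive_pdu("A", [])
g.receive_pdu("B", ["A"])   # simple chain: A <- B
g.receive_pdu("C", ["A"])   # concurrent PDU: C also points at A
prev = g.send_pdu("D")      # a later PDU lists both B and C
```

Storing just the extremity set is enough to build the `v` args of a backfill request and the previous-PDU list of an outgoing PDU; no full node/edge tables are needed.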


@@ -104,12 +104,12 @@ class InputOutput(object):
             #self.print_line("OK.")
             return

-        m = re.match("^paginate (\S+)$", line)
+        m = re.match("^backfill (\S+)$", line)
         if m:
-            # we want to paginate a room
+            # we want to backfill a room
             room_name, = m.groups()
-            self.print_line("paginate %s" % room_name)
-            self.server.paginate(room_name)
+            self.print_line("backfill %s" % room_name)
+            self.server.backfill(room_name)
             return

         self.print_line("Unrecognized command")
@@ -307,7 +307,7 @@ class HomeServer(ReplicationHandler):
         except Exception as e:
             logger.exception(e)

-    def paginate(self, room_name, limit=5):
+    def backfill(self, room_name, limit=5):
         room = self.joined_rooms.get(room_name)

         if not room:
@@ -315,7 +315,7 @@ class HomeServer(ReplicationHandler):
         dest = room.oldest_server

-        return self.replication_layer.paginate(dest, room_name, limit)
+        return self.replication_layer.backfill(dest, room_name, limit)

     def _get_room_remote_servers(self, room_name):
         return [i for i in self.joined_rooms.setdefault(room_name,).servers]


@@ -14,6 +14,7 @@
 # limitations under the License.

 """This module contains classes for authenticating the user."""
 from twisted.internet import defer

 from synapse.api.constants import Membership


@@ -75,8 +75,8 @@ class FederationEventHandler(object):
     @log_function
     @defer.inlineCallbacks
     def backfill(self, room_id, limit):
-        # TODO: Work out which destinations to ask for pagination
-        # self.replication_layer.paginate(dest, room_id, limit)
+        # TODO: Work out which destinations to ask for backfill
+        # self.replication_layer.backfill(dest, room_id, limit)
         pass

     @log_function


@@ -114,14 +114,14 @@ class PduActions(object):
     @defer.inlineCallbacks
     @log_function
-    def paginate(self, context, pdu_list, limit):
+    def backfill(self, context, pdu_list, limit):
         """ For a given list of PDU id and origins return the proceeding
         `limit` `Pdu`s in the given `context`.

         Returns:
             Deferred: Results in a list of `Pdu`s.
         """
-        results = yield self.store.get_pagination(
+        results = yield self.store.get_backfill(
             context, pdu_list, limit
         )
@@ -131,7 +131,7 @@ class PduActions(object):
     def is_new(self, pdu):
         """ When we receive a `Pdu` from a remote home server, we want to
         figure out whether it is `new`, i.e. it is not some historic PDU that
-        we haven't seen simply because we haven't paginated back that far.
+        we haven't seen simply because we haven't backfilled back that far.

         Returns:
             Deferred: Results in a `bool`


@@ -118,7 +118,7 @@ class ReplicationLayer(object):
         *Note:* The home server should always call `send_pdu` even if it knows
         that it does not need to be replicated to other home servers. This is
         in case e.g. someone else joins via a remote home server and then
-        paginates.
+        backfills.

         TODO: Figure out when we should actually resolve the deferred.
@@ -179,13 +179,13 @@ class ReplicationLayer(object):
     @defer.inlineCallbacks
     @log_function
-    def paginate(self, dest, context, limit):
+    def backfill(self, dest, context, limit):
         """Requests some more historic PDUs for the given context from the
         given destination server.

         Args:
             dest (str): The remote home server to ask.
-            context (str): The context to paginate back on.
+            context (str): The context to backfill.
             limit (int): The maximum number of PDUs to return.

         Returns:
@@ -193,16 +193,16 @@ class ReplicationLayer(object):
         """
         extremities = yield self.store.get_oldest_pdus_in_context(context)

-        logger.debug("paginate extrem=%s", extremities)
+        logger.debug("backfill extrem=%s", extremities)

         # If there are no extremeties then we've (probably) reached the start.
         if not extremities:
             return

-        transaction_data = yield self.transport_layer.paginate(
+        transaction_data = yield self.transport_layer.backfill(
             dest, context, extremities, limit)

-        logger.debug("paginate transaction_data=%s", repr(transaction_data))
+        logger.debug("backfill transaction_data=%s", repr(transaction_data))

         transaction = Transaction(**transaction_data)
@@ -281,9 +281,9 @@ class ReplicationLayer(object):
     @defer.inlineCallbacks
     @log_function
-    def on_paginate_request(self, context, versions, limit):
+    def on_backfill_request(self, context, versions, limit):

-        pdus = yield self.pdu_actions.paginate(context, versions, limit)
+        pdus = yield self.pdu_actions.backfill(context, versions, limit)

         defer.returnValue((200, self._transaction_from_pdus(pdus).get_dict()))
@@ -427,7 +427,7 @@ class ReplicationLayer(object):
         # Get missing pdus if necessary.
         is_new = yield self.pdu_actions.is_new(pdu)
         if is_new and not pdu.outlier:
-            # We only paginate backwards to the min depth.
+            # We only backfill backwards to the min depth.
             min_depth = yield self.store.get_min_depth_for_context(pdu.context)

             if min_depth and pdu.depth > min_depth:


@@ -112,7 +112,7 @@ class TransportLayer(object):
         return self._do_request_for_transaction(destination, subpath)

     @log_function
-    def paginate(self, dest, context, pdu_tuples, limit):
+    def backfill(self, dest, context, pdu_tuples, limit):
         """ Requests `limit` previous PDUs in a given context before list of
         PDUs.
@@ -126,14 +126,14 @@ class TransportLayer(object):
             Deferred: Results in a dict received from the remote homeserver.
         """
         logger.debug(
-            "paginate dest=%s, context=%s, pdu_tuples=%s, limit=%s",
+            "backfill dest=%s, context=%s, pdu_tuples=%s, limit=%s",
             dest, context, repr(pdu_tuples), str(limit)
         )

         if not pdu_tuples:
             return

-        subpath = "/paginate/%s/" % context
+        subpath = "/backfill/%s/" % context

         args = {"v": ["%s,%s" % (i, o) for i, o in pdu_tuples]}
         args["limit"] = limit
@@ -251,8 +251,8 @@ class TransportLayer(object):
         self.server.register_path(
             "GET",
-            re.compile("^" + PREFIX + "/paginate/([^/]*)/$"),
-            lambda request, context: self._on_paginate_request(
+            re.compile("^" + PREFIX + "/backfill/([^/]*)/$"),
+            lambda request, context: self._on_backfill_request(
                 context, request.args["v"],
                 request.args["limit"]
             )
@@ -352,7 +352,7 @@ class TransportLayer(object):
         defer.returnValue(data)

     @log_function
-    def _on_paginate_request(self, context, v_list, limits):
+    def _on_backfill_request(self, context, v_list, limits):
         if not limits:
             return defer.succeed(
                 (400, {"error": "Did not include limit param"})
@@ -362,7 +362,7 @@ class TransportLayer(object):
         versions = [v.split(",", 1) for v in v_list]

-        return self.request_handler.on_paginate_request(
+        return self.request_handler.on_backfill_request(
             context, versions, limit)
@@ -371,14 +371,14 @@ class TransportReceivedHandler(object):
     """
     def on_incoming_transaction(self, transaction):
         """ Called on PUT /send/<transaction_id>, or on response to a request
-        that we sent (e.g. a pagination request)
+        that we sent (e.g. a backfill request)

         Args:
             transaction (synapse.transaction.Transaction): The transaction that
                 was sent to us.

         Returns:
-            twisted.internet.defer.Deferred: A deferred that get's fired when
+            twisted.internet.defer.Deferred: A deferred that gets fired when
                 the transaction has finished being processed.

             The result should be a tuple in the form of
@@ -438,14 +438,14 @@ class TransportRequestHandler(object):
     def on_context_state_request(self, context):
         """ Called on GET /state/<context>/

-        Get's hit when someone wants all the *current* state for a given
+        Gets hit when someone wants all the *current* state for a given
         contexts.

         Args:
             context (str): The name of the context that we're interested in.

         Returns:
-            twisted.internet.defer.Deferred: A deferred that get's fired when
+            twisted.internet.defer.Deferred: A deferred that gets fired when
                 the transaction has finished being processed.

             The result should be a tuple in the form of
@@ -457,20 +457,20 @@ class TransportRequestHandler(object):
         """
         pass

-    def on_paginate_request(self, context, versions, limit):
-        """ Called on GET /paginate/<context>/?v=...&limit=...
+    def on_backfill_request(self, context, versions, limit):
+        """ Called on GET /backfill/<context>/?v=...&limit=...

-        Get's hit when we want to paginate backwards on a given context from
+        Gets hit when we want to backfill backwards on a given context from
         the given point.

         Args:
-            context (str): The context to paginate on
-            versions (list): A list of 2-tuple's representing where to paginate
+            context (str): The context to backfill
+            versions (list): A list of 2-tuples representing where to backfill
                 from, in the form `(pdu_id, origin)`
             limit (int): How many pdus to return.

         Returns:
-            Deferred: Resultsin a tuple in the form of
+            Deferred: Results in a tuple in the form of
                 `(response_code, respond_body)`, where `response_body` is a python
                 dict that will get serialized to JSON.
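The server-side query-arg handling visible in `_on_backfill_request` can be illustrated in isolation. This is a sketch, not the Synapse code: the helper name is made up, and taking the limit from the first value of a repeated query parameter is an assumption (the diff elides that line):

```python
def parse_backfill_args(v_list, limits):
    """Sketch of the backfill query-arg parsing shown in the diff above.

    Each `v` value is a "pdu_id,origin" pair; splitting on the first comma
    only means an origin containing commas stays intact. The limit is
    assumed to arrive as a repeated query param, so we take its first value.
    """
    if not limits:
        raise ValueError("Did not include limit param")
    limit = int(limits[0])
    versions = [v.split(",", 1) for v in v_list]
    return versions, limit

versions, limit = parse_backfill_args(["abc,hs1", "def,hs2"], ["10"])
```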


@@ -35,9 +35,11 @@ class DirectoryHandler(BaseHandler):
     def __init__(self, hs):
         super(DirectoryHandler, self).__init__(hs)

-        self.http_client = hs.get_http_client()
-        self.clock = hs.get_clock()
+        self.hs = hs
+        self.federation = hs.get_replication_layer()
+        self.federation.register_query_handler(
+            "directory", self.on_directory_query
+        )

     @defer.inlineCallbacks
     def create_association(self, room_alias, room_id, servers):
@@ -58,9 +60,7 @@ class DirectoryHandler(BaseHandler):
         )

     @defer.inlineCallbacks
-    def get_association(self, room_alias, local_only=False):
-        # TODO(erikj): Do auth
-
+    def get_association(self, room_alias):
         room_id = None
         if room_alias.is_mine:
             result = yield self.store.get_association_from_room_alias(
@@ -70,21 +70,12 @@ class DirectoryHandler(BaseHandler):
         if result:
             room_id = result.room_id
             servers = result.servers
-        elif not local_only:
-            path = "%s/ds/room/%s?local_only=1" % (
-                PREFIX,
-                urllib.quote(room_alias.to_string())
-            )
-
-            result = None
-            try:
-                result = yield self.http_client.get_json(
-                    destination=room_alias.domain,
-                    path=path,
-                )
-            except:
-                # TODO(erikj): Handle this better?
-                logger.exception("Failed to get remote room alias")
+        else:
+            result = yield self.federation.make_query(
+                destination=room_alias.domain,
+                query_type="directory",
+                args={"room_alias": room_alias.to_string()},
+            )

         if result and "room_id" in result and "servers" in result:
             room_id = result["room_id"]
@@ -99,3 +90,20 @@ class DirectoryHandler(BaseHandler):
             "servers": servers,
         })
         return
+
+    @defer.inlineCallbacks
+    def on_directory_query(self, args):
+        room_alias = self.hs.parse_roomalias(args["room_alias"])
+        if not room_alias.is_mine:
+            raise SynapseError(
+                400, "Room Alias is not hosted on this Home Server"
+            )
+
+        result = yield self.store.get_association_from_room_alias(
+            room_alias
+        )
+
+        defer.returnValue({
+            "room_id": result.room_id,
+            "servers": result.servers,
+        })
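The `register_query_handler` pattern this hunk introduces boils down to a per-query-type dispatch table: handlers register callbacks keyed by query type, and incoming federation queries are routed by that key. A standalone sketch (the class and the response values are illustrative, not the actual ReplicationLayer beyond the method name shown in the diff):

```python
# Minimal sketch of query-type dispatch as used by the directory handler
# above: register a callback per query type, route incoming queries by type.
class QueryRouter:
    def __init__(self):
        self._handlers = {}

    def register_query_handler(self, query_type, handler):
        self._handlers[query_type] = handler

    def on_query(self, query_type, args):
        # In Synapse this would be driven by an incoming federation request.
        return self._handlers[query_type](args)

router = QueryRouter()
router.register_query_handler(
    "directory",
    lambda args: {"room_id": "!abc", "servers": ["hs1"]},  # stub response
)
response = router.on_query("directory", {"room_alias": "#room:hs1"})
```

The win over the old `http_client.get_json` path is that "directory" becomes just one more query type on the shared federation channel instead of a bespoke HTTP endpoint.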


@@ -56,6 +56,8 @@ class PresenceHandler(BaseHandler):
         self.homeserver = hs

+        self.clock = hs.get_clock()
+
         distributor = hs.get_distributor()
         distributor.observe("registered_user", self.registered_user)
@@ -168,14 +170,15 @@ class PresenceHandler(BaseHandler):
                 state = yield self.store.get_presence_state(
                     target_user.localpart
                 )
+                defer.returnValue(state)
             else:
                 raise SynapseError(404, "Presence information not visible")
         else:
             # TODO(paul): Have remote server send us permissions set
-            defer.returnValue(
-                self._get_or_offline_usercache(target_user).get_state()
-            )
+            state = self._get_or_offline_usercache(target_user).get_state()
+
+            if "mtime" in state:
+                state["mtime_age"] = self.clock.time_msec() - state.pop("mtime")
+
+            defer.returnValue(state)

     @defer.inlineCallbacks
     def set_state(self, target_user, auth_user, state):
@@ -209,6 +212,8 @@ class PresenceHandler(BaseHandler):
             ),
         ])

+        state["mtime"] = self.clock.time_msec()
+
         now_online = state["state"] != PresenceState.OFFLINE
         was_polling = target_user in self._user_cachemap
@@ -361,6 +366,8 @@ class PresenceHandler(BaseHandler):
             observed_user = self.hs.parse_userid(p.pop("observed_user_id"))
             p["observed_user"] = observed_user
             p.update(self._get_or_offline_usercache(observed_user).get_state())
+            if "mtime" in p:
+                p["mtime_age"] = self.clock.time_msec() - p.pop("mtime")

         defer.returnValue(presence)
@@ -546,10 +553,15 @@ class PresenceHandler(BaseHandler):
     def _push_presence_remote(self, user, destination, state=None):
         if state is None:
             state = yield self.store.get_presence_state(user.localpart)

         yield self.distributor.fire(
             "collect_presencelike_data", user, state
         )

+        if "mtime" in state:
+            state = dict(state)
+            state["mtime_age"] = self.clock.time_msec() - state.pop("mtime")
+
         yield self.federation.send_edu(
             destination=destination,
             edu_type="m.presence",
@@ -585,6 +597,9 @@ class PresenceHandler(BaseHandler):
             state = dict(push)
             del state["user_id"]

+            if "mtime_age" in state:
+                state["mtime"] = self.clock.time_msec() - state.pop("mtime_age")
+
             statuscache = self._get_or_make_usercache(user)

             self._user_cachemap_latest_serial += 1
@@ -631,9 +646,14 @@ class PresenceHandler(BaseHandler):
     def push_update_to_clients(self, observer_user, observed_user,
                                statuscache):

+        state = statuscache.make_event(user=observed_user, clock=self.clock)
+
         self.notifier.on_new_user_event(
             observer_user.to_string(),
-            event_data=statuscache.make_event(user=observed_user),
+            event_data=statuscache.make_event(
+                user=observed_user,
+                clock=self.clock
+            ),
             stream_type=PresenceStreamData,
             store_id=statuscache.serial
         )
@@ -652,8 +672,10 @@ class PresenceStreamData(StreamData):
                        if from_key < cachemap[k].serial <= to_key]

             if updates:
+                clock = self.presence.clock
+
                 latest_serial = max([x[1].serial for x in updates])
-                data = [x[1].make_event(user=x[0]) for x in updates]
+                data = [x[1].make_event(user=x[0], clock=clock) for x in updates]
                 return ((data, latest_serial))
             else:
                 return (([], self.presence._user_cachemap_latest_serial))
@@ -674,6 +696,8 @@ class UserPresenceCache(object):
         self.serial = None

     def update(self, state, serial):
+        assert("mtime_age" not in state)
+
         self.state.update(state)
         # Delete keys that are now 'None'
         for k in self.state.keys():
@@ -691,8 +715,11 @@ class UserPresenceCache(object):
         # clone it so caller can't break our cache
         return dict(self.state)

-    def make_event(self, user):
+    def make_event(self, user, clock):
         content = self.get_state()
         content["user_id"] = user.to_string()

+        if "mtime" in content:
+            content["mtime_age"] = clock.time_msec() - content.pop("mtime")
+
         return {"type": "m.presence", "content": content}
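The recurring `mtime`/`mtime_age` dance in this file converts the absolute local timestamp into a relative age before presence leaves the server (so a remote homeserver's wall clock never matters), and back into a local `mtime` on receipt. A standalone sketch of that conversion, with illustrative helper names:

```python
# Sketch of the mtime <-> mtime_age conversion in the presence changes above:
# absolute timestamps become relative ages on the wire, and are re-anchored
# to the receiver's own clock on arrival.
def to_wire(state, now_msec):
    state = dict(state)  # clone so the cached copy isn't mutated
    if "mtime" in state:
        state["mtime_age"] = now_msec - state.pop("mtime")
    return state

def from_wire(state, now_msec):
    state = dict(state)
    if "mtime_age" in state:
        state["mtime"] = now_msec - state.pop("mtime_age")
    return state

sent = to_wire({"state": "online", "mtime": 1000}, now_msec=1500)
# The receiving server reconstructs an mtime on its own, different clock:
received = from_wire(sent, now_msec=99500)
```

Sending an age instead of a timestamp means the two servers only need clocks that tick at the same rate, not clocks that agree.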


@@ -32,7 +32,7 @@ import urllib

 logger = logging.getLogger(__name__)

+# FIXME: SURELY these should be killed?!
 _destination_mappings = {
     "red": "localhost:8080",
     "blue": "localhost:8081",
@@ -147,7 +147,7 @@ class TwistedHttpClient(HttpClient):
             destination.encode("ascii"),
             "GET",
             path.encode("ascii"),
-            query_bytes
+            query_bytes=query_bytes
         )

         body = yield readBody(response)


@@ -16,7 +16,6 @@
 from twisted.internet import defer

-from synapse.types import RoomAlias, RoomID
 from base import RestServlet, client_path_pattern

 import json
@@ -36,17 +35,10 @@ class ClientDirectoryServer(RestServlet):
     @defer.inlineCallbacks
     def on_GET(self, request, room_alias):
-        # TODO(erikj): Handle request
-
-        local_only = "local_only" in request.args
-
-        room_alias = urllib.unquote(room_alias)
-        room_alias_obj = RoomAlias.from_string(room_alias, self.hs)
+        room_alias = self.hs.parse_roomalias(urllib.unquote(room_alias))

         dir_handler = self.handlers.directory_handler
-        res = yield dir_handler.get_association(
-            room_alias_obj,
-            local_only=local_only
-        )
+        res = yield dir_handler.get_association(room_alias)

         defer.returnValue((200, res))
@@ -57,10 +49,9 @@ class ClientDirectoryServer(RestServlet):
         logger.debug("Got content: %s", content)

-        room_alias = urllib.unquote(room_alias)
-        room_alias_obj = RoomAlias.from_string(room_alias, self.hs)
+        room_alias = self.hs.parse_roomalias(urllib.unquote(room_alias))

-        logger.debug("Got room name: %s", room_alias_obj.to_string())
+        logger.debug("Got room name: %s", room_alias.to_string())

         room_id = content["room_id"]
         servers = content["servers"]
@@ -75,7 +66,7 @@ class ClientDirectoryServer(RestServlet):
         try:
             yield dir_handler.create_association(
-                room_alias_obj, room_id, servers
+                room_alias, room_id, servers
             )
         except:
             logger.exception("Failed to create association")


@@ -22,7 +22,6 @@ from synapse.api.events.room import (RoomTopicEvent, MessageEvent,
                                      RoomMemberEvent, FeedbackEvent)
 from synapse.api.constants import Feedback, Membership
 from synapse.api.streams import PaginationConfig
-from synapse.types import RoomAlias

 import json
 import logging
@@ -150,10 +149,7 @@ class JoinRoomAliasServlet(RestServlet):
         logger.debug("room_alias: %s", room_alias)

-        room_alias = RoomAlias.from_string(
-            urllib.unquote(room_alias),
-            self.hs
-        )
+        room_alias = self.hs.parse_roomalias(urllib.unquote(room_alias))

         handler = self.handlers.room_member_handler
         ret_dict = yield handler.join_room_alias(user, room_alias)


@@ -28,7 +28,7 @@ from synapse.handlers import Handlers
 from synapse.rest import RestServletFactory
 from synapse.state import StateHandler
 from synapse.storage import DataStore
-from synapse.types import UserID
+from synapse.types import UserID, RoomAlias
 from synapse.util import Clock
 from synapse.util.distributor import Distributor
 from synapse.util.lockutils import LockManager
@@ -120,6 +120,11 @@ class BaseHomeServer(object):
         object."""
         return UserID.from_string(s, hs=self)

+    def parse_roomalias(self, s):
+        """Parse the string given by 's' as a Room Alias and return a RoomAlias
+        object."""
+        return RoomAlias.from_string(s, hs=self)

 # Build magic accessors for every dependency
 for depname in BaseHomeServer.DEPENDENCIES:
     BaseHomeServer._make_dependency_method(depname)
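The new `parse_roomalias` helper delegates to `RoomAlias.from_string`. As a rough illustration of the parsing involved, here is a minimal self-contained sketch; this is a simplified stand-in, not Synapse's actual `RoomAlias` class (which also carries a homeserver reference):

```python
from collections import namedtuple


class RoomAlias(namedtuple("RoomAlias", ["localpart", "domain"])):
    """Simplified stand-in for synapse.types.RoomAlias (illustration only)."""
    SIGIL = "#"

    @classmethod
    def from_string(cls, s):
        # Room aliases look like "#localpart:domain"
        if not s.startswith(cls.SIGIL):
            raise ValueError("room alias must start with %r" % cls.SIGIL)
        localpart, sep, domain = s[1:].partition(":")
        if not sep or not domain:
            raise ValueError("room alias must contain a ':' separator")
        return cls(localpart, domain)

    def to_string(self):
        return "%s%s:%s" % (self.SIGIL, self.localpart, self.domain)


alias = RoomAlias.from_string("#my-room:test")
print(alias.localpart)  # my-room
print(alias.domain)     # test
```

The round trip via `to_string()` matches the `RoomAliasTestCase` expectations further down (`"#channel:my.domain"`).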


@@ -44,7 +44,6 @@ class DataStore(RoomDataStore, RoomMemberStore, MessageStore, RoomStore,
     def __init__(self, hs):
         super(DataStore, self).__init__(hs)
         self.event_factory = hs.get_event_factory()
-        self.hs = hs

     @defer.inlineCallbacks
     def persist_event(self, event):


@@ -28,8 +28,10 @@ logger = logging.getLogger(__name__)
 class SQLBaseStore(object):
     def __init__(self, hs):
+        self.hs = hs
         self._db_pool = hs.get_db_pool()
         self.event_factory = hs.get_event_factory()
+        self._clock = hs.get_clock()

     def cursor_to_dict(self, cursor):
         """Converts a SQL cursor into an list of dicts.


@@ -168,7 +168,7 @@ class PduStore(SQLBaseStore):
         return self._get_pdu_tuples(txn, txn.fetchall())

-    def get_pagination(self, context, pdu_list, limit):
+    def get_backfill(self, context, pdu_list, limit):
         """Get a list of Pdus for a given topic that occured before (and
         including) the pdus in pdu_list. Return a list of max size `limit`.
@@ -182,12 +182,12 @@ class PduStore(SQLBaseStore):
             list: A list of PduTuples
         """
         return self._db_pool.runInteraction(
-            self._get_paginate, context, pdu_list, limit
+            self._get_backfill, context, pdu_list, limit
         )

-    def _get_paginate(self, txn, context, pdu_list, limit):
+    def _get_backfill(self, txn, context, pdu_list, limit):
         logger.debug(
-            "paginate: %s, %s, %s",
+            "backfill: %s, %s, %s",
             context, repr(pdu_list), limit
         )
@@ -213,7 +213,7 @@ class PduStore(SQLBaseStore):
             new_front = []
             for pdu_id, origin in front:
                 logger.debug(
-                    "_paginate_interaction: i=%s, o=%s",
+                    "_backfill_interaction: i=%s, o=%s",
                     pdu_id, origin
                 )
@@ -224,7 +224,7 @@ class PduStore(SQLBaseStore):
                 for row in txn.fetchall():
                     logger.debug(
-                        "_paginate_interaction: got i=%s, o=%s",
+                        "_backfill_interaction: got i=%s, o=%s",
                         *row
                     )
                     new_front.append(row)
@@ -262,7 +262,7 @@ class PduStore(SQLBaseStore):
     def update_min_depth_for_context(self, context, depth):
         """Update the minimum `depth` of the given context, which is the line
-        where we stop paginating backwards on.
+        on which we stop backfilling backwards.

         Args:
             context (str)
@@ -320,9 +320,9 @@ class PduStore(SQLBaseStore):
         return [(row[0], row[1], row[2]) for row in results]

     def get_oldest_pdus_in_context(self, context):
-        """Get a list of Pdus that we paginated beyond yet (and haven't seen).
-        This list is used when we want to paginate backwards and is the list we
-        send to the remote server.
+        """Get a list of Pdus that we haven't backfilled beyond yet (and haven't
+        seen). This list is used when we want to backfill backwards and is the
+        list we send to the remote server.

         Args:
             txn


@@ -35,7 +35,7 @@ class PresenceStore(SQLBaseStore):
         return self._simple_select_one(
             table="presence",
             keyvalues={"user_id": user_localpart},
-            retcols=["state", "status_msg"],
+            retcols=["state", "status_msg", "mtime"],
         )

     def set_presence_state(self, user_localpart, new_state):
@@ -43,7 +43,8 @@ class PresenceStore(SQLBaseStore):
             table="presence",
             keyvalues={"user_id": user_localpart},
             updatevalues={"state": new_state["state"],
-                          "status_msg": new_state["status_msg"]},
+                          "status_msg": new_state["status_msg"],
+                          "mtime": self._clock.time_msec()},
             retcols=["state"],
         )


@@ -16,6 +16,7 @@ CREATE TABLE IF NOT EXISTS presence(
     user_id INTEGER NOT NULL,
     state INTEGER,
     status_msg TEXT,
+    mtime INTEGER, -- miliseconds since last state change
     FOREIGN KEY(user_id) REFERENCES users(id)
 );
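The new `mtime` column stores an absolute timestamp, while the wire format exercised by the presence tests below carries a relative `mtime_age`. A sketch of that conversion; the helper name is hypothetical, and the expectations in the tests imply `age = now - mtime`:

```python
def state_for_wire(state, now_msec):
    """Replace the stored absolute "mtime" with a relative "mtime_age",
    so that receivers don't need clocks synchronised with the sender.
    (Hypothetical helper, mirroring the presence test expectations.)"""
    wire = dict(state)  # don't mutate the caller's stored state
    wire["mtime_age"] = now_msec - wire.pop("mtime")
    return wire


# With MockClock, set_state stamps mtime=1000000; after advance_time(2)
# the clock reads 1002000 ms, so the pushed age is 2000 ms.
state = {"state": "online", "mtime": 1000000}
print(state_for_wire(state, 1002000))  # {'state': 'online', 'mtime_age': 2000}
```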


@@ -20,7 +20,7 @@ from twisted.trial import unittest
 from mock import Mock
 import logging

-from ..utils import MockHttpServer
+from ..utils import MockHttpServer, MockClock

 from synapse.server import HomeServer
 from synapse.federation import initialize_http_replication
@@ -48,16 +48,6 @@ def make_pdu(prev_pdus=[], **kwargs):
     return PduTuple(PduEntry(**pdu_fields), prev_pdus)

-class MockClock(object):
-    now = 1000
-
-    def time(self):
-        return self.now
-
-    def time_msec(self):
-        return self.time() * 1000

 class FederationTestCase(unittest.TestCase):
     def setUp(self):
         self.mock_http_server = MockHttpServer()


@@ -0,0 +1,112 @@
+# -*- coding: utf-8 -*-
+# Copyright 2014 matrix.org
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from twisted.trial import unittest
+from twisted.internet import defer
+
+from mock import Mock
+import logging
+
+from synapse.server import HomeServer
+from synapse.handlers.directory import DirectoryHandler
+from synapse.storage.directory import RoomAliasMapping
+
+logging.getLogger().addHandler(logging.NullHandler())
+
+
+class DirectoryHandlers(object):
+    def __init__(self, hs):
+        self.directory_handler = DirectoryHandler(hs)
+
+
+class DirectoryTestCase(unittest.TestCase):
+    """ Tests the directory service. """
+
+    def setUp(self):
+        self.mock_federation = Mock(spec=[
+            "make_query",
+        ])
+
+        self.query_handlers = {}
+
+        def register_query_handler(query_type, handler):
+            self.query_handlers[query_type] = handler
+        self.mock_federation.register_query_handler = register_query_handler
+
+        hs = HomeServer("test",
+                datastore=Mock(spec=[
+                    "get_association_from_room_alias",
+                ]),
+                http_client=None,
+                http_server=Mock(),
+                replication_layer=self.mock_federation,
+            )
+        hs.handlers = DirectoryHandlers(hs)
+
+        self.handler = hs.get_handlers().directory_handler
+
+        self.datastore = hs.get_datastore()
+
+        self.my_room = hs.parse_roomalias("#my-room:test")
+        self.remote_room = hs.parse_roomalias("#another:remote")
+
+    @defer.inlineCallbacks
+    def test_get_local_association(self):
+        mocked_get = self.datastore.get_association_from_room_alias
+        mocked_get.return_value = defer.succeed(
+            RoomAliasMapping("!8765qwer:test", "#my-room:test", ["test"])
+        )
+
+        result = yield self.handler.get_association(self.my_room)
+
+        self.assertEquals({
+            "room_id": "!8765qwer:test",
+            "servers": ["test"],
+        }, result)
+
+    @defer.inlineCallbacks
+    def test_get_remote_association(self):
+        self.mock_federation.make_query.return_value = defer.succeed(
+            {"room_id": "!8765qwer:test", "servers": ["test", "remote"]}
+        )
+
+        result = yield self.handler.get_association(self.remote_room)
+
+        self.assertEquals({
+            "room_id": "!8765qwer:test",
+            "servers": ["test", "remote"],
+        }, result)
+        self.mock_federation.make_query.assert_called_with(
+            destination="remote",
+            query_type="directory",
+            args={"room_alias": "#another:remote"}
+        )
+
+    @defer.inlineCallbacks
+    def test_incoming_fed_query(self):
+        mocked_get = self.datastore.get_association_from_room_alias
+        mocked_get.return_value = defer.succeed(
+            RoomAliasMapping("!8765asdf:test", "#your-room:test", ["test"])
+        )
+
+        response = yield self.query_handlers["directory"](
+            {"room_alias": "#your-room:test"}
+        )
+
+        self.assertEquals({
+            "room_id": "!8765asdf:test",
+            "servers": ["test"],
+        }, response)


@@ -20,6 +20,8 @@ from twisted.internet import defer
 from mock import Mock, call, ANY
 import logging

+from ..utils import MockClock
+
 from synapse.server import HomeServer
 from synapse.api.constants import PresenceState
 from synapse.api.errors import SynapseError
@@ -55,6 +57,7 @@ class PresenceStateTestCase(unittest.TestCase):
     def setUp(self):
         hs = HomeServer("test",
+                clock=MockClock(),
                 db_pool=None,
                 datastore=Mock(spec=[
                     "get_presence_state",
@@ -154,7 +157,11 @@ class PresenceStateTestCase(unittest.TestCase):
         mocked_set.assert_called_with("apple",
                 {"state": UNAVAILABLE, "status_msg": "Away"})
         self.mock_start.assert_called_with(self.u_apple,
-                state={"state": UNAVAILABLE, "status_msg": "Away"})
+                state={
+                    "state": UNAVAILABLE,
+                    "status_msg": "Away",
+                    "mtime": 1000000,  # MockClock
+                })

         yield self.handler.set_state(
                 target_user=self.u_apple, auth_user=self.u_apple,
@@ -386,7 +393,10 @@ class PresencePushTestCase(unittest.TestCase):
         self.replication.send_edu = Mock()
         self.replication.send_edu.return_value = defer.succeed((200, "OK"))

+        self.clock = MockClock()
+
         hs = HomeServer("test",
+                clock=self.clock,
                 db_pool=None,
                 datastore=Mock(spec=[
                     "set_presence_state",
@@ -519,13 +529,18 @@ class PresencePushTestCase(unittest.TestCase):
         yield self.handler.set_state(self.u_banana, self.u_banana,
                 {"state": ONLINE})

+        self.clock.advance_time(2)
+
         presence = yield self.handler.get_presence_list(
                 observer_user=self.u_apple, accepted=True)

         self.assertEquals([
-                {"observed_user": self.u_banana, "state": ONLINE},
-                {"observed_user": self.u_clementine, "state": OFFLINE}],
-            presence)
+                {"observed_user": self.u_banana,
+                 "state": ONLINE,
+                 "mtime_age": 2000},
+                {"observed_user": self.u_clementine,
+                 "state": OFFLINE},
+        ], presence)

         self.mock_update_client.assert_has_calls([
                 call(observer_user=self.u_banana,
@@ -555,7 +570,8 @@ class PresencePushTestCase(unittest.TestCase):
                     content={
                         "push": [
                             {"user_id": "@apple:test",
-                             "state": "online"},
+                             "state": "online",
+                             "mtime_age": 0},
                         ],
                     }),
                 call(
@@ -564,7 +580,8 @@ class PresencePushTestCase(unittest.TestCase):
                     content={
                         "push": [
                             {"user_id": "@apple:test",
-                             "state": "online"},
+                             "state": "online",
+                             "mtime_age": 0},
                         ],
                     })
         ], any_order=True)
@@ -582,7 +599,8 @@ class PresencePushTestCase(unittest.TestCase):
                 "remote", "m.presence", {
                     "push": [
                         {"user_id": "@potato:remote",
-                         "state": "online"},
+                         "state": "online",
+                         "mtime_age": 1000},
                     ],
                 }
         )
@@ -596,9 +614,11 @@ class PresencePushTestCase(unittest.TestCase):
                 statuscache=ANY),
         ], any_order=True)

+        self.clock.advance_time(2)
+
         state = yield self.handler.get_state(self.u_potato, self.u_apple)

-        self.assertEquals({"state": ONLINE}, state)
+        self.assertEquals({"state": ONLINE, "mtime_age": 3000}, state)

     @defer.inlineCallbacks
     def test_join_room_local(self):


@@ -22,6 +22,8 @@ from twisted.internet import defer
 from mock import Mock, call, ANY
 import logging

+from ..utils import MockClock
+
 from synapse.server import HomeServer
 from synapse.api.constants import PresenceState
 from synapse.handlers.presence import PresenceHandler
@@ -60,9 +62,11 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
     def setUp(self):
         hs = HomeServer("test",
+                clock=MockClock(),
                 db_pool=None,
                 datastore=Mock(spec=[
                     "set_presence_state",
+                    "is_presence_visible",
                     "set_profile_displayname",
                 ]),
@@ -83,6 +87,10 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
             return defer.succeed("Frank")
         self.datastore.get_profile_displayname = get_profile_displayname

+        def is_presence_visible(*args, **kwargs):
+            return defer.succeed(False)
+        self.datastore.is_presence_visible = is_presence_visible
+
         def get_profile_avatar_url(user_localpart):
             return defer.succeed("http://foo")
         self.datastore.get_profile_avatar_url = get_profile_avatar_url
@@ -96,14 +104,9 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
         self.handlers = hs.get_handlers()

-        self.mock_start = Mock()
-        self.mock_stop = Mock()
-
         self.mock_update_client = Mock()
         self.mock_update_client.return_value = defer.succeed(None)

-        self.handlers.presence_handler.start_polling_presence = self.mock_start
-        self.handlers.presence_handler.stop_polling_presence = self.mock_stop
         self.handlers.presence_handler.push_update_to_clients = (
                 self.mock_update_client)
@@ -132,10 +135,6 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
         mocked_set.assert_called_with("apple",
                 {"state": UNAVAILABLE, "status_msg": "Away"})
-        self.mock_start.assert_called_with(self.u_apple,
-                state={"state": UNAVAILABLE, "status_msg": "Away",
-                       "displayname": "Frank",
-                       "avatar_url": "http://foo"})

     @defer.inlineCallbacks
     def test_push_local(self):
@@ -160,9 +159,13 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
                 observer_user=self.u_apple, accepted=True)

         self.assertEquals([
-                {"observed_user": self.u_banana, "state": ONLINE,
-                 "displayname": "Frank", "avatar_url": "http://foo"},
-                {"observed_user": self.u_clementine, "state": OFFLINE}],
+                {"observed_user": self.u_banana,
+                 "state": ONLINE,
+                 "mtime_age": 0,
+                 "displayname": "Frank",
+                 "avatar_url": "http://foo"},
+                {"observed_user": self.u_clementine,
+                 "state": OFFLINE}],
             presence)

         self.mock_update_client.assert_has_calls([
@@ -175,9 +178,12 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
         ], any_order=True)

         statuscache = self.mock_update_client.call_args[1]["statuscache"]
-        self.assertEquals({"state": ONLINE,
+        self.assertEquals({
+            "state": ONLINE,
+            "mtime": 1000000,  # MockClock
             "displayname": "Frank",
-            "avatar_url": "http://foo"}, statuscache.state)
+            "avatar_url": "http://foo",
+        }, statuscache.state)

         self.mock_update_client.reset_mock()
@@ -197,9 +203,12 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
         ], any_order=True)

         statuscache = self.mock_update_client.call_args[1]["statuscache"]
-        self.assertEquals({"state": ONLINE,
+        self.assertEquals({
+            "state": ONLINE,
+            "mtime": 1000000,  # MockClock
             "displayname": "I am an Apple",
-            "avatar_url": "http://foo"}, statuscache.state)
+            "avatar_url": "http://foo",
+        }, statuscache.state)

     @defer.inlineCallbacks
     def test_push_remote(self):
@@ -224,6 +233,7 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
                     "push": [
                         {"user_id": "@apple:test",
                          "state": "online",
+                         "mtime_age": 0,
                          "displayname": "Frank",
                          "avatar_url": "http://foo"},
                     ],


@@ -234,7 +234,11 @@ class PresenceEventStreamTestCase(unittest.TestCase):
         # I'll already get my own presence state change
         self.assertEquals({"start": "0", "end": "1", "chunk": [
             {"type": "m.presence",
-             "content": {"user_id": "@apple:test", "state": ONLINE}},
+             "content": {
+                 "user_id": "@apple:test",
+                 "state": ONLINE,
+                 "mtime_age": 0,
+             }},
         ]}, response)

         self.mock_datastore.set_presence_state.return_value = defer.succeed(
@@ -251,5 +255,9 @@ class PresenceEventStreamTestCase(unittest.TestCase):
         self.assertEquals(200, code)
         self.assertEquals({"start": "1", "end": "2", "chunk": [
             {"type": "m.presence",
-             "content": {"user_id": "@banana:test", "state": ONLINE}},
+             "content": {
+                 "user_id": "@banana:test",
+                 "state": ONLINE,
+                 "mtime_age": 0,
+             }},
         ]}, response)


@@ -62,3 +62,9 @@ class RoomAliasTestCase(unittest.TestCase):
         room = RoomAlias("channel", "my.domain", True)

         self.assertEquals(room.to_string(), "#channel:my.domain")
+
+    def test_via_homeserver(self):
+        room = mock_homeserver.parse_roomalias("#elsewhere:my.domain")
+
+        self.assertEquals("elsewhere", room.localpart)
+        self.assertEquals("my.domain", room.domain)


@@ -95,6 +95,20 @@ class MockHttpServer(HttpServer):
         self.callbacks.append((method, path_pattern, callback))

+class MockClock(object):
+    now = 1000
+
+    def time(self):
+        return self.now
+
+    def time_msec(self):
+        return self.time() * 1000
+
+    # For unit testing
+    def advance_time(self, secs):
+        self.now += secs
+
 class MemoryDataStore(object):

     class RoomMember(namedtuple(
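The `MockClock` moved into `tests/utils.py` gains `advance_time` so unit tests can wind time forward deterministically; combined with `time_msec` this is what makes the `mtime_age` expectations in the presence tests reproducible. A short usage sketch (the class body is copied from the diff):

```python
class MockClock(object):
    now = 1000

    def time(self):
        return self.now

    def time_msec(self):
        return self.time() * 1000

    # For unit testing
    def advance_time(self, secs):
        self.now += secs


clock = MockClock()
start = clock.time_msec()          # 1000000 ms at construction
clock.advance_time(2)              # pretend 2 seconds pass
print(clock.time_msec() - start)   # 2000
```

This is why a test that calls `advance_time(2)` after a state change expects `"mtime_age": 2000`.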


@@ -11,21 +11,33 @@ h1 {
 /*** Overall page layout ***/

 .page {
-    max-width: 1280px;
+    position: absolute;
+    top: 80px;
+    bottom: 100px;
+    left: 0px;
+    right: 0px;
     margin: 20px;
+}
+
+.wrapper {
     margin: auto;
-    margin-bottom: 80px ! important;
-    padding-left: 20px;
-    padding-right: 20px;
+    max-width: 1280px;
+    height: 100%;
 }

 .roomName {
-    max-width: 1280px;
-    width: 100%;
     text-align: right;
+    top: -40px;
+    position: absolute;
     font-size: 16pt;
     margin-bottom: 10px;
 }

 .controlPanel {
-    position: fixed;
+    position: absolute;
     bottom: 0px;
     width: 100%;
     background-color: #f8f8f8;
@@ -70,8 +82,9 @@ h1 {
 .userAvatar {
     width: 80px;
-    height: 80px;
+    height: 100px;
     position: relative;
+    background-color: #000;
 }

 .userAvatar .userAvatarImage {
@@ -81,7 +94,7 @@ h1 {
 .userAvatar .userAvatarGradient {
     position: absolute;
-    bottom: 0px;
+    bottom: 20px;
 }

 .userAvatar .userName {
@@ -91,7 +104,6 @@ h1 {
     bottom: 0px;
     font-size: 8pt;
     word-wrap: break-word;
-    word-break: break-all;
 }

 .userPresence {
@@ -110,27 +122,18 @@ h1 {
     background-color: #FFCC00;
 }

-/*** Room page ***/
-
-/* Limit the height of the page content to 100% of the viewport height minus the
-   height of the header and the footer.
-   The two divs containing the messages list and the users list will then scroll-
-   overflow separetely.
-*/
-.room .page {
-    height: calc(100vh - 220px);
-}

 /*** Message table ***/

 .messageTableWrapper {
-    width: auto;
     height: 100%;
     margin-right: 140px;
     overflow-y: auto;
+    width: auto;
 }

 .messageTable {
+    margin: auto;
+    max-width: 1280px;
     width: 100%;
     border-collapse: collapse;
 }
@@ -180,6 +183,8 @@ h1 {
     height: 32px;
     display: inline-table;
     max-width: 90%;
+    word-wrap: break-word;
+    word-break: break-all;
 }

 .emote {
@@ -217,18 +222,28 @@ h1 {
 /******************************/

 .header {
-    margin-top: 12px ! important;
     padding-left: 20px;
     padding-right: 20px;
     max-width: 1280px;
     margin: auto;
+    height: 60px;
 }

 .header-buttons {
     float: right;
 }

+.config {
+    position: absolute;
+    z-index: 100;
+    top: 100px;
+    left: 50%;
+    width: 400px;
+    margin-left: -200px;
+    text-align: center;
+    padding: 20px;
+    background-color: #aaa;
+}
+
 .text_entry_section {
     position: fixed;
     bottom: 0;


@@ -70,4 +70,9 @@ matrixWebClient
             $timeout(function() { element[0].focus() }, 0);
         }
     };
+}])
+.filter('to_trusted', ['$sce', function($sce){
+    return function(text) {
+        return $sce.trustAsHtml(text);
+    };
 }]);


@@ -1,5 +1,6 @@
 <div ng-controller="LoginController" class="login">
 <div class="page">
+<div class="wrapper">

     {{ feedback }}
@@ -47,5 +48,6 @@
         <br/>

+</div>
 </div>
 </div>


@@ -42,6 +42,8 @@ angular.module('RoomController', [])
                 console.log("Got response from "+$scope.state.events_from+" to "+response.data.end);
                 $scope.state.events_from = response.data.end;

+                $scope.feedback = "";
+
                 for (var i = 0; i < response.data.chunk.length; i++) {
                     var chunk = response.data.chunk[i];
                     if (chunk.room_id == $scope.room_id && chunk.type == "m.room.message") {
@@ -68,12 +70,17 @@ angular.module('RoomController', [])
                     $timeout(shortPoll, 0);
                 }
             }, function(response) {
-                $scope.feedback = "Can't stream: " + JSON.stringify(response);
+                $scope.feedback = "Can't stream: " + response.data;
+
+                if (response.status == 403) {
+                    $scope.stopPoll = true;
+                }
+
                 if ($scope.stopPoll) {
                     console.log("Stopping polling.");
                 }
                 else {
-                    $timeout(shortPoll, 2000);
+                    $timeout(shortPoll, 5000);
                 }
             });
         };


@@ -1,6 +1,7 @@
 <div ng-controller="RoomController" data-ng-init="onInit()" class="room">
 <div class="page">
+<div class="wrapper">

     <div class="roomName">
         {{ room_alias || room_id }}
@@ -12,7 +13,8 @@
                 <td class="userAvatar">
                     <img class="userAvatarImage" ng-src="{{info.avatar_url || 'img/default-profile.jpg'}}" width="80" height="80"/>
                     <img class="userAvatarGradient" src="img/gradient.png" width="80" height="24"/>
-                    <div class="userName">{{ info.displayname || name }}</div>
+                    <!-- FIXME: does allowing <wbr/> to be unescaped introduce HTML injections from user IDs and display names? -->
+                    <div class="userName" ng-bind-html="info.displayname || (name.substr(0, name.indexOf(':')) + '<wbr/>' + name.substr(name.indexOf(':'))) | to_trusted"></div>
                 </td>
                 <td class="userPresence" ng-class="info.presenceState === 'online' ? 'online' : (info.presenceState === 'unavailable' ? 'unavailable' : '')" />
             </table>
@@ -31,7 +33,7 @@
                 </td>
                 <td ng-class="!msg.content.membership_target ? (msg.content.msgtype === 'm.emote' ? 'emote text' : 'text') : ''">
                     <div class="bubble">
-                        {{ msg.content.msgtype === "m.emote" ? ("* " + (members[msg.user_id].displayname || msg.user_id) + " ") : "" }}
+                        {{ msg.content.msgtype === "m.emote" ? ("* " + (members[msg.user_id].displayname || msg.user_id) + " " + msg.content.body) : "" }}
                        {{ msg.content.msgtype === "m.text" ? msg.content.body : "" }}
                         <img class="image" ng-hide='msg.content.msgtype !== "m.image"' src="{{ msg.content.url }}" alt="{{ msg.content.body }}"/>
                     </div>
@@ -44,6 +46,7 @@
         </table>
     </div>

+</div>
 </div>

 <div class="controlPanel">
@@ -53,7 +56,7 @@
             <td width="1">
                 {{ state.user_id }}
             </td>
-            <td width="*">
+            <td width="*" style="min-width: 100px">
                 <input class="mainInput" ng-model="textInput" ng-enter="send()" ng-focus="true"/>
             </td>
             <td width="1">
@@ -86,6 +89,4 @@
     </div>
 </div>
 </div>


@@ -1,6 +1,7 @@
 <div ng-controller="RoomsController" class="rooms">
 <div class="page">
+<div class="wrapper">

     <div>
         <form>
@@ -77,4 +78,5 @@
         {{ feedback }}
     </div>
+</div>
 </div>