Mirror of https://mau.dev/maunium/synapse.git, synced 2024-12-14 04:23:45 +01:00
Add comments about how event push actions are stored. (#13445)
parent 860fdd9098
commit b6a6bb4027
2 changed files with 62 additions and 0 deletions
changelog.d/13445.misc (new file)
@@ -0,0 +1 @@
Add some comments about how event push actions are stored.

@@ -12,6 +12,67 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Responsible for storing and fetching push actions / notifications.
|
||||
|
||||
There are two main uses for push actions:
|
||||
1. Sending out push to a user's device; and
|
||||
2. Tracking per-room per-user notification counts (used in sync requests).
|
||||
|
||||
For the former we simply use the `event_push_actions` table, which contains all
|
||||
the calculated actions for a given user (which were calculated by the
|
||||
`BulkPushRuleEvaluator`).
|
||||
|
||||
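As a rough illustration only (the function, the sqlite3-style DB-API cursor
`txn`, and the column names here are assumptions for the sketch, not the real
schema or code), sending push amounts to reading those pre-calculated rows back
out for a user:

    def get_push_actions_for_user(txn, user_id, min_stream_ordering):
        # Illustrative sketch: fetch the calculated actions for a user that
        # are newer than the point the pusher has already processed.
        txn.execute(
            "SELECT event_id, room_id, stream_ordering, actions"
            " FROM event_push_actions"
            " WHERE user_id = ? AND stream_ordering > ?"
            " ORDER BY stream_ordering ASC",
            (user_id, min_stream_ordering),
        )
        return txn.fetchall()
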
For the latter we could simply count the number of rows in the
`event_push_actions` table for a given room/user, but in practice this is
*very* heavyweight when there are a large number of notifications (e.g. because
the user has never read the room). Plus, keeping all push actions indefinitely
uses a lot of disk space.

To fix these issues, we add a new table `event_push_summary` that tracks
per-user per-room counts of all notifications that happened before a stream
ordering S. Thus, to get the notification count for a user / room we can simply
query a single row in `event_push_summary` and count the number of rows in
`event_push_actions` with a stream ordering larger than S (and as long as S is
"recent", the number of rows needing to be scanned will be small).

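As a sketch only (again assuming a sqlite3-style cursor `txn`, and illustrative
column names such as `notif_count` and `notif` which may not match the real
schema exactly), the combined lookup looks something like:

    def get_notif_count(txn, user_id, room_id):
        # The pre-computed count of all notifications up to stream ordering S...
        txn.execute(
            "SELECT notif_count, stream_ordering FROM event_push_summary"
            " WHERE user_id = ? AND room_id = ?",
            (user_id, room_id),
        )
        row = txn.fetchone()
        notif_count, summary_stream_ordering = row if row else (0, 0)

        # ...plus a scan over the (hopefully small) set of rows newer than S.
        txn.execute(
            "SELECT COUNT(*) FROM event_push_actions"
            " WHERE user_id = ? AND room_id = ? AND notif = 1"
            " AND stream_ordering > ?",
            (user_id, room_id, summary_stream_ordering),
        )
        return notif_count + txn.fetchone()[0]
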
The `event_push_summary` table is updated via a background job that periodically
chooses a new stream ordering S' (usually the latest stream ordering), counts
all notifications in `event_push_actions` between the existing S and S', and
adds them to the existing counts in `event_push_summary`.

This allows us to delete old rows from `event_push_actions` once those rows have
been counted and added to `event_push_summary` (we call this process
"rotation").

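A minimal sketch of one rotation pass (illustrative names again, and assuming a
unique index on `(user_id, room_id)` in `event_push_summary` so the upsert
works):

    def rotate_notifs(txn, old_stream_ordering, new_stream_ordering):
        # Count the notifications between the old S and the new S' per
        # user/room, and fold them into the summary table.
        txn.execute(
            "SELECT user_id, room_id, COUNT(*) FROM event_push_actions"
            " WHERE stream_ordering > ? AND stream_ordering <= ? AND notif = 1"
            " GROUP BY user_id, room_id",
            (old_stream_ordering, new_stream_ordering),
        )
        for user_id, room_id, count in txn.fetchall():
            txn.execute(
                "INSERT INTO event_push_summary"
                " (user_id, room_id, notif_count, stream_ordering)"
                " VALUES (?, ?, ?, ?)"
                " ON CONFLICT (user_id, room_id) DO UPDATE SET"
                " notif_count = event_push_summary.notif_count + excluded.notif_count,"
                " stream_ordering = excluded.stream_ordering",
                (user_id, room_id, count, new_stream_ordering),
            )
        # The counted rows can now be deleted ("rotation").
        txn.execute(
            "DELETE FROM event_push_actions WHERE stream_ordering <= ?",
            (new_stream_ordering,),
        )
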
We also need to handle users sending read receipts into the room. Again this is
done as a background process: for each receipt we clear the row in
`event_push_summary`, count the number of notifications in `event_push_actions`
that happened after the receipt but before S, and insert that count into
`event_push_summary`. (If the receipt happened *after* S then we simply clear
the row in `event_push_summary`.)

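A sketch of handling a single receipt (illustrative names; the real code and
schema may differ):

    def handle_receipt(txn, user_id, room_id, receipt_stream_ordering, current_s):
        # Recount the notifications between the receipt and S. If the receipt
        # is after S this count is naturally zero, which has the same effect
        # as clearing the row.
        txn.execute(
            "SELECT COUNT(*) FROM event_push_actions"
            " WHERE user_id = ? AND room_id = ? AND notif = 1"
            " AND stream_ordering > ? AND stream_ordering <= ?",
            (user_id, room_id, receipt_stream_ordering, current_s),
        )
        count = txn.fetchone()[0]
        txn.execute(
            "UPDATE event_push_summary SET notif_count = ?"
            " WHERE user_id = ? AND room_id = ?",
            (count, user_id, room_id),
        )
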
Note that it's possible, if the read receipt is for an old event, that the
relevant `event_push_actions` rows will already have been rotated away, in which
case we get the wrong count (it'll be too low). We accept this as a rare edge
case that is unlikely to impact the user much (since the vast majority of read
receipts will be for the latest event).

The last complication is handling the race where we request the notification
counts after a user sends a read receipt into the room, but *before* the
background update has handled the receipt (without any special handling the
counts would be outdated). We fix this by storing in `event_push_summary` the
read receipt we used when updating it, and every time we query the table we
check whether that matches the most recent read receipt in the room. If it does,
we continue as above; if not, we simply query the `event_push_actions` table
directly.

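A sketch of that check at query time (here `get_latest_receipt_stream_ordering`
and `count_actions_after` are hypothetical helpers, and
`last_receipt_stream_ordering` is an illustrative column name):

    def get_notif_count_checked(txn, user_id, room_id):
        # Hypothetical helper: the stream ordering of this user's most recent
        # read receipt in the room.
        latest_receipt = get_latest_receipt_stream_ordering(txn, user_id, room_id)
        txn.execute(
            "SELECT notif_count, stream_ordering, last_receipt_stream_ordering"
            " FROM event_push_summary WHERE user_id = ? AND room_id = ?",
            (user_id, room_id),
        )
        row = txn.fetchone()
        if row is not None and row[2] == latest_receipt:
            # The summary was built against the latest receipt: use it plus
            # the rows newer than S, as described above.
            return row[0] + count_actions_after(txn, user_id, room_id, row[1])
        # Stale (or missing) summary: fall back to scanning event_push_actions
        # after the receipt directly (hypothetical helper again).
        return count_actions_after(txn, user_id, room_id, latest_receipt)
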
Since read receipts are almost always for recent events, scanning the
`event_push_actions` table in this case is unlikely to be a problem. Even if it
is a problem, it is temporary until the background job handles the new read
receipt.
"""

import logging
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union, cast