Preparations to actually remove/replace `BaseDendrite`.
Quite a few changes:
- SyncAPI accepts a `fulltext.Indexer` interface (fulltext is removed
from `BaseDendrite`)
- Caches are removed from `BaseDendrite`
- Introduces a `Router` struct (likely to change)
- also fixes #2903
- Introduces a `sqlutil.ConnectionManager`, which should remove
`base.DatabaseConnection` later on
- probably more
This also fixes an issue regarding updates to relations for invalid
events: if we failed to unmarshal an event to
`gomatrixserverlib.RelationContent`, we could end up retrying that event
over and over again. This was discovered by `@sleroq:virto.community`.
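A minimal sketch of that guard, assuming a NATS-style consumer in which
returning `true` acknowledges (and thereby drops) the message; the names
and surrounding shape are illustrative, not Dendrite's exact consumer
code:

```go
// Hypothetical shape of the guard: a malformed relation must not cause
// the consumer to retry the event indefinitely, so we log and ack the
// message instead of returning it to the stream for redelivery.
var relation gomatrixserverlib.RelationContent
if err := json.Unmarshal(event.Content(), &relation); err != nil {
	logrus.WithError(err).Warn("unable to unmarshal relation content; skipping event")
	return true // ack the message rather than have it redelivered forever
}
```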
This PR changes the following:
- `StoreEvent` now only stores an event (and possibly prev event),
instead of also doing redactions
- Adds a `MaybeRedactEvent` (pulled out from `StoreEvent`), which should
be called after storing events, as sketched below
- a few other things
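In caller terms the split looks roughly like this (the method names are
from this PR; the signatures are simplified placeholders, not the real
ones):

```go
// storer captures the two calls this PR separates; the real signatures
// take more arguments and return more values.
type storer interface {
	StoreEvent(ctx context.Context, event any) error
	MaybeRedactEvent(ctx context.Context, event any) error
}

// persistEvent shows the intended calling pattern: StoreEvent no longer
// applies redactions itself, so callers follow it with MaybeRedactEvent.
func persistEvent(ctx context.Context, db storer, event any) error {
	if err := db.StoreEvent(ctx, event); err != nil {
		return err
	}
	return db.MaybeRedactEvent(ctx, event)
}
```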
This removes most of the code used for polylith/API mode.
This removes the `/api` internal endpoints entirely.
Binary size decreases by roughly 5%:
```
51437560 Feb 13 10:15 dendrite-monolith-server # old
48759008 Feb 13 10:15 dendrite-monolith-server # new
```
Should fix the following issues, or at least make them a lot less
severe, when using Postgres:
The main issue behind #2911: the client gives up after a certain time,
causing a cascade of context errors, because the response couldn't be
built up fast enough. This mostly happens on accounts with many rooms,
due to the inefficient way we were getting recent events and current
state.
For #2777: the queries for getting the membership events for history
visibility were being executed for each room (I think 185?), resulting
in a whopping 2k queries for membership events (getting the
state snapshot -> block NIDs -> the actual wanted membership event).
Both should now be better by:
- Using a LATERAL join to get all recent events for all joined rooms in
one go, as sketched below (TODO: maybe do the same for room summary and
current state etc.)
- If we're lazy loading on initial syncs, we no longer fetch the whole
current state just to drop the majority of it because we're lazy
loading members; instead we add a filter to exclude membership events on
the first call to `CurrentState`.
- Using an optimized query to get the membership events needed to
calculate history visibility
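A simplified sketch of the LATERAL shape, written as a Dendrite-style
query constant (the table and column names follow the sync API's
existing tables, but the real query is larger):

```go
// Fetch the most recent $2 timeline events for every room in $1 in a
// single round trip, instead of issuing one query per joined room.
const selectRecentEventsForRoomsSQL = `
SELECT r.room_id, e.event_id, e.headered_event_json
  FROM unnest($1::text[]) AS r(room_id)
 CROSS JOIN LATERAL (
       SELECT event_id, headered_event_json
         FROM syncapi_output_room_events
        WHERE room_id = r.room_id
        ORDER BY id DESC
        LIMIT $2
 ) AS e
`
```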
---------
Co-authored-by: kegsay <kegan@matrix.org>
This adds Sytest and Complement coverage reporting to the nightly
scheduled CI runs.
Fixes a few API-mode related issues as well, since we seemingly never
really ran API mode with Complement.
Also fixes a bug related to device list changes: when we passed in an
empty `newlyLeftRooms` slice, we got a list of all currently joined
rooms with the corresponding members. When we then got the
`newlyJoinedRooms`, we wouldn't update the `changed` slice, because we
had already got the user from the `newlyLeftRooms` query. This is fixed
by simply ignoring an empty `newlyLeftRooms`.
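The fix is essentially an early-out; a sketch, assuming a hypothetical
storage method name:

```go
// membershipQuerier is a stand-in for the real sync API storage
// interface; MembersOfRooms is a hypothetical method name.
type membershipQuerier interface {
	MembersOfRooms(ctx context.Context, roomIDs []string) (map[string][]string, error)
}

// membersOfNewlyLeftRooms sketches the fix: an empty newlyLeftRooms
// slice used to behave like "all rooms", returning every currently
// joined room and its members and masking the newlyJoinedRooms results.
// Now we simply skip the query.
func membersOfNewlyLeftRooms(ctx context.Context, db membershipQuerier, newlyLeftRooms []string) (map[string][]string, error) {
	if len(newlyLeftRooms) == 0 {
		return nil, nil
	}
	return db.MembersOfRooms(ctx, newlyLeftRooms)
}
```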
This adds a new admin endpoint `/_dendrite/admin/purgeRoom/{roomID}`. It
completely erases all database entries for a given room ID.
The roomserver will start by clearing all data for that room and then
will generate an output event to notify downstream components (i.e. the
sync API and federation API) to do the same.
It does not currently clear media, and it is not yet implemented for
SQLite since it relies on SQL array operations right now.
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
Co-authored-by: Till Faelligen <2353100+S7evinK@users.noreply.github.com>
Since #2849 there is no limit for the current state we fetch to
calculate history visibility. In large rooms this can cause us to fetch
thousands of membership events we don't really care about.
This now only gets the state event types and senders in our timeline,
which should significantly reduce the amount of events we fetch from the
database.
Also removes `MaxTopologicalPosition`, as it is an unnecessary DB call,
given we use the result in `topological_position < $1` calls.
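Conceptually, the narrowing works like this (a sketch; the real code is
spread across the storage layer):

```go
// stateFilterForTimeline sketches the idea: derive the set of event
// types and senders actually present in the timeline, so the
// history-visibility state lookup only fetches state events that can
// matter for those timeline events, instead of the room's full state.
func stateFilterForTimeline(timeline []*gomatrixserverlib.HeaderedEvent) (types, senders []string) {
	seenType := map[string]struct{}{}
	seenSender := map[string]struct{}{}
	for _, ev := range timeline {
		if _, ok := seenType[ev.Type()]; !ok {
			seenType[ev.Type()] = struct{}{}
			types = append(types, ev.Type())
		}
		if _, ok := seenSender[ev.Sender()]; !ok {
			seenSender[ev.Sender()] = struct{}{}
			senders = append(senders, ev.Sender())
		}
	}
	return types, senders
}
```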
Second part for guest access, this adds a `WithAllowGuests()` option to
`MakeAuthAPI`, allowing guests to access the specified endpoints.
Endpoints taken from the
[spec](https://spec.matrix.org/v1.4/client-server-api/#client-behaviour-14)
and by checking Synapse endpoints for `allow_guest=true`.
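The option itself is plain functional-option plumbing; a sketch (the
real types live in Dendrite's `httputil` package and may differ in
detail):

```go
// AuthAPIOpts and AuthAPIOption sketch the option plumbing the PR
// describes; the field and type names here are illustrative.
type AuthAPIOpts struct {
	GuestAccessAllowed bool
}

type AuthAPIOption func(*AuthAPIOpts)

// WithAllowGuests marks an endpoint as accessible to guest accounts.
func WithAllowGuests() AuthAPIOption {
	return func(o *AuthAPIOpts) {
		o.GuestAccessAllowed = true
	}
}
```

At a call site this would look roughly like
`httputil.MakeAuthAPI("joined_members", userAPI, handler, httputil.WithAllowGuests())`.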
Proposed fix for issue:
https://github.com/matrix-org/dendrite/issues/2838
Suppose bob received invites to spaceA and spaceB.
When Bob joins spaceA, we add an OutputEvent event to retire the invite.
This sets the invite to "deleted" in the database. This makes sense.
The bug is in stream_invites.go. Triggered when bob received a new
invite for spaceB, and does a client sync.
In the block (line 76):
```go
for roomID := range retiredInvites {
	if _, ok := req.Response.Rooms.Invite[roomID]; ok {
		continue
	}
	if _, ok := req.Response.Rooms.Join[roomID]; ok {
		continue
	}
	// ...
}
```
Bob is in neither map, even though he has just accepted the invite
for spaceA. Consequently, the spaceA invite is treated as a retired
invite, and a membership leave event is generated. What Bob sees is that
after accepting the invite to spaceB, he loses access to spaceA.
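A hedged sketch of one possible guard (not necessarily the fix that was
merged; `MembershipForUser` is a hypothetical lookup): before
synthesising a leave for a retired invite, check the user's actual
current membership in that room.

```go
for roomID := range retiredInvites {
	if _, ok := req.Response.Rooms.Invite[roomID]; ok {
		continue
	}
	if _, ok := req.Response.Rooms.Join[roomID]; ok {
		continue
	}
	// Hypothetical membership lookup: if the user has already joined
	// the room, the invite was accepted rather than rejected, so don't
	// synthesise a leave event for it.
	membership, err := db.MembershipForUser(ctx, roomID, req.Device.UserID)
	if err == nil && membership == "join" {
		continue
	}
	// ... emit the retired invite as a leave ...
}
```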
### Pull Request Checklist
<!-- Please read
https://matrix-org.github.io/dendrite/development/contributing before
submitting your pull request -->
* [ ] I have added tests for PR _or_ I have justified why this PR
doesn't need tests.
* [x] Pull request includes a [sign off below using a legally
identifiable
name](https://matrix-org.github.io/dendrite/development/contributing#sign-off)
_or_ I have already signed off privately
Signed-off-by: `Tak Wai Wong <tak@hntlabs.com>`
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
This is apparently some incorrect behaviour that we built as a result of
a spec bug (matrix-org/matrix-spec#1314) where we were applying a filter
to the `"state"` section of the `/sync` response incorrectly. The client
then has no way to know that the state was limited.
This PR removes the state limiting, which probably also helps #2842.
This should stop state events disappearing down a gap where we'd try to
separate out the sections *before* applying history visibility instead
of after.
This may be a better approach than #2843 but I hope @tak-hntlabs will
shout if it isn't.
When going backwards, we were selecting potentially thousands of
events, which in turn were fed to the history visibility checks,
resulting in bad sync performance.
The problem was that we weren't getting enough recent events, as most of
them were removed by the history visibility filter. Now we get all
events between the given input range and re-slice the returned values
after applying history visibility.
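Roughly the new shape, with simplified names and signatures
(`ApplyHistoryVisibilityFilter` is the helper named elsewhere in this
changelog; the rest is illustrative):

```go
// Load the whole requested range, apply the history visibility filter,
// then slice down to the limit, instead of loading `limit` rows first
// and filtering most of them away.
events, err := db.RecentEvents(ctx, roomID, eventRange) // note: no LIMIT in the query
if err != nil {
	return nil, err
}
events, err = internal.ApplyHistoryVisibilityFilter(ctx, db, rsAPI, events, userID)
if err != nil {
	return nil, err
}
if len(events) > limit {
	events = events[len(events)-limit:] // keep only the most recent `limit` events
}
```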
Makes the tests
```
Can get rooms/{roomId}/members at a given point
Can filter rooms/{roomId}/members
```
pass, by moving `/members` and `/joined_members` to the SyncAPI.
This is going to make `Can get rooms/{roomId}/messages for a departed
room (SPEC-216)` pass, since we now only fetch events from before the
user left the room.
This makes the following changes:
- get state deltas without the user-supplied filter, so we can actually
"calculate" state transitions
- closes `stmt` when using SQLite
- Adds presence for users who newly joined a room, even if the syncing
user already knows about the presence status (should fix
https://github.com/matrix-org/complement/pull/516)
Improves the control flow of `GetStateDeltas` for clarity and possibly
also fixes a bug where duplicate state delta entries could be inserted
with different memberships instead of being correctly overridden by
`join`.
Use the stream positions of the notifier, which might have advanced
since we set them at the beginning of the loop. This possibly helps to
reduce round trips to the SyncAPI caused by us not having fetched the
latest data.
Also fixes a minor oversight in the receipts stream.
First attempt at removing empty fields from `/sync` responses. Needs
https://github.com/matrix-org/sytest/pull/1298 to keep Sytest happy.
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
This fixes a temporary workaround with the `selectEventsWithEventIDsSQL`
queries where fields need to be artificially added to the queries so the
row results match the format of the `syncapi_output_room_events` table.
I made similar functions that accept row results from the
`syncapi_current_room_state` table and convert them into StreamEvents
without the fields that are specific to output room events.
There is also a unit test in the first commit to ensure the resulting
behavior doesn't change from the modified queries and functions.
Fixes #601.
### Pull Request Checklist
<!-- Please read docs/CONTRIBUTING.md before submitting your pull
request -->
* [x] I have added tests for PR _or_ I have justified why this PR
doesn't need tests.
* [x] Pull request includes a [sign
off](https://github.com/matrix-org/dendrite/blob/main/docs/CONTRIBUTING.md#sign-off)
Signed-off-by: `Ashley Nelson <fant@shley.email>`
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
This now uses a transaction per stream, so that errors in one stream
don't propagate to another, and we therefore no longer need to do hacks
to reopen a new transaction after aborting a failed one.
This should give us transactional snapshot isolation for `/sync` etc.
requests. For now we don't use repeatable read due to some odd test
failures with invites.
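As a sketch (the per-stream interface is simplified; the transaction
uses the default isolation level, since repeatable read is avoided per
the note above):

```go
// One read-only transaction per stream, so an error in one stream only
// rolls back that stream's work. stream.Complete is a simplified
// stand-in for the real per-stream interface.
for _, stream := range streams {
	txn, err := db.BeginTx(ctx, &sql.TxOptions{ReadOnly: true})
	if err != nil {
		return err
	}
	if err := stream.Complete(ctx, txn, response); err != nil {
		_ = txn.Rollback() // only this stream's snapshot is abandoned
		continue
	}
	if err := txn.Commit(); err != nil {
		return err
	}
}
```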
…ce {}, a slice of interface` in new notifications select
The SQLite3 version was just not working; original PR here:
https://github.com/matrix-org/dendrite/pull/2688
signed off by: austin ellis <austin@hntlabs.com>
This doesn't fix the notification counts; they still only work about 1
out of every 5 times in my tests. I will stick with my other fix locally
for reliable notification delivery:
https://github.com/matrix-org/dendrite/pull/2701
Based on #2480
This actually indexes events based on their event type. They are removed
from the index if we receive a `m.room.redaction` event on the
`OutputRoomEvent` stream.
An admin endpoint is added to reindex all existing events.
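A sketch of the consumer logic (the indexer method names are
illustrative, not necessarily the `fulltext.Indexer` interface's exact
shape):

```go
// Index events keyed by their event type as they arrive on the
// OutputRoomEvent stream, and delete an entry again when an
// m.room.redaction for it arrives. Redacts() returns the target
// event ID; Index/Delete are assumed method names.
switch ev.Type() {
case "m.room.redaction":
	if err := indexer.Delete(ev.Redacts()); err != nil {
		return err
	}
default:
	if err := indexer.Index(ev); err != nil {
		return err
	}
}
```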
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
This PR changes the handling of notifications
- removes the `StreamEvent` and `ReadUpdate` stream
- listens on the `OutputRoomEvent` stream in the UserAPI to inform the
SyncAPI about unread notifications
- listens on the `OutputReceiptEvent` stream in the UserAPI to set
receipts/update notifications
- sets the `read_markers` directly from within the internal UserAPI
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
Recently I have observed that Dendrite spends a lot of time (~390s) in
the `selectRoomIDsWithAnyMembershipSQL` query:
```
dendrite_syncapi=# select total_exec_time, left(query,100) from pg_stat_statements order by total_exec_time desc limit 5 ;
total_exec_time | left
--------------------+------------------------------------------------------------------------------------------------------
747826.5800519128 | SELECT event_id, id, headered_event_json, session_id, exclude_from_sync, transaction_id, history_vis
389130.5490339942 | SELECT DISTINCT room_id, membership FROM syncapi_current_room_state WHERE type = $2 AND state_key =
376104.17514700035 | SELECT psd.datname, xact_commit, xact_rollback, blks_read, blks_hit, tup_returned, tup_fetched, tup_
363644.164092031 | SELECT event_type_nid, event_state_key_nid, event_nid FROM roomserver_events WHERE event_nid = ANY($
58570.48104699995 | SELECT event_id, headered_event_json FROM syncapi_current_room_state WHERE room_id = $1 AND ( $2::te
(5 rows)
```
`EXPLAIN ANALYZE` showed correct usage of the `syncapi_room_state_unique`
index:
```
dendrite_syncapi=#
explain analyze SELECT distinct room_id, membership FROM syncapi_current_room_state WHERE type = 'm.room.member' AND state_key = '@qjfl:dendrite.stg.globekeeper.com';
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=2749.38..2749.56 rows=24 width=52) (actual time=2.933..2.956 rows=65 loops=1)
-> Sort (cost=2749.38..2749.44 rows=24 width=52) (actual time=2.932..2.937 rows=65 loops=1)
Sort Key: room_id, membership
Sort Method: quicksort Memory: 34kB
-> Index Scan using syncapi_room_state_unique on syncapi_current_room_state (cost=0.41..2748.83 rows=24 width=52) (actual time=0.030..2.890 rows=65 loops=1)
Index Cond: ((type = 'm.room.member'::text) AND (state_key = '@qjfl:dendrite.stg.globekeeper.com'::text))
Planning Time: 0.140 ms
Execution Time: 2.990 ms
(8 rows)
```
Multi-column indexes in Postgres are only supposed to perform well when
the leftmost columns are constrained, but I gave it a try and created a
`syncapi_current_room_state_type_state_key_idx` index. I could observe a
significant performance improvement: execution time dropped from 2.9 ms
to 0.24 ms:
```
explain analyze SELECT distinct room_id, membership FROM syncapi_current_room_state WHERE type = 'm.room.member' AND state_key = '@qjfl:dendrite.stg.globekeeper.com';
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=96.46..96.64 rows=24 width=52) (actual time=0.199..0.218 rows=65 loops=1)
-> Sort (cost=96.46..96.52 rows=24 width=52) (actual time=0.199..0.202 rows=65 loops=1)
Sort Key: room_id, membership
Sort Method: quicksort Memory: 34kB
-> Bitmap Heap Scan on syncapi_current_room_state (cost=4.53..95.91 rows=24 width=52) (actual time=0.048..0.139 rows=65 loops=1)
Recheck Cond: ((type = 'm.room.member'::text) AND (state_key = '@qjfl:dendrite.stg.globekeeper.com'::text))
Heap Blocks: exact=59
-> Bitmap Index Scan on syncapi_current_room_state_type_state_key_idx (cost=0.00..4.53 rows=24 width=0) (actual time=0.037..0.037 rows=65 loops=1)
Index Cond: ((type = 'm.room.member'::text) AND (state_key = '@qjfl:dendrite.stg.globekeeper.com'::text))
Planning Time: 0.236 ms
Execution Time: 0.242 ms
(11 rows)
```
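For reference, the index itself is a plain two-column btree; as a
Dendrite-style schema constant it would look something like this (the
name matches the plan output above; the exact shipped migration may
differ):

```go
// The index exercised in the plans above.
const createCurrentRoomStateTypeStateKeyIndexSQL = `
CREATE INDEX IF NOT EXISTS syncapi_current_room_state_type_state_key_idx
    ON syncapi_current_room_state (type, state_key);
`
```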
The next improvement is skipping DISTINCT and relying on map assignment
in `SelectRoomIDsWithAnyMembership`. Execution time drops by almost
half:
```
explain analyze SELECT room_id, membership FROM syncapi_current_room_state WHERE type = 'm.room.member' AND state_key = '@qjfl:dendrite.stg.globekeeper.com';
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on syncapi_current_room_state (cost=4.53..95.91 rows=24 width=52) (actual time=0.032..0.113 rows=65 loops=1)
Recheck Cond: ((type = 'm.room.member'::text) AND (state_key = '@qjfl:dendrite.stg.globekeeper.com'::text))
Heap Blocks: exact=59
-> Bitmap Index Scan on syncapi_current_room_state_type_state_key_idx (cost=0.00..4.53 rows=24 width=0) (actual time=0.021..0.021 rows=65 loops=1)
Index Cond: ((type = 'm.room.member'::text) AND (state_key = '@qjfl:dendrite.stg.globekeeper.com'::text))
Planning Time: 0.087 ms
Execution Time: 0.136 ms
(7 rows)
```
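The map-assignment dedup is essentially free, because the scan loop was
building a map anyway; a sketch (the real function has more
scaffolding):

```go
// DISTINCT is dropped from the SQL and duplicate rows simply overwrite
// the same key in the result map.
result := map[string]string{} // room ID -> membership
for rows.Next() {
	var roomID, membership string
	if err := rows.Scan(&roomID, &membership); err != nil {
		return nil, err
	}
	result[roomID] = membership
}
return result, rows.Err()
```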
In our environment we spend only 1s on inserting into this table, so the
write penalty of creating an index should be small.
```
dendrite_syncapi=# select total_exec_time, left(query,100) from pg_stat_statements where query like '%INSERT%syncapi_current_room_state%' order by total_exec_time desc;
total_exec_time | left
--------------------+------------------------------------------------------------------------------------------------------
1139.9057619999971 | INSERT INTO syncapi_current_room_state (room_id, event_id, type, sender, contains_url, state_key, he
(1 row)
```
This PR does not require test modifications.
### Pull Request Checklist
<!-- Please read docs/CONTRIBUTING.md before submitting your pull
request -->
* [x] I have added tests for PR _or_ I have justified why this PR
doesn't need tests.
* [x] Pull request includes a [sign
off](https://github.com/matrix-org/dendrite/blob/main/docs/CONTRIBUTING.md#sign-off)
Signed-off-by: `Piotr Kozimor <p1996k@gmail.com>`
Previously `LoadMembershipAtEvent` would fail if the state before one of
the events was not known, for example because it was an outlier. This
modifies it so that it gracefully handles not knowing the state and
returns no memberships instead, so that history visibility doesn't freak
out and kill `/sync` requests dead.
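A sketch of the graceful path (`StateBeforeEvent` is a stand-in name for
the real storage call):

```go
// If we don't know the state before the event (e.g. it's an outlier),
// return an empty result instead of an error, so history visibility
// carries on and the /sync request survives.
stateEntries, err := db.StateBeforeEvent(ctx, eventID)
if err == sql.ErrNoRows {
	return map[string]*gomatrixserverlib.HeaderedEvent{}, nil // no memberships known
}
if err != nil {
	return nil, err
}
```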
Some tweaks for the send-to-device consumers/producers:
- use `json.RawMessage` without marshalling it first
- try further devices (if available) if we fail to `PublishMsg` in the
producers (see the sketch below)
- some logging changes (to better debug E2EE issues)
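A sketch of the producer loop under those changes (the NATS JetStream
calls are real; the header name and surrounding shape are illustrative):

```go
// The payload is already JSON, so pass it through as json.RawMessage
// instead of marshalling it again; and if publishing for one device
// fails, carry on with the remaining devices.
for _, deviceID := range devices {
	msg := nats.NewMsg(subject)
	msg.Header.Set("device_id", deviceID) // illustrative header name
	msg.Data = message                    // message is a json.RawMessage
	if _, err := js.PublishMsg(msg); err != nil {
		logrus.WithError(err).Warnf("failed to publish to device %s, trying next device", deviceID)
		continue
	}
}
```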
The introduced index improves select query performance. For example, the execution time of the `selectSendToDeviceMessagesSQL` query dropped from 80 ms to 15 ms. No sytest modifications are required.
### Pull Request Checklist
* [x] I have added tests for PR _or_ I have justified why this PR doesn't need tests.
* [x] Pull request includes a [sign off](https://github.com/matrix-org/dendrite/blob/main/docs/CONTRIBUTING.md#sign-off)
Signed-off-by: `Piotr Kozimor <p1996k@gmail.com>`
This should fix an issue where we returned fewer membership events than expected when doing an initial sync.
When doing an initial sync, the state limit is set to `math.MaxInt32`, while the default filter is set to 20.
- Reverts 9dc57122d9 as it was causing issues: https://github.com/matrix-org/dendrite/issues/2660
- Updates the GMSL `DefaultStateFilter` to use a limit of 20 events
- Uses the timeline events to determine the new position instead of the state events
This should hopefully deflake `Backfill works correctly with history visibility set to joined`, as we were using the default shared visibility even if the events are set to joined (or something else).
* Use existing current room state if we have it
* Don't dedupe before applying the history vis filter
* Revert "Don't dedupe before applying the history vis filter"
This reverts commit d27c4a0874.
* Revert "Use existing current room state if we have it"
This reverts commit 5819b4a7ce.
* Tweaks
* Only return non-retired invites
* Revert "Only return non-retired invites"
This reverts commit 1150aa7f38.
* Check if we're doing an initial sync in the stream
* Add possibility to set history_visibility and user AccountType
* Add new DB queries
* Add actual history_visibility changes for /messages
* Add passing tests
* Extract check function
* Cleanup
* Cleanup
* Fix build on 386
* Move ApplyHistoryVisibilityFilter to internal
* Move queries to topology table
* Add filtering to /sync and /context
Some cleanup
* Add passing tests; Remove failing tests :(
* Re-add passing tests
* Move filtering to own function to avoid duplication
* Re-add passing test
* Use newly added GMSL HistoryVisibility
* Update gomatrixserverlib
* Set the visibility when creating events
* Default to shared history visibility
* Remove unused query
* Update history visibility checks to use gmsl
Update tests
* Remove unused statement
* Update migrations to set "correct" history visibility
* Add method to fetch the membership at a given event
* Tweaks and logging
* Use actual internal rsAPI, default to shared visibility in tests
* Revert "Move queries to topology table"
This reverts commit 4f0d41be9c.
* Remove noise/unneeded code
* More cleanup
* Try to optimize database requests
* Fix imports
* PR peview fixes/changes
* Move setting history visibility to own migration, be more restrictive
* Fix unit tests
* Lint
* Fix missing entries
* Tweaks for incremental syncs
* Adapt generic changes
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
Co-authored-by: kegsay <kegan@matrix.org>
* Generic-based internal HTTP API (tested out on a few endpoints in the federation API)
* Add `PerformInvite`
* More tweaks
* Fix metric name
* Fix LookupStateIDs
* Lots of changes to clients
* Some serverside stuff
* Some error handling
* Use paths as metric names
* Revert "Use paths as metric names"
This reverts commit a9323a6a34.
* Namespace metric names
* Remove duplicate entry
* Remove another duplicate entry
* Tweak error handling
* Some more tweaks
* Update error behaviour
* Some more error tweaking
* Fix API path for `PerformDeleteKeys`
* Fix another path
* Tweak federation client proxying
* Fix another path
* Don't return typed nils
* Some more tweaks, not that it makes any difference
* Tweak federation client proxying
* Maybe fix the key backup test
* Bypass lazyLoadCache if we're doing an initial sync
* Make the linter happy again?
* Revert "Make the linter happy again?"
This reverts commit 52a5691ba3.
* Try that again
* Invalidate LazyLoadCache on initial syncs
* Remove unneeded check
* Add TODO
* Rename Invalite -> InvalidateLazyLoadedUser
* Thanks IDE
* Add race testing to tests, and fix a few small race conditions in the tests
* Enable run-sytest on MacOS
* Remove deadlock detecting mutex, per code review feedback
* Remove autoformatting related changes and a closure that is not needed
* Adjust to importing nats client as 'natsclient'
Signed-off-by: Brian Meek <brian@hntlabs.com>
* Clarify the use of gooseMutex to protect goose internal state
Signed-off-by: Brian Meek <brian@hntlabs.com>
* Remove no longer needed mutex for guarding goose
Signed-off-by: Brian Meek <brian@hntlabs.com>
* Fix query issue, only add "changed" users if we actually share a room
* Avoid log spam if context is done
* Undo changes to filterSharedUsers
* Add logging again..
* Fix SQLite shared users query
* Change query to include invited users
Issue: During a conversation, under some conditions, the sync cookie is
not advanced and, as a result, the client loops on the same sync API
call, creating high traffic and CPU load.
Fix: the PDU component of the cookie was being updated incorrectly.
* Add new db migration
* Update migrations
Remove goose
* Add possibility to test direct upgrades
* Try to fix WASM test
* Add checks for specific migrations
* Remove AddMigration
Use WithTransaction
Add Dendrite version to table
* Fix linter issues
* Update tests
* Update comments, outdent if
* Namespace migrations
* Add direct upgrade tests, skipping over one version
* Split migrations
* Update go version in CI
* Fix copy&paste mistake
* Use contexts in migrations
Co-authored-by: kegsay <kegan@matrix.org>
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
* Membership updater refactoring
* Pass in membership state
* Use membership check rather than referring to state directly
* Delete irrelevant membership states
* We don't need the leave event after all
* Tweaks
* Put a log entry in that I might stand a chance of finding
* Be less panicky
* Tweak invite handling
* Don't freak if we can't find the event NID
* Use event NID from `types.Event`
* Clean up
* Better invite handling
* Placate the almighty linter
* Blacklist a Sytest which is otherwise fine under Complement for reasons I don't understand
* Fix the sytest after all (thanks @S7evinK for the spot)
* Add function to the sync API storage package for filtering shared users
* Use the database instead of asking the RS API
* Fix unit tests
* Fix map handling in `filterSharedUsers`
* Try Ristretto cache
* Tweak
* It's beautiful
* Update GMSL
* More strict keyable interface
* Fix that some more
* Make less panicky
* Don't enforce mutability checks for now
* Determine mutability using deep equality
* Tweaks
* Namespace keys
* Make federation caches mutable
* Update cost estimation, add metric
* Update GMSL
* Estimate cost for metrics better
* Reduce counters a bit
* Try caching events
* Some guards
* Try again
* Try this
* Use separate caches for hopefully better hash distribution
* Fix bug with admitting events into cache
* Try to fix bugs
* Check nil
* Try that again
* Preserve order jeezo this is messy
* thanks VS Code for doing exactly the wrong thing
* Try this again
* Be more specific
* aaaaargh
* One more time
* That might be better
* Stronger sorting
* Cache expiries, async publishing of EDUs
* Put it back
* Use a shared cache again
* Cost estimation fixes
* Update ristretto
* Reduce counters a bit
* Clean up a bit
* Update GMSL
* 1GB
* Configurable cache sizes
* Tweaks
* Add `config.DataUnit` for specifying friendly cache sizes
* Various tweaks
* Update GMSL
* Add back some lazy loading caching
* Include key in cost
* Include key in cost
* Tweak max age handling, config key name
* Only register prometheus metrics if requested
* Review comments @S7evinK
* Don't return errors when creating caches (it is better just to crash since otherwise we'll `nil`-pointer exception everywhere)
* Review comments
* Update sample configs
* Update GHA Workflow
* Update Complement images to Go 1.18
* Remove the cache test from the federation API as we no longer guarantee immediate cache admission
* Don't check the caches in the renewal test
* Possibly fix the upgrade tests
* Update to matrix-org/gomatrixserverlib#322
* Update documentation to refer to Go 1.18