// Copyright 2017-2018 New Vector Ltd
// Copyright 2019-2020 The Matrix.org Foundation C.I.C.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package sqlite3

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"sort"
	"strings"

	"github.com/matrix-org/dendrite/internal"
	"github.com/matrix-org/dendrite/roomserver/api"
	"github.com/matrix-org/dendrite/syncapi/storage/tables"
	"github.com/matrix-org/dendrite/syncapi/types"

	"github.com/matrix-org/dendrite/internal/sqlutil"
	"github.com/matrix-org/gomatrixserverlib"
	log "github.com/sirupsen/logrus"
)

const outputRoomEventsSchema = `
-- Stores output room events received from the roomserver.
CREATE TABLE IF NOT EXISTS syncapi_output_room_events (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  event_id TEXT NOT NULL UNIQUE,
  room_id TEXT NOT NULL,
  headered_event_json TEXT NOT NULL,
  type TEXT NOT NULL,
  sender TEXT NOT NULL,
  contains_url BOOL NOT NULL,
  add_state_ids TEXT, -- JSON encoded string array
  remove_state_ids TEXT, -- JSON encoded string array
  session_id BIGINT,
  transaction_id TEXT,
  exclude_from_sync BOOL NOT NULL DEFAULT FALSE
);

CREATE INDEX IF NOT EXISTS syncapi_output_room_events_type_idx ON syncapi_output_room_events (type);
CREATE INDEX IF NOT EXISTS syncapi_output_room_events_sender_idx ON syncapi_output_room_events (sender);
CREATE INDEX IF NOT EXISTS syncapi_output_room_events_room_id_idx ON syncapi_output_room_events (room_id);
CREATE INDEX IF NOT EXISTS syncapi_output_room_events_exclude_from_sync_idx ON syncapi_output_room_events (exclude_from_sync);
`

const insertEventSQL = "" +
	"INSERT INTO syncapi_output_room_events (" +
	"id, room_id, event_id, headered_event_json, type, sender, contains_url, add_state_ids, remove_state_ids, session_id, transaction_id, exclude_from_sync" +
	") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12) " +
	"ON CONFLICT (event_id) DO UPDATE SET exclude_from_sync = (excluded.exclude_from_sync AND $13)"

const selectEventsSQL = "" +
	"SELECT event_id, id, headered_event_json, session_id, exclude_from_sync, transaction_id FROM syncapi_output_room_events WHERE event_id IN ($1)"

const selectRecentEventsSQL = "" +
	"SELECT event_id, id, headered_event_json, session_id, exclude_from_sync, transaction_id FROM syncapi_output_room_events" +
	" WHERE room_id = $1 AND id > $2 AND id <= $3"

// WHEN, ORDER BY and LIMIT are appended by prepareWithFilters

const selectRecentEventsForSyncSQL = "" +
	"SELECT event_id, id, headered_event_json, session_id, exclude_from_sync, transaction_id FROM syncapi_output_room_events" +
	" WHERE room_id = $1 AND id > $2 AND id <= $3 AND exclude_from_sync = FALSE"

// WHEN, ORDER BY and LIMIT are appended by prepareWithFilters

const selectEarlyEventsSQL = "" +
	"SELECT event_id, id, headered_event_json, session_id, exclude_from_sync, transaction_id FROM syncapi_output_room_events" +
	" WHERE room_id = $1 AND id > $2 AND id <= $3"

// WHEN, ORDER BY and LIMIT are appended by prepareWithFilters

const selectMaxEventIDSQL = "" +
	"SELECT MAX(id) FROM syncapi_output_room_events"

const updateEventJSONSQL = "" +
	"UPDATE syncapi_output_room_events SET headered_event_json=$1 WHERE event_id=$2"

const selectStateInRangeSQL = "" +
	"SELECT event_id, id, headered_event_json, exclude_from_sync, add_state_ids, remove_state_ids" +
	" FROM syncapi_output_room_events" +
	" WHERE (id > $1 AND id <= $2)" +
	" AND room_id IN ($3)" +
	" AND ((add_state_ids IS NOT NULL AND add_state_ids != '') OR (remove_state_ids IS NOT NULL AND remove_state_ids != ''))"

// WHEN, ORDER BY and LIMIT are appended by prepareWithFilters

const deleteEventsForRoomSQL = "" +
	"DELETE FROM syncapi_output_room_events WHERE room_id = $1"

const selectContextEventSQL = "" +
	"SELECT id, headered_event_json FROM syncapi_output_room_events WHERE room_id = $1 AND event_id = $2"

const selectContextBeforeEventSQL = "" +
	"SELECT headered_event_json FROM syncapi_output_room_events WHERE room_id = $1 AND id < $2"

// WHEN, ORDER BY and LIMIT are appended by prepareWithFilters

const selectContextAfterEventSQL = "" +
	"SELECT id, headered_event_json FROM syncapi_output_room_events WHERE room_id = $1 AND id > $2"

// WHEN, ORDER BY and LIMIT are appended by prepareWithFilters

type outputRoomEventsStatements struct {
	db                           *sql.DB
	streamIDStatements           *StreamIDStatements
	insertEventStmt              *sql.Stmt
	selectMaxEventIDStmt         *sql.Stmt
	updateEventJSONStmt          *sql.Stmt
	deleteEventsForRoomStmt      *sql.Stmt
	selectContextEventStmt       *sql.Stmt
	selectContextBeforeEventStmt *sql.Stmt
	selectContextAfterEventStmt  *sql.Stmt
}

func NewSqliteEventsTable(db *sql.DB, streamID *StreamIDStatements) (tables.Events, error) {
	s := &outputRoomEventsStatements{
		db:                 db,
		streamIDStatements: streamID,
	}
	_, err := db.Exec(outputRoomEventsSchema)
	if err != nil {
		return nil, err
	}
	return s, sqlutil.StatementList{
		{&s.insertEventStmt, insertEventSQL},
		{&s.selectMaxEventIDStmt, selectMaxEventIDSQL},
		{&s.updateEventJSONStmt, updateEventJSONSQL},
		{&s.deleteEventsForRoomStmt, deleteEventsForRoomSQL},
		{&s.selectContextEventStmt, selectContextEventSQL},
		{&s.selectContextBeforeEventStmt, selectContextBeforeEventSQL},
		{&s.selectContextAfterEventStmt, selectContextAfterEventSQL},
	}.Prepare(db)
}

func (s *outputRoomEventsStatements) UpdateEventJSON(ctx context.Context, event *gomatrixserverlib.HeaderedEvent) error {
	headeredJSON, err := json.Marshal(event)
	if err != nil {
		return err
	}
	_, err = s.updateEventJSONStmt.ExecContext(ctx, headeredJSON, event.EventID())
	return err
}

// selectStateInRange returns the state events between the two given PDU stream positions, exclusive of oldPos, inclusive of newPos.
// Results are bucketed based on the room ID. If the same state is overwritten multiple times between the
// two positions, only the most recent state is returned.
func (s *outputRoomEventsStatements) SelectStateInRange(
	ctx context.Context, txn *sql.Tx, r types.Range,
	stateFilter *gomatrixserverlib.StateFilter, roomIDs []string,
) (map[string]map[string]bool, map[string]types.StreamEvent, error) {
	stmtSQL := strings.Replace(selectStateInRangeSQL, "($3)", sqlutil.QueryVariadicOffset(len(roomIDs), 2), 1)
	inputParams := []interface{}{
		r.Low(), r.High(),
	}
	for _, roomID := range roomIDs {
		inputParams = append(inputParams, roomID)
	}
	stmt, params, err := prepareWithFilters(
		s.db, txn, stmtSQL, inputParams,
		stateFilter.Senders, stateFilter.NotSenders,
		stateFilter.Types, stateFilter.NotTypes,
		nil, stateFilter.ContainsURL, stateFilter.Limit, FilterOrderAsc,
	)
	if err != nil {
		return nil, nil, fmt.Errorf("s.prepareWithFilters: %w", err)
	}

	rows, err := stmt.QueryContext(ctx, params...)
	if err != nil {
		return nil, nil, err
	}
	defer rows.Close() // nolint: errcheck
	// Fetch all the state change events for all rooms between the two positions then loop each event and:
	//  - Keep a cache of the event by ID (99% of state change events are for the event itself)
	//  - For each room ID, build up an array of event IDs which represents cumulative adds/removes
	// For each room, map cumulative event IDs to events and return. This may need to do a batch SELECT based on event ID
	// if they aren't in the event ID cache. We don't handle state deletion yet.
	eventIDToEvent := make(map[string]types.StreamEvent)

	// RoomID => A set (map[string]bool) of state event IDs which are between the two positions
	stateNeeded := make(map[string]map[string]bool)

	for rows.Next() {
		var (
			eventID         string
			streamPos       types.StreamPosition
			eventBytes      []byte
			excludeFromSync bool
			addIDsJSON      string
			delIDsJSON      string
		)
		if err := rows.Scan(&eventID, &streamPos, &eventBytes, &excludeFromSync, &addIDsJSON, &delIDsJSON); err != nil {
			return nil, nil, err
		}

		addIDs, delIDs, err := unmarshalStateIDs(addIDsJSON, delIDsJSON)
		if err != nil {
			return nil, nil, err
		}

		// Sanity check for deleted state and whine if we see it. We don't need to do anything
		// since it'll just mark the event as not being needed.
		if len(addIDs) < len(delIDs) {
			log.WithFields(log.Fields{
				"since":   r.From,
				"current": r.To,
				"adds":    addIDsJSON,
				"dels":    delIDsJSON,
			}).Warn("StateBetween: ignoring deleted state")
		}

		// TODO: Handle redacted events
		var ev gomatrixserverlib.HeaderedEvent
		if err := ev.UnmarshalJSONWithEventID(eventBytes, eventID); err != nil {
			return nil, nil, err
		}
		needSet := stateNeeded[ev.RoomID()]
		if needSet == nil { // make set if required
			needSet = make(map[string]bool)
		}
		for _, id := range delIDs {
			needSet[id] = false
		}
		for _, id := range addIDs {
			needSet[id] = true
		}
		stateNeeded[ev.RoomID()] = needSet

		eventIDToEvent[eventID] = types.StreamEvent{
			HeaderedEvent:   &ev,
			StreamPosition:  streamPos,
			ExcludeFromSync: excludeFromSync,
		}
	}

	return stateNeeded, eventIDToEvent, nil
}

// MaxID returns the ID of the last inserted event in this table. 'txn' is optional. If it is not supplied,
// then this function should only ever be used at startup, as it will race with inserting events if it is
// done afterwards. If there are no inserted events, 0 is returned.
func (s *outputRoomEventsStatements) SelectMaxEventID(
	ctx context.Context, txn *sql.Tx,
) (id int64, err error) {
	var nullableID sql.NullInt64
	stmt := sqlutil.TxStmt(txn, s.selectMaxEventIDStmt)
	err = stmt.QueryRowContext(ctx).Scan(&nullableID)
	if nullableID.Valid {
		id = nullableID.Int64
	}
	return
}

// InsertEvent into the output_room_events table. addState and removeState are optional lists of state event IDs.
// Returns the position of the inserted event.
func (s *outputRoomEventsStatements) InsertEvent(
	ctx context.Context, txn *sql.Tx,
	event *gomatrixserverlib.HeaderedEvent, addState, removeState []string,
	transactionID *api.TransactionID, excludeFromSync bool,
) (types.StreamPosition, error) {
	var txnID *string
	var sessionID *int64
	if transactionID != nil {
		sessionID = &transactionID.SessionID
		txnID = &transactionID.TransactionID
	}

	// Parse content as JSON and search for an "url" key
	containsURL := false
	var content map[string]interface{}
	if json.Unmarshal(event.Content(), &content) == nil {
		// Set containsURL to true if url is present
		_, containsURL = content["url"]
	}

	var headeredJSON []byte
	headeredJSON, err := json.Marshal(event)
	if err != nil {
		return 0, err
	}

	var addStateJSON, removeStateJSON []byte
	if len(addState) > 0 {
		addStateJSON, err = json.Marshal(addState)
	}
this flag because WASM builds cannot do opentracing.
* Start adding conditional builds for wasm to handle lib/pq
The general idea here is to have the wasm build have a `NewXXXDatabase`
that doesn't import any postgres package and hence we never import
`lib/pq`, which doesn't work under WASM (undefined `userCurrent`).
* Remove lib/pq for wasm for syncapi
* Add conditional building to remaining storage APIs
* Update build script to set env vars correctly for dendritejs
* sqlite bug fixes
* Docs
* Add a no-op main for dendritejs when not building under wasm
* Use the real prometheus, even for WASM
Instead, the dendrite-sw.js must mock out `process.pid` and
`fs.stat` - which must invoke the callback with an error (e.g `EINVAL`)
in order for it to work:
```
global.process = {
pid: 1,
};
global.fs.stat = function(path, cb) {
cb({
code: "EINVAL",
});
}
```
* Linting
2020-03-06 11:23:55 +01:00
|
|
|
if err != nil {
|
2021-01-19 19:00:42 +01:00
|
|
|
return 0, fmt.Errorf("json.Marshal(addState): %w", err)
|
|
|
|
}
|
|
|
|
if len(removeState) > 0 {
|
|
|
|
removeStateJSON, err = json.Marshal(removeState)
|
Add peer-to-peer support into Dendrite via libp2p and fetch (#880)
* Use a fork of pq which supports userCurrent on wasm
* Use sqlite3_js driver when running in JS
* Add cmd/dendritejs to pull in sqlite3_js driver for wasm only
* Update to latest go-sqlite-js version
* Replace prometheus with a stub. sigh
* Hard-code a config and don't use opentracing
* Latest go-sqlite3-js version
* Generate a key for now
* Listen for fetch traffic rather than HTTP
* Latest hacks for js
* libp2p support
* More libp2p
* Fork gjson to allow us to enforce auth checks as before
Previously, all events would come down redacted because the hash
checks would fail. They would fail because sjson.DeleteBytes didn't
remove keys not used for hashing. This didn't work because of a build
tag which included a file which no-oped the index returned.
See https://github.com/tidwall/gjson/issues/157
When it's resolved, let's go back to mainline.
* Use gjson@1.6.0 as it fixes https://github.com/tidwall/gjson/issues/157
* Use latest gomatrixserverlib for sig checks
* Fix a bug which could cause exclude_from_sync to not be set
Caused when sending events over federation.
* Use query variadic to make lookups actually work!
* Latest gomatrixserverlib
* Add notes on getting p2p up and running
Partly so I don't forget myself!
* refactor: Move p2p specific stuff to cmd/dendritejs
This is important or else the normal build of dendrite will fail
because the p2p libraries depend on syscall/js which doesn't work
on normal builds.
Also, clean up main.go to read a bit better.
* Update ho-http-js-libp2p to return errors from RoundTrip
* Add an LRU cache around the key DB
We actually need this for P2P because otherwise we can *segfault*
with things like: "runtime: unexpected return pc for runtime.handleEvent"
where the event is a `syscall/js` event, caused by spamming sql.js
caused by "Checking event signatures for 14 events of room state" which
hammers the key DB repeatedly in quick succession.
Using a cache fixes this, though the underlying cause is probably a bug
in the version of Go I'm on (1.13.7)
* breaking: Add Tracing.Enabled to toggle whether we do opentracing
Defaults to false, which is why this is a breaking change. We need
this flag because WASM builds cannot do opentracing.
* Start adding conditional builds for wasm to handle lib/pq
The general idea here is to have the wasm build have a `NewXXXDatabase`
that doesn't import any postgres package and hence we never import
`lib/pq`, which doesn't work under WASM (undefined `userCurrent`).
* Remove lib/pq for wasm for syncapi
* Add conditional building to remaining storage APIs
* Update build script to set env vars correctly for dendritejs
* sqlite bug fixes
* Docs
* Add a no-op main for dendritejs when not building under wasm
* Use the real prometheus, even for WASM
Instead, the dendrite-sw.js must mock out `process.pid` and
`fs.stat` - which must invoke the callback with an error (e.g `EINVAL`)
in order for it to work:
```
global.process = {
pid: 1,
};
global.fs.stat = function(path, cb) {
cb({
code: "EINVAL",
});
}
```
* Linting
2020-03-06 11:23:55 +01:00
|
|
|
}
|
|
|
|
if err != nil {
|
2021-01-19 19:00:42 +01:00
|
|
|
return 0, fmt.Errorf("json.Marshal(removeState): %w", err)
|
Add peer-to-peer support into Dendrite via libp2p and fetch (#880)
* Use a fork of pq which supports userCurrent on wasm
* Use sqlite3_js driver when running in JS
* Add cmd/dendritejs to pull in sqlite3_js driver for wasm only
* Update to latest go-sqlite-js version
* Replace prometheus with a stub. sigh
* Hard-code a config and don't use opentracing
* Latest go-sqlite3-js version
* Generate a key for now
* Listen for fetch traffic rather than HTTP
* Latest hacks for js
* libp2p support
* More libp2p
* Fork gjson to allow us to enforce auth checks as before
Previously, all events would come down redacted because the hash
checks would fail. They would fail because sjson.DeleteBytes didn't
remove keys not used for hashing. This didn't work because of a build
tag which included a file which no-oped the index returned.
See https://github.com/tidwall/gjson/issues/157
When it's resolved, let's go back to mainline.
* Use gjson@1.6.0 as it fixes https://github.com/tidwall/gjson/issues/157
* Use latest gomatrixserverlib for sig checks
* Fix a bug which could cause exclude_from_sync to not be set
Caused when sending events over federation.
* Use query variadic to make lookups actually work!
* Latest gomatrixserverlib
* Add notes on getting p2p up and running
Partly so I don't forget myself!
* refactor: Move p2p specific stuff to cmd/dendritejs
This is important or else the normal build of dendrite will fail
because the p2p libraries depend on syscall/js which doesn't work
on normal builds.
Also, clean up main.go to read a bit better.
* Update ho-http-js-libp2p to return errors from RoundTrip
* Add an LRU cache around the key DB
We actually need this for P2P because otherwise we can *segfault*
with things like: "runtime: unexpected return pc for runtime.handleEvent"
where the event is a `syscall/js` event, caused by spamming sql.js
caused by "Checking event signatures for 14 events of room state" which
hammers the key DB repeatedly in quick succession.
Using a cache fixes this, though the underlying cause is probably a bug
in the version of Go I'm on (1.13.7)
* breaking: Add Tracing.Enabled to toggle whether we do opentracing
Defaults to false, which is why this is a breaking change. We need
this flag because WASM builds cannot do opentracing.
* Start adding conditional builds for wasm to handle lib/pq
The general idea here is to have the wasm build have a `NewXXXDatabase`
that doesn't import any postgres package and hence we never import
`lib/pq`, which doesn't work under WASM (undefined `userCurrent`).
* Remove lib/pq for wasm for syncapi
* Add conditional building to remaining storage APIs
* Update build script to set env vars correctly for dendritejs
* sqlite bug fixes
* Docs
* Add a no-op main for dendritejs when not building under wasm
* Use the real prometheus, even for WASM
Instead, the dendrite-sw.js must mock out `process.pid` and
`fs.stat` - which must invoke the callback with an error (e.g `EINVAL`)
in order for it to work:
```
global.process = {
pid: 1,
};
global.fs.stat = function(path, cb) {
cb({
code: "EINVAL",
});
}
```
* Linting
2020-03-06 11:23:55 +01:00
|
|
|
}
|
|
|
|
|
2021-01-19 19:00:42 +01:00
|
|
|
streamPos, err := s.streamIDStatements.nextPDUID(ctx, txn)
|
2020-08-21 11:42:08 +02:00
|
|
|
if err != nil {
|
|
|
|
return 0, err
|
|
|
|
}
|
|
|
|
insertStmt := sqlutil.TxStmt(txn, s.insertEventStmt)
|
|
|
|
_, err = insertStmt.ExecContext(
|
|
|
|
ctx,
|
|
|
|
streamPos,
|
|
|
|
event.RoomID(),
|
|
|
|
event.EventID(),
|
|
|
|
headeredJSON,
|
|
|
|
event.Type(),
|
|
|
|
event.Sender(),
|
|
|
|
containsURL,
|
|
|
|
string(addStateJSON),
|
|
|
|
string(removeStateJSON),
|
|
|
|
sessionID,
|
|
|
|
txnID,
|
|
|
|
excludeFromSync,
|
|
|
|
excludeFromSync,
|
|
|
|
)
|
2020-07-21 16:48:21 +02:00
|
|
|
return streamPos, err
|
2020-02-13 18:27:33 +01:00
|
|
|
}
|
|
|
|
|
2020-05-14 10:53:55 +02:00
|
|
|
func (s *outputRoomEventsStatements) SelectRecentEvents(
|
2020-02-13 18:27:33 +01:00
|
|
|
ctx context.Context, txn *sql.Tx,
|
2021-01-19 19:00:42 +01:00
|
|
|
roomID string, r types.Range, eventFilter *gomatrixserverlib.RoomEventFilter,
|
2020-02-13 18:27:33 +01:00
|
|
|
chronologicalOrder bool, onlySyncEvents bool,
|
2020-06-26 16:34:41 +02:00
|
|
|
) ([]types.StreamEvent, bool, error) {
|
2021-01-19 19:00:42 +01:00
|
|
|
var query string
|
2020-02-13 18:27:33 +01:00
|
|
|
if onlySyncEvents {
|
2021-01-19 19:00:42 +01:00
|
|
|
query = selectRecentEventsForSyncSQL
|
2020-02-13 18:27:33 +01:00
|
|
|
} else {
|
2021-01-19 19:00:42 +01:00
|
|
|
query = selectRecentEventsSQL
|
2020-02-13 18:27:33 +01:00
|
|
|
}
|
|
|
|
|
2021-01-19 19:00:42 +01:00
|
|
|
stmt, params, err := prepareWithFilters(
|
|
|
|
s.db, txn, query,
|
|
|
|
[]interface{}{
|
|
|
|
roomID, r.Low(), r.High(),
|
|
|
|
},
|
|
|
|
eventFilter.Senders, eventFilter.NotSenders,
|
|
|
|
eventFilter.Types, eventFilter.NotTypes,
|
2022-04-13 13:16:02 +02:00
|
|
|
nil, eventFilter.ContainsURL, eventFilter.Limit+1, FilterOrderDesc,
|
2021-01-19 19:00:42 +01:00
|
|
|
)
|
|
|
|
if err != nil {
|
|
|
|
return nil, false, fmt.Errorf("s.prepareWithFilters: %w", err)
|
|
|
|
}
|
|
|
|
|
|
|
|
rows, err := stmt.QueryContext(ctx, params...)
|
2020-02-13 18:27:33 +01:00
|
|
|
if err != nil {
|
2020-06-26 16:34:41 +02:00
|
|
|
return nil, false, err
|
2020-02-13 18:27:33 +01:00
|
|
|
}
|
2020-05-21 15:40:13 +02:00
|
|
|
defer internal.CloseAndLogIfError(ctx, rows, "selectRecentEvents: rows.close() failed")
|
2020-02-13 18:27:33 +01:00
|
|
|
events, err := rowsToStreamEvents(rows)
|
|
|
|
if err != nil {
|
2020-06-26 16:34:41 +02:00
|
|
|
return nil, false, err
|
2020-02-13 18:27:33 +01:00
|
|
|
}
|
|
|
|
if chronologicalOrder {
|
|
|
|
// The events need to be returned from oldest to latest, which isn't
|
|
|
|
// necessary the way the SQL query returns them, so a sort is necessary to
|
|
|
|
// ensure the events are in the right order in the slice.
|
|
|
|
sort.SliceStable(events, func(i int, j int) bool {
|
|
|
|
return events[i].StreamPosition < events[j].StreamPosition
|
|
|
|
})
|
|
|
|
}
|
2020-06-26 16:34:41 +02:00
|
|
|
// we queried for 1 more than the limit, so if we returned one more mark limited=true
|
|
|
|
limited := false
|
2021-01-19 19:00:42 +01:00
|
|
|
if len(events) > eventFilter.Limit {
|
2020-06-26 16:34:41 +02:00
|
|
|
limited = true
|
|
|
|
// re-slice the extra (oldest) event out: in chronological order this is the first entry, else the last.
|
|
|
|
if chronologicalOrder {
|
|
|
|
events = events[1:]
|
|
|
|
} else {
|
|
|
|
events = events[:len(events)-1]
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return events, limited, nil
|
2020-02-13 18:27:33 +01:00
|
|
|
}
|
|
|
|
|
2020-05-14 10:53:55 +02:00
|
|
|
func (s *outputRoomEventsStatements) SelectEarlyEvents(
|
2020-02-13 18:27:33 +01:00
|
|
|
ctx context.Context, txn *sql.Tx,
|
2021-01-19 19:00:42 +01:00
|
|
|
roomID string, r types.Range, eventFilter *gomatrixserverlib.RoomEventFilter,
|
2020-02-13 18:27:33 +01:00
|
|
|
) ([]types.StreamEvent, error) {
|
2021-01-19 19:00:42 +01:00
|
|
|
stmt, params, err := prepareWithFilters(
|
|
|
|
s.db, txn, selectEarlyEventsSQL,
|
|
|
|
[]interface{}{
|
|
|
|
roomID, r.Low(), r.High(),
|
|
|
|
},
|
|
|
|
eventFilter.Senders, eventFilter.NotSenders,
|
|
|
|
eventFilter.Types, eventFilter.NotTypes,
|
2022-04-13 13:16:02 +02:00
|
|
|
nil, eventFilter.ContainsURL, eventFilter.Limit, FilterOrderAsc,
|
2021-01-19 19:00:42 +01:00
|
|
|
)
|
|
|
|
if err != nil {
|
|
|
|
return nil, fmt.Errorf("s.prepareWithFilters: %w", err)
|
|
|
|
}
|
|
|
|
rows, err := stmt.QueryContext(ctx, params...)
|
2020-02-13 18:27:33 +01:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
2020-05-21 15:40:13 +02:00
|
|
|
defer internal.CloseAndLogIfError(ctx, rows, "selectEarlyEvents: rows.close() failed")
|
2020-02-13 18:27:33 +01:00
|
|
|
events, err := rowsToStreamEvents(rows)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
// The events need to be returned from oldest to latest, which isn't
|
|
|
|
// necessarily the way the SQL query returns them, so a sort is necessary to
|
|
|
|
// ensure the events are in the right order in the slice.
|
|
|
|
sort.SliceStable(events, func(i int, j int) bool {
|
|
|
|
return events[i].StreamPosition < events[j].StreamPosition
|
|
|
|
})
|
|
|
|
return events, nil
|
|
|
|
}
|
|
|
|
|
|
|
|
// selectEvents returns the events for the given event IDs. If an event is
|
|
|
|
// missing from the database, it will be omitted.
|
2020-05-14 10:53:55 +02:00
|
|
|
func (s *outputRoomEventsStatements) SelectEvents(
|
2022-04-13 13:16:02 +02:00
|
|
|
ctx context.Context, txn *sql.Tx, eventIDs []string, filter *gomatrixserverlib.RoomEventFilter, preserveOrder bool,
|
2020-02-13 18:27:33 +01:00
|
|
|
) ([]types.StreamEvent, error) {
|
2022-04-08 18:53:24 +02:00
|
|
|
iEventIDs := make([]interface{}, len(eventIDs))
|
|
|
|
for i := range eventIDs {
|
|
|
|
iEventIDs[i] = eventIDs[i]
|
|
|
|
}
|
|
|
|
selectSQL := strings.Replace(selectEventsSQL, "($1)", sqlutil.QueryVariadic(len(eventIDs)), 1)
|
2022-04-13 13:16:02 +02:00
|
|
|
|
|
|
|
if filter == nil {
|
|
|
|
filter = &gomatrixserverlib.RoomEventFilter{Limit: 20}
|
|
|
|
}
|
|
|
|
stmt, params, err := prepareWithFilters(
|
|
|
|
s.db, txn, selectSQL, iEventIDs,
|
|
|
|
filter.Senders, filter.NotSenders,
|
|
|
|
filter.Types, filter.NotTypes,
|
|
|
|
nil, filter.ContainsURL, filter.Limit, FilterOrderAsc,
|
|
|
|
)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
2022-04-08 18:53:24 +02:00
|
|
|
}
|
2022-04-13 13:16:02 +02:00
|
|
|
rows, err := stmt.QueryContext(ctx, params...)
|
2022-04-08 18:53:24 +02:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
defer internal.CloseAndLogIfError(ctx, rows, "selectEvents: rows.close() failed")
|
|
|
|
streamEvents, err := rowsToStreamEvents(rows)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
if preserveOrder {
|
|
|
|
var returnEvents []types.StreamEvent
|
|
|
|
eventMap := make(map[string]types.StreamEvent)
|
|
|
|
for _, ev := range streamEvents {
|
|
|
|
eventMap[ev.EventID()] = ev
|
2020-02-13 18:27:33 +01:00
|
|
|
}
|
2022-04-08 18:53:24 +02:00
|
|
|
for _, eventID := range eventIDs {
|
|
|
|
ev, ok := eventMap[eventID]
|
|
|
|
if ok {
|
|
|
|
returnEvents = append(returnEvents, ev)
|
|
|
|
}
|
2020-02-13 18:27:33 +01:00
|
|
|
}
|
2022-04-08 18:53:24 +02:00
|
|
|
return returnEvents, nil
|
2020-02-13 18:27:33 +01:00
|
|
|
}
|
2022-04-08 18:53:24 +02:00
|
|
|
return streamEvents, nil
|
2020-02-13 18:27:33 +01:00
|
|
|
}
|
|
|
|
|
2020-09-15 12:17:46 +02:00
|
|
|
func (s *outputRoomEventsStatements) DeleteEventsForRoom(
|
|
|
|
ctx context.Context, txn *sql.Tx, roomID string,
|
|
|
|
) (err error) {
|
|
|
|
_, err = sqlutil.TxStmt(txn, s.deleteEventsForRoomStmt).ExecContext(ctx, roomID)
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
2020-02-13 18:27:33 +01:00
|
|
|
func rowsToStreamEvents(rows *sql.Rows) ([]types.StreamEvent, error) {
|
|
|
|
var result []types.StreamEvent
|
|
|
|
for rows.Next() {
|
|
|
|
var (
|
2020-12-09 19:07:17 +01:00
|
|
|
eventID string
|
2020-02-13 18:27:33 +01:00
|
|
|
streamPos types.StreamPosition
|
|
|
|
eventBytes []byte
|
|
|
|
excludeFromSync bool
|
|
|
|
sessionID *int64
|
|
|
|
txnID *string
|
|
|
|
transactionID *api.TransactionID
|
|
|
|
)
|
2020-12-09 19:07:17 +01:00
|
|
|
if err := rows.Scan(&eventID, &streamPos, &eventBytes, &sessionID, &excludeFromSync, &txnID); err != nil {
|
2020-02-13 18:27:33 +01:00
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
// TODO: Handle redacted events
|
2020-03-19 13:07:01 +01:00
|
|
|
var ev gomatrixserverlib.HeaderedEvent
|
2020-12-09 19:07:17 +01:00
|
|
|
if err := ev.UnmarshalJSONWithEventID(eventBytes, eventID); err != nil {
|
2020-02-13 18:27:33 +01:00
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
if sessionID != nil && txnID != nil {
|
|
|
|
transactionID = &api.TransactionID{
|
|
|
|
SessionID: *sessionID,
|
|
|
|
TransactionID: *txnID,
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
result = append(result, types.StreamEvent{
|
2020-11-16 16:44:53 +01:00
|
|
|
HeaderedEvent: &ev,
|
2020-02-13 18:27:33 +01:00
|
|
|
StreamPosition: streamPos,
|
|
|
|
TransactionID: transactionID,
|
|
|
|
ExcludeFromSync: excludeFromSync,
|
|
|
|
})
|
|
|
|
}
|
|
|
|
return result, nil
|
|
|
|
}
|
2022-02-21 17:12:22 +01:00
|
|
|
func (s *outputRoomEventsStatements) SelectContextEvent(
|
|
|
|
ctx context.Context, txn *sql.Tx, roomID, eventID string,
|
|
|
|
) (id int, evt gomatrixserverlib.HeaderedEvent, err error) {
|
|
|
|
row := sqlutil.TxStmt(txn, s.selectContextEventStmt).QueryRowContext(ctx, roomID, eventID)
|
|
|
|
var eventAsString string
|
|
|
|
if err = row.Scan(&id, &eventAsString); err != nil {
|
|
|
|
return 0, evt, err
|
|
|
|
}
|
|
|
|
|
|
|
|
if err = json.Unmarshal([]byte(eventAsString), &evt); err != nil {
|
|
|
|
return 0, evt, err
|
|
|
|
}
|
|
|
|
return id, evt, nil
|
|
|
|
}
|
|
|
|
|
|
|
|
func (s *outputRoomEventsStatements) SelectContextBeforeEvent(
|
|
|
|
ctx context.Context, txn *sql.Tx, id int, roomID string, filter *gomatrixserverlib.RoomEventFilter,
|
|
|
|
) (evts []*gomatrixserverlib.HeaderedEvent, err error) {
|
|
|
|
stmt, params, err := prepareWithFilters(
|
|
|
|
s.db, txn, selectContextBeforeEventSQL,
|
|
|
|
[]interface{}{
|
|
|
|
roomID, id,
|
|
|
|
},
|
|
|
|
filter.Senders, filter.NotSenders,
|
|
|
|
filter.Types, filter.NotTypes,
|
2022-04-13 13:16:02 +02:00
|
|
|
nil, filter.ContainsURL, filter.Limit, FilterOrderDesc,
|
2022-02-21 17:12:22 +01:00
|
|
|
)
|
|
|
|
|
|
|
|
rows, err := stmt.QueryContext(ctx, params...)
|
|
|
|
if err != nil {
|
|
|
|
return
|
|
|
|
}
|
2022-03-24 11:03:22 +01:00
|
|
|
defer internal.CloseAndLogIfError(ctx, rows, "rows.close() failed")
|
2022-02-21 17:12:22 +01:00
|
|
|
|
|
|
|
for rows.Next() {
|
|
|
|
var (
|
|
|
|
eventBytes []byte
|
|
|
|
evt *gomatrixserverlib.HeaderedEvent
|
|
|
|
)
|
|
|
|
if err = rows.Scan(&eventBytes); err != nil {
|
|
|
|
return evts, err
|
|
|
|
}
|
|
|
|
if err = json.Unmarshal(eventBytes, &evt); err != nil {
|
|
|
|
return evts, err
|
|
|
|
}
|
|
|
|
evts = append(evts, evt)
|
|
|
|
}
|
|
|
|
|
|
|
|
return evts, rows.Err()
|
|
|
|
}
|
|
|
|
|
|
|
|
func (s *outputRoomEventsStatements) SelectContextAfterEvent(
|
|
|
|
ctx context.Context, txn *sql.Tx, id int, roomID string, filter *gomatrixserverlib.RoomEventFilter,
|
|
|
|
) (lastID int, evts []*gomatrixserverlib.HeaderedEvent, err error) {
|
|
|
|
stmt, params, err := prepareWithFilters(
|
|
|
|
s.db, txn, selectContextAfterEventSQL,
|
|
|
|
[]interface{}{
|
|
|
|
roomID, id,
|
|
|
|
},
|
|
|
|
filter.Senders, filter.NotSenders,
|
|
|
|
filter.Types, filter.NotTypes,
|
2022-04-13 13:16:02 +02:00
|
|
|
nil, filter.ContainsURL, filter.Limit, FilterOrderAsc,
|
2022-02-21 17:12:22 +01:00
|
|
|
)
|
|
|
|
|
|
|
|
rows, err := stmt.QueryContext(ctx, params...)
|
|
|
|
if err != nil {
|
|
|
|
return
|
|
|
|
}
|
2022-03-24 11:03:22 +01:00
|
|
|
defer internal.CloseAndLogIfError(ctx, rows, "rows.close() failed")
|
2022-02-21 17:12:22 +01:00
|
|
|
|
|
|
|
for rows.Next() {
|
|
|
|
var (
|
|
|
|
eventBytes []byte
|
|
|
|
evt *gomatrixserverlib.HeaderedEvent
|
|
|
|
)
|
|
|
|
if err = rows.Scan(&lastID, &eventBytes); err != nil {
|
|
|
|
return 0, evts, err
|
|
|
|
}
|
|
|
|
if err = json.Unmarshal(eventBytes, &evt); err != nil {
|
|
|
|
return 0, evts, err
|
|
|
|
}
|
|
|
|
evts = append(evts, evt)
|
|
|
|
}
|
|
|
|
return lastID, evts, rows.Err()
|
|
|
|
}
|
Add peer-to-peer support into Dendrite via libp2p and fetch (#880)
* Use a fork of pq which supports userCurrent on wasm
* Use sqlite3_js driver when running in JS
* Add cmd/dendritejs to pull in sqlite3_js driver for wasm only
* Update to latest go-sqlite-js version
* Replace prometheus with a stub. sigh
* Hard-code a config and don't use opentracing
* Latest go-sqlite3-js version
* Generate a key for now
* Listen for fetch traffic rather than HTTP
* Latest hacks for js
* libp2p support
* More libp2p
* Fork gjson to allow us to enforce auth checks as before
Previously, all events would come down redacted because the hash
checks would fail. They would fail because sjson.DeleteBytes didn't
remove keys not used for hashing. This didn't work because of a build
tag which included a file which no-oped the index returned.
See https://github.com/tidwall/gjson/issues/157
When it's resolved, let's go back to mainline.
* Use gjson@1.6.0 as it fixes https://github.com/tidwall/gjson/issues/157
* Use latest gomatrixserverlib for sig checks
* Fix a bug which could cause exclude_from_sync to not be set
Caused when sending events over federation.
* Use query variadic to make lookups actually work!
* Latest gomatrixserverlib
* Add notes on getting p2p up and running
Partly so I don't forget myself!
* refactor: Move p2p specific stuff to cmd/dendritejs
This is important or else the normal build of dendrite will fail
because the p2p libraries depend on syscall/js which doesn't work
on normal builds.
Also, clean up main.go to read a bit better.
* Update ho-http-js-libp2p to return errors from RoundTrip
* Add an LRU cache around the key DB
We actually need this for P2P because otherwise we can *segfault*
with things like: "runtime: unexpected return pc for runtime.handleEvent"
where the event is a `syscall/js` event, caused by spamming sql.js
caused by "Checking event signatures for 14 events of room state" which
hammers the key DB repeatedly in quick succession.
Using a cache fixes this, though the underlying cause is probably a bug
in the version of Go I'm on (1.13.7)
* breaking: Add Tracing.Enabled to toggle whether we do opentracing
Defaults to false, which is why this is a breaking change. We need
this flag because WASM builds cannot do opentracing.
* Start adding conditional builds for wasm to handle lib/pq
The general idea here is to have the wasm build have a `NewXXXDatabase`
that doesn't import any postgres package and hence we never import
`lib/pq`, which doesn't work under WASM (undefined `userCurrent`).
* Remove lib/pq for wasm for syncapi
* Add conditional building to remaining storage APIs
* Update build script to set env vars correctly for dendritejs
* sqlite bug fixes
* Docs
* Add a no-op main for dendritejs when not building under wasm
* Use the real prometheus, even for WASM
Instead, the dendrite-sw.js must mock out `process.pid` and
`fs.stat` - which must invoke the callback with an error (e.g `EINVAL`)
in order for it to work:
```
global.process = {
pid: 1,
};
global.fs.stat = function(path, cb) {
cb({
code: "EINVAL",
});
}
```
* Linting
2020-03-06 11:23:55 +01:00
|
|
|
|
|
|
|
func unmarshalStateIDs(addIDsJSON, delIDsJSON string) (addIDs []string, delIDs []string, err error) {
|
|
|
|
if len(addIDsJSON) > 0 {
|
|
|
|
if err = json.Unmarshal([]byte(addIDsJSON), &addIDs); err != nil {
|
|
|
|
return
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if len(delIDsJSON) > 0 {
|
|
|
|
if err = json.Unmarshal([]byte(delIDsJSON), &delIDs); err != nil {
|
|
|
|
return
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return
|
|
|
|
}
|