Dendrite is built entirely using a micro-service architecture. That is, all of Dendrite’s functions are implemented across a set of components which can either run in the same process (referred to as a “monolith” deployment) or across separate processes or even separate machines (referred to as a “polylith” deployment).
A component may do any of the following:
- Implement specific logic useful to the homeserver
- Store and retrieve data from a SQL database, a filesystem, etc
- Produce Kafka events for eventual, but guaranteed, consumption by other Dendrite components
- Consume Kafka events generated by other Dendrite components or sources
- Provide an internal API, exposing Remote Procedure Call (RPC)-like functionality to other Dendrite components
- Consume the internal APIs of other components, to call upon RPC-like functionality of another component in realtime
- Provide an external API, allowing clients, federated homeservers etc to communicate inbound with the Dendrite deployment
- Consume an external API of another homeserver, service etc over the network or the Internet
Dendrite components are all fully capable of running either as their own process or as part of a shared monolith process.
## Internal APIs
A component which implements an internal API will have three packages:
- `api` - Defines the API shape, exposed functions, and request and response structs
- `internal` - Implements the API shape defined in the `api` package with concrete logic
- `inthttp` - Implements the API shape defined in the `api` package as a set of both HTTP client and server endpoints for wrapping API function calls across the network
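As a purely illustrative sketch (the component, call and struct names here are invented, not taken from Dendrite), an `api` package might look like this:

```go
// api package sketch - hypothetical names, for illustration only.
package api

import "context"

// QueryExampleRequest is the request struct passed into QueryExample.
type QueryExampleRequest struct {
	UserID string `json:"user_id"`
}

// QueryExampleResponse is filled in by whichever implementation handles the call.
type QueryExampleResponse struct {
	Displayname string `json:"displayname"`
}

// ExampleInternalAPI is the API shape that both the internal and inthttp
// packages must implement exactly, so callers can use them interchangeably.
type ExampleInternalAPI interface {
	QueryExample(ctx context.Context, req *QueryExampleRequest, res *QueryExampleResponse) error
}
```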
When running a monolith deployment, the `internal` packages are used exclusively and the function implementations are called directly by other components. The request and response structs are passed between components as pointer values. This is referred to as “short-circuiting”, as the API surface is not exposed outside of the process.
When running a polylith deployment, the `inthttp` package wraps the API calls by taking the request struct, serialising it to JSON and then sending it over the network to the remote component. That component will then deserialise the request, call the matching `internal` API function locally and then serialise the response before sending it back to the calling component.
All internal HTTP APIs must be registered onto the internal API mux so that they are exposed on the correct listeners in polylith and/or full HTTP monolith mode.
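As a rough sketch of how that wrapping might look for the hypothetical `ExampleInternalAPI` above (the path constant, mux type and error handling are simplified assumptions, not Dendrite's actual helpers):

```go
// inthttp package sketch - simplified client and server wrapping for the
// hypothetical ExampleInternalAPI; Dendrite's real helpers differ in detail.
package inthttp

import (
	"bytes"
	"context"
	"encoding/json"
	"net/http"

	"example.org/component/api" // placeholder path for the api sketch above
)

const queryExamplePath = "/examplecomponent/queryExample"

// httpExampleInternalAPI implements api.ExampleInternalAPI by sending the
// request struct over the network as JSON (the polylith case).
type httpExampleInternalAPI struct {
	baseURL string
	client  *http.Client
}

func (a *httpExampleInternalAPI) QueryExample(ctx context.Context, req *api.QueryExampleRequest, res *api.QueryExampleResponse) error {
	body, err := json.Marshal(req)
	if err != nil {
		return err
	}
	httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, a.baseURL+queryExamplePath, bytes.NewReader(body))
	if err != nil {
		return err
	}
	httpReq.Header.Set("Content-Type", "application/json")
	resp, err := a.client.Do(httpReq)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// Deserialise the remote component's reply into the response struct.
	return json.NewDecoder(resp.Body).Decode(res)
}

// AddRoutes registers the server side onto the internal API mux, so the
// concrete internal implementation can be reached from other processes.
func AddRoutes(internalAPIMux *http.ServeMux, impl api.ExampleInternalAPI) {
	internalAPIMux.HandleFunc(queryExamplePath, func(w http.ResponseWriter, r *http.Request) {
		var req api.QueryExampleRequest
		var res api.QueryExampleResponse
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		if err := impl.QueryExample(r.Context(), &req, &res); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(&res)
	})
}
```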
It is important that both the `internal` and the `inthttp` packages implement the interface as defined in the `api` package exactly, so that they can be used interchangeably.
Today there are three main classes of API functions:
- `Query` calls, which typically take a set of inputs and return something in response, without causing side-effects or changing state
- `Perform` calls, which typically take a set of inputs and synchronously perform a task that will change state or have side-effects, optionally returning some return values
- `Input` calls, which typically take a set of inputs and either synchronously or asynchronously perform a task that will change state, typically without returning any return values
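As a hedged illustration of those naming conventions (the structs and methods below are invented for this sketch, not a real Dendrite API):

```go
// Hypothetical api package fragment showing one call of each class.
package api

import "context"

type (
	QueryProfileRequest       struct{ UserID string }
	QueryProfileResponse      struct{ Displayname string }
	PerformSetProfileRequest  struct{ UserID, Displayname string }
	PerformSetProfileResponse struct{}
	InputEventsRequest        struct{ Events []string }
	InputEventsResponse       struct{}
)

type ExampleComponentAPI interface {
	// Query: read-only, no side-effects or state changes.
	QueryProfile(ctx context.Context, req *QueryProfileRequest, res *QueryProfileResponse) error
	// Perform: synchronously changes state or causes side-effects.
	PerformSetProfile(ctx context.Context, req *PerformSetProfileRequest, res *PerformSetProfileResponse) error
	// Input: feeds new data in, synchronously or asynchronously, with no
	// meaningful return value beyond the error.
	InputEvents(ctx context.Context, req *InputEventsRequest, res *InputEventsResponse) error
}
```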
## External APIs
A component that wishes to export HTTP endpoints to clients, other federated homeservers etc must do so by registering HTTP endpoints onto one of the public API muxes:
- The public client API mux is for registering endpoints to be accessed by clients (under `/_matrix/client`)
- The public federation API mux is for registering endpoints to be accessed by other federated homeservers (under `/_matrix/federation`)
For this, the component must implement a `routing` package, which will contain all of the code for setting up and handling these requests. A number of helper functions are available in `httputil` which assist in creating authenticated or federation endpoints with the correct CORS headers and federation header/access token verification - these should always be used.
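A minimal sketch of a `routing` package (the endpoint path and handler are invented; in real code the handler would be wrapped with the `httputil` helpers rather than registered bare):

```go
// routing package sketch - registers a hypothetical endpoint onto the
// public client API mux (which is mounted under /_matrix/client).
package routing

import (
	"encoding/json"
	"net/http"

	"github.com/gorilla/mux"
)

// Setup registers this component's public endpoints. In Dendrite the
// handler would normally be wrapped by an httputil helper so that CORS
// headers and access token / federation verification are applied.
func Setup(publicClientAPIMux *mux.Router) {
	publicClientAPIMux.Handle("/example/hello",
		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Header().Set("Content-Type", "application/json")
			json.NewEncoder(w).Encode(map[string]string{"hello": "world"})
		}),
	).Methods(http.MethodGet)
}
```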
## Kafka producers
A component may produce Kafka events onto one or more Kafka topics for consumption by other components. Code for producing Kafka events is always implemented in the component's `producer` package.
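A hedged sketch of what a `producer` package might contain, assuming the Sarama Kafka client (the topic, event struct and function names are invented):

```go
// producer package sketch - minimal illustration using the Sarama client;
// Dendrite's real producers carry proper event structs and error handling.
package producer

import (
	"encoding/json"

	"github.com/Shopify/sarama"
)

// ExampleEvent is a hypothetical event type owned by this component.
type ExampleEvent struct {
	RoomID string `json:"room_id"`
}

type ExampleProducer struct {
	Topic    string // a topic owned by this component only
	Producer sarama.SyncProducer
}

// SendEvent marshals the event to JSON and produces it onto this
// component's own topic.
func (p *ExampleProducer) SendEvent(ev ExampleEvent) error {
	value, err := json.Marshal(ev)
	if err != nil {
		return err
	}
	_, _, err = p.Producer.SendMessage(&sarama.ProducerMessage{
		Topic: p.Topic,
		Value: sarama.ByteEncoder(value),
	})
	return err
}
```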
Each Kafka topic is owned explicitly by one component - under no circumstances should a component write to a Kafka topic owned by another component.
## Kafka consumers
A component may consume Kafka events from one or more Kafka topics. Code for consuming Kafka events is always implemented in the component's `consumer` package.
Any number of components may subscribe to a given Kafka topic, although it is important to understand that the component will be expected to deal with every event that arrives from a given topic.
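A minimal consumer sketch, again assuming Sarama, with an invented topic and no real event handling:

```go
// consumer package sketch - a simple Sarama partition consumer loop.
package consumer

import (
	"log"

	"github.com/Shopify/sarama"
)

// StartExampleConsumer consumes every message on the given topic; the
// component is expected to deal with each event that arrives, even ones
// it is not directly interested in.
func StartExampleConsumer(consumer sarama.Consumer, topic string) error {
	pc, err := consumer.ConsumePartition(topic, 0, sarama.OffsetNewest)
	if err != nil {
		return err
	}
	go func() {
		for msg := range pc.Messages() {
			// A real consumer would unmarshal msg.Value and update its
			// own storage; here we only log the offset.
			log.Printf("consumed event at offset %d", msg.Offset)
		}
	}()
	return nil
}
```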
## Database storage
Components often need to persist data to disk in order to be useful. This is typically done by implementing database code in the component's `storage` package.
Unless there are exceptional circumstances, non-file-based storage is always done using an SQL database. Dendrite supports two SQL backends, so a component must implement both:
- PostgreSQL - the preferred database engine, especially for medium-to-large deployments
- SQLite - used mainly for single-user deployments
Each component is expected to maintain its own database and all tables should be namespaced to avoid conflicts in the event that multiple components share the same Postgres database, for example. (This happens in Sytest.)
All storage code is kept in the component's `storage` package - the package should export an interface with pragmatic and useful functions, but without exposing any of the internal details of the storage layer. Numeric IDs used in storage, SQL transactions etc should not cross the top-level storage interface.
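A small sketch of the kind of interface a `storage` package might export (all names here are hypothetical), with the SQL details, transactions and numeric IDs kept behind it:

```go
// storage package sketch - hypothetical interface and table names, shown
// only to illustrate keeping SQL details behind the package boundary.
package storage

import "context"

// Database is the interface exported to the rest of the component.
// SQL transactions and internal numeric IDs never cross this boundary.
type Database interface {
	StoreExample(ctx context.Context, roomID, value string) error
	SelectExample(ctx context.Context, roomID string) (string, error)
}

// Tables are namespaced with the component name so that several
// components can safely share one Postgres database, e.g.:
//
//   CREATE TABLE IF NOT EXISTS examplecomponent_examples (
//       room_id TEXT NOT NULL PRIMARY KEY,
//       value   TEXT NOT NULL
//   );
```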