When building the docker images for complement testing, copy a preinstalled
complement over from a base image, rather than apt installing it. This avoids
network traffic and is much faster.
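In Dockerfile terms, the pattern is a multi-stage `COPY --from` rather than a package install at build time; a rough sketch (the image names here are illustrative, not the real ones):
```dockerfile
# Illustrative only: take the prebuilt complement tree from an image that
# already has it, instead of downloading/installing it during the build.
FROM complement-base AS complement

FROM matrixdotorg/synapse-workers
COPY --from=complement /complement /complement
```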
When we join a room via the faster-joins mechanism, we end up with "partial
state" at some points on the event DAG. Many parts of the codebase need to
wait for the full state to load. So, we implement a mechanism to keep track of
which events have partial state, and wait for them to be fully-populated.
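A minimal sketch of the shape of that mechanism (illustrative only, not Synapse's actual implementation), using asyncio to let callers block until an event's full state has been persisted:
```python
import asyncio
from typing import Dict


class PartialStateTracker:
    """Illustrative: track which events currently have partial state, and wake
    up anyone waiting on them once the full state has been loaded."""

    def __init__(self) -> None:
        # event_id -> flag that is set once the event's full state is available
        self._partial: Dict[str, asyncio.Event] = {}

    def mark_partial(self, event_id: str) -> None:
        self._partial.setdefault(event_id, asyncio.Event())

    def mark_full(self, event_id: str) -> None:
        # Full state has been persisted: wake any waiters and forget the event.
        flag = self._partial.pop(event_id, None)
        if flag is not None:
            flag.set()

    async def await_full_state(self, event_id: str) -> None:
        # Returns immediately if the event was never marked as partial.
        flag = self._partial.get(event_id)
        if flag is not None:
            await flag.wait()
```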
When we run a worker-mode synapse under docker, everything gets logged to stdout. Currently, output from the workers is tagged with a worker name, for example:
```
2022-04-13 15:27:56,810 - worker:frontend_proxy1 - synapse.util.caches.lrucache - 154 - INFO - LruCache._expire_old_entries-0 - Dropped 0 items from caches
```
- note `worker:frontend_proxy1`. No such tag is applied to log lines from the master, which makes for somewhat confusing reading.
To fix this, we generate a dedicated log config file for the master in the same way that we do for the workers, and use that.
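A rough sketch of the idea (illustrative only; the real script renders a shared template): each process gets its own log config with its own tag, so the master's lines are labelled too:
```python
# Illustrative sketch: one log config per process, rendered from a shared
# template, so the main process is tagged in the same way as the workers.
LOG_CONFIG_TEMPLATE = """\
formatters:
  precise:
    format: '%(asctime)s - {tag} - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
"""

def write_log_config(tag: str, path: str) -> None:
    with open(path, "w") as f:
        f.write(LOG_CONFIG_TEMPLATE.format(tag=tag))

# e.g. write_log_config("master", "/conf/master.log.config")
#      write_log_config("worker:frontend_proxy1", "/conf/frontend_proxy1.log.config")
```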
The requirements file generated by `poetry export` isn't correctly processed by `pip install -r requirements.txt`. It contains twisted and treq, both pinned to 22.2.0.
When `pip` installs treq, it notices that `Twisted[tls]` is required. It then tries to acquire the latest Twisted release, only to fail (because that release's hash isn't listed in the requirements file). From e.g. https://github.com/matrix-org/synapse/runs/5977154990?check_suite_focus=true:
> ```
> #15 9.204 Collecting Twisted[tls]>=18.7.0
> #15 9.205 ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==. These do not:
> #15 9.205 Twisted[tls]>=18.7.0 from 38622ff95b/Twisted-22.4.0-py3-none-any.whl (sha256)=f9f7a91f94932477a9fc3b169d57f54f96c6e74a23d78d9ce54039a7f48928a2 (from treq==22.2.0->-r /synapse/requirements.txt (line 724))
> #15 ERROR: executor failed running [/bin/sh -c pip install --prefix="/install" --no-warn-script-location -r /synapse/requirements.txt]: exit code: 1
> ```
The underlying pip issue is https://github.com/pypa/pip/issues/9644. A comment there notes that one can avoid this behaviour by `pip install`ing with the `--no-deps` flag. Let us do so.
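Concretely, the install step in the Dockerfile becomes something like (the same command as in the error above, plus the new flag):
```
RUN pip install --prefix="/install" --no-deps --no-warn-script-location \
    -r /synapse/requirements.txt
```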
(At first glance, the problem looks like https://github.com/python-poetry/poetry/issues/5311, but that was a bug in `poetry install`; this is `poetry export`, whose behaviour is fine AFAICS).
Fixes matrix-org/complement#330 (or it will, once we remove the old files).
It's not quite a lift-and-shift: I've also taken the opportunity to get rid of the custom CA that we used to use to sign the TLS certs, which has been superseded by the CA exposed by Complement.
* Two scripts are basically entry_points already
* Move and rename scripts/* to synapse/_scripts/*.py
* Delete sync_room_to_group.pl
* Expose entry points in setup.py
* Update linter script and config
* Fixup scripts & docs mentioning scripts that moved
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
The driver for this is to stop Complement complaining about it, but as far as I can tell it was pointless and needed to go away anyway.
I'm a bit unclear about what exactly `VOLUME` does, but I think it means that, if you don't override it with an explicit `-v` argument, `docker run` will create a temporary volume and copy things into it. The temporary volume is then deleted when the container finishes.
That only sounds useful if your image has something to copy into it (otherwise you may as well just use the default root filesystem), and our image notably doesn't copy anything into /data.
So... this wasn't doing anything, except annoying Complement?
* remove reference in comments to python3.6
* upgrade tox python env in script
* bump python version in example for completeness
* upgrade python version requirement in setup doc
* upgrade necessary python version in __init__.py
* upgrade python version in setup.py
* newsfragment
* drops refs to bionic and replace with focal
* bump refs to postgres 9.6 to 10
* fix hanging ci
* try installing tzdata first
* revert change made in b979f336
* ignore new random mypy error while debugging other error
* fix lint error for temporary workaround
* revert change to install list
* try passing env var
* export debian frontend var?
* move line and add comment
* bump pillow dependency
* bump lxml dependency
* install libjpeg-dev for pillow
* bump automat version to one compatible with py3.8
* add libwebp for pillow
* bump twisted trunk python version
* change suffix of newsfragment
* remove redundant python 3.7 checks
* lint
* remove background update code related to deprecated config flag
* changelog entry
* update changelog
* Delete 11394.removal
Duplicate, wrong number
* add no-op background update and change newsfragment so it will be consolidated with associated work
* remove unused code
* Remove code associated with deprecated flag from legacy docker dynamic config file
Co-authored-by: reivilibre <oliverw@matrix.org>
* Docker image: avoid changing user during `generate`
The intention was always that the config files get written as the initial user
(normally root) - only the data directory needs to be writable by Synapse. This
got changed in https://github.com/matrix-org/synapse/pull/5970, but that seems
to have been a mistake.
* Avoid changing user if no explicit UID is given
* changelog
- Use sytest:bionic. Sytest:latest is two years old (do we want
CI to push out latest at all?) and comes with Python 3.5, which we
explicitly no longer support. The script now runs under PostgreSQL 10
as a result.
- Advertise script in the docs
- Move pg testing script to scripts-dev directory
- Write to host as the script's executor, not root
A few changes to make it speedier to re-run the tests:
- Create blank DB in the container, not the script, so we don't have to
`initdb` each time
- Use a named volume to persist the tox environment, so we don't have to
fetch and install a bunch of packages from PyPI each time
Co-authored-by: reivilibre <olivier@librepush.net>
* Add a healthcheck start-up delay of 5 seconds and reduce the check interval to 15 seconds, to reduce the waiting time for Docker-aware edge routers bringing an instance online.
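For reference, the resulting `HEALTHCHECK` directive looks something like this (the probe command shown is illustrative):
```dockerfile
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
    CMD curl -fSs http://localhost:8008/health || exit 1
```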
This PR adds a Dockerfile and some supporting files to the `docker/` directory. The Dockerfile's intention is to spin up a container with:
* A Synapse main process.
* Any desired worker processes, defined by a `SYNAPSE_WORKERS` environment variable supplied at runtime.
* A Redis instance for worker communication.
* An nginx instance for routing traffic.
* A supervisord instance to start all worker processes and restart them if any go down.
Note that **this is not currently intended to be used in production**. If you'd like to use Synapse workers with Docker, instead make use of the official image, with one worker per container. The purpose of this Dockerfile is currently to allow testing Synapse in worker mode with the [Complement](https://github.com/matrix-org/complement/) test suite.
`configure_workers_and_start.py` is where most of the magic happens in this PR. It reads from environment variables (documented in the file) and creates all necessary config files for the processes. It is the entrypoint of the Dockerfile, and thus is run any time the docker container is spun up, recreating all config files in case you want to use a different set of workers. You can specify which workers you'd like to use by setting the `SYNAPSE_WORKERS` environment variable (as a comma-separated list of arbitrary worker names) or by setting it to `*` for all worker processes. We will be using the latter in CI.
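For example (the worker names here are purely illustrative), the image can be started with:
```
docker run -e SYNAPSE_WORKERS=federation_sender,synchrotron matrixdotorg/synapse:workers
```
or with `-e SYNAPSE_WORKERS='*'` for all worker processes.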
Huge thanks to @MatMaul for helping get this all working 🎉 This PR is paired with its equivalent on the Complement side: https://github.com/matrix-org/complement/pull/62.
Note, for the purpose of testing this PR before it's merged: You'll need to (re)build the base Synapse docker image for everything to work (`matrixdotorg/synapse:latest`). Then build the worker-based docker image on top (`matrixdotorg/synapse:workers`).
Context is in https://github.com/matrix-org/synapse/issues/9764#issuecomment-818615894.
I struggled to find a more official link for this. The problem occurs when using WSL1 instead of WSL2, which some Windows platforms (at least Server 2019) still don't have. Docker have updated their documentation to paint a much happier picture now given WSL2's support.
The last sentence here can probably be removed once WSL1 is no longer around... though that will likely not be for a very long time.
They don't make any sense on the intermediate builder image. The final image needs them to be of use to anyone.
Signed-off-by: Johannes Wienke <languitar@semipol.de>
`room_invite_state_types` was inconvenient as a configuration setting, because
anyone who ever set it would not receive any new types that were added to the
defaults. Here, we deprecate the old setting, and replace it with a couple of
new settings under `room_prejoin_state`.
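For reference, the replacement looks something like this (the additional event type is just an example):
```yaml
room_prejoin_state:
  disable_default_event_types: false
  additional_event_types:
    - "org.example.custom.event.type"
```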
Make pip install faster in Docker build for [Complement](https://github.com/matrix-org/complement) testing.
If the files referenced by a `COPY` command have changed, Docker invalidates that layer and all of the layers after it. So I changed the order of operations to install all dependencies before we `COPY synapse /synapse/synapse/`. This allows Docker to use our cached layer of dependencies even when we change the Synapse source, and speeds up builds dramatically: `53.5s` -> `3.7s` builds 🤘
As an alternative, I did try using BuildKit caches, but that still took 30 seconds overall on that step: 15 seconds to gather the dependencies from the cache and another 15 seconds for `Installing collected packages`.
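The reordering looks roughly like this (file names are indicative, not the exact Dockerfile contents):
```dockerfile
# Copy only the dependency specification and install dependencies first.
# This layer stays cached as long as the dependency list is unchanged.
COPY requirements.txt /synapse/requirements.txt
RUN pip install --prefix="/install" -r /synapse/requirements.txt

# Only now copy the Synapse source. Editing it invalidates this layer and the
# ones after it, but not the dependency layer above.
COPY synapse /synapse/synapse/
```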
Fix https://github.com/matrix-org/synapse/issues/9364
Adds note about updating dh-virtualenv once we drop support for Xenial.
We can't update now, because it needs debhelper 12, while Xenial only
backports 10.
Signed-off-by: Dan Callahan <danc@element.io>
Debian package builds were failing for two reasons:
1. Python versions prior to 3.7 throw exceptions when attempting to print
Unicode characters under a "C" locale. (#9076)
2. We depended on `dh-systemd` which no longer exists in Debian Bullseye, but
is necessary in Ubuntu Xenial. (#9073)
Setting `LANG="C.UTF-8"` in the build environment fixes the first issue.
See also: https://bugs.python.org/issue19846
The second issue is a bit trickier. The dh-systemd package was merged into
debhelper version 9.20160709 and a transitional package left in its wake.
The transitional dh-systemd package was removed in Debian Bullseye.
However, Ubuntu Xenial ships an older debhelper, and still needs dh-systemd.
Thus, builds were failing on Bullseye since we depended on a package which had
ceased existing, but we couldn't remove it from the debian/control file and our
build scripts because we still needed it for Ubuntu Xenial.
We can fix the debian/control issue by listing dh-systemd as an alternative to
the newer versions of debhelper. Since dh-systemd declares that it depends on
debhelper, Ubuntu Xenial will select its older dh-systemd which will in turn
pull in its older debhelper, resulting in no change from the status quo. All
other supported releases will satisfy the debhelper dependency constraint and
skip the dh-systemd alternative.
Build scripts were fixed by unconditionally attempting to install dh-systemd on
all releases and suppressing failures.
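Illustratively, that means a control entry and build-script step along these lines (exact formatting is indicative):
```
# debian/control: accept either a new-enough debhelper or the dh-systemd
# transitional package (which pulls in an older debhelper on Xenial)
Build-Depends: debhelper (>= 9.20160709) | dh-systemd, ...

# build scripts: best-effort install, ignoring failure on releases where the
# transitional package no longer exists
apt-get install -y dh-systemd || true
```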
Once we drop support for Ubuntu Xenial, we can revert most of this commit and
rely on the version constraint on debhelper in debian/control.
Fixes #9076
Fixes #9073
Signed-off-by: Dan Callahan <danc@element.io>
This removes the version pin of the `prometheus_client` dependency, in direct response to #8831. If merged, this will close #8831.
As far as I can tell, no other changes are needed, but as I'm no synapse expert, I'm relying heavily on CI and maintainer reviews for this. My very primitive test of synapse with prometheus_client v0.9.0 on my home server didn't bring up any issues, so we'll see what happens.
Signed-off-by: Jordan Bancino
We do this to prevent foot guns. The default config uses a MemoryHandler,
but users are free to change to logging to files directly. If they do,
then they have to remember to set `filters: [context]` on the right
handler, otherwise records get written with the wrong context.
Instead we move the logic to happen when we generate a record, which is
when we *log* rather than *handle*.
(It's possible to add filters to loggers in the config, however they
don't apply to descendant loggers and so they have to be manually set on
*every* logger used in the code base)
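A sketch of the general technique (not necessarily Synapse's exact code): install a log record factory, so the context is attached when the record is created, regardless of which handlers or filters the user has configured:
```python
import logging


def get_current_context() -> str:
    """Stand-in for however the application tracks its per-request context."""
    return "main"


# Attach the context when the record is *generated* (i.e. at log time), rather
# than relying on a filter being attached to the right handler.
_old_factory = logging.getLogRecordFactory()


def _record_factory(*args, **kwargs):
    record = _old_factory(*args, **kwargs)
    record.request = get_current_context()
    return record


logging.setLogRecordFactory(_record_factory)
```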
As mentioned in #7397, switching to a Debian base should help with multi-arch work to save time on compiling. This is unashamedly based on #6373, but without the extra functionality. Switch the python version back to generic 3.7 to always pull the latest. Essentially, keeping this as small as possible. Unfortunately, the image is bigger.