This reverts commit 57961d2b83, reversing
changes made to b04f913afc.
(I.e. this reverts PR #141192.)
While well-intended, this change does unfortunately introduce very
serious regressions that are especially disruptive/noticeable on desktop
systems (e.g. users of Sway will lose their graphical session when
running "nixos-rebuild switch").
Therefore, this change has to be reverted ASAP instead of trying to fix
it in "production".
Note: An updated version should be extensively discussed, reviewed, and
tested before re-landing this change, as an earlier version also had to
be reverted for the exact same issues [0].
Fix: #146727
[0]: https://github.com/NixOS/nixpkgs/pull/73871#issuecomment-559783752
See the added comment in all-packages.nix for a more detailed
explanation. This makes the top-level GHC different from
haskellPackages.ghc (which is build->host and used for building the
package set), but more consistent with gcc, gnat etc.
Specifically, pkgsCross.${platform}.buildPackages.ghc will now be a
cross-compiler instead of a native build->build compiler.
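As a rough illustration (the platform is arbitrary; this is a sketch of the
new meaning, not code from the change itself):

```nix
# Sketch: how the two GHC attributes now differ when cross-compiling.
let
  pkgs = import <nixpkgs> { };
  cross = pkgs.pkgsCross.aarch64-multiplatform;
in {
  # now a cross-compiler, consistent with gcc, gnat etc.
  topLevelGhc = cross.buildPackages.ghc;
  # still the build->host compiler used for building the package set
  packageSetGhc = cross.haskellPackages.ghc;
}
```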
Since this change has a slight chance of being disruptive, add a note to
the changelog.
Update the default GNAT version from 9 to 11, as GNAT >= 11 is required
to compile the 22.* AdaCore libraries.
To allow this, we need to pick a patch from ghdl's master fixing a
compilation problem with GNAT 11.
The option `services.prometheus.environmentFile` has been removed since it was causing [issues](https://github.com/NixOS/nixpkgs/issues/126083) and Prometheus now has native support for secret files.
pam_mkhomedir should create homedirs with the same umask as the rest
of the system. Currently it creates homedirs with go+rx, which makes
them readable by other non-privileged users.
Use service internal bind mounts instead of global ones.
This also moves the logs to /var/log/unifi on the host
and the run directory to /run/unifi.
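As a hedged sketch (the paths mirror the ones above, but the exact unit
settings are an assumption), service-internal bind mounts look roughly like
this:

```nix
{
  # BindPaths= maps host paths into the service's mount namespace,
  # instead of declaring global bind mounts via fileSystems.
  systemd.services.unifi.serviceConfig.BindPaths = [
    "/var/log/unifi:/var/lib/unifi/logs"  # logs end up on the host
    "/run/unifi:/var/lib/unifi/run"       # run directory on the host
  ];
}
```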
Closes #61424
The new option `services.prometheus.enableReload` has been introduced
which, when enabled, causes the prometheus systemd service to reload
when its config file changes.
More specifically the following property holds: switching to a
configuration (`switch-to-configuration`) that changes the prometheus
configuration only finishes successfully when prometheus has finished
loading the new configuration.
`enableReload` is `false` by default in which case the old semantics
of restarting the prometheus systemd service are in effect.
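Opting in looks like this (minimal sketch):

```nix
{
  services.prometheus = {
    enable = true;
    # reload the running prometheus (and wait for the new config to be
    # loaded) instead of restarting the unit on config changes
    enableReload = true;
  };
}
```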
the `nixos-rebuild` command has multiple subcommands, and not all of
them would fix the problem of a large `/boot` partition, so let’s be
more precise here.
* FluidSynth 1.1.11 was kept around as a dependency of some packages
that hadn't yet adjusted to API breakages. All of these packages now
use FluidSynth 2.x, so fluidsynth_1 can be removed. It has been broken
ever since glib was updated to 2.70 and was affected by an unpatched
CVE.
* Refactor expression a bit, use pname instead of name.
* Add changelog entry in case someone was using this downstream
(accidentally?).
Fixes #141508.
Fixes #124624.
Version 20 of Nextcloud will be EOLed by the end of this month [1].
Since the recommended default (that didn't raise an eval-warning) on
21.05 was Nextcloud 21, this shouldn't affect too many people.
In order to ensure that nobody does a (not working) upgrade across
several major-versions of Nextcloud, I replaced the derivation of
`nextcloud20` with a `throw` that provides instructions on how to proceed.
The only case that I consider "risky" is a setup upgraded from 21.05 (or
older) with a `system.stateVersion` <21.11 and with
`services.nextcloud.package` not explicitly declared in its config. To
avoid that, I also left the `else-if` for `stateVersion < 21.03` which
now sets `services.nextcloud.package` to `pkgs.nextcloud20` and thus
leads to an eval-error. This condition can be removed
as soon as 21.05 is EOL because then it's safe to assume that only
21.11 is used as the stable release where no Nextcloud <=20 exists that can
lead to such an issue.
It can't be removed earlier because then every `system.stateVersion <
21.11` would lead to `nextcloud21` which is a problem if `nextcloud19`
is still used.
[1] https://docs.nextcloud.com/server/20/admin_manual/release_schedule.html
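A simplified sketch of the version-selection logic described above (not the
exact module code; option names follow the existing module):

```nix
{ config, lib, pkgs, ... }: {
  # stateVersions older than 21.03 still resolve to nextcloud20, which is
  # now a `throw` carrying instructions on how to upgrade step by step.
  services.nextcloud.package = lib.mkDefault (
    if lib.versionOlder config.system.stateVersion "21.03"
    then pkgs.nextcloud20
    else if lib.versionOlder config.system.stateVersion "21.11"
    then pkgs.nextcloud21
    else pkgs.nextcloud22
  );
}
```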
mosquitto needs a lot of attention concerning its config because it doesn't
parse it very well, often ignoring trailing parts of lines or duplicated
config keys, or looking back much further in the file than might be expected
to associate config keys with previously defined items.
this replaces the mosquitto module completely. we now have a hierarchical config
that flattens out to the mosquitto format (hopefully) without introducing spooky
action at a distance.
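A minimal sketch of the new hierarchical shape (listener/user/file names made
up; attribute names may differ in detail):

```nix
{
  services.mosquitto = {
    enable = true;
    # each listener carries its own users and settings instead of one
    # flat global config
    listeners = [{
      port = 1883;
      users.monitor.passwordFile = "/run/secrets/mosquitto-monitor";
    }];
  };
}
```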
This commit changes a lot more than you'd expect, but it also adds a lot
of new testing code so nothing breaks in the future. The main change is
that sockets are now restarted when they change. The main reason for
the large amount of changes is the ability of activation scripts to
restart/reload units. This also works for socket-activated units now,
and honors reloadIfChanged and restartIfChanged. The two changes don't
really work without each other, so they are done in one large commit.
The test should show what works now and ensure it will continue to do so
in the future.
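A hedged illustration of the semantics (unit names made up): a socket-activated
service can now be reloaded or restarted by the activation script, and its
socket is restarted when the socket definition changes.

```nix
{
  # hypothetical units, for illustration only
  systemd.services.my-daemon = {
    # honored by switch-to-configuration: reload instead of restart
    reloadIfChanged = true;
    serviceConfig.ExecReload = "/run/current-system/sw/bin/kill -HUP $MAINPID";
  };
  systemd.sockets.my-daemon = {
    wantedBy = [ "sockets.target" ];
    # changing this now restarts the socket unit on switch
    listenStreams = [ "/run/my-daemon.sock" ];
  };
}
```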
allows configuration of foo-over-udp decapsulation endpoints. sadly networkd
seems to lack the features necessary to support local and peer address
configuration, so those are only supported when using scripted configuration.
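A sketch of what a decapsulation endpoint might look like, assuming the option
is shaped roughly like `networking.fooOverUDP` (attribute names here are
assumptions; local/peer addresses would additionally require scripted
networking):

```nix
{
  # assumption: a plain FOU decapsulation endpoint on UDP port 5555 that
  # unwraps IPv6-in-UDP (protocol 41); names may differ from the module
  networking.fooOverUDP.fou1 = {
    port = 5555;
    protocol = 41;
  };
}
```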
The multipath-tools package had existed in Nixpkgs for some time but
without a NixOS module to configure/drive it. This module provides
attributes to drive the majority of multipath configuration options
and is being successfully used in stage-1 and stage-2 boot to mount
/nix from a multipath-serviced iSCSI volume.
Credit goes to @grahamc for early contributions to the module and
authoring the NixOS module test.
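Enabling it is a one-liner; this sketch assumes the module lives under
`services.multipath`, with further attributes mapping onto sections of
multipath.conf:

```nix
{
  # assumption about the option path; additional attributes described
  # above map onto multipath.conf sections (defaults, devices, ...)
  services.multipath.enable = true;
}
```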
NixOS should be able to support the Nintendo Switch Pro Controller for
Steam and non-Steam games at the same time. Currently there are two mutually
exclusive ways to support the Pro Controller: Steam and `hid-nintendo`.
Unfortunately these don't work together, but there's a workaround in
newer versions of `joycond` (described [here](https://wiki.archlinux.org/title/Gamepad#Using_hid-nintendo_pro_controller_with_Steam_Games_(with_joycond))). To use this
workaround `hid-nintendo` and `joycond` need to be updated, and the
systemd and udev configuration needs to be made available in NixOS.
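A hedged sketch of the NixOS side of the workaround (the exact attribute names
for the out-of-tree module and the joycond service are assumptions):

```nix
{ config, ... }: {
  # assumption: hid-nintendo is packaged as an out-of-tree kernel module
  boot.extraModulePackages = [ config.boot.kernelPackages.hid-nintendo ];
  # assumption: joycond is exposed as a small NixOS service
  services.joycond.enable = true;
}
```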
In opencv 2.x, unfree libraries are built by default. The package
should therefore have been marked as unfree, but wasn't.
I've disabled the non-free libraries by default, and added an option
to enable them. There are three programs in Nixpkgs that depend on
opencv2: mathematica, pfstools, and p2pvc. pfstools requires the
non-free libraries if it's built with opencv support, so I've disabled
opencv by default there and added an option to enable it. p2pvc links
fine, so it presumably doesn't need the non-free libraries. I can't test
mathematica, so I'm just going to leave it alone.
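Downstream users who relied on the non-free bits can opt back in with an
override; the parameter name below is an assumption:

```nix
# Sketch: flip the (assumed) flag via .override; the result is unfree,
# so nixpkgs must be evaluated with allowUnfree.
let pkgs = import <nixpkgs> { config.allowUnfree = true; };
in pkgs.opencv2.override { enableUnfree = true; }
```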
This updates systemd to version v249.4 from version v247.6.
Besides the many new features that can be found in the upstream
repository, upstream also did a bunch of cleanup which ended up
requiring a few more patches on our side.
a) 0022-core-Handle-lookup-paths-being-symlinks.patch:
The way symlinked units were handled was changed in such a way that
the last name of a unit file within one of the unit directories
(/run/systemd/system, /etc/systemd/system, ...) is used as the name
for the unit. Unfortunately that code didn't take into account that
the unit directories themselves could already be symlinks and thus
caused all our units to be recognized slightly differently.
There is an upstream PR for this new patch:
https://github.com/systemd/systemd/pull/20479
b) The way the APIVFS is set up has been changed in such a way that we
now always have /run. This required a few changes to the
confinement tests, which asserted that /run didn't exist (see the
sketch after this item). Instead of
adding another patch we can just adopt the upstream behavior. An
empty /run doesn't seem harmful.
As part of this work I refactored the confinement test just a little
bit to allow better debugging of test failures. Previously it would
just fail at some point and it wasn't obvious which of the many
commands failed or what the unexpected string was. This should now be
more obvious.
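For reference, the confinement machinery exercised by these tests is the
per-service NixOS option sketched below (service name made up); with systemd
249 an empty /run is now always present inside such a namespace:

```nix
{ pkgs, ... }: {
  # hypothetical service running inside a minimal mount namespace that
  # now also contains an (empty) /run
  systemd.services.my-confined-service = {
    confinement.enable = true;
    serviceConfig.ExecStart = "${pkgs.hello}/bin/hello";
  };
}
```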
c) Again related to the confinement tests, the way a file was tested for
being accessible was optimized. Previously systemd would in some
situations open a file twice during that check. This was reduced to
one operation but required the procfs to be mounted in a unit's
namespace.
An upstream bug was filed and fixed. We are now carrying the
essential patch to fix that issue until it is backported to a new
release (likely only version 250). The good part about this story is
that upstream systemd now has a test case that looks very similar to
one of our confinement tests. Hopefully that will lead to less
friction in the long run.
https://github.com/systemd/systemd/issues/20514
https://github.com/systemd/systemd/pull/20515
d) Previously we could grep for dlopen( somewhat reliably, but now
upstream has started using a wrapper around dlopen that is most of
the time used with line breaks. This makes using grep no longer
ergonomic.
With this bump we are grepping for anything that looks like a
dynamic library name (in contrast to a dlopen(3) call) and replacing
those instead. That seems more robust; time will tell if it holds.
I tried using coccinelle to patch all those call sites using its
tooling, but unfortunately it stumbles over the _cleanup_
annotations that are very common in the systemd code.
e) We now have some machinery for libbpf support in our systemd build.
That being said, it doesn't actually work yet, as generating some of
the skeletons fails with the error message below. It is therefore
disabled by default (in both the minimal and the regular build).
> FAILED: src/core/bpf/socket_bind/socket-bind.skel.h
> /build/source/tools/build-bpf-skel.py --clang_exec /nix/store/x1bi2mkapk1m0zq2g02nr018qyjkdn7a-clang-wrapper-12.0.1/bin/clang --llvm_strip_exec /nix/store/zm0kqan9qc77x219yihmmisi9g3sg8ns-llvm-12.0.1/bin/llvm-strip --bpftool_exec /nix/store/l6dg8jlbh8qnqa58mshh3d8r6999dk0p-bpftools-5.13.11/bin/bpftool --arch x86_64 ../src/core/bpf/socket_bind/socket-bind.bpf.c src/core/bpf/socket_bind/socket-bind.skel.h
> libbpf: elf: socket_bind_bpf is not a valid eBPF object file
> Error: failed to open BPF object file: BPF object format invalid
> Traceback (most recent call last):
> File "/build/source/tools/build-bpf-skel.py", line 128, in <module>
> bpf_build(args)
> File "/build/source/tools/build-bpf-skel.py", line 92, in bpf_build
> gen_bpf_skeleton(bpftool_exec=args.bpftool_exec,
> File "/build/source/tools/build-bpf-skel.py", line 63, in gen_bpf_skeleton
> skel = subprocess.check_output(bpftool_args, universal_newlines=True)
> File "/nix/store/81lwy2hfqj4c1943b1x8a0qsivjhdhw9-python3-3.9.6/lib/python3.9/subprocess.py", line 424, in check_output
> return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
> File "/nix/store/81lwy2hfqj4c1943b1x8a0qsivjhdhw9-python3-3.9.6/lib/python3.9/subprocess.py", line 528, in run
> raise CalledProcessError(retcode, process.args,
> subprocess.CalledProcessError: Command '['/nix/store/l6dg8jlbh8qnqa58mshh3d8r6999dk0p-bpftools-5.13.11/bin/bpftool', 'g', 's', '../src/core/bpf/socket_bind/socket-bind.bpf.o']' returned non-zero exit status 255.
> [102/1457] Compiling C object src/journal/libjournal-core.a.p/journald-server.c.o
> ninja: build stopped: subcommand failed.
f) We now have support for TPM2-based disk encryption in our
systemd build. The actual bits and pieces to make use of that are
still missing, but there are various ongoing efforts in that direction.
There is also the story about systemd in our initrd to enable this
being used for root volumes. None of this will yet work out of the
box but we can start improving on that front.
g) FIDO2 support was added to systemd and consequently we can now use
it. Just as with TPM2, there hasn't been any integration work with
NixOS yet; this just adds the capability so that work can happen.
Co-Authored-By: Jörg Thalheim <joerg@thalheim.io>