The existing tests for HDFS and YARN only check that the services come up and expose their web interfaces.
The new combined Hadoop test also checks whether the services and roles work together as intended:
it spins up an HDFS+YARN cluster and submits a demo YARN application that uses the HDFS cluster for
storage and the YARN cluster for compute.
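A rough sketch of the kind of check this adds, assuming nodes for the HDFS and YARN roles are defined elsewhere in the test (the unit names and the examples jar path here are assumptions, not the exact test code):
```
{
  testScript = ''
    namenode.wait_for_unit("hdfs-namenode")
    resourcemanager.wait_for_unit("yarn-resourcemanager")
    # submit a demo MapReduce job: its input and output live on HDFS,
    # while the computation itself is scheduled by YARN
    client.succeed(
        "yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10"
    )
  '';
}
```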
borg is able to process stdin during backups when backing up the special path `-`,
which can be very useful for backing up things that can be streamed (e.g. database
dumps, ZFS snapshots).
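A minimal sketch of how this could be used from the NixOS borgbackup module, assuming a `dumpCommand` option whose stdout is piped into `borg create` with `-` as the path (the job name, repo path, and database are illustrative):
```
{ pkgs, ... }:
{
  services.borgbackup.jobs.mydb = {
    repo = "/var/backup/mydb";
    encryption.mode = "none";
    startAt = "daily";
    # stdout of this script is streamed into `borg create ... -`
    dumpCommand = pkgs.writeScript "dump-mydb" ''
      #!${pkgs.runtimeShell}
      ${pkgs.mariadb}/bin/mysqldump mydb
    '';
  };
}
```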
mosquitto needs a lot of attention concerning its config because it doesn't
parse it very well: it often ignores trailing parts of lines, accepts duplicated
config keys, or associates config keys with previously defined items much
further back in the file than might be expected.
this replaces the mosquitto module completely. we now have a hierarchical config
that flattens out to the mosquitto format (hopefully) without introducing spooky
action at a distance.
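As a sketch of the hierarchical style (the option names like `listeners` and `users` are illustrative of the structure, not authoritative):
```
{
  services.mosquitto = {
    enable = true;
    listeners = [
      {
        port = 1883;
        # users and their ACLs are scoped to this listener, so nothing
        # "looks back" at items defined earlier in the flattened file
        users.monitoring = {
          acl = [ "read #" ];
          passwordFile = "/run/secrets/monitoring-password";
        };
      }
    ];
  };
}
```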
/etc/crypttab can contain the _netdev option, which adds crypto devices
to the remote-cryptsetup.target.
remote-cryptsetup.target has a dependency on cryptsetup-pre.target. So
let's add both of them.
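For example, a crypttab entry for a volume that should only be set up once the network is available might look like this (a sketch; the device name and UUID are placeholders):
```
{
  environment.etc.crypttab.text = ''
    # <name>  <device>                                   <key>  <options>
    data      UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  none   luks,_netdev
  '';
}
```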
Currently, one needs to manually ssh in and invoke `systemctl start
systemd-cryptsetup@<name>.service` to unlock volumes.
After this change, systemd will properly add such volumes to the target, and
assuming remote-cryptsetup.target is pulled in somewhere, you can simply
supply the passphrase by invoking `systemd-tty-ask-password-agent` after
ssh-ing in, without having to start these services manually.
Whether remote-cryptsetup.target should be added to multi-user.target
(as it is on other distros) is part of another discussion - right now
the following snippet will do:
```
systemd.targets.multi-user.wants = [ "remote-cryptsetup.target" ];
```
Makes the service more customizable and makes debugging easier through
the use of flags like `--log-debug` or `--dump-settings`.
Signed-off-by: Jakub Sokołowski <jakub@status.im>
The current implementation just forks off a thread to read
QEMU's stdout and leaves it running forever. This, however,
makes interpreter shutdown racy, as the thread could
still be running and writing out buffered stdout when the
main thread exits (and since it's using the low-level API,
the worker thread does not get cleaned up by the atexit hooks
installed by `threading`, either). So, instead of doing that,
let's create a real `threading.Thread` object, and also
explicitly `join` it along with the rest of the cleanup.
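A minimal sketch of the pattern (not the actual test-driver code; the process and function names are illustrative):
```
import subprocess
import threading

# stand-in for the QEMU child process the driver manages
qemu = subprocess.Popen(
    ["qemu-system-x86_64", "-nographic"],
    stdout=subprocess.PIPE,
    text=True,
)

def process_serial_output() -> None:
    # drain stdout until EOF so buffered output is not lost
    assert qemu.stdout is not None
    for line in qemu.stdout:
        print(line, end="")

monitor = threading.Thread(target=process_serial_output)
monitor.start()

# ... later, during cleanup:
qemu.terminate()
qemu.wait()
monitor.join()  # wait for the reader to flush everything before exiting
```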