Summary of this change:
- Simplify code.
- Stop a disk image from being cached in the binary cache.
- Make the erofs Nix store image build in an acceptable time outside of
testing environments (like `darwin.builder`).
- Do not regress on performance for tests that use many store paths in
their Nix store image.
- Slightly longer startup time for tests where not many store paths are
included in the image (these probably shouldn't use `useNixStoreImage`
anyway).
- Slightly longer startup time when inputs of VM do not change because
the Nix store image is not cached anymore.
Remove the `storeImage` built with make-disk-image.nix. This produced a
separate derivation which was then cached in the binary cache. These
types of images should be avoided because they gunk up the cache as they
change frequently. Now all Nix store images, whether read-only or
writable, are based on the erofs image that was previously only used for
read-only images.
Additionally, simplify the way the erofs image is built by copying the
paths to include into a separate directory and building the erofs image
from there, as sketched below.
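A rough sketch of the idea, assuming a `storePaths` list and the
erofs-utils package (illustrative only, not the actual module code):

  { pkgs, storePaths }:

  pkgs.runCommand "nix-store-image.erofs"
    { nativeBuildInputs = [ pkgs.erofs-utils ]; }
    ''
      # Copy only the wanted store paths into a scratch directory...
      mkdir -p root/nix/store
      for p in ${toString storePaths}; do
        cp -a "$p" root/nix/store/
      done
      # ...then pack exactly that directory; no exclude regex needed.
      mkfs.erofs $out root
    ''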
Before this change, the list of Nix store paths to include in the Nix
store image was converted to a complex regex that *excludes* all other
paths from a potentially large Nix store.
This previous approach suffered from two issues:
1. The regex is complex and, as admitted in the source code of the
includes-to-excludes.py script, most likely contains at least one
error. This means that it's unlikely that anyone will touch this
piece of software again.
2. When the Nix store image is built from a large Nix store (like when
you build the VM script to run outside of any testing context) this
regex becomes painfully slow. There is at least one prominent
use-case where this matters: `darwin.builder`.
Benchmarking impressions:
- Building the Nix store image via make-disk-image.nix takes ~25s
- Building the Nix store as an erofs image takes ~4s
- Running nixosTests.qemu-vm-writable-store-image takes ~10s when
building the erofs image with the regex vs ~14s when building by
copying to a temporary directory.
- nixosTests.gitlab which had the biggest gains from the initial erofs
change takes the same time as before.
- On a host with ~140k paths in /nix/store, building the erofs image
with the regex takes 410s as opposed to 6s when copying to a temporary
directory.
These changes were generated with nixq 0.0.2, by running:

  nixq ">> lib.mdDoc[remove] Argument[keep]" --batchmode nixos/**.nix
  nixq ">> mdDoc[remove] Argument[keep]" --batchmode nixos/**.nix
  nixq ">> Inherit >> mdDoc[remove]" --batchmode nixos/**.nix
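For illustration, the effect on a typical option declaration looks like
this (hypothetical `services.foo` module; the mdDoc wrapper is dropped
and its argument kept):

  { lib, ... }:
  {
    options.services.foo.enable = lib.mkOption {
      type = lib.types.bool;
      default = false;
      # Previously: description = lib.mdDoc "Whether to enable foo.";
      description = "Whether to enable foo.";
    };
  }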
Two mentions of the mdDoc function remain in nixos/, both of which
are inside comments.
Since lib.mdDoc is already defined as just id, this commit is a no-op as
far as Nix (and the built manual) is concerned.
- use normal VM nodes for target, with some extra trickery
- rename preBootCommands to postBootCommands to match its actual intent
- rename VMs to installer and target, so they're not all called machine
- set platforms on non-UEFI tests properly
- add missing packages for systemd-boot test
- fix initrd secrets leaking into the store and having wrong paths
The qemu module shouldn't implicitly (and for all architectures) enable
SMM when enabling Secure Boot.
Additionally, this breaks aarch64 Secure Boot tests because this module
doesn't use the right machine type for anything but x86.
Changed the default security model for shared directories from 'none' to
'mapped-xattr'. QEMU recommends this option for its security and
reliability benefits.
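As a hedged usage sketch (assuming the per-share `securityModel` option
exposed by the qemu-vm module), a share can still opt out of the new
default explicitly:

  {
    virtualisation.sharedDirectories.example = {
      source = "/var/lib/example";
      target = "/mnt/example";
      # Opt this one share back out of the new default if needed.
      securityModel = "none";
    };
  }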
This change replaces the previously hard-coded `/boot` path with a
reference to `efiSysMountPoint` and, more importantly, makes it possible
to override these rules in scenarios where they are not desired.
One such scenario would be when `systemd-gpt-auto-generator(8)` is used
to automount the ESP. Consider this section from the mentioned manpage:
> The ESP is mounted to /boot/ if that directory exists and is not used
> for XBOOTLDR, and otherwise to /efi/. Same as for /boot/, an automount
> unit is used. The mount point will be created if necessary.
Prior to this change, the ESP would be automounted under `/efi` on first
boot, but the previous tmpfiles rules then caused `/boot` to be created.
Following the quote above, this meant that the ESP was mounted under
`/boot` on each subsequent boot.
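An illustrative configuration for that scenario (a sketch assuming the
standard `efiSysMountPoint` option; details depend on the setup):

  {
    # Keep the ESP at /efi so systemd-gpt-auto-generator and the (now
    # efiSysMountPoint-based) tmpfiles rules agree on the mount point.
    boot.loader.efi.efiSysMountPoint = "/efi";
  }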
The virtualisation.directBoot.initrd option was added for netboot
images, but the assertion checking that directBoot is enabled whenever
the initrd option is used caused an infinite recursion when it was used.
Minimal reproduction:
  import nixos/tests/make-test-python.nix ({ pkgs, ... }: {
    name = "";
    nodes = {
      machine = { config, ... }: {
        imports = [ nixos/modules/installer/netboot/netboot-minimal.nix ];
        virtualisation.directBoot = {
          enable = true;
          initrd = "${config.system.build.netbootRamdisk}/${config.system.boot.loader.initrdFile}";
        };
      };
    };
    testScript = "";
  }) {}
The fix is to swap the two conditions, so that cfg.directBoot.enable
is checked first and the initrd comparison is short-circuited.
This wasn't noticed during review because, in earlier versions of the
virtualisation.directBoot patch, the assertion was accidentally placed
in the conditional above, so it wasn't evaluated unless port forwarding
was in use.
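A minimal sketch of the reordered assertion (names like `defaultInitrd`
are illustrative, not the actual module code):

  {
    # Checking the boolean first lets `||` short-circuit, so the initrd
    # comparison is not evaluated when direct boot is enabled.
    assertion = cfg.directBoot.enable || cfg.directBoot.initrd == defaultInitrd;
    message = "virtualisation.directBoot.initrd requires virtualisation.directBoot.enable.";
  }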
The idea is to run an async process waiting for swtpm. We have to
ensure that `FD_CLOEXEC` is cleared on this process' stdin file
descriptor; we use `fdflags` for this, a loadable builtin in Bash ≥ 5.
When the async process exits, it terminates `swtpm`; we bind the
termination of the async process to the termination of QEMU by having
the Bash script exec into `qemu`.
Signed-off-by: Arthur Gautier <baloo@superbaloo.net>
Co-authored-by: Raito Bezarius <masterancpp@gmail.com>
Allow the user to disable overriding the fileSystems option with
virtualisation.fileSystems by setting
`virtualisation.fileSystems = lib.mkForce { };`.
With this change, you can use the qemu-vm module to boot from an external
image that was not produced by the qemu-vm module itself. The user can
now re-use the modularly set fileSystems option instead of having to
reproduce it in virtualisation.fileSystems.
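A hypothetical usage sketch (apart from `virtualisation.fileSystems`,
only standard NixOS options are used):

  { lib, ... }:
  {
    # Drop the qemu-vm defaults and reuse the ordinary fileSystems definitions.
    virtualisation.fileSystems = lib.mkForce { };
    fileSystems."/" = {
      device = "/dev/disk/by-label/nixos";
      fsType = "ext4";
    };
  }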
This change removes the bespoke logic around identifying block devices.
Instead of trying to find the right device by iterating over
`qemu.drives` and guessing the right partition number (e.g.
/dev/vda{1,2}), devices are now identified by persistent names provided
by udev in /dev/disk/by-*.
Before this change, the root device was formatted on demand in the
initrd. However, this makes it impossible to use filesystem identifiers
to identify devices. Now, the formatting step is performed before the VM
is started. Because some tests rely on the old behaviour, a utility
function replacing it is added in
nixos/tests/common/auto-format-root-device.nix.
Devices that contain neither a partition table nor a filesystem are
identified by their hardware serial number, which is injected via QEMU
(and is thus persistent and predictable). PCI paths are not a reliable
way to identify devices because their availability and numbering depend
on the QEMU machine type.
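For illustration only (the module wires this up itself; drive and file
names here are made up), a serial injected on the QEMU command line shows
up as a stable udev symlink:

  {
    # The disk becomes available as /dev/disk/by-id/virtio-root,
    # independent of PCI slot numbering and machine type.
    virtualisation.qemu.options = [
      "-drive if=none,id=rootdisk,file=root.img,format=raw"
      "-device virtio-blk-pci,drive=rootdisk,serial=root"
    ];
  }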
This change makes the module more robust against changes in QEMU and the
kernel (non-persistent device naming) and, by decoupling abstractions
(i.e. rootDevice, bootPartition, and bootLoaderDevice), enables further
improvements down the line.
As with many things, there are scenarios where we don't want to boot
from a disk/bootloader, but we also don't want to boot directly.
Sometimes, we want to boot through an OptionROM of our NIC (e.g. netboot
scenarios) or let the firmware decide something, e.g. UEFI PXE (or even
a UEFI OptionROM!).
This is composed of:
- `directBoot.enable`: whether to direct boot or not
- `directBoot.initrd`: allows overriding the
`config.system.build.initialRamdisk` default, useful for
netbootRamdisk for example.
This makes those scenarios possible.
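A hypothetical usage sketch, mirroring the netboot reproduction shown
earlier:

  { config, ... }:
  {
    virtualisation.directBoot = {
      enable = true;
      initrd = "${config.system.build.netbootRamdisk}/${config.system.boot.loader.initrdFile}";
    };
  }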