Hi everyone,
In a scenario where running benchmarks on dedicated hardware is not possible, I'm trying to temporarily cap the I/O bandwidth used by interactive user sessions while benchmarks are running, in order to make the benchmarks' I/O measurements more stable.
In the following discussion, I'll focus on capping the read bandwidth of /dev/sda for the sake of keeping my examples short, but if I can get this to work, the idea would be to cap the read and write bandwidth of all storage devices.
From https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html, I understand that I should be able to achieve the intended goal by...
- Running something like `systemctl set-property --runtime user.slice IOReadBandwidthMax='/dev/sda 1M'` before the benchmark, and
- Running something like `systemctl set-property --runtime user.slice IOReadBandwidthMax=` after the benchmark (the full sequence is sketched below).
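Concretely, the sequence I'm testing looks roughly like this (using `hdparm -t` to check whether the cap is effective; /dev/sda is just an example device):

    # cap reads from /dev/sda to 1 MB/s for everything under user.slice
    systemctl set-property --runtime user.slice IOReadBandwidthMax='/dev/sda 1M'

    # measure read throughput from an interactive user session
    hdparm -t /dev/sda

    # drop the cap again
    systemctl set-property --runtime user.slice IOReadBandwidthMax=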
However, this is not effective: `hdparm -t /dev/sda`, run from an interactive session, still observes the full disk bandwidth.
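For what it's worth, here is how I would try to confirm whether the property actually reaches the cgroup (this assumes a unified cgroup-v2 hierarchy mounted at /sys/fs/cgroup; I have not double-checked these paths on every setup):

    # what systemd believes the property is set to
    systemctl show -p IOReadBandwidthMax user.slice

    # what the kernel enforces: while the cap is in place, an rbps entry
    # for the device's major:minor number should appear here
    cat /sys/fs/cgroup/user.slice/io.max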
I have tried the following variants:
- `systemd-run -p IOReadBandwidthMax='/dev/sda 1M' -t bash` works (hdparm only sees 1 MB/s within the resulting shell).
- `systemd-run -t bash` followed by `systemctl set-property --runtime <new unit> IOReadBandwidthMax='/dev/sda 1M'` also works (hdparm only sees 1 MB/s).
- Targeting individual users' slices (user-<uid>.slice) more specifically doesn't work either (see the example right after this list).
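For that last variant, I mean commands of the following form (uid 1000 is just a placeholder for the user whose session I tested from):

    # same cap as above, but on a single user's slice
    systemctl set-property --runtime user-1000.slice IOReadBandwidthMax='/dev/sda 1M'

As with user.slice, hdparm run from that user's session still sees the full bandwidth.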
This looks like a cgroups or systemd bug to me, but I thought I would cross-check with you before reporting it to my distribution's bug tracker (my distro ships systemd 246, which is just below your minimum version requirement for upstream bug reports).
Should I be able to set I/O bandwidth caps on a top-level slice like user.slice, or is it expected that I can only do it on individual services?
Cheers,
Hadrien