On Mon, Apr 23, 2018 at 5:53 PM, Nix <nix@xxxxxxxxxxxxx> wrote:
> On 23 Apr 2018, Avery Pennarun stated:
>> If you are checking a couple of realtime streams to make
>> sure they don't miss any deadlines, then you might not notice any
>> impact, but you ought to notice a reduction in total available
>> throughput while a big backup task is running (unless it's running
>> using idle priority).
>
> Of course it is. Why would you want to run a backup task at higher
> priority than that? :)

Well, assuming your block scheduler supports it :)

Anyway, the idea is that this is relatively easy to benchmark
empirically, as long as you know what to look for.

>> Incidentally, I have a tool that we used on a DVR product to ensure we
>> could support multiple realtime streams under heavy load (ie.
>> something like 12 readers + 12 writers on a single 7200 RPM disk).
>
> This is also what xfs's realtime stuff was meant for, back in the day.

Oops, I wasn't clear. diskbench is a tool for checking whether your
cpu + disk + filesystem + scheduler can handle the load. It doesn't
actually do the work. That way you can do things like compare ext4,
ext4 + large prealloc, xfs, different disk schedulers, etc.

For our DVR it was clear that the deadline scheduler did better, but
only because we had virtually no non-DVR disk accesses. It would have
been really nice to be able to deprioritize all the non-realtime disk
accesses.

--
To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
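The diskbench approach described in the thread above — spawn N simulated realtime streams, have each write a fixed-size chunk on a fixed period, and count missed deadlines — can be sketched in a few lines. This is a minimal illustration of the idea, not the actual diskbench tool; the chunk size, period, and stream count below are invented for the example:

```python
# Sketch of a diskbench-style deadline checker (hypothetical, not the
# real tool): each "stream" must write one chunk per PERIOD seconds;
# any write that finishes past its deadline is counted as a miss.
import os
import tempfile
import threading
import time

CHUNK = 256 * 1024      # bytes per period per stream (illustrative)
PERIOD = 1.0            # seconds between chunks
DURATION = 3.0          # total test time in seconds
NSTREAMS = 4            # pretend DVR streams

def writer(path, misses):
    buf = os.urandom(CHUNK)
    deadline = time.monotonic() + PERIOD
    end = time.monotonic() + DURATION
    with open(path, "wb") as f:
        while time.monotonic() < end:
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())        # force the I/O to really happen
            now = time.monotonic()
            if now > deadline:
                misses.append(now - deadline)   # deadline blown
            else:
                time.sleep(deadline - now)      # wait out the period
            deadline += PERIOD

misses = []     # shared across threads; list.append is atomic in CPython
threads = []
with tempfile.TemporaryDirectory() as d:
    for i in range(NSTREAMS):
        t = threading.Thread(target=writer,
                             args=(os.path.join(d, "stream%d" % i), misses))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

print("%d missed deadlines across %d streams" % (len(misses), NSTREAMS))
```

To reproduce the comparison discussed above, you would run this against different filesystems and I/O schedulers while a competing load (e.g. a backup started under `ionice -c 3`, the idle scheduling class) runs in the background, and compare the miss counts.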