Re: [PATCH v3 01/18] block: introduce duration-limits priority class

Damien,

Finally had a window where I could sit down and read this extremely
long thread.

> I am not denying anything. I simply keep telling you that CDL is not a
> generic feature for random applications to use, including those that
> already use RT/BE/IDLE. It is for applications that know and expect
> it, and so have a setup suited for CDL use down to the drive CDL
> descriptors. That includes DM setups.
>
> Thinking about CDL in a generic setup for any random application to
> use is nonsense. And even if that happens and a user not knowing about
> it still tries it, then, as mentioned, nothing bad will happen. Using
> CDL in a setup that does not support it is a NOP. That would be the
> same as not using it.

My observations:

 - Wrt. ioprio as the conduit, I personally really dislike the idea of
   conflating priority (relative performance wrt. other I/O) with CDL
   (which is a QoS concept). I would really prefer those things to be
   separate. However, I do think that the ioprio *interface* is a good
   fit. A tool like ionice seems like a reasonable approach to letting
   generic applications set their CDL (a minimal sketch of what that
   would look like follows at the end of this mail).

   If bio space weren't at a premium, I'd say just keep things
   separate. But given the inherent incompatibility between kernel I/O
   scheduling and CDL, it's probably not worth the hassle to separate
   them, as much as it pains me to mix two concepts which should be
   completely orthogonal.

   I wish we could let applications specify both a priority and a CDL
   at the same time, though, even if the kernel plumbing in the short
   term ends up using bi_ioprio as the conduit (see the bit layout
   sketch at the end of this mail). It's much harder to make changes
   to the application interface at a later date.

 - Wrt. "CDL is not a generic feature", I think you are underestimating
   how many applications actually want something like this. We have
   many.

   I don't think we should design for "special interest only, needs
   custom device tweaking to be usable". We have been down that path
   before (streams, etc.), with poor results.

   I/O hints also tanked, but at least we tried to pre-define
   performance classes that made sense in an abstract fashion, and we
   programmed the mode page on discovered devices so that the classes
   were identical across all disks, regardless of whether they were
   SSDs or million-dollar arrays. This allowed filesystems to
   communicate "this is metadata" regardless of the device the I/O was
   going to, instead of "index 5 on this device" but "index 42 on the
   mirror".

   As such, I don't like the "just customize your settings with
   cdltools" approach. I'd much rather see us try to define a few QoS
   classes that make sense for every application, use those to define
   the application interface, and then have the kernel program those
   CDL classes into SCSI/ATA devices by default (rough sketch at the
   end of this mail).

   Having the kernel provide an abstract interface for bio QoS and
   configure a new disk with a sane handful of classes does not
   prevent $CLOUD_VENDOR from overriding what Linux configured. But at
   least we'd have a generic approach to block QoS in Linux, similar
   to the existing I/O priority infrastructure, which is also not tied
   to any particular hardware feature.

   A generic implementation also allows us to do fancy things in the
   hypervisor, where we would like to be able to do QoS across
   multiple devices as well, without having ATA or SCSI with CDL
   involved, or whatever things might look like in NVMe.
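
To make the ionice angle concrete, here is a minimal sketch of what a
generic application (or ionice itself) would do on top of this series.
IOPRIO_CLASS_DL and its value are taken from my reading of these
patches, not from a released kernel; the rest is the stock ioprio
syscall plumbing:

  /* Select CDL descriptor 3 for all subsequent I/O issued by the
   * calling thread. Levels 1..7 pick a descriptor on the drive,
   * 0 means no limit. */
  #include <stdio.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #define IOPRIO_CLASS_SHIFT	13
  #define IOPRIO_PRIO_VALUE(class, data) \
  	(((class) << IOPRIO_CLASS_SHIFT) | (data))

  #define IOPRIO_CLASS_DL	4	/* proposed by this series */
  #define IOPRIO_WHO_PROCESS	1

  int main(void)
  {
  	if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
  		    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_DL, 3)) < 0) {
  		perror("ioprio_set");
  		return 1;
  	}
  	/* ... open the disk and do I/O as usual; the block layer
  	 * carries class and level down in bi_ioprio. */
  	return 0;
  }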
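
On the "both a priority and a CDL" wish: the crux is that the current
conduit is the 16-bit bi_ioprio value, a 3-bit class plus 13 bits of
class-specific data, so a duration-limit class necessarily displaces
RT/BE/IDLE rather than combining with them. A hypothetical layout that
carried both might look like the following; the split is invented for
illustration and is not part of these patches:

  #include <stdint.h>
  #include <stdio.h>

  #define IOPRIO_CLASS_SHIFT	13  /* as in include/uapi/linux/ioprio.h */

  /* Today: one class, one per-class level. Using a CDL class means
   * giving up the scheduling class. */
  static inline uint16_t ioprio_value(uint16_t class, uint16_t level)
  {
  	return (class << IOPRIO_CLASS_SHIFT) | level;
  }

  /* Hypothetical: keep the 3-bit scheduling class, carve 3 bits out
   * of the data field for a CDL descriptor index, leave 10 bits for
   * the per-class level. */
  static inline uint16_t qos_value(uint16_t class, uint16_t cdl,
  				 uint16_t level)
  {
  	return (class << IOPRIO_CLASS_SHIFT) |
  	       ((cdl & 0x7) << 10) | (level & 0x3ff);
  }

  int main(void)
  {
  	/* RT class, CDL descriptor 3, level 4 in one 16-bit value */
  	printf("0x%04x\n", qos_value(1, 3, 4));
  	return 0;
  }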
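
And to make the pre-defined QoS class argument concrete, something
along these lines is what I have in mind. Names and limits are
invented purely for illustration; the point is that the kernel would
program each class into a CDL descriptor slot on discovered SCSI/ATA
devices (or whatever NVMe grows), so class N means the same thing on
every disk:

  enum blk_qos_class {
  	BLK_QOS_NONE = 0,	/* no duration limit */
  	BLK_QOS_INTERACTIVE,	/* e.g. ~20ms: latency-critical reads */
  	BLK_QOS_ONLINE,		/* e.g. ~100ms: normal foreground I/O */
  	BLK_QOS_BATCH,		/* e.g. ~500ms: background/bulk work */
  };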

-- 
Martin K. Petersen	Oracle Linux Engineering


