Re: [PATCH v3 01/18] block: introduce duration-limits priority class

On 1/30/23 22:44, Hannes Reinecke wrote:
> On 1/29/23 04:52, Damien Le Moal wrote:
>> On 1/29/23 05:25, Martin K. Petersen wrote:
> [ .. ]
>>>
>>>     As such, I don't like the "just customize your settings with
>>>     cdltools" approach. I'd much rather see us try to define a few QoS
>>>     classes that make sense that would apply to every app and use those
>>>     to define the application interface. And then have the kernel program
>>>     those CDL classes into SCSI/ATA devices by default.
>>
>> Makes sense. Though I think it will be hard to define a set of QoS hints that
>> are useful for a wide range of applications, and even harder to convert the
>> defined hint classes to CDL descriptors. I fear that we may end up with the same
>> issues as IO hints/streams.
>>
>>>     Having the kernel provide an abstract interface for bio QoS and
>>>     configuring a new disk with a sane handful of classes does not
>>>     prevent $CLOUD_VENDOR from overriding what Linux configured. But at
>>>     least we'd have a generic approach to block QoS in Linux. Similar to
>>>     the existing I/O priority infrastructure which is also not tied to
>>>     any particular hardware feature.
>>
>> OK. See below about this.
>>
>>>     A generic implementation also allows us to do fancy things in the
>>>     hypervisor where we would like to be able to do QoS across multiple
>>>     devices as well. Without having ATA or SCSI with CDL involved. Or
>>>     whatever things might look like in NVMe.
>>
>> Fair point, especially given that virtio actually already forwards a guest
>> ioprio to the host through the virtio block command. Thinking of that particular
>> point together with what you said, I came up with the change shown below as a
>> replacement for this patch 1/18.
>>
>> This changes the 13-bit ioprio data into a 3-bit QOS hint + a 3-bit IO
>> priority level. This is consistent with the IO priority interface, since IO
>> priority levels have to be between 0 and 7 (otherwise, errors are returned).
>> So in fact, the upper 10 bits of the ioprio data are ignored and we can
>> safely use 3 of these bits for an IO hint.
>>
>> This hint applies to all priority classes and levels, that is, for the CDL case,
>> we can enrich any priority with a hint that specifies the CDL index to use for
>> an IO.
>>
>> This falls short of actually defining generic IO hints, but it has the
>> advantage of not breaking anything for current applications using IO
>> priorities and of not requiring any change to existing IO schedulers, while
>> still allowing CDL indexes for IOs to be passed down to the SCSI & ATA
>> layers (which for now would be the only layers in the kernel acting on the
>> ioprio QOS hints).
>>
>> I think that this approach still allows us to enable CDL support, and on top of
>> it, go further and define generic QOS hints that IO schedulers can use and
>> that also potentially map to CDL for SCSI & ATA (similarly to RT class IOs
>> mapping to the NCQ priority feature if the user enabled that feature).
>>
>> As mentioned above, I think that defining generic IO hint classes will be
>> difficult. But the change below is, I think, a good starting point that
>> should not prevent working on that.
>>
>> Thoughts?
>>
> I like the idea.
> QoS is one of the recurring topics that always come up sooner or later when
> talking about storage networks, so having _some_ concept of QoS in the
> Linux kernel (for storage) would be beneficial.
> 
> Maybe time for a topic at LSF?

Yes. I was hoping for a quicker resolution so that we can get the CDL
"mechanical" bits in, but without a nice API for it, we cannot :)
I am trying to compile something with Niklas. So far, we are thinking of having
QOS flags + QOS data, with the flags determining how (and if) the QOS data is
used and what it means.

Examples of things we could have:
* IOPRIO_QOS_FAILFAST: do not retry the IO if it fails the first time
* IOPRIO_QOS_DURATION_LIMIT: the QOS data then indicates the limit to use (a
number). That can be implemented in schedulers and also mapped to CDL on drives
that support that feature.

That is the difficult part: what else? For now, considering only our target of
adding SCSI & ATA CDL support, the above is enough. But is that enough in
general for most users/apps?

> 
> Cheers,
> 
> Hannes
> 

-- 
Damien Le Moal
Western Digital Research



