Re: [PATCH v2 03/15] block: Support data lifetime in the I/O priority bitfield

On 10/14/23 05:18, Bart Van Assche wrote:
> On 10/12/23 18:08, Damien Le Moal wrote:
>> On 10/13/23 03:00, Bart Van Assche wrote:
>>> We are having this discussion because bi_ioprio is sixteen bits wide and
>>> because we don't want to make struct bio larger. How about expanding the
>>> bi_ioprio field from 16 to 32 bits and to use separate bits for CDL
>>> information and data lifetimes?
>>
>> I guess we could do that as well. User side aio_reqprio field of struct aiocb,
>> which is used by io_uring and libaio, is an int, so 32-bits also. Changing
>> bi_ioprio to match that should not cause regressions or break user space I
>> think. Kernel uapi ioprio.h will need some massaging though.
> 
> Hmm ... are we perhaps looking at different kernel versions? This is
> what I found:
> 
> $ git grep -nHE 'ioprio;|reqprio;' include/uapi/linux/{io_uring,aio_abi}.h
> include/uapi/linux/aio_abi.h:89:	__s16	aio_reqprio;
> include/uapi/linux/io_uring.h:33:	__u16	ioprio;		/* ioprio for the request */

My bad. I looked at "man aio", but that is the POSIX AIO API, not the Linux-native one.

> The struct iocb used for asynchronous I/O has a size of 64 bytes and
> does not have any holes. struct io_uring_sqe also has a size of 64 bytes
> and does not have any holes either. The ioprio_set() and ioprio_get()
> system calls use the data type int so these wouldn't need any changes to
> increase the number of ioprio bits.

Yes, but I think it would be better to keep the size of the bio bi_ioprio field in
sync with the per-I/O aio_reqprio/ioprio fields of libaio and io_uring, that is, 16 bits.
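
For reference, the 16-bit encoding is already well packed. My rough understanding
of the layout in include/uapi/linux/ioprio.h (treat the exact hint width as
approximate) is:

/*
 * 16-bit ioprio value, roughly:
 *
 *   bits 15..13  priority class (IOPRIO_CLASS_NONE/RT/BE/IDLE)
 *   bits 12..3   priority hint (CDL today, data lifetimes with this series)
 *   bits  2..0   priority level within the class
 */
#include <linux/ioprio.h>

static inline unsigned short example_bi_ioprio(void)
{
	/* BE class, level 0, no hint set. */
	return IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 0);
}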

>> Reading Niklas's reply to Kanchan, I was reminded that using ioprio hint for
>> the lifetime may have one drawback: that information will be propagated to the
>> device only for direct IOs, no ? For buffered IOs, the information will be
>> lost. The other potential disadvantage of the ioprio interface is that we
>> cannot define ioprio+hint per file (or per inode really), unlike the old
>> write_hint that you initially reintroduced. Are these points blockers for the
>> user API you were thinking of ? How do you envision the user specifying
>> lifetime ? Per file ? Or are you thinking of not relying on the user to specify
>> that but rather the FS (e.g. f2fs) deciding on its own ? If it is the latter, I
>> think ioprio+hint is fine (it is simple). But if it is the former, the ioprio
>> API may not be the best suited for the job at hand.
> 
> The way I see it is that the primary purpose of the bits in the
> bi_ioprio member that are used for the data lifetime is to allow
> filesystems to provide data lifetime information to block drivers.
> 
> Specifying data lifetime information for direct I/O is convenient when
> writing test scripts that verify whether data lifetime support works
> correctly. There may be other use cases but this is not my primary
> focus.
> 
> I think that applications that want to specify data lifetime information
> should use fcntl(fd, F_SET_RW_HINT, ...). It is up to the filesystem to
> make sure that this information ends up in the bi_ioprio field. The
> block layer is responsible for passing the information in the bi_ioprio
> member to block drivers. Filesystems can support multiple policies for
> combining the i_write_hint and other information into a data lifetime.
> See also the whint_mode restored by patch 05/15 in this series.

Explaining this in the cover letter of the series would help readers understand
how you see this information being propagated from the user to the device.

I am not a fan of having an fcntl() call end up modifying the ioprio of I/Os
through hints, given that hints are themselves already user-facing
information/API. This is confusing... What if a user issues direct I/Os with one
lifetime hint value on a file that has a different lifetime set with fcntl()?
And I am sure there are other corner cases like this.
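
To make that corner case concrete, here is a rough sketch (LIFETIME_SHORT_HINT is
a made-up placeholder for whatever per-I/O hint encoding this series defines;
F_SET_RW_HINT and RWH_WRITE_LIFE_* are the existing fcntl(2) interface):

/*
 * The inode-level write hint says "long" while the direct write carries a
 * conflicting per-I/O lifetime hint in sqe->ioprio.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <stdint.h>
#include <stdio.h>

#define LIFETIME_SHORT_HINT	0x0008	/* placeholder per-I/O hint value */

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	uint64_t hint = RWH_WRITE_LIFE_LONG;
	static char buf[4096] __attribute__((aligned(4096)));
	int fd;

	fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0 || fcntl(fd, F_SET_RW_HINT, &hint) < 0) {
		perror("open/F_SET_RW_HINT");
		return 1;
	}

	io_uring_queue_init(8, &ring, 0);
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, buf, sizeof(buf), 0);
	/* Conflicts with the RWH_WRITE_LIFE_LONG hint set on the inode. */
	sqe->ioprio = LIFETIME_SHORT_HINT;
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	printf("write returned %d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}

Which of the two lifetimes should the device see for that write? Whatever the
answer is, it needs to be clearly documented.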

Given that the lifetime is per file (per inode, really) and the I/O priority is
per process or per I/O, having different user APIs makes sense. The issue of not
growing the bio and request structures (if possible) remains. For bio, you
already identified a hole, so what about using another 16-bit field for the
lifetime? I am not sure about requests. I also thought of a union with
bi_ioprio, but that would prevent using lifetime and I/O priority together,
which is not ideal.
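
To illustrate what I mean (bi_lifetime is a made-up name, and this obviously is
not the real struct bio; the point is only that a second 16-bit field next to
bi_ioprio could fill the existing hole instead of growing the structure):

/* Illustration only, not the actual struct bio definition. */
struct bio_prio_fields_example {
	unsigned short	bi_ioprio;	/* class / level / CDL hint, as today */
	unsigned short	bi_lifetime;	/* new: data lifetime, 0 == none */
};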

> 
> Thanks,
> 
> Bart.

-- 
Damien Le Moal
Western Digital Research



