Re: [PATCH RFC 01/16] block: Add atomic write operations to request_queue limits

On Sat, May 06, 2023 at 09:31:52AM +1000, Dave Chinner wrote:
> On Fri, May 05, 2023 at 10:47:19PM +0000, Eric Biggers wrote:
> > On Fri, May 05, 2023 at 08:26:23AM +1000, Dave Chinner wrote:
> > > > ok, we can do that, but it would also then make the statx field 64b. I'm
> > > > fine with that if it is wise to do so - I don't want to wastefully use up
> > > > an extra 2 x 32b in struct statx.
> > > 
> > > Why do we need specific variables for DIO atomic write alignment
> > > limits? We already have direct IO alignment and size constraints in statx(),
> > > so why wouldn't we just reuse those variables when the user requests
> > > atomic limits for DIO?
> > > 
> > > i.e. if STATX_DIOALIGN is set, we return normal DIO alignment
> > > constraints. If STATX_DIOALIGN_ATOMIC is set, we return the atomic
> > > DIO alignment requirements in those variables.....
> > > 
> > > Yes, we probably need the dio max size to be added to statx for
> > > this. Historically speaking, I wanted statx to support this in the
> > > first place because that's what we were already giving userspace
> > > with XFS_IOC_DIOINFO and we already knew that atomic IO when it came
> > > along would require a bounded maximum IO size much smaller than normal
> > > DIO limits.  i.e.:
> > > 
> > > struct dioattr {
> > >         __u32           d_mem;          /* data buffer memory alignment */
> > >         __u32           d_miniosz;      /* min xfer size                */
> > >         __u32           d_maxiosz;      /* max xfer size                */
> > > };
> > > 
> > > where d_miniosz defined the alignment and size constraints for DIOs.
> > > 
> > > If we simply document that STATX_DIOALIGN_ATOMIC returns minimum
> > > (unit) atomic IO size and alignment in statx->dio_offset_align (as
> > > per STATX_DIOALIGN) and the maximum atomic IO size in
> > > statx->dio_max_iosize, then we don't burn up anywhere near as much
> > > space in the statx structure....
> > 
> > I don't think that's how statx() is meant to work.  The request mask
> > is a bitmask, and the user can request an arbitrary combination of
> > different items.  For example, the user could request both
> > STATX_DIOALIGN and STATX_WRITE_ATOMIC at the same time.  That doesn't
> > work if different items share the same fields.
> 
> Sure it does - what is contained in the field on return is defined
> by the result mask. In this case, whatever the filesystem puts in
> the DIO fields will match which flag it asserts in the result mask.
> 
> i.e. if the application wants RWF_ATOMIC and so asks for STATX_DIOALIGN |
> STATX_DIOALIGN_ATOMIC in the request mask then:
> 
> - if the filesystem does not support RWF_ATOMIC it fills in the
>   normal DIO alignment values and puts STATX_DIOALIGN in the result
>   mask.
> 
>   Now the application knows that it can't use RWF_ATOMIC, and it
>   doesn't need to do another statx() call to get the dio alignment
>   values it needs.
> 
> - if the filesystem supports RWF_ATOMIC, it fills in the values with
>   the atomic DIO constraints and puts STATX_DIOALIGN_ATOMIC in the
>   result mask.
> 
>   Now the application knows it can use RWF_ATOMIC and has the atomic
>   DIO constraints in the dio alignment fields returned.
> 
> This uses the request/result masks exactly as intended, yes?
> 
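
For concreteness, the request/result mask overloading described above would
look something like this from the application side. This is only a sketch:
STATX_DIOALIGN and the stx_dio_* fields are current UAPI (Linux 6.1+), while
STATX_DIOALIGN_ATOMIC and stx_dio_max_iosize are only proposals in this
thread, so the flag value below is a placeholder rather than an ABI claim.

#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>

/*
 * STATX_DIOALIGN_ATOMIC is only proposed in this thread; the value is a
 * placeholder so the sketch compiles, not an ABI claim.
 */
#ifndef STATX_DIOALIGN_ATOMIC
#define STATX_DIOALIGN_ATOMIC   0x00010000U
#endif

static void query_dio_limits(const char *path)
{
        struct statx stx;

        if (statx(AT_FDCWD, path, 0,
                  STATX_DIOALIGN | STATX_DIOALIGN_ATOMIC, &stx) < 0) {
                perror("statx");
                return;
        }

        if (stx.stx_mask & STATX_DIOALIGN_ATOMIC) {
                /*
                 * Under the proposed overloading the existing dio fields
                 * carry the atomic-write constraints; the proposed
                 * stx_dio_max_iosize field (not in today's struct statx)
                 * would be read here as well.
                 */
                printf("RWF_ATOMIC ok: unit %u, mem align %u\n",
                       stx.stx_dio_offset_align, stx.stx_dio_mem_align);
        } else if (stx.stx_mask & STATX_DIOALIGN) {
                printf("no RWF_ATOMIC: dio align %u, mem align %u\n",
                       stx.stx_dio_offset_align, stx.stx_dio_mem_align);
        } else {
                printf("no DIO alignment info from this filesystem\n");
        }
}

Under that reading a single statx() call is enough to discover both whether
RWF_ATOMIC is usable and which alignment constraints apply.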

We could certainly implement some scheme like that, but I don't think that was
how statx() was intended to work.  I think that each bit in the mask was
intended to correspond to an independent piece of information.
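
For reference, the XFS_IOC_DIOINFO query mentioned above is how applications
get the classic dioattr limits today. A minimal sketch, assuming the xfsprogs
header <xfs/xfs.h> (which provides struct dioattr and the ioctl number):

#include <stdio.h>
#include <sys/ioctl.h>
#include <xfs/xfs.h>    /* struct dioattr, XFS_IOC_DIOINFO (xfsprogs) */

/* Print the legacy XFS DIO geometry for an already-open file. */
static int print_dioinfo(int fd)
{
        struct dioattr da;

        if (ioctl(fd, XFS_IOC_DIOINFO, &da) < 0) {
                perror("XFS_IOC_DIOINFO");
                return -1;
        }

        printf("mem align %u, min io %u, max io %u\n",
               da.d_mem, da.d_miniosz, da.d_maxiosz);
        return 0;
}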

- Eric


