Re: [PATCH v3 4/4] io_uring: add support for zone-append

On Fri, Jul 10, 2020 at 12:20 AM Jens Axboe <axboe@xxxxxxxxx> wrote:
>
> On 7/9/20 12:36 PM, Kanchan Joshi wrote:
> > On Thu, Jul 9, 2020 at 7:36 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
> >>
> >> On 7/9/20 8:00 AM, Christoph Hellwig wrote:
> >>> On Thu, Jul 09, 2020 at 07:58:04AM -0600, Jens Axboe wrote:
> >>>>> We don't actually need any new field at all.  By the time the write
> >>>>> returned ki_pos contains the offset after the write, and the res
> >>>>> argument to ->ki_complete contains the amount of bytes written, which
> >>>>> allow us to trivially derive the starting position.
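For reference, a minimal sketch of the derivation being described here (the
callback name is illustrative; the ->ki_complete signature is the long
ret/ret2 form in use at the time):

#include <linux/fs.h>

/*
 * Sketch only: by completion time ki_pos has been advanced past the
 * written data and 'res' holds the number of bytes written, so the
 * starting offset falls out by subtraction.
 */
static void sample_write_complete(struct kiocb *iocb, long res, long res2)
{
	if (res > 0) {
		loff_t start = iocb->ki_pos - res;
		/* for a zone append, 'start' is where the data landed */
	}
}
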
> >
> > Deriving the starting position was not the purpose at all.
> > But yes, append-offset is not needed, for a different reason.
> > It was kept for uring-specific handling. The completion result from the
> > lower layer was always coming to uring in ret2 via ki_complete(....,ret2),
> > and ret2 goes to the CQE (and user-space) without any conversion in between.
> > For polled completion, there is a short window when we get ret2 but cannot
> > write into the CQE immediately, so I thought of storing it in append_offset
> > (but should not have; it was solvable without that).
> >
> > FWIW, if we move to the indirect-offset approach, append_offset gets
> > eliminated automatically, because there is no need to write to the CQE
> > itself.
> >
> >>>> Then let's just do that instead of jumping through hoops either
> >>>> justifying growing io_rw/io_kiocb or turning kiocb into a global
> >>>> completion thing.
> >>>
> >>> Unfortunately that is a totally separate issue - the in-kernel offset
> >>> can be trivially calculated.  But we still need to figure out a way to
> >>> pass it on to userspace.  The current patchset does that by abusing
> >>> the flags, which doesn't really work as the flags are way too small.
> >>> So we need an address somewhere to do the put_user to.
> >>
> >> Right, we're just trading the 'append_offset' for a 'copy_offset_here'
> >> pointer, which is stored in the same spot...
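A rough sketch of that trade (names are stand-ins; where exactly the user
pointer lives in io_rw/io_kiocb was still open at this point):

#include <linux/types.h>
#include <linux/uaccess.h>

/*
 * Sketch only: instead of packing the append offset into the CQE,
 * copy it to an address the application supplied at submission.
 * 'uaddr' stands in for the pointer kept in io_rw.
 */
static int copy_append_offset(u64 __user *uaddr, u64 offset)
{
	/*
	 * put_user() needs the submitting task's mm to be current,
	 * which is what raises the task_work question below.
	 */
	return put_user(offset, uaddr);
}
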
> >
> > The address needs to be stored somewhere. And there does not seem to be
> > any option other than io_kiocb?
>
> That is where it belongs, not sure this was ever questioned. And inside
> io_rw at that.
>
> > The bigger problem with the address/indirect-offset approach is being
> > able to write to it during completion, as the process context is
> > different. Will that require entering the task_work_add() world, and
> > might that make it a costly affair?
>
> It might, if you have IRQ context for the completion. task_work isn't
> expensive, however. It's not like a thread offload.
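A hedged sketch of what the task_work route could look like (the notify
argument of task_work_add() has changed form across kernel versions;
everything below is illustrative, not the patch):

#include <linux/task_work.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

struct append_tw {
	struct callback_head work;
	u64 __user *uaddr;
	u64 offset;
};

/* Runs in the submitting task's context, where uaddr is valid. */
static void append_tw_fn(struct callback_head *head)
{
	struct append_tw *tw = container_of(head, struct append_tw, work);

	put_user(tw->offset, tw->uaddr);
	kfree(tw);
}

/* Called from the (possibly IRQ) completion path. */
static void queue_append_offset(struct task_struct *task, struct append_tw *tw)
{
	init_task_work(&tw->work, append_tw_fn);
	if (task_work_add(task, &tw->work, true))
		kfree(tw);	/* task is exiting; drop the update */
}
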
>
> > Using flags has not been liked here, but given the upheaval involved so
> > far, I have begun to feel it was keeping things simple. Should it be
> > reconsidered?
>
> It's definitely worth considering, especially since we can use cflags
> like Pavel suggested upfront and not need any extra storage. But it
> brings us back to the 32-bit vs 64-bit discussion, and then using blocks
> instead of bytes. Which isn't exactly super pretty.
>
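To make that trade-off concrete: the CQE offers 32 flag bits, while a byte
offset on a large zoned device overflows 32 bits, so the value would have
to be carried in 512-byte units. A sketch of the encode/decode (purely
illustrative, not a settled ABI):

#include <stdint.h>

/*
 * Sketch only: 32 bits of 512-byte units covers devices up to
 * 2 TiB (2^32 * 512); past that this scheme runs out of room,
 * which is the ugliness being referred to above.
 */
static inline uint32_t append_cflags_encode(uint64_t byte_off)
{
	return (uint32_t)(byte_off >> 9);	/* bytes -> 512B units */
}

static inline uint64_t append_cflags_decode(uint32_t cflags)
{
	return (uint64_t)cflags << 9;		/* 512B units -> bytes */
}
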
I agree that what we had was not great.
Append required special treatment (conversion from sectors to bytes) for
io_uring, and we were planning a user-space wrapper to abstract that.

But the good part (as it seems now) was: the append result went along with
cflags at virtually no additional cost, and the uring code changes became
super clean/minimal with further revisions.
The indirect-offset approach, by contrast, requires allocation/management
in the application, in io_uring submission, and in the completion path
(which seems trickier), and those CQE flags still get written to
user-space while serving no purpose for the append-write.

-- 
Joshi


