Re: [LSF/MM/BPF TOPIC] Do not pin pages for various direct-io scheme

On 1/21/20 7:31 PM, jglisse@xxxxxxxxxx wrote:
> From: Jérôme Glisse <jglisse@xxxxxxxxxx>
> 
> Direct I/O pins memory through GUP (get_user_pages), which blocks
> several mm activities, such as:
>     - compaction
>     - NUMA balancing
>     - migration
>     ...
> 
> It is also troublesome if the pinned pages are actually file-backed
> pages that might go under writeback, in which case the page cannot
> be write-protected from the direct-io point of view (see the various
> discussions about recent work on GUP [1]). This happens, for
> instance, if the virtual memory address used as the buffer for a
> read operation is the outcome of an mmap of a regular file.
> 
> 
> With direct-io or aio (asynchronous io), pages are pinned until
> syscall completion (the duration of which depends on many factors:
> io size, block device speed, ...). With io-uring, pages can be
> pinned for an indefinite amount of time.
> 
> 
> So I would like to convert the direct io code (direct-io, aio and
> io-uring) to obey mmu notifiers, and thus allow memory management
> and writeback to treat these pages like any other process memory.
> 
> For direct-io and aio this mostly gives a way to wait on syscall
> completion. For io-uring this means that buffers might need to be
> re-validated (ie looking up the pages again to get the new set of
> pages for the buffer). The impact for io-uring is the delay needed
> to look up new pages or wait on writeback (if necessary). This
> would only happen _if_ an invalidation event happens, which itself
> should only happen under memory pressure or for NUMA activities.
> 
> There are ways to minimize the impact (for instance by using the
> mmu notifier event type to ignore some invalidation cases).
> 
> 
> So I would like to discuss all this during LSF; it is mostly a
> filesystem discussion with strong ties to mm.

I'd be interested in this topic, as it pertains to io_uring. The whole
point of registered buffers is to avoid mapping overhead and page
references. If we add extra overhead per operation for that, well... I'm
assuming the above is strictly for file-mapped pages? Or also page
migration?

-- 
Jens Axboe



