Re: [LSF/MM/BPF TOPIC] Do not pin pages for various direct-io scheme

On Tue, Jan 21, 2020 at 08:54:22PM -0700, Jens Axboe wrote:
> On 1/21/20 7:31 PM, jglisse@xxxxxxxxxx wrote:
> > From: Jérôme Glisse <jglisse@xxxxxxxxxx>
> > 
> > Direct I/O pins memory through GUP (get_user_pages), which
> > blocks several mm activities, such as:
> >     - compaction
> >     - NUMA balancing
> >     - migration
> >     ...
> > 
> > It is also troublesome if the pinned pages are actually
> > file-backed pages that might go under writeback, in which case
> > the page cannot be write-protected from the direct-io point of
> > view (see the various discussions about recent work on GUP [1]).
> > This happens, for instance, if the virtual memory address used
> > as the buffer for a read operation is the outcome of an mmap of
> > a regular file.
> > 
> > 
> > With direct-io or aio (asynchronous io), pages are pinned until
> > syscall completion (which depends on many factors: io size,
> > block device speed, ...). For io-uring, pages can be pinned for
> > an indefinite amount of time.
> > 
> > 
> > So I would like to convert the direct io code (direct-io, aio
> > and io-uring) to obey mmu notifiers and thus allow memory
> > management and writeback to work and behave like they do for
> > any other process memory.
> > 
> > For direct-io and aio this mostly gives a way to wait on syscall
> > completion. For io-uring this means that buffers might need to
> > be re-validated (ie looking up pages again to get the new set of
> > pages for the buffer). The impact for io-uring is the delay
> > needed to look up new pages or wait on writeback (if necessary).
> > This would only happen _if_ an invalidation event happens, which
> > itself should only happen under memory pressure or for NUMA
> > activities.
> > 
> > There are ways to minimize the impact (for instance by using
> > the mmu notifier event type to ignore some invalidation cases).
> > 
> > 
> > So I would like to discuss all this during LSF; it is mostly a
> > filesystem discussion with strong ties to mm.
> 
> I'd be interested in this topic, as it pertains to io_uring. The whole
> point of registered buffers is to avoid mapping overhead, and page
> references. If we add extra overhead per operation for that, well... I'm
> assuming the above is strictly for file mapped pages? Or also page
> migration?

File-backed pages and anonymous ones. The idea is that we have a
choice on what to do, ie favor io-uring and make it a last resort
for mm to mess with a page that is GUPed, or favor mm (compaction,
NUMA, reclaim, ...). We can also discuss what kind of knobs we want
to expose so that people can choose the tradeoff themselves (ie
anywhere from "I want low-latency io-uring and I don't care whether
mm can do its business" to "I want mm to never be impeded in its
business and I accept the extra latency bursts I might face in io
operations").
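To make the "mmu notifier event type" idea from the quoted mail
concrete: since v5.1 the invalidation range passed to
invalidate_range_start() carries an event type, so a consumer can
skip events that cannot move the pinned pages. A rough kernel-side
sketch (the ring_buffers struct and invalidate_registered_buffers()
helper are made up for illustration; this is not from any posted
patch):

```c
#include <linux/mmu_notifier.h>

struct ring_buffers {
	struct mmu_notifier mn;
	/* ... registered-buffer bookkeeping ... */
};

static int ring_invalidate_range_start(struct mmu_notifier *mn,
				const struct mmu_notifier_range *range)
{
	struct ring_buffers *rb = container_of(mn, struct ring_buffers, mn);

	switch (range->event) {
	case MMU_NOTIFY_SOFT_DIRTY:
		/* Soft-dirty tracking only write-protects PTEs; it
		 * does not move the page, so a pinned buffer stays
		 * valid and we can ignore the event. */
		return 0;
	default:
		/* Anything else (munmap, reclaim, migration, ...)
		 * might invalidate the buffer: mark it stale so the
		 * next submission looks the pages up again.
		 * (Hypothetical helper, for illustration only.) */
		invalidate_registered_buffers(rb, range->start, range->end);
		return 0;
	}
}

static const struct mmu_notifier_ops ring_mn_ops = {
	.invalidate_range_start = ring_invalidate_range_start,
};
```

Filtering like this is how the latency cost could be kept to the
cases where the pages really do move.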

One of the issues with io-uring, AFAICT, is that today someone
could potentially pin pages that are never actually used by direct
io, and thus potentially DDOS the mm or starve others.
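For reference, the pin lifetime being discussed follows the usual
direct-io shape: take a reference on each user page up front, drop
the references only when the request completes. A simplified sketch
of that pattern (not a literal copy of fs/direct-io.c;
dio_complete_pages() is a stand-in for the real bio/aio completion
path):

```c
#include <linux/uio.h>
#include <linux/mm.h>

/* Grab references for the whole life of the request; for a
 * user-backed iterator this goes through GUP, which is what blocks
 * compaction/migration for these pages. */
static ssize_t dio_grab_pages(struct iov_iter *iter,
			      struct page **pages, unsigned maxpages,
			      size_t *offset)
{
	return iov_iter_get_pages(iter, pages, LONG_MAX,
				  maxpages, offset);
}

/* Stand-in for the completion path: only here are the references
 * dropped, so the pin lasts until the io finishes -- or, for
 * io-uring registered buffers, until the buffers are unregistered. */
static void dio_complete_pages(struct page **pages, unsigned nr)
{
	unsigned i;

	for (i = 0; i < nr; i++)
		put_page(pages[i]);
}
```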

Cheers,
Jérôme




