[PATCH v2 0/3] bio: Direct IO: convert to pin_user_pages_fast()

Hi,

Changes since v1:

* Now handles ITER_PIPE, by applying pin_user_page() to ITER_PIPE pages
on the Direct IO path. Thanks to Al Viro for pointing me in the right
direction there.

* Removed the ceph and BIO_FOLL_PIN patches: the ceph improvements were
handled separately by Jeff Layton, in an entirely different patch. And
the BIO_FOLL_PIN idea turned out to be completely undesirable here.

Original cover letter, updated for v2:

This converts the Direct IO block/bio layer over to use FOLL_PIN pages
(those acquired via pin_user_pages*()). This effectively converts
several file systems (ext4, for example) that use the common Direct IO
routines. See "Remaining work" below for more detail.

Quite a few approaches have been considered over the years. This one is
inspired by Christoph Hellwig's July 2019 observation that there are
only 5 ITER_ types, and that handling them for Direct IO can be
simplified [1]. After working through how bio submission and completion
work, I became convinced that this is the simplest and cleanest approach
to the conversion.

Design notes
============

This whole approach depends on certain concepts:

1) Each struct bio instance must not mix different types of pages:
FOLL_PIN and non-FOLL_PIN pages. (By FOLL_PIN pages, I'm referring to
pages that were acquired and pinned via the pin_user_page*() routines.)
Fortunately, this is already an enforced constraint for bios, as
evidenced by the existence and use of BIO_NO_PAGE_REF. See the
completion-side sketch below for how this invariant pays off.

2) Christoph Hellwig's observation, mentioned above, that there are
only 5 ITER_ types, so their handling for Direct IO can be simplified
[1]. Accordingly, this series implements the following pseudocode:

Direct IO behavior:

    ITER_IOVEC:
        pin_user_pages_fast();
        break;

    ITER_PIPE:
        for each page:
             pin_user_page();
        break;

    ITER_KVEC:    // already elevated page refcount, leave alone
    ITER_BVEC:    // already elevated page refcount, leave alone
    ITER_DISCARD: // discard
        return -EFAULT or -EINVAL;

...which works for callers that have already sorted out which case they
are in, such as Direct IO in the block/bio layers.
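
To make the dispatch concrete, here is a minimal sketch of how it might
look as a single routine in lib/iov_iter.c, assuming a signature that
mirrors iov_iter_get_pages(). The actual iov_iter_pin_user_pages*()
routines are added in patch 2; the body below is illustrative only, and
elides the pipe accounting and partial-page details of the real patch:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/uio.h>

ssize_t iov_iter_pin_user_pages(struct iov_iter *i, struct page **pages,
				size_t maxsize, unsigned int maxpages,
				size_t *start)
{
	unsigned long addr;
	ssize_t copied;
	size_t len;
	int n, npages;

	if (iov_iter_is_pipe(i)) {
		/*
		 * The pipe already holds a reference on each page.
		 * Add a FOLL_PIN on top, so that bio completion can
		 * unpin every page in the bio uniformly.
		 */
		copied = iov_iter_get_pages(i, pages, maxsize, maxpages,
					    start);
		if (copied <= 0)
			return copied;
		npages = DIV_ROUND_UP(*start + copied, PAGE_SIZE);
		for (n = 0; n < npages; n++)
			pin_user_page(pages[n]);	/* new in patch 1 */
		return copied;
	}

	/* ITER_KVEC, ITER_BVEC and ITER_DISCARD must never get here. */
	if (WARN_ON_ONCE(!iov_iter_is_iovec(i)))
		return -EFAULT;

	addr = (unsigned long)i->iov->iov_base + i->iov_offset;
	*start = addr & (PAGE_SIZE - 1);
	len = min(maxsize, i->iov->iov_len - i->iov_offset);
	npages = min_t(int, DIV_ROUND_UP(*start + len, PAGE_SIZE), maxpages);

	n = pin_user_pages_fast(addr & PAGE_MASK, npages,
				iov_iter_rw(i) != WRITE ? FOLL_WRITE : 0,
				pages);
	if (n <= 0)
		return n ? n : -EFAULT;
	return min_t(size_t, len, (size_t)n * PAGE_SIZE - *start);
}

The key point is the uniform outcome: by the time the bio is built,
every page in it carries a FOLL_PIN reference.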

Now, this does leave ITER_KVEC and ITER_BVEC unconverted, but on the
other hand, it's not clear that these are actually affected in the real
world by the get_user_pages()+filesystem interaction problems of [2].
If it turns out to matter, then those can be handled too; it's just
more refactoring and surgery to do so.
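
Returning to design note 1), the payoff shows up at bio completion
time. Because a bio never mixes FOLL_PIN and non-FOLL_PIN pages,
releasing them needs no per-page bookkeeping. A minimal sketch of what
bio_release_pages() becomes after patch 3 (again illustrative, not the
exact diff; the real change is in block/bio.c):

#include <linux/bio.h>
#include <linux/mm.h>

void bio_release_pages(struct bio *bio, bool mark_dirty)
{
	struct bvec_iter_all iter_all;
	struct bio_vec *bvec;

	/* No page references (or pins) were taken at submission time. */
	if (bio_flagged(bio, BIO_NO_PAGE_REF))
		return;

	bio_for_each_segment_all(bvec, bio, iter_all) {
		if (mark_dirty && !PageCompound(bvec->bv_page))
			set_page_dirty_lock(bvec->bv_page);
		/*
		 * Every page in this bio was pinned via pin_user_page*(),
		 * so release it with unpin_user_page() rather than
		 * put_page().
		 */
		unpin_user_page(bvec->bv_page);
	}
}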

Testing
=======

Performance: no obvious regressions from running fio (direct=1: Direct
IO) on both SSD and NVMe drives.

Functionality: selected non-destructive bare-metal xfstests runs on the
xfs, ext4, btrfs, and orangefs filesystems, plus LTP tests.

Note that I have only a single x86 64-bit test machine, though.

Remaining work
==============

Non-converted call sites for iov_iter_get_pages*() at the moment
include: net, crypto, cifs, ceph, vhost, fuse, nfs/direct, vhost/scsi.
However, it's not clear which of those really need to be converted,
because some of them probably use ITER_BVEC or ITER_KVEC.

About-to-be-converted sites (in a subsequent patch) are: Direct IO for
filesystems that use the generic read/write functions.

[1] https://lore.kernel.org/kvm/20190724061750.GA19397@xxxxxxxxxxxxx/

[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/


John Hubbard (3):
  mm/gup: introduce pin_user_page()
  iov_iter: introduce iov_iter_pin_user_pages*() routines
  bio: convert get_user_pages_fast() --> pin_user_pages_fast()

 block/bio.c          |  24 +++++-----
 block/blk-map.c      |   6 +--
 fs/direct-io.c       |  28 +++++------
 fs/iomap/direct-io.c |   2 +-
 include/linux/mm.h   |   2 +
 include/linux/uio.h  |   5 ++
 lib/iov_iter.c       | 110 +++++++++++++++++++++++++++++++++++++++----
 mm/gup.c             |  30 ++++++++++++
 8 files changed, 169 insertions(+), 38 deletions(-)

-- 
2.28.0




