Re: [RFC 0/8] Replacing the readpages a_op

On Mon, Jan 13, 2020 at 04:42:10PM +0000, Chris Mason wrote:
> On 13 Jan 2020, at 10:37, Matthew Wilcox wrote:
> > From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
> > I think everybody hates the readpages API.  The fundamental problem
> > with it is that it passes the pages to be read on a doubly linked
> > list, using the ->lru list in the struct page.  That means the
> > filesystems have to do the work of calling
> > add_to_page_cache{,_lru,_locked}, and handling failures (because
> > another task is also accessing that chunk of the file, and so it
> > fails).
> 
> I've always kind of liked the compromise of sending the lists.  It's 
> really good at the common case and doesn't have massive problems when 
> things break down.

I think we'll have to disagree on that point.  Linked lists are awful
for the CPU in the common case, and the error handling code for "things
break down" is painful.  I'm pretty sure I spotted three bugs in the
CIFS implementation.

> Just glancing through the patches, the old 
> readpages is called in bigger chunks, so for massive reads we can do 
> more effective readahead on metadata.  I don't think any of us actually 
> do, but we could.
> 
> With this new operation, our window is constant, and much smaller.
> 
> > The fundamental question is, how do we indicate to the implementation
> > of ->readahead what pages to operate on?  I've gone with passing a
> > pagevec.  This has the obvious advantage that it's a data structure
> > that already exists and is used within filemap for batches of pages.
> > I had to add a bit of new infrastructure to support iterating over
> > the pages in the pagevec, but with that done, it's quite nice.
> >
> > I think the biggest problem is that the size of the pagevec is
> > limited to 15 pages (60kB).  So that'll mean that if the readahead
> > window bumps all the way up to 256kB, we may end up making 5 BIOs
> > (and merging them) instead of one.  I'd kind of like to be able to
> > allocate variable length pagevecs while allowing regular pagevecs to
> > be allocated on the stack, but I can't figure out a way to do that.
> > eg this doesn't work:
> >
> > -       struct page *pages[PAGEVEC_SIZE];
> > +       union {
> > +               struct page *pages[PAGEVEC_SIZE];
> > +               struct page *_pages[];
> > +       }
> >
> > and if we just allocate them, useful and wonderful tools are going to
> > point out when pages[16] is accessed that we've overstepped the end of
> > the array.
> >
> > I have considered alternatives to the pagevec like just having the
> > ->readahead implementation look up the pages in the i_pages XArray
> > directly.  That didn't work out too well.
> >
> 
> Btrfs basically does this now, honestly iomap isn't that far away.  
> Given how sensible iomap is for this, I'd rather see us pile into that 
> abstraction than try to pass pagevecs for large ranges.  Otherwise, if 
> the lists are awkward we can make some helpers to make it less error 
> prone?

I did do a couple of helpers for lists for iomap before deciding the
whole thing was too painful.  I didn't look at btrfs until just now, but, um ...

int extent_readpages(struct address_space *mapping, struct list_head *pages,
                     unsigned nr_pages)
..
        struct page *pagepool[16];
..
        while (!list_empty(pages)) {
..
                        list_del(&page->lru);
                        if (add_to_page_cache_lru(page, mapping, page->index,
..
                        pagepool[nr++] = page;

You're basically doing exactly what I'm proposing to be the new interface!
OK, you get one extra page per batch ;-P


