Re: [RFC 0/8] Replacing the readpages a_op

On 13 Jan 2020, at 10:37, Matthew Wilcox wrote:

> From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
>
> I think everybody hates the readpages API.  The fundamental problem
> with it is that it passes the pages to be read on a doubly linked
> list, using the ->lru list in the struct page.  That means the
> filesystems have to do the work of calling
> add_to_page_cache{,_lru,_locked}, and handling failures (because
> another task is also accessing that chunk of the file, and so it
> fails).
>
> This is an attempt to add a ->readahead op to replace ->readpages.
> I've converted two users, iomap/xfs and cifs.  The cifs conversion is
> lacking fscache support, and that's just because I didn't want to do
> that work; I don't believe there's anything fundamental to it.  But I
> wanted to do iomap because it is The Infrastructure Of The Future and
> cifs because it is the sole remaining user of
> add_to_page_cache_locked(), which enables the last two patches in the
> series.  By the way, that gives CIFS access to the workingset shadow
> infrastructure, which it had to ignore before because it couldn't put
> pages onto the lru list at the right time.

I've always kind of liked the compromise of sending the lists.  It's 
really good in the common case and doesn't have massive problems when 
things break down.  Just glancing through the patches, the old 
readpages is called in bigger chunks, so for massive reads we can do 
more effective readahead on metadata.  I don't think any of us actually 
do that today, but we could.
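
For reference, the pattern a ->readpages implementation follows today looks
roughly like this (a simplified sketch, not lifted from any particular
filesystem; I/O submission elided).  The whole batch for the window arrives
on one list, which is where the bigger chunks come from, and the filesystem
has to add each page to the page cache itself and cope with losing the race:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static int example_readpages(struct file *file, struct address_space *mapping,
                             struct list_head *pages, unsigned nr_pages)
{
        gfp_t gfp = readahead_gfp_mask(mapping);

        while (!list_empty(pages)) {
                struct page *page = lru_to_page(pages);

                list_del(&page->lru);
                if (add_to_page_cache_lru(page, mapping, page->index, gfp)) {
                        /* Another task beat us to this index; skip it. */
                        put_page(page);
                        continue;
                }
                /* ...queue the page for read I/O here... */
        }
        return 0;
}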

With this new operation, our window is constant, and much smaller.
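
To put rough numbers on that (assuming 4kB pages and the 15-entry pagevec
described in the cover letter), the split works out like this:

#include <linux/kernel.h>       /* DIV_ROUND_UP */

/* A 256kB readahead window is 64 pages; a 15-entry pagevec means it is
 * handed over in DIV_ROUND_UP(64, 15) == 5 batches, where ->readpages
 * saw all 64 pages in a single call. */
#define EXAMPLE_WINDOW_PAGES    (256 * 1024 / 4096)             /* 64 */
#define EXAMPLE_READAHEAD_CALLS DIV_ROUND_UP(EXAMPLE_WINDOW_PAGES, 15)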

>
> The fundamental question is, how do we indicate to the implementation
> of ->readahead what pages to operate on?  I've gone with passing a
> pagevec.  This has the obvious advantage that it's a data structure
> that already exists and is used within filemap for batches of pages.
> I had to add a bit of new infrastructure to support iterating over
> the pages in the pagevec, but with that done, it's quite nice.
>
> I think the biggest problem is that the size of the pagevec is limited
> to 15 pages (60kB).  So that'll mean that if the readahead window
> bumps all the way up to 256kB, we may end up making 5 BIOs (and
> merging them) instead of one.  I'd kind of like to be able to allocate
> variable length pagevecs while allowing regular pagevecs to be
> allocated on the stack, but I can't figure out a way to do that.  eg
> this doesn't work:
>
> -       struct page *pages[PAGEVEC_SIZE];
> +       union {
> +               struct page *pages[PAGEVEC_SIZE];
> +               struct page *_pages[];
> +       }
>
> and if we just allocate them, useful and wonderful tools are going to
> point out when pages[16] is accessed that we've overstepped the end of
> the array.
>
> I have considered alternatives to the pagevec like just having the
> ->readahead implementation look up the pages in the i_pages XArray
> directly.  That didn't work out too well.
>

Btrfs basically does this now, and honestly iomap isn't that far away.  
Given how sensible iomap is for this, I'd rather see us pile into that 
abstraction than try to pass pagevecs for large ranges.  Otherwise, if 
the lists are awkward, we could add some helpers to make them less 
error-prone?
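
For what it's worth, "look them up in i_pages directly" would be roughly the
shape below (just a sketch of the idea, not btrfs's actual code; it assumes
the caller has already added the locked pages to the page cache for the
range):

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/xarray.h>

static void example_readahead_range(struct address_space *mapping,
                                    pgoff_t start, pgoff_t end)
{
        XA_STATE(xas, &mapping->i_pages, start);
        struct page *page;

        rcu_read_lock();
        xas_for_each(&xas, page, end) {
                if (xas_retry(&xas, page))
                        continue;
                if (xa_is_value(page))  /* shadow entry, not a page */
                        continue;
                /* ...queue the locked page for read I/O... */
        }
        rcu_read_unlock();
}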

-chris
