Re: [PATCHv6 08/37] filemap: handle huge pages in do_generic_file_read()

On Thu, Jan 26, 2017 at 02:57:50PM +0300, Kirill A. Shutemov wrote:
> Most of the work happens on the head page. Only when we need to copy
> data to userspace do we find the relevant subpage.
> 
> We are still limited by PAGE_SIZE per iteration. Lifting this limitation
> would require some more work.
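For concreteness, I read "find relevant subpage" as indexing into the
compound page by the low bits of the page cache offset -- a sketch only,
not the patch's actual code:

static inline struct page *find_subpage(struct page *head, pgoff_t offset)
{
	/* Must be passed the head page; the tail pages sit contiguously
	 * behind it, so index by the low bits of the offset. */
	VM_BUG_ON_PAGE(PageTail(head), head);
	return head + (offset & ((1UL << compound_order(head)) - 1));
}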

Now that I've debugged that bit of my brain, here's a more sensible suggestion.

> @@ -1886,6 +1886,7 @@ static ssize_t do_generic_file_read(struct file *filp, loff_t *ppos,
>  			if (unlikely(page == NULL))
>  				goto no_cached_page;
>  		}
> +		page = compound_head(page);
>  		if (PageReadahead(page)) {
>  			page_cache_async_readahead(mapping,
>  					ra, filp, page,

We're going backwards and forwards a lot between subpages and page heads.
I'd like to see us do this:

static inline struct page *pagecache_get_page(struct address_space *mapping,
			pgoff_t offset, int fgp_flags, gfp_t cache_gfp_mask)
{
	/* Existing callers keep getting the subpage... */
	struct page *page = pagecache_get_head(mapping, offset, fgp_flags,
								cache_gfp_mask);
	return page ? find_subpage(page, offset) : NULL;
}

static inline struct page *find_get_head(struct address_space *mapping,
					pgoff_t offset)
{
	/* ...while new callers can ask for the head page directly. */
	return pagecache_get_head(mapping, offset, 0, 0);
}

and then we can switch do_generic_file_read() to call find_get_head(),
eliminating the conversion back and forth between subpages and head pages.
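To make that concrete, the read loop would then track the head page
throughout and derive the subpage only at copy time; a rough sketch with
error paths and readahead elided, still copying at most PAGE_SIZE per
iteration:

	for (;;) {
		struct page *page = find_get_head(mapping, index);

		if (unlikely(page == NULL))
			goto no_cached_page;
		/* Flags, locking and readahead all operate on the head
		 * page; only the copy needs the subpage. */
		ret = copy_page_to_iter(find_subpage(page, index),
					offset, nr, iter);
		put_page(page);
		/* ... advance index/offset, check for completion ... */
	}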



