Re: [PATCH 1/3] mpage: mpage_readpages() should submit IO as read-ahead

On 5/29/18 4:18 PM, Jens Axboe wrote:
> On 5/29/18 3:59 PM, Andrew Morton wrote:
>> On Tue, 29 May 2018 08:36:41 -0600 Jens Axboe <axboe@xxxxxxxxx> wrote:
>>
>>> On 5/24/18 1:43 PM, Andrew Morton wrote:
>>>> On Thu, 24 May 2018 10:02:52 -0600 Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>>
>>>>> a_ops->readpages() is only ever used for read-ahead, yet we don't
>>>>> flag the IO being submitted as such. Fix that up. Any file system
>>>>> that uses mpage_readpages() as its ->readpages() implementation
>>>>> will now get this right.
>>>>>
>>>>> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
>>>>> ---
>>>>>  fs/mpage.c | 17 +++++++++--------
>>>>>  1 file changed, 9 insertions(+), 8 deletions(-)
>>>>>
>>>>> diff --git a/fs/mpage.c b/fs/mpage.c
>>>>> index b7e7f570733a..0a5474237f5e 100644
>>>>> --- a/fs/mpage.c
>>>>> +++ b/fs/mpage.c
>>>>> @@ -146,7 +146,7 @@ static struct bio *
>>>>>  do_mpage_readpage(struct bio *bio, struct page *page, unsigned nr_pages,
>>>>>  		sector_t *last_block_in_bio, struct buffer_head *map_bh,
>>>>>  		unsigned long *first_logical_block, get_block_t get_block,
>>>>> -		gfp_t gfp)
>>>>> +		gfp_t gfp, bool is_readahead)
>>>>
>>>> That's a lot of arguments.
>>>>
>>>> I suspect we'll have a faster kernel if we mark this __always_inline. 
>>>> I think my ancient "This isn't called much at all" over
>>>> mpage_readpage() remains true.  Almost all callers come in via
>>>> mpage_readpages(), which would benefit from the inlining.  But mpage.o
>>>> gets 1.5k fatter.  hm.
>>>
>>> Was going to send out a v2, but would be nice to get some consensus on
>>> what you prefer here. I can either do the struct version, or I can
>>> keep it as-is (going from 8 to 9 arguments). For the struct version,
>>> I'd prefer to do that as a prep patch, so the functional change is
>>> clear.
>>
>> The struct thing makes the code smaller, and presumably faster, doesn't
>> it?  I suppose it saves a bit of stack as well, by letting the callee
>> access the caller's locals rather than a copy of them.  All sounds good
>> to me?
> 
> That's what I thought too, so I already prepped the series. Sending it out.

We could actually kill args->gfp as well, since that's dependent on
args->is_readahead anyway. Separate patch, or fold into patch #2?
Incremental below.


diff --git a/fs/mpage.c b/fs/mpage.c
index a6344996f924..b0f9de977526 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -137,12 +137,11 @@ struct mpage_readpage_args {
 	struct bio *bio;
 	struct page *page;
 	unsigned nr_pages;
+	bool is_readahead;
 	sector_t last_block_in_bio;
 	struct buffer_head map_bh;
 	unsigned long first_logical_block;
 	get_block_t *get_block;
-	gfp_t gfp;
-	bool is_readahead;
 };
 
 /*
@@ -171,9 +170,18 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	struct block_device *bdev = NULL;
 	int length;
 	int fully_mapped = 1;
-	int op_flags = args->is_readahead ? REQ_RAHEAD : 0;
+	int op_flags;
 	unsigned nblocks;
 	unsigned relative_block;
+	gfp_t gfp;
+
+	if (args->is_readahead) {
+		op_flags = REQ_RAHEAD;
+		gfp = readahead_gfp_mask(page->mapping);
+	} else {
+		op_flags = 0;
+		gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
+	}
 
 	if (page_has_buffers(page))
 		goto confused;
@@ -295,7 +303,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 				goto out;
 		}
 		args->bio = mpage_alloc(bdev, blocks[0] << (blkbits - 9),
-				min_t(int, args->nr_pages, BIO_MAX_PAGES), args->gfp);
+				min_t(int, args->nr_pages, BIO_MAX_PAGES), gfp);
 		if (args->bio == NULL)
 			goto confused;
 	}
@@ -376,7 +384,6 @@ mpage_readpages(struct address_space *mapping, struct list_head *pages,
 {
 	struct mpage_readpage_args args = {
 		.get_block = get_block,
-		.gfp = readahead_gfp_mask(mapping),
 		.is_readahead = true,
 	};
 	unsigned page_idx;
@@ -388,7 +395,7 @@ mpage_readpages(struct address_space *mapping, struct list_head *pages,
 		list_del(&page->lru);
 		if (!add_to_page_cache_lru(page, mapping,
 					page->index,
-					args.gfp)) {
+					readahead_gfp_mask(mapping))) {
 			args.page = page;
 			args.nr_pages = nr_pages - page_idx;
 			args.bio = do_mpage_readpage(&args);
@@ -411,7 +418,6 @@ int mpage_readpage(struct page *page, get_block_t get_block)
 		.page = page,
 		.nr_pages = 1,
 		.get_block = get_block,
-		.gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL),
 	};
 
 	args.bio = do_mpage_readpage(&args);

-- 
Jens Axboe