Re: [PATCH 5/6] iomap: support RWF_UNCACHED for buffered writes

On 12/17/19 6:52 PM, Darrick J. Wong wrote:
> On Tue, Dec 17, 2019 at 07:39:47AM -0700, Jens Axboe wrote:
>> This adds support for RWF_UNCACHED for file systems using iomap to
>> perform buffered writes. We use the generic infrastructure for this,
>> by tracking pages we created and calling write_drop_cached_pages()
>> to issue writeback and prune those pages.
>>
>> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
>> ---
>>  fs/iomap/apply.c       | 35 +++++++++++++++++++++++++++++++++++
>>  fs/iomap/buffered-io.c | 28 ++++++++++++++++++++++++----
>>  fs/iomap/trace.h       |  4 +++-
>>  include/linux/iomap.h  |  5 +++++
>>  4 files changed, 67 insertions(+), 5 deletions(-)
>>
>> diff --git a/fs/iomap/apply.c b/fs/iomap/apply.c
>> index 792079403a22..687e86945b27 100644
>> --- a/fs/iomap/apply.c
>> +++ b/fs/iomap/apply.c
>> @@ -92,5 +92,40 @@ iomap_apply(struct iomap_ctx *data, const struct iomap_ops *ops,
>>  				     data->flags, &iomap);
>>  	}
>>  
>> +	if (written <= 0)
>> +		goto out;
>> +
>> +	/*
>> +	 * If this is an uncached write, then we need to write and sync this
>> +	 * range of data. This is only true for a buffered write, not for
>> +	 * O_DIRECT.
>> +	 */
> 
> I tracked down the original conversation, where Dave had this to say:
> 
> "Hence, IMO, this is the wrong layer in iomap to be dealing with¬
> writeback and cache residency for uncached IO. We should be caching¬
> residency/invalidation at a per-IO level, not a per-page level."
> 
> He's right, but I still think it doesn't quite smell right to be putting
> this in iomap_apply, since that's a generic function that implements
> iteration and shouldn't be messing with cache invalidation.
> 
> So I have two possible suggestions for where to put this:
> 
> (1) Add the "flush and maybe invalidate" behavior to the bottom of
> iomap_write_actor like I said in the v4 patchset.  That will issue
> writeback and invalidate pagecache in smallish quantities.
> 
> (2) Find a way to pass the IOMAP_F_PAGE_CREATE state from
> iomap_write_actor back to iomap_file_buffered_write and do the
> flush-and-invalidate for the entire write request once at the end.

Thanks for your suggestion, I'll look into option 2. Option 1 isn't
going to work, as flushing in smaller quantities is going to cause a
performance issue for streamed IO.

>> @@ -851,10 +860,18 @@ iomap_write_actor(const struct iomap_ctx *data, struct iomap *iomap,
>>  			break;
>>  		}
>>  
>> -		status = iomap_write_begin(inode, pos, bytes, 0, &page, iomap,
>> -				srcmap);
>> -		if (unlikely(status))
>> +retry:
>> +		status = iomap_write_begin(inode, pos, bytes, flags,
>> +						&page, iomap, srcmap);
>> +		if (unlikely(status)) {
>> +			if (status == -ENOMEM &&
>> +			    (flags & IOMAP_WRITE_F_UNCACHED)) {
>> +				iomap->flags |= IOMAP_F_PAGE_CREATE;
>> +				flags &= ~IOMAP_WRITE_F_UNCACHED;
> 
> What's the strategy here?  We couldn't get a page for an uncached write,
> so try again as a regular cached write?

The idea is that we start with IOMAP_WRITE_F_UNCACHED set, in which case
we only do a page lookup, not a create. If that succeeds, then we know
the page was already in the page cache. If it fails with -ENOMEM, we
record that by setting IOMAP_F_PAGE_CREATE in the iomap flags, then
clear IOMAP_WRITE_F_UNCACHED and retry. The retry will create the page,
and the caller now knows that we had to create pages to satisfy this
write. The caller uses this information to invalidate the entire range.

Hope that explains it better!

> Thanks for making the updates, it's looking better.

Thanks!

-- 
Jens Axboe



