Re: [PATCH 3/5] mm: make buffered writes work with RWF_UNCACHED

On 12/10/19 5:23 PM, Dave Chinner wrote:
> On Tue, Dec 10, 2019 at 09:24:52AM -0700, Jens Axboe wrote:
>> If RWF_UNCACHED is set for io_uring (or pwritev2(2)), we'll drop the
>> cache instantiated for buffered writes. If new pages aren't
>> instantiated, we leave them alone. This provides similar semantics to
>> reads with RWF_UNCACHED set.
> 
> So what about filesystems that don't use generic_perform_write()?
> i.e. Anything that uses the iomap infrastructure (i.e.
> iomap_file_buffered_write() instead of generic_file_write_iter())
> will currently ignore RWF_UNCACHED. That's XFS and gfs2 right now,
> but there are likely to be more in the near future as more
> filesystems are ported to the iomap infrastructure.

I'll skip this one, since you already found it.
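
For anyone following along, the intended usage from userspace is just a
normal buffered write with the new flag set - roughly like the below.
This is a sketch only: RWF_UNCACHED comes from the patched uapi headers
in this series, and the file name and 4k size are made up for
illustration.

/*
 * Sketch only: an uncached buffered write from userspace with this
 * series applied.  RWF_UNCACHED is provided by the patched headers.
 */
#define _GNU_SOURCE
#include <sys/uio.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
	char buf[4096];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	int fd;

	fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(buf, 0xaa, sizeof(buf));

	/*
	 * Buffered write; any page cache instantiated by this write is
	 * dropped again once writeback has been kicked off, while pages
	 * that were already resident are left alone.
	 */
	if (pwritev2(fd, &iov, 1, 0, RWF_UNCACHED) < 0)
		perror("pwritev2");

	close(fd);
	return 0;
}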

> I'd also really like to see extensive fsx and fsstress testing of
> this new IO mode before it is committed - this is going to exercise page
> cache coherency across different operations in new and unique
> ways. That means we need patches to fstests to detect and use this
> functionality when available, and new tests that explicitly exercise
> combinations of buffered, mmap, dio and uncached for a range of
> different IO sizes and alignments (e.g. mixing sector sized uncached
> IO with page sized buffered/mmap/dio and vice versa).
> 
> We are not going to have a repeat of the copy_file_range() data
> corruption fuckups because no testing was done and no test
> infrastructure was written before the new API was committed.

Oh I totally agree, and there's no push from my end on this. I just
think it's a cool feature and could be very useful, but it obviously
needs a healthy dose of testing and test cases written. I'll be doing
that as well.

>> +void write_drop_cached_pages(struct page **pgs, struct address_space *mapping,
>> +			     unsigned *nr)
>> +{
>> +	loff_t start, end;
>> +	int i;
>> +
>> +	end = 0;
>> +	start = LLONG_MAX;
>> +	for (i = 0; i < *nr; i++) {
>> +		struct page *page = pgs[i];
>> +		loff_t off;
>> +
>> +		off = (loff_t) page_to_index(page) << PAGE_SHIFT;
>> +		if (off < start)
>> +			start = off;
>> +		if (off > end)
>> +			end = off;
>> +		get_page(page);
>> +	}
>> +
>> +	__filemap_fdatawrite_range(mapping, start, end, WB_SYNC_NONE);
>> +
>> +	for (i = 0; i < *nr; i++) {
>> +		struct page *page = pgs[i];
>> +
>> +		lock_page(page);
>> +		if (page->mapping == mapping) {
>> +			wait_on_page_writeback(page);
>> +			if (!page_has_private(page) ||
>> +			    try_to_release_page(page, 0))
>> +				remove_mapping(mapping, page);
>> +		}
>> +		unlock_page(page);
>> +	}
>> +	*nr = 0;
>> +}
>> +EXPORT_SYMBOL_GPL(write_drop_cached_pages);
>> +
>> +#define GPW_PAGE_BATCH		16
> 
> In terms of performance, file fragmentation and premature filesystem
> aging, this is also going to suck *really badly* for filesystems
> that use delayed allocation because it is going to force conversion
> of delayed allocation extents during the write() call. IOWs,
> it adds all the overheads of doing delayed allocation, but it reaps
> none of the benefits because it doesn't allow large contiguous
> extents to build up in memory before physical allocation occurs.
> i.e. there is no "delayed" in this allocation....
> 
> So it might work fine on a pristine, empty filesystem where it is
> easy to find contiguous free space across multiple allocations, but
> it's going to suck after a few months of production usage has
> fragmented all the free space into tiny pieces...

I totally agree on this one, and I'm not a huge fan of it. But
considering your suggestion in the other email, I think we just need to
move this up a notch and do it per-write instead. If we can pass back
information about the state of the page cache for the range we care
about, then there's no reason to do it per-page for the write case.
Reads are still best done per-page, and handling them that way is also
what lets us avoid the LRU overhead.
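
To make that concrete, something along these lines is what I have in
mind - a rough sketch only, with a made-up helper name, not what the
current patch does:

#include <linux/fs.h>
#include <linux/pagemap.h>

/*
 * Sketch only -- a hypothetical per-write variant, not the code posted
 * in this series.  The idea: run this once after the buffered copy for
 * [pos, pos + count) completes (and only when the write instantiated
 * the pages itself), instead of batching pages inside
 * generic_perform_write().
 */
static void uncached_write_drop_range(struct address_space *mapping,
				      loff_t pos, size_t count)
{
	loff_t end = pos + count - 1;

	/* write back the range we just dirtied, and wait for it */
	filemap_write_and_wait_range(mapping, pos, end);

	/*
	 * Drop the now-clean pages for the whole range in one go.
	 * invalidate_mapping_pages() skips pages that are dirty, under
	 * writeback, locked, or mapped, so anything still in use stays.
	 */
	invalidate_mapping_pages(mapping, pos >> PAGE_SHIFT,
				 end >> PAGE_SHIFT);
}

That keeps the writeback granularity at the size of the write rather
than a fixed 16 page batch, which should also give delayed allocation a
chance to see the whole extent at once.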

-- 
Jens Axboe



