Re: [PATCHv11 8/8] iomap: Add per-block dirty state tracking to improve performance

"Darrick J. Wong" <djwong@xxxxxxxxxx> writes:

> On Mon, Jul 10, 2023 at 11:49:15PM +0530, Ritesh Harjani wrote:
>> Matthew Wilcox <willy@xxxxxxxxxxxxx> writes:
>> 
>> Sorry for the delayed response; I am currently traveling.
>> 
>> > On Fri, Jul 07, 2023 at 08:16:17AM +1000, Dave Chinner wrote:
>> >> On Thu, Jul 06, 2023 at 06:42:36PM +0100, Matthew Wilcox wrote:
>> >> > On Thu, Jul 06, 2023 at 08:16:05PM +0530, Ritesh Harjani wrote:
>> >> > > > @@ -1645,6 +1766,11 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
>> >> > > >  	int error = 0, count = 0, i;
>> >> > > >  	LIST_HEAD(submit_list);
>> >> > > >  
>> >> > > > +	if (!ifs && nblocks > 1) {
>> >> > > > +		ifs = ifs_alloc(inode, folio, 0);
>> >> > > > +		iomap_set_range_dirty(folio, 0, folio_size(folio));
>> >> > > > +	}
>> >> > > > +
>> >> > > >  	WARN_ON_ONCE(ifs && atomic_read(&ifs->write_bytes_pending) != 0);
>> >> > > >  
>> >> > > >  	/*
>> >> > > > @@ -1653,7 +1779,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
>> >> > > >  	 * invalid, grab a new one.
>> >> > > >  	 */
>> >> > > >  	for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) {
>> >> > > > -		if (ifs && !ifs_block_is_uptodate(ifs, i))
>> >> > > > +		if (ifs && !ifs_block_is_dirty(folio, ifs, i))
>> >> > > >  			continue;
>> >> > > >  
>> >> > > >  		error = wpc->ops->map_blocks(wpc, inode, pos);
>> >> > > > @@ -1697,6 +1823,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
>> >> > > >  		}
>> >> > > >  	}
>> >> > > >  
>> >> > > > +	iomap_clear_range_dirty(folio, 0, end_pos - folio_pos(folio));
>> >> > > >  	folio_start_writeback(folio);
>> >> > > >  	folio_unlock(folio);
>> >> > > >  
>> >> > > 
>> >> > > I think we should fold the below change into this patch.
>> >> > > end_pos is calculated in iomap_do_writepage() such that it is either
>> >> > > folio_pos(folio) + folio_size(folio), or, if that value exceeds
>> >> > > isize, end_pos is clamped to isize.
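>> >> > > 
>> >> > > For reference, the calculation is roughly the following (a sketch
>> >> > > paraphrasing iomap_do_writepage(); the exact upstream code differs
>> >> > > slightly):
>> >> > > 
>> >> > > 	u64 end_pos = folio_pos(folio) + folio_size(folio);
>> >> > > 	u64 isize = i_size_read(inode);
>> >> > > 
>> >> > > 	if (end_pos > isize)
>> >> > > 		end_pos = isize;	/* clamp writeback range to EOF */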
>> >> > > 
>> >> > > The current patch does not have a functional problem, I guess. But in
>> >> > > some cases where truncate races with writeback, it will end up marking
>> >> > > more dirty bits and later won't clear them. Hence I think we should
>> >> > > correct it using the diff below.
>> >> > 
>> >> > I don't think this is the only place where we'll set dirty bits beyond
>> >> > EOF.  For example, if we mmap the last partial folio in a file,
>> >> > page_mkwrite will dirty the entire folio, but we won't write back
>> >> > blocks past EOF.  I think we'd be better off clearing all the dirty
>> >> > bits in the folio, even the ones past EOF.  What do you think?
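>> >> >
>> >> > For context: with this series, ->dirty_folio marks every block in the
>> >> > folio dirty, which is why page_mkwrite on the last partial folio also
>> >> > dirties blocks past EOF.  Roughly, as a sketch of the patch's
>> >> > iomap_dirty_folio():
>> >> >
>> >> > 	bool iomap_dirty_folio(struct address_space *mapping,
>> >> > 			struct folio *folio)
>> >> > 	{
>> >> > 		struct inode *inode = mapping->host;
>> >> >
>> >> > 		/* Allocate per-block state, mark the whole folio dirty. */
>> >> > 		ifs_alloc(inode, folio, 0);
>> >> > 		iomap_set_range_dirty(folio, 0, folio_size(folio));
>> >> > 		return filemap_dirty_folio(mapping, folio);
>> >> > 	}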
>> 
>> Yup, I agree; it's better to clear all the dirty bits in the folio
>> that way. Thanks for the suggestion & nice catch!!
>> 
>> >> 
>> >> Clear the dirty bits beyond EOF where we zero the data range beyond
>> >> EOF in iomap_do_writepage() via folio_zero_segment()?
>> >
>> > That would work, but I think it's simpler to change:
>> >
>> > -	iomap_clear_range_dirty(folio, 0, end_pos - folio_pos(folio));
>> > +	iomap_clear_range_dirty(folio, 0, folio_size(folio));
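>> >
>> > With that change, the tail of iomap_writepage_map() from the hunk
>> > quoted above would look something like this (sketch):
>> >
>> > 	/* Clear every per-block dirty bit, including any past EOF. */
>> > 	iomap_clear_range_dirty(folio, 0, folio_size(folio));
>> > 	folio_start_writeback(folio);
>> > 	folio_unlock(folio);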
>> 
>> Right. 
>> 
>> @Darrick,
>> IMO, we should fold the below change into Patch-8. If you like, I can
>> send a v12 with this change. I re-tested 1k-blocksize fstests on x86
>> with the below change included and didn't find any surprises. Also,
>> the v11 series including the below folded change applies cleanly on
>> your iomap-for-next branch.
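>> 
>> For reference, the change being folded is the one-liner Matthew
>> suggested above:
>> 
>> -	iomap_clear_range_dirty(folio, 0, end_pos - folio_pos(folio));
>> +	iomap_clear_range_dirty(folio, 0, folio_size(folio));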
>
> Yes, please fold this into v12.

Sure, I can fold this into Patch-8 in v12 then. I will also need to
rebase it on top of Matthew's changes then, right?

> I think Matthew might want to get these iomap folio changes out to
> for-next even sooner than -rc4.  If there's time during this week's
> ext4 call, let's talk about that.

Sure. Post our call, I can prepare and send a v12.

-ritesh


