Re: [RFCv4 3/3] iomap: Support subpage size dirty tracking to improve write performance

On Thu, May 04, 2023 at 08:21:09PM +0530, Ritesh Harjani (IBM) wrote:
> @@ -90,12 +116,21 @@ iomap_page_create(struct inode *inode, struct folio *folio, unsigned int flags)
>  	else
>  		gfp = GFP_NOFS | __GFP_NOFAIL;
> 
> -	iop = kzalloc(struct_size(iop, state, BITS_TO_LONGS(nr_blocks)),
> +	/*
> +	 * iop->state tracks two sets of state flags when the
> +	 * filesystem block size is smaller than the folio size.
> +	 * The first state tracks per-filesystem block uptodate
> +	 * and the second tracks per-filesystem block dirty
> +	 * state.
> +	 */

"per-filesystem"?  I think you mean "per-block (uptodate|dirty) state".
Using "per-block" naming throughout this patchset might help readability.
It's currently an awkward mix of "subpage", "sub-page" and "sub-folio".
It also feels like you're adding a comment to every non-mechanical change
you make, which isn't necessarily helpful.  Changelog, sure, but
sometimes your comments simply re-state what your change is doing.

> -static void iomap_iop_set_range_uptodate(struct folio *folio,
> -		struct iomap_page *iop, size_t off, size_t len)
> +static void iomap_iop_set_range(struct folio *folio, struct iomap_page *iop,
> +		size_t off, size_t len, enum iop_state state)
>  {
>  	struct inode *inode = folio->mapping->host;
> -	unsigned first = off >> inode->i_blkbits;
> -	unsigned last = (off + len - 1) >> inode->i_blkbits;
> +	unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);
> +	unsigned int first_blk = (off >> inode->i_blkbits);
> +	unsigned int last_blk = ((off + len - 1) >> inode->i_blkbits);
> +	unsigned int nr_blks = last_blk - first_blk + 1;
>  	unsigned long flags;
> -	unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
> 
> -	spin_lock_irqsave(&iop->state_lock, flags);
> -	iop_set_range_uptodate(iop, first, last - first + 1, nr_blocks);
> -	if (iop_uptodate_full(iop, nr_blocks))
> -		folio_mark_uptodate(folio);
> -	spin_unlock_irqrestore(&iop->state_lock, flags);
> +	switch (state) {
> +	case IOP_STATE_UPDATE:
> +		if (!iop) {
> +			folio_mark_uptodate(folio);
> +			return;
> +		}
> +		spin_lock_irqsave(&iop->state_lock, flags);
> +		iop_set_range_uptodate(iop, first_blk, nr_blks, blks_per_folio);
> +		if (iop_uptodate_full(iop, blks_per_folio))
> +			folio_mark_uptodate(folio);
> +		spin_unlock_irqrestore(&iop->state_lock, flags);
> +		break;
> +	case IOP_STATE_DIRTY:
> +		if (!iop)
> +			return;
> +		spin_lock_irqsave(&iop->state_lock, flags);
> +		iop_set_range_dirty(iop, first_blk, nr_blks, blks_per_folio);
> +		spin_unlock_irqrestore(&iop->state_lock, flags);
> +		break;
> +	}
>  }

I can't believe this is what Dave wanted you to do.  iomap_iop_set_range()
should be the low-level helper called by iop_set_range_uptodate() and
iop_set_range_dirty(), not the other way around.



