Re: [RFCv3 2/3] iomap: Change uptodate variable name to state

On Mon, Feb 27, 2023 at 01:13:31AM +0530, Ritesh Harjani (IBM) wrote:
> This patch renames the struct iomap_page uptodate & uptodate_lock
> members to state and state_lock to better reflect their purpose for
> the upcoming patch. It also introduces accessor functions for
> updating the uptodate state bits in the iop->state bitmap, which
> makes it easier to see which bitmap type each code path is
> referring to.
> 
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>
> ---
>  fs/iomap/buffered-io.c | 65 ++++++++++++++++++++++++++++++++----------
>  1 file changed, 50 insertions(+), 15 deletions(-)
....

The mechanical change itself looks fine, so from that perspective:

Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>

However, I'm wondering about the efficiency of these bit searches.

> @@ -110,7 +143,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
>  
>  		/* move forward for each leading block marked uptodate */
>  		for (i = first; i <= last; i++) {
> -			if (!test_bit(i, iop->uptodate))
> +			if (!iop_test_uptodate(iop, i, nr_blocks))
>  				break;
>  			*pos += block_size;
>  			poff += block_size;

Looking at this code, it could have been written to use
find_next_zero_bit() (the scan starts at @first, so the offset
variant is needed) rather than testing each bit individually...

> @@ -120,7 +153,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
>  
>  		/* truncate len if we find any trailing uptodate block(s) */
>  		for ( ; i <= last; i++) {
> -			if (test_bit(i, iop->uptodate)) {
> +			if (iop_test_uptodate(iop, i, nr_blocks)) {
>  				plen -= (last - i + 1) * block_size;
>  				last = i - 1;
>  				break;

And this one is find_next_bit()...

>  static void iomap_set_range_uptodate(struct folio *folio,
> @@ -439,6 +473,7 @@ bool iomap_is_partially_uptodate(struct folio *folio, size_t from, size_t count)
>  	struct iomap_page *iop = to_iomap_page(folio);
>  	struct inode *inode = folio->mapping->host;
>  	unsigned first, last, i;
> +	unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
>  
>  	if (!iop)
>  		return false;
> @@ -451,7 +486,7 @@ bool iomap_is_partially_uptodate(struct folio *folio, size_t from, size_t count)
>  	last = (from + count - 1) >> inode->i_blkbits;
>  
>  	for (i = first; i <= last; i++)
> -		if (!test_bit(i, iop->uptodate))
> +		if (!iop_test_uptodate(iop, i, nr_blocks))
>  			return false;

Again, find_next_zero_bit() - the folio is partially uptodate iff
the search finds no zero bit at or before @last.

These seem like worthwhile optimisations in light of the heavier use
these bitmaps will get with sub-folio dirty tracking, especially
considering large folios will now use these paths. Do these
interface changes preclude the use of efficient bitmap searching
functions?

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


