Re: [PATCH 4/6] xfs: reduce the number of atomic operations when locking a buffer after lookup

On Mon, Jun 27, 2022 at 04:08:39PM +1000, Dave Chinner wrote:
> From: Dave Chinner <dchinner@xxxxxxxxxx>
> 
> Avoid an extra atomic operation in the non-trylock case by only
> doing a trylock if the XBF_TRYLOCK flag is set. This follows the
> pattern in the IO path with NOWAIT semantics, where the
> "trylock-fail-lock" path showed 5-10% reduced throughput compared to
> a single lock call when not under NOWAIT conditions. So make that
> same change here, too.
> 
> See commit 942491c9e6d6 ("xfs: fix AIM7 regression") for details.
> 
> Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> [hch: split from a larger patch]
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>

LGTM
Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>

--D

> ---
>  fs/xfs/xfs_buf.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 469e84fe21aa..3bcb691c6d95 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -534,11 +534,12 @@ xfs_buf_find_lock(
>  	struct xfs_buf          *bp,
>  	xfs_buf_flags_t		flags)
>  {
> -	if (!xfs_buf_trylock(bp)) {
> -		if (flags & XBF_TRYLOCK) {
> +	if (flags & XBF_TRYLOCK) {
> +		if (!xfs_buf_trylock(bp)) {
>  			XFS_STATS_INC(bp->b_mount, xb_busy_locked);
>  			return -EAGAIN;
>  		}
> +	} else {
>  		xfs_buf_lock(bp);
>  		XFS_STATS_INC(bp->b_mount, xb_get_locked_waited);
>  	}
> -- 
> 2.36.1
> 