From: Niels Dossche <dossche.niels@xxxxxxxxx>

commit 06bae876634ebf837ba70ea3de532b288326103d upstream.

bytes_pinned is always accessed under space_info->lock, except in
btrfs_preempt_reclaim_metadata_space, whereas the other members are
accessed under that lock. The reserved members of the rsvs are also
accessed partly under a lock and partly not. Move all these accesses
under the same lock to ensure consistency.

This could potentially race and lead to a flush instead of a commit, but
it's not a big problem as it only affects the preemptive flush.

CC: stable@xxxxxxxxxxxxxxx # 5.15+
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@xxxxxxx>
Reviewed-by: Josef Bacik <josef@xxxxxxxxxxxxxx>
Signed-off-by: Niels Dossche <niels.dossche@xxxxxxxx>
Signed-off-by: Niels Dossche <dossche.niels@xxxxxxxxx>
Reviewed-by: David Sterba <dsterba@xxxxxxxx>
Signed-off-by: David Sterba <dsterba@xxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 fs/btrfs/space-info.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -1061,7 +1061,6 @@ static void btrfs_preempt_reclaim_metada
 			 trans_rsv->reserved;
 	if (block_rsv_size < space_info->bytes_may_use)
 		delalloc_size = space_info->bytes_may_use - block_rsv_size;
-	spin_unlock(&space_info->lock);
 
 	/*
 	 * We don't want to include the global_rsv in our calculation,
@@ -1092,6 +1091,8 @@ static void btrfs_preempt_reclaim_metada
 		flush = FLUSH_DELAYED_REFS_NR;
 	}
 
+	spin_unlock(&space_info->lock);
+
 	/*
 	 * We don't want to reclaim everything, just a portion, so scale
 	 * down the to_reclaim by 1/4. If it takes us down to 0,
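
For context, the pattern the patch applies is simply to widen the critical
section so that every counter feeding the flush-type decision is read under
space_info->lock, giving one consistent snapshot. Below is a minimal
userspace sketch of that pattern, not the kernel code: the struct layout,
the rsv_reserved field, the pick_flush_type() helper and the comparison
thresholds are simplified stand-ins, and a pthread mutex stands in for the
kernel spinlock.

/*
 * Illustrative userspace sketch of the locking pattern: sample all the
 * counters and choose the flush type while still holding the lock, and
 * only then drop it.
 */
#include <pthread.h>
#include <stdio.h>

struct space_info {
	pthread_mutex_t lock;          /* stands in for space_info->lock */
	unsigned long bytes_may_use;   /* outstanding reservations       */
	unsigned long bytes_pinned;    /* space freed by the next commit */
	unsigned long rsv_reserved;    /* sum of the block rsv reserves  */
};

enum flush_type { FLUSH_DELALLOC, COMMIT_TRANS, FLUSH_DELAYED_REFS };

/*
 * Before the fix, bytes_pinned and the rsv reserves were read after the
 * lock had been dropped, so the decision could mix values sampled at
 * different points in time.  Here the lock is held across the whole
 * decision.
 */
static enum flush_type pick_flush_type(struct space_info *si)
{
	unsigned long delalloc_size = 0;
	enum flush_type flush;

	pthread_mutex_lock(&si->lock);
	if (si->rsv_reserved < si->bytes_may_use)
		delalloc_size = si->bytes_may_use - si->rsv_reserved;

	/* All counters are sampled while the lock is still held ...     */
	if (delalloc_size > si->bytes_pinned)
		flush = FLUSH_DELALLOC;
	else if (si->bytes_pinned > si->rsv_reserved)
		flush = COMMIT_TRANS;
	else
		flush = FLUSH_DELAYED_REFS;
	/* ... and it is dropped only after the flush type is chosen.    */
	pthread_mutex_unlock(&si->lock);

	return flush;
}

int main(void)
{
	struct space_info si = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.bytes_may_use = 8 << 20,
		.bytes_pinned = 1 << 20,
		.rsv_reserved = 2 << 20,
	};

	printf("chosen flush type: %d\n", pick_flush_type(&si));
	return 0;
}

The only point of the sketch is that the unlock moves below the last read
of bytes_pinned and the rsv reserves, mirroring where the patch moves
spin_unlock() in btrfs_preempt_reclaim_metadata_space.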