On Tue, Nov 12, 2019 at 02:05:40AM +0000, Qu WenRuo wrote:
>
>
> On 2019/11/12 9:36 AM, coverity-bot wrote:
> > Hello!
> >
> > This is an experimental automated report about issues detected by Coverity
> > from a scan of next-20191108 as part of the linux-next weekly scan project:
> > https://scan.coverity.com/projects/linux-next-weekly-scan
> >
> > You're getting this email because you were associated with the identified
> > lines of code (noted below) that were touched by recent commits:
> >
> > 593669fa8fd7 ("btrfs: block-group: Refactor btrfs_read_block_groups()")
> >
> > Coverity reported the following:
> >
> > *** CID 1487834: Concurrent data access violations (MISSING_LOCK)
> > /fs/btrfs/block-group.c: 1721 in read_one_block_group()
> > 1715     	 * truncate the old free space cache inode and
> > 1716     	 * setup a new one.
> > 1717     	 * b) Setting 'dirty flag' makes sure that we flush
> > 1718     	 * the new space cache info onto disk.
> > 1719     	 */
> > 1720     	if (btrfs_test_opt(info, SPACE_CACHE))
> > vvv     CID 1487834: Concurrent data access violations (MISSING_LOCK)
> > vvv     Accessing "cache->disk_cache_state" without holding lock
> > vvv     "btrfs_block_group_cache.lock". Elsewhere,
> > vvv     "btrfs_block_group_cache.disk_cache_state" is accessed with
> > vvv     "btrfs_block_group_cache.lock" held 12 out of 13 times (6 of
> > vvv     these accesses strongly imply that it is necessary).
>
> It's a false alert: read_one_block_group() runs in mount context, so
> nobody else can access the fs yet.
>
> Of course we could hold the lock; it would always hit the uncontended
> fast path, so there would be no performance change at all. But I'm not
> sure what the proper way to do this is in btrfs.

Okay, thanks for double-checking! Yeah, this looks like a hard one to
teach Coverity about... I'll add it to my notes! :)

--
Kees Cook
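
For reference, a minimal sketch of the locked variant Qu mentions,
assuming the store targeted by the report sets BTRFS_DC_CLEAR (as the
surrounding btrfs code of that era does); this illustrates wrapping the
access in cache->lock and is not the patch that was actually applied:

	/*
	 * Sketch only: read_one_block_group() runs at mount time with
	 * no concurrent users, so the lock is not strictly required.
	 * Taking cache->lock around the store would silence the
	 * Coverity report; the spinlock is uncontended here, so it
	 * always hits the fast path with no measurable cost.
	 */
	if (btrfs_test_opt(info, SPACE_CACHE)) {
		spin_lock(&cache->lock);
		cache->disk_cache_state = BTRFS_DC_CLEAR;
		spin_unlock(&cache->lock);
	}

The only effect of such a change would be to make the locking
discipline around disk_cache_state uniform, so that static analyzers
like Coverity can verify it without a per-site annotation.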