On 2022/03/23 21:18, Jan Kara wrote:
>>>> Use of atomic_t for lo->lo_disk->part0->bd_openers does not help, for
>>>> currently lo->lo_mutex is held in order to avoid races. That is, it is
>>>> disk->open_mutex which loop_clr_fd() needs to hold when accessing
>>>> lo->lo_disk->part0->bd_openers.
>>>
>>> It does help because with atomic_t, seeing any intermediate values is not
>>> possible even for unlocked readers.
>>
>> The Linux memory model guarantees atomic reads from 32-bit integers.
>> But if it makes everyone happier I could do a READ_ONCE here.
>
> Sure, the read is atomic wrt other CPU instructions, but it is not atomic
> wrt how the compiler decides to implement bdi->bd_openers++. So we need to
> make these bd_openers *updates* atomic so that the unlocked reads are
> really safe. That being said I consider the concerns mostly theoretical so
> I don't insist but some checker will surely complain sooner rather than
> later.

KCSAN will complain about this access unless it is wrapped in data_race().
But what I'm saying here is that loop_clr_fd() needs to hold
disk->open_mutex, because blkdev_get_whole() is protected using
disk->open_mutex. Then, KCSAN will not complain about this access.

static int loop_clr_fd(struct loop_device *lo)
{
	int err;

	err = mutex_lock_killable(&lo->lo_mutex);
	if (err)
		return err;
	if (lo->lo_state != Lo_bound) {
		mutex_unlock(&lo->lo_mutex);
		return -ENXIO;
	}
	if (lo->lo_disk->part0->bd_openers > 1) { // Sees lo->lo_disk->part0->bd_openers == 1.
		lo->lo_flags |= LO_FLAGS_AUTOCLEAR;
		mutex_unlock(&lo->lo_mutex);
		return 0;
	}
	// Preemption starts.

		static int blkdev_get_whole(struct block_device *bdev, fmode_t mode)
		{
			struct gendisk *disk = bdev->bd_disk;
			int ret;

			if (disk->fops->open) { // disk->fops->open == NULL because lo_open() is removed.
				ret = disk->fops->open(bdev, mode);
				if (ret) {
					/* avoid ghost partitions on a removed medium */
					if (ret == -ENOMEDIUM &&
					    test_bit(GD_NEED_PART_SCAN, &disk->state))
						bdev_disk_changed(disk, true);
					return ret;
				}
			}

			if (!bdev->bd_openers)
				set_init_blocksize(bdev);
			if (test_bit(GD_NEED_PART_SCAN, &disk->state))
				bdev_disk_changed(disk, false);
			bdev->bd_openers++;
			return 0;
		}

	// Preemption ends.
	lo->lo_state = Lo_rundown;
	mutex_unlock(&lo->lo_mutex);
	__loop_clr_fd(lo, false); // Despite lo->lo_disk->part0->bd_openers > 1.
	return 0;
}