Re: [PATCH 0/4] Fix a crash when block device is read and block size is changed at the same time

On Tue, 25 Sep 2012, Jeff Moyer wrote:

> Jeff Moyer <jmoyer@xxxxxxxxxx> writes:
> 
> > Mikulas Patocka <mpatocka@xxxxxxxxxx> writes:
> >
> >> Hi Jeff
> >>
> >> Thanks for testing.
> >>
> >> It would be interesting ... what happens if you take the patch 3, leave 
> >> "struct percpu_rw_semaphore bd_block_size_semaphore" in "struct 
> >> block_device", but remove any use of the semaphore from fs/block_dev.c? - 
> >> will the performance be like unpatched kernel or like patch 3? It could be 
> >> that the change in the alignment affects performance on your CPU too, just 
> >> differently than on my CPU.
> >
> > It turns out to be exactly the same performance as with the 3rd patch
> > applied, so I guess it does have something to do with cache alignment.
> > Here is the patch (against vanilla) I ended up testing.  Let me know if
> > I've botched it somehow.
> >
> > So, next up I'll play similar tricks to what you did (padding struct
> > block_device in all kernels) to eliminate the differences due to
> > structure alignment and provide a clear picture of what the locking
> > effects are.
> 
> After trying again with the same padding you used in the struct
> bdev_inode, I see no performance differences between any of the
> patches.  I tried bumping up the number of threads to saturate the
> number of cpus on a single NUMA node on my hardware, but that resulted
> in lower IOPS to the device, and hence consumption of less CPU time.
> So, I believe my results to be inconclusive.

For me, the fourth patch with RCU-based locks performed better, so I am 
submitting that.
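For readers unfamiliar with that lock, the idea behind it can be approximated in userspace. The following is a deliberately simplified, counter-based toy with invented names - the real percpu_rw_semaphore uses true per-CPU counters and RCU on the reader fast path, not an array of atomic slots - but it shows the trade-off: readers touch only their own slot (cheap, uncontended), while the rare writer flags itself and waits for all in-flight readers to drain.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <sched.h>

#define NSLOTS 16   /* stand-in for per-CPU reader counters */

struct pcpu_rwsem_like {
    _Atomic int readers[NSLOTS];
    _Atomic bool writer;
};

static void read_lock(struct pcpu_rwsem_like *s, int slot)
{
    for (;;) {
        atomic_fetch_add(&s->readers[slot], 1);
        if (!atomic_load(&s->writer))
            return;                            /* fast path: no writer */
        atomic_fetch_sub(&s->readers[slot], 1); /* back off, writer active */
        while (atomic_load(&s->writer))
            sched_yield();
    }
}

static void read_unlock(struct pcpu_rwsem_like *s, int slot)
{
    atomic_fetch_sub(&s->readers[slot], 1);
}

static void write_lock(struct pcpu_rwsem_like *s)
{
    bool expected = false;
    while (!atomic_compare_exchange_weak(&s->writer, &expected, true))
        expected = false;                      /* claim writer flag */
    for (int i = 0; i < NSLOTS; i++)           /* drain in-flight readers */
        while (atomic_load(&s->readers[i]) > 0)
            sched_yield();
}

static void write_unlock(struct pcpu_rwsem_like *s)
{
    atomic_store(&s->writer, false);
}
```

The point of the design is that the read path, which runs on every I/O, never writes a shared cacheline; the slow drain is paid only on the rare block-size change.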

> After talking with Vivek about the problem, he had mentioned that it
> might be worth investigating whether bd_block_size could be protected
> using SRCU.  I looked into it, and the one thing I couldn't reconcile is
> updating both the bd_block_size and the inode->i_blkbits at the same
> time.  It would involve (afaiui) adding fields to both the inode and the
> block_device data structures and using rcu_assign_pointer and
> rcu_dereference to modify and access the fields, and both fields would
> need to be protected by the same struct srcu_struct.  I'm not sure whether
> that's a desirable approach.  When I started to implement it, it got
> ugly pretty quickly.  What do others think?

Using RCU doesn't seem sensible to me (except for the lock
implementation, as in patch 4). The major problem is that the block
layer reads the block size multiple times, and a crash may happen when
those reads return different values - RCU doesn't protect you against
that: if you read a variable multiple times inside an RCU-protected
section, you can still get different results.
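To make the hazard concrete, here is a hypothetical userspace sketch (invented names, plain C11 atomics standing in for the kernel's accessors - not the actual block-layer code) of why two reads of the block size inside one read-side section can yield inconsistent derived values:

```c
#include <stdatomic.h>

struct bdev_like {
    _Atomic unsigned int block_size;   /* changed concurrently by a blocksize update */
};

/* BROKEN: quotient and remainder come from two independent reads; a
 * concurrent block-size change between them leaves the pair inconsistent,
 * and an RCU read-side section alone would not prevent that. */
static unsigned int bytes_to_blocks_broken(struct bdev_like *bd, unsigned int bytes)
{
    unsigned int q = bytes / atomic_load(&bd->block_size);   /* first read  */
    unsigned int r = bytes % atomic_load(&bd->block_size);   /* second read */
    return q + (r != 0);
}

/* SAFE: sample the block size once, derive everything from the snapshot. */
static unsigned int bytes_to_blocks_safe(struct bdev_like *bd, unsigned int bytes)
{
    unsigned int bs = atomic_load(&bd->block_size);          /* single read */
    return bytes / bs + (bytes % bs != 0);
}
```

Single-threaded, both functions agree; the difference only appears under a concurrent writer, which is exactly the crash scenario this patch set addresses.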

If we wanted to use RCU, we would have to read the block size just once
and pass the value down through all the functions involved - that would
be a massive code change.
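Sketched in the same hypothetical style, that "read once and pass it down" approach would thread the sampled value through every helper instead of letting each helper re-read the shared field:

```c
#include <stdatomic.h>
#include <stddef.h>

struct bdev_like {
    _Atomic unsigned int block_size;
};

/* Every helper takes the already-sampled block size as a parameter. */
static size_t offset_in_block(size_t pos, unsigned int bs)
{
    return pos % bs;
}

static size_t blocks_needed(size_t bytes, unsigned int bs)
{
    return (bytes + bs - 1) / bs;   /* round up */
}

/* Top-level entry point: the only place the shared field is read. */
static size_t request_blocks(struct bdev_like *bd, size_t pos, size_t bytes)
{
    unsigned int bs = atomic_load(&bd->block_size);   /* single sample */
    return blocks_needed(offset_in_block(pos, bs) + bytes, bs);
}
```

Retrofitting this onto the block layer would mean changing the signature of every function on the I/O path that consults the block size - hence the massive code change.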

> For now, my preference is to get the full patch set in.  I will continue
> to investigate the performance impact of the data structure size changes
> that I've been seeing.

Yes, we should get the patches into the kernel.

Mikulas

> So, for the four patches:
> 
> Acked-by: Jeff Moyer <jmoyer@xxxxxxxxxx>
> 
> Jens, can you have a look at the patch set?  We are seeing problem
> reports of this in the wild[1][2].
> 
> Cheers,
> Jeff
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=824107
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=812129
> 

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel

