On Wed, 26 Sep 2012, Jeff Moyer wrote:

> Mikulas Patocka <mpatocka@xxxxxxxxxx> writes:
>
> > On Tue, 25 Sep 2012, Jeff Moyer wrote:
> >
> >> Jeff Moyer <jmoyer@xxxxxxxxxx> writes:
> >>
> >> > Mikulas Patocka <mpatocka@xxxxxxxxxx> writes:
> >> >
> >> >> Hi Jeff
> >> >>
> >> >> Thanks for testing.
> >> >>
> >> >> It would be interesting ... what happens if you take patch 3, leave
> >> >> "struct percpu_rw_semaphore bd_block_size_semaphore" in "struct
> >> >> block_device", but remove any use of the semaphore from
> >> >> fs/block_dev.c? - will the performance be like the unpatched kernel
> >> >> or like patch 3? It could be that the change in alignment affects
> >> >> performance on your CPU too, just differently than on my CPU.
> >> >
> >> > It turns out to be exactly the same performance as with the 3rd patch
> >> > applied, so I guess it does have something to do with cache alignment.
> >> > Here is the patch (against vanilla) I ended up testing. Let me know if
> >> > I've botched it somehow.
> >> >
> >> > So, next up I'll play similar tricks to what you did (padding struct
> >> > block_device in all kernels) to eliminate the differences due to
> >> > structure alignment and provide a clear picture of what the locking
> >> > effects are.
> >>
> >> After trying again with the same padding you used in the struct
> >> bdev_inode, I see no performance differences between any of the
> >> patches. I tried bumping up the number of threads to saturate the
> >> number of cpus on a single NUMA node on my hardware, but that resulted
> >> in lower IOPS to the device, and hence consumption of less CPU time.
> >> So, I believe my results to be inconclusive.
> >
> > For me, the fourth patch with RCU-based locks performed better, so I am
> > submitting that.
> >
> >> After talking with Vivek about the problem, he had mentioned that it
> >> might be worth investigating whether bd_block_size could be protected
> >> using SRCU. I looked into it, and the one thing I couldn't reconcile is
> >> updating both the bd_block_size and the inode->i_blkbits at the same
> >> time. It would involve (afaiui) adding fields to both the inode and the
> >> block_device data structures and using rcu_assign_pointer and
> >> rcu_dereference to modify and access the fields, and both fields would
> >> need to be protected by the same struct srcu_struct. I'm not sure
> >> whether that's a desirable approach. When I started to implement it, it
> >> got ugly pretty quickly. What do others think?
> >
> > Using RCU doesn't seem sensible to me (except for the lock
> > implementation, as it is in patch 4). The major problem is that the
> > block layer reads the blocksize multiple times, and when different
> > values are read, a crash may happen - RCU doesn't protect you against
> > that. If you read a variable multiple times in an RCU-protected
> > section, you can still get different results.
>
> SRCU is sleepable, so it could (I think) be used in the same manner as
> your rw semaphore. The only difference is that it would require changing
> bd_blocksize and i_blkbits to pointers and protecting them both with the
> same srcu struct. Then, the inode's i_blkbits would also need to be
> special-cased, so that we only require such handling when it is
> associated with a block device. It got messy.

No, it couldn't be used this way. If you do

	idx = srcu_read_lock(&srcu);
	ptr1 = srcu_dereference(pointer, &srcu);
	ptr2 = srcu_dereference(pointer, &srcu);
	srcu_read_unlock(&srcu, idx);

it doesn't guarantee that ptr1 == ptr2. All that it guarantees is that
when synchronize_srcu exits, there are no references to the old structure.
But after rcu_assign_pointer and before synchronize_srcu exits, readers
can read both the old and the new value of the pointer, and it is not
specified which value they read.
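For illustration, a minimal sketch of the update side (not actual kernel
code: struct blksize_holder is a made-up container, and we assume writers
are serialized by something like bd_mutex; 'pointer' and 'srcu' are the
same as in the reader above):

	/*
	 * Hypothetical update side. struct blksize_holder is a made-up
	 * container type; writers are assumed serialized by an outer lock.
	 */
	struct blksize_holder { unsigned int blkbits; };
	struct blksize_holder *new, *old;

	new = kmalloc(sizeof(*new), GFP_KERNEL);
	if (!new)
		return -ENOMEM;
	new->blkbits = blkbits;

	old = pointer;			/* safe: writers are serialized */
	rcu_assign_pointer(pointer, new);
	/*
	 * From here until synchronize_srcu() returns, a concurrent
	 * srcu_read_lock() section may observe either 'old' or 'new',
	 * and two srcu_dereference() calls in the same section may
	 * return different pointers.
	 */
	synchronize_srcu(&srcu);	/* no reader still holds 'old' */
	kfree(old);

So the read side would have to fetch the pointer once and use that single
snapshot for the whole operation, which is exactly the problem below.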
> > If we wanted to use RCU, we would have to read the blocksize just once
> > and pass the value between all the functions involved - that would
> > result in a massive code change.
>
> If we did that, we wouldn't need rcu at all, would we?

Yes, we wouldn't need RCU then.

Mikulas

> Cheers,
> Jeff

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel