On 2012-09-25 19:59, Jens Axboe wrote:
> On 2012-09-25 19:49, Jeff Moyer wrote:
>> Jeff Moyer <jmoyer@xxxxxxxxxx> writes:
>>
>>> Mikulas Patocka <mpatocka@xxxxxxxxxx> writes:
>>>
>>>> Hi Jeff
>>>>
>>>> Thanks for testing.
>>>>
>>>> It would be interesting ... what happens if you take patch 3, leave
>>>> "struct percpu_rw_semaphore bd_block_size_semaphore" in "struct
>>>> block_device", but remove any use of the semaphore from
>>>> fs/block_dev.c? Will the performance be like the unpatched kernel,
>>>> or like patch 3? It could be that the change in alignment affects
>>>> performance on your CPU too, just differently than on my CPU.
>>>
>>> It turns out to be exactly the same performance as with the 3rd patch
>>> applied, so I guess it does have something to do with cache alignment.
>>> Here is the patch (against vanilla) I ended up testing. Let me know if
>>> I've botched it somehow.
>>>
>>> So, next up I'll play similar tricks to what you did (padding struct
>>> block_device in all kernels) to eliminate the differences due to
>>> structure alignment and provide a clear picture of what the locking
>>> effects are.
>>
>> After trying again with the same padding you used in struct
>> bdev_inode, I see no performance differences between any of the
>> patches. I tried bumping up the number of threads to saturate the
>> CPUs on a single NUMA node on my hardware, but that resulted in
>> lower IOPS to the device, and hence consumption of less CPU time.
>> So I believe my results to be inconclusive.
>>
>> After talking with Vivek about the problem, he mentioned that it
>> might be worth investigating whether bd_block_size could be protected
>> using SRCU. I looked into it, and the one thing I couldn't reconcile
>> is updating both bd_block_size and inode->i_blkbits at the same time.
>> It would involve (afaiui) adding fields to both the inode and the
>> block_device data structures and using rcu_assign_pointer and
>> rcu_dereference to modify and access the fields, and both fields
>> would need to be protected by the same struct srcu_struct. I'm not
>> sure whether that's a desirable approach. When I started to implement
>> it, it got ugly pretty quickly. What do others think?
>>
>> For now, my preference is to get the full patch set in. I will
>> continue to investigate the performance impact of the data structure
>> size changes that I've been seeing.
>>
>> So, for the four patches:
>>
>> Acked-by: Jeff Moyer <jmoyer@xxxxxxxxxx>
>>
>> Jens, can you have a look at the patch set? We are seeing problem
>> reports of this in the wild[1][2].
>
> I'll queue it up for 3.7. I can run my regular testing on the 8-way;
> it has a knack for showing scaling problems very nicely in aio/dio. As
> long as we're not adding per-inode cache-line dirtying per IO (and the
> per-cpu rw sem looks OK), I don't think there's too much to worry
> about.

I take that back. The series doesn't apply to my current tree. Not too
unexpected, since it's some weeks old. But more importantly, please send
this as a "real" patch series. I don't want to see two implementations
of rw semaphores. I think it's perfectly fine to first do a regular rw
sem, then add a final patch that introduces the cache-friendly variant
from Eric and converts to it. In other words, get rid of 3/4.

--
Jens Axboe
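
For reference, the alignment experiment discussed above amounts to
something like the following. This is a minimal sketch, not Jeff's or
Mikulas's actual test patch, and the exact padding they used is not
shown in this thread; one straightforward way to rule out alignment
effects is to pin vfs_inode to a cache-line boundary so that growing
struct block_device cannot shift it:

#include <linux/cache.h>
#include <linux/fs.h>

struct bdev_inode {
	struct block_device bdev;
	/*
	 * Force vfs_inode onto a cache-line boundary so that adding or
	 * removing members in struct block_device (e.g. the
	 * percpu_rw_semaphore from patch 3) cannot change the
	 * cache-line placement of the inode fields the I/O path
	 * touches.  With this in place, any remaining performance
	 * delta between kernels is attributable to the locking itself.
	 */
	struct inode vfs_inode ____cacheline_aligned_in_smp;
};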
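
The SRCU scheme Jeff outlines, where bd_block_size and inode->i_blkbits
must change together, works out to roughly the following sketch. All
names here (blksize_state, set_block_size, and so on) are hypothetical
illustrations, not actual kernel fields or a concrete proposal; the key
point is that both values live behind a single RCU-published pointer,
so readers always see a consistent pair:

#include <linux/srcu.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/blkdev.h>

struct blksize_state {
	unsigned int bd_block_size;
	unsigned char i_blkbits;
};

/* Initialized elsewhere with init_srcu_struct(&blksize_srcu). */
static struct srcu_struct blksize_srcu;

/* Reader side (the I/O path): a consistent snapshot of both fields. */
static unsigned int read_block_size(struct blksize_state __rcu **statep)
{
	struct blksize_state *s;
	unsigned int size;
	int idx;

	idx = srcu_read_lock(&blksize_srcu);
	s = srcu_dereference(*statep, &blksize_srcu);
	size = s->bd_block_size;
	srcu_read_unlock(&blksize_srcu, idx);
	return size;
}

/* Writer side: publish a new pair, wait out readers, free the old. */
static int set_block_size(struct blksize_state __rcu **statep,
			  unsigned int size)
{
	struct blksize_state *new, *old;

	new = kmalloc(sizeof(*new), GFP_KERNEL);
	if (!new)
		return -ENOMEM;
	new->bd_block_size = size;
	new->i_blkbits = blksize_bits(size);

	/* Assumes the caller serializes writers, e.g. via bd_mutex. */
	old = rcu_dereference_protected(*statep, 1);
	rcu_assign_pointer(*statep, new);
	synchronize_srcu(&blksize_srcu);
	kfree(old);
	return 0;
}

The ugliness Jeff mentions is visible even in this toy version: every
reader of either field has to be converted to go through the pointer,
which touches a lot of code for what is conceptually a simple pair of
integers.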
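
The locking pattern behind Jens's point about per-IO cache-line
dirtying looks roughly like this. The percpu_down_read/percpu_down_write
names match the percpu-rwsem interface being proposed in this series;
the surrounding functions are an illustrative sketch only:

#include <linux/percpu-rwsem.h>

/* Initialized elsewhere with percpu_init_rwsem(&bd_sem). */
static struct percpu_rw_semaphore bd_sem;

static void io_path(void)
{
	/*
	 * Readers bump a per-cpu counter, so concurrent I/O on
	 * different CPUs never bounces a shared cache line.
	 */
	percpu_down_read(&bd_sem);
	/* ... issue I/O against a stable block size ... */
	percpu_up_read(&bd_sem);
}

static void change_block_size(void)
{
	/*
	 * The writer is rare (set_blocksize) and pays the full cost of
	 * synchronizing with every CPU's readers.
	 */
	percpu_down_write(&bd_sem);
	/* ... update bd_block_size and i_blkbits together ... */
	percpu_up_write(&bd_sem);
}

This is why the read side is acceptable in the hot path: the common
case costs a per-cpu increment rather than an atomic on a shared
cache line, pushing all the expense onto the infrequent writer.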