On Wed, Jul 20, 2011 at 7:32 PM, NeilBrown <neilb@xxxxxxx> wrote:
> sh->lock is now mainly used to ensure that two threads aren't running
> in the locked part of handle_stripe[56] at the same time.
>
> That can more neatly be achieved with an 'active' flag which we set
> while running handle_stripe.  If we find the flag is set, we simply
> requeue the stripe for later by setting STRIPE_HANDLE.
>
> For safety we take ->device_lock while examining the state of the
> stripe and creating a summary in 'stripe_head_state / r6_state'.
> This possibly isn't needed but as shared fields like ->toread,
> ->towrite are checked it is safer for now at least.
>
> We leave the label after the old 'unlock' called "unlock" because it
> will disappear in a few patches, so renaming seems pointless.
>
> This leaves the stripe 'locked' for longer as we clear STRIPE_ACTIVE
> later, but that is not a problem.

This removal reminds me of something I have wondered about but have not
yet found the time to measure (maybe someone will beat me to it if the
idea is out there): what is the overhead of all the atomic operations
that raid5.c generates?  If we can guarantee that certain updates only
happen under sh->lock (now STRIPE_ACTIVE), can we downgrade set_bit()
and clear_bit() to their non-atomic __set_bit() and __clear_bit()
variants and recover some CPU cycles?

--
Dan
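
For a rough feel of the per-operation cost being asked about, below is a
minimal user-space sketch (my own, not from the thread): it compares an
atomic fetch-or, roughly what set_bit() costs on x86, against a plain
read-modify-write in the spirit of __set_bit().  The iteration count and
the uncontended single-thread setup are arbitrary assumptions; the
numbers inside the kernel will also depend on cacheline contention on the
stripe_head and on the barriers around it, so treat this only as an
order-of-magnitude probe.

/*
 * bit_cost.c - rough cost of an atomic vs. a plain bit set.
 * Build with e.g.:  gcc -O2 -o bit_cost bit_cost.c
 * Hypothetical benchmark, not kernel code; ITERS is arbitrary.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define ITERS 100000000UL

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
	volatile unsigned long state = 0;	/* stand-in for sh->state */
	uint64_t t0, t1, t2;
	unsigned long i;

	t0 = now_ns();
	/* atomic read-modify-write, analogous to set_bit() */
	for (i = 0; i < ITERS; i++)
		__atomic_fetch_or(&state, 1UL << (i & 7), __ATOMIC_SEQ_CST);

	t1 = now_ns();
	/* plain read-modify-write, analogous to __set_bit() */
	for (i = 0; i < ITERS; i++)
		state |= 1UL << (i & 7);

	t2 = now_ns();
	printf("atomic set:     %.2f ns/op\n", (double)(t1 - t0) / ITERS);
	printf("non-atomic set: %.2f ns/op\n", (double)(t2 - t1) / ITERS);
	return 0;
}

The gap this reports is only indicative: the interesting cost in raid5.c
is the locked cycle on a stripe_head cacheline that other CPUs may be
bouncing, which a single-threaded loop like this does not capture.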