On Wed, Apr 30, 2008 at 09:11:54PM +1000, David Chinner wrote:
> On Wed, Apr 30, 2008 at 06:58:32AM -0400, Christoph Hellwig wrote:
> > On Wed, Apr 30, 2008 at 08:41:25PM +1000, David Chinner wrote:
> > > The only thing that I'm concerned about here is that this will
> > > substantially increase the time the l_icloglock is held.  This is
> > > a severely contended lock on large cpu count machines and putting
> > > the wakeup inside this lock will increase the hold time.
> > >
> > > I guess I can address this by adding a new lock for the waitqueue
> > > in a separate patch set.
> >
> > waitqueues are locked internally and don't need synchronization.  With
> > a little bit of re-arranging the code the wake_up could probably be
> > moved out of the critical section.
>
> Yeah, I just realised that myself and was about to reply as such....
>
> I'll move the wakeup outside the lock.

I can't tell whether this race matters ... probably not:

N processes come in and queue up waiting for the flush
xlog_state_do_callback() is called
it unlocks the spinlock
a new task comes in and takes the spinlock
wakeups happen

ie do we care about 'fairness' here, or is it OK for a new task to
jump the queue?

-- 
Intel are signing my paycheques ... these opinions are still mine
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html