Re: [Ocfs2-devel] [PATCH, RFC 0/3] *** SUBJECT HERE ***

On Tue, Aug 03, 2010 at 12:07:03PM -0700, Joel Becker wrote:
> 
> 	The atomic changes make absolute sense.  Ack on them.  I had two
> reactions to the rwlock: first, a lot of your rwlock changes are on
> the write_lock() side.  You get journal start/stop parallelized, but
> what about all the underlying access/dirty/commit paths?  Second,
> rwlocks are known to behave worse than spinlocks when they ping the
> cache line across CPUs.
> 	That said, I have a hunch that you've tested both of the above
> concerns.  You mention 48 core systems, and clearly if cachelines were
> going to be a problem, you would have noticed.  So if the rwlock changes
> are faster on 48 core than the spinlocks, I say ack ack ack.

We don't have the results from the 48-core machine yet.  I was going
to try to get measurements from the 48-core machine I have access to
at $WORK, but it doesn't have enough hard drive spindles on it.  :-(

But yes, I am worried about the cache-line bounce issue, and I'm
hoping we'll get some input from people who can run measurements on
both 8-core and 48-core machines.
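
For concreteness, here is roughly the pattern at issue.  This is only
a sketch, and it assumes the lock being converted is the journal's
j_state_lock (spinlock_t -> rwlock_t); the two helpers are made-up
names, not code from the patches:

    #include <linux/jbd2.h>

    /* Read-mostly paths can now take the shared side of the lock,
     * so multiple readers proceed in parallel. */
    static tid_t peek_commit_sequence(journal_t *journal)
    {
            tid_t tid;

            read_lock(&journal->j_state_lock);
            tid = journal->j_commit_sequence;
            read_unlock(&journal->j_state_lock);
            return tid;
    }

    /* Anything that changes journal state still takes the exclusive
     * side; this is where the cache-line ping-pong Joel mentions
     * would show up. */
    static void set_unmount_flag(journal_t *journal)
    {
            write_lock(&journal->j_state_lock);
            journal->j_flags |= JBD2_UNMOUNT;
            write_unlock(&journal->j_state_lock);
    }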

I haven't worried about the commit paths yet because they haven't
shown up as significant on any of the lockstat reports.  Remember
that with jbd2, the commit code only runs in kjournald, and in
general only once every 5 seconds or on every fsync.  In contrast,
essentially every file system syscall that modifies the file system
ends up calling start_this_handle().  So if you have multiple threads
all creating files, or writing to files, or even just changing the
mtime or permissions, each of them calls start_this_handle().  That's
why we're seeing nearly all of the contention on start_this_handle()
and, to a lesser extent, on jbd2_journal_stop(), the function which
retires a handle.
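
To make the asymmetry concrete, here is a rough sketch of a single
metadata update from jbd2's point of view.  The helper name is made
up and the error handling is thinner than what ext4 really does, but
the call sequence is the important part:

    #include <linux/jbd2.h>

    /* Illustrative only: every modifying operation pays for
     * jbd2_journal_start()/jbd2_journal_stop(), while the commit
     * itself happens later, in kjournald. */
    static int modify_one_block(journal_t *journal,
                                struct buffer_head *bh)
    {
            handle_t *handle;
            int err;

            /* This funnels into start_this_handle(). */
            handle = jbd2_journal_start(journal, 1);
            if (IS_ERR(handle))
                    return PTR_ERR(handle);

            err = jbd2_journal_get_write_access(handle, bh);
            if (!err) {
                    /* ... modify the metadata in bh ... */
                    err = jbd2_journal_dirty_metadata(handle, bh);
            }

            /* Retires the handle; the buffer is committed later. */
            jbd2_journal_stop(handle);
            return err;
    }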

Things would probably be different with a workload that simulates a
mail transfer agent, or a database which is _not_ using O_DIRECT on a
preallocated table space file, since there will be many more fsync()
calls and thus much more pressure on the commit code.  But I don't
want to do any premature optimization until we see how bad it
actually gets in those cases.

If you are set up to do some performance measurements on OCFS2, I'd
appreciate it if you could give the patches a try and let me know how
they fare there.

Thanks,

						- Ted


[Index of Archives]     [Reiser Filesystem Development]     [Ceph FS]     [Kernel Newbies]     [Security]     [Netfilter]     [Bugtraq]     [Linux FS]     [Yosemite National Park]     [MIPS Linux]     [ARM Linux]     [Linux Security]     [Linux RAID]     [Samba]     [Device Mapper]     [Linux Media]

  Powered by Linux