Re: Bobtail vs Argonaut Performance Preview


On 12/22/2012 12:45 PM, Christoph Hellwig wrote:
> On Sat, Dec 22, 2012 at 07:36:41AM -0600, Mark Nelson wrote:
>> Btw Christoph, thank you for taking the time to read my article.  If
>> I've done anything dumb or suboptimal regarding xfs, please do let
>> me know.  Soon I will be doing parametric sweeps over ceph parameter
>> spaces to see how performance varies on different hardware
>> configurations.  I want to make sure the tests are set up as
>> optimally as possible.

> You're definitely missing the "inode64" mount option, which we've
> always recommended, and which finally became the default in
> Linux 3.7.


Is inode64 typically faster than inode32? I thought I remembered dchinner saying that the situation wasn't always particularly clear and it depended on the workload. Having said that, I can't really see it not being a good thing for Ceph to spread metadata out over all of the AGs, especially in the multi-disk raid config. I'll use it for the next set of tests.
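For reference, enabling it is just a mount option; a minimal example would be something like this (the device and mount point below are placeholders, not my actual test paths):

  mount -t xfs -o inode64 /dev/sdb1 /var/lib/ceph/osd/ceph-0

or the equivalent /etc/fstab entry:

  /dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  inode64  0 0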

> Some other things worth playing with, but which aren't guaranteed to
> be a win are:
>
>   - use a larger than default log size (e.g. mkfs.xfs -l size=2g)
>   - use large directory blocks, similar to what you already do for btrfs
>     (mkfs.xfs -n size=16k or 64k)

I'll definitely give them a try at some point. Thanks for the tips, Christoph!
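For reference, combining those suggestions into a single mkfs invocation would look roughly like this (the device name is a placeholder; -f just forces mkfs over an existing filesystem):

  mkfs.xfs -f -l size=2g -n size=64k /dev/sdb1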


> Also, at least for the benchmarks doing concurrent I/O (or any real-life
> setup), you're probably much better off with a concatenation than a RAID 0
> for the multiple-disk setup.
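For reference, a concatenation could be set up as an md "linear" array instead of a striped one; a rough sketch with placeholder device names:

  mdadm --create /dev/md0 --level=linear --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mkfs.xfs -f /dev/md0

(LVM with a single linear logical volume spanning the disks would be another way to get the same layout.)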



--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

