On Wed, 24 Aug 2011, Gregory Farnum wrote:
> On Wed, Aug 24, 2011 at 8:29 AM, Marcus Sorensen <shadowsor@xxxxxxxxx> wrote:
> > Just thought I'd share this basic testing I did, comparing cephfs 0.32
> > on 3.1-rc1 to nfs as well as rbd to iscsi. I'm sure you guys see a lot
> > of this. Any feedback would be appreciated.
> >
> > The data is here:
> >
> > http://learnitwithme.com/wp-content/uploads/2011/08/ceph-nfs-iscsi-benchmarks.ods
> >
> > and the writeup is here:
> >
> > http://learnitwithme.com/?p=303
>
> We see less of it than you'd think, actually. Thanks!
>
> To address a few things specifically:
> Ceph is both the name of the project and of the POSIX-compliant
> filesystem. RADOS stands for Reliable Autonomous Distributed Object
> Store. Apparently we should publish this a bit more. :)
>
> Looks like most of the differences in your tests have to do with our
> relatively lousy read performance -- this is probably due to lousy
> readahead, which nobody's spent a lot of time optimizing as we focus
> on stability. Sage made some improvements a few weeks ago but I don't
> remember what version of stuff they ended up in. :) (Optimizing
> cross-server reads is hard!)

The readahead improvements are in the 'master' branch of ceph-client.git,
and will go upstream for Linux 3.2-rc1 (I just missed the 3.1-rc1 cutoff).
In my tests I was limited by the wire speed with these patches. I'm
guessing you were using a 3.0 or earlier kernel?

The file copy test was also surprising. I think there is a regression
there somewhere; I'm taking a look.

sage
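
[Editor's note: for anyone reproducing these numbers, one quick way to see
whether readahead is the limiting factor is to time a large sequential read
with the page cache dropped, so the result reflects reads over the wire
rather than cached data. Below is a minimal sketch in Python; the mount
point and file path are placeholders, not taken from the thread.]

#!/usr/bin/env python
# Rough sequential-read throughput check: stream a big file in large
# chunks and report MB/s.  Run it once on the cephfs mount and once on
# the nfs mount (or on kernels with/without the readahead patches) to
# compare.  The default path is a placeholder -- point it at a large
# file on the filesystem under test.
import sys
import time

path = sys.argv[1] if len(sys.argv) > 1 else '/mnt/ceph/bigfile'  # placeholder path
chunk = 4 * 1024 * 1024  # read in 4 MB chunks

# Drop the page cache first so we measure uncached reads (needs root;
# skip quietly if we aren't allowed to).
try:
    with open('/proc/sys/vm/drop_caches', 'w') as f:
        f.write('3\n')
except IOError:
    pass

total = 0
start = time.time()
with open(path, 'rb') as f:
    while True:
        buf = f.read(chunk)
        if not buf:
            break
        total += len(buf)
elapsed = time.time() - start
print('read %d bytes in %.2f s (%.1f MB/s)' % (total, elapsed, total / (elapsed * 1e6)))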