On Sun, May 8, 2011 at 7:43 PM, Simon Tian <aixt2006@xxxxxxxxx> wrote:
> Hi folks,
>
> I am testing the replication performance of ceph-0.26 with libceph,
> writing 1 GB of data in with ceph_write() and reading it back out with
> ceph_read():
>
> rep_size      1            2            3            4
> write     78.8 MB/s    39.38 MB/s   27.7 MB/s    20.90 MB/s
> read      85.3 MB/s    85.33 MB/s   78.77 MB/s   78.77 MB/s
>
> I think that if the replication strategy is splay or primary-copy rather
> than chain, as the thesis says, the write speed for 3, 4, or even more
> replicas should be only a little worse than with 2 replicas, i.e. close
> to 39.38 MB/s. But the write performance I measured is affected heavily
> by the replication size.
>
> What is the replication strategy in ceph-0.26, if not splay? If it is
> splay, why is the result not close to 39.38 MB/s?
>
> There are 5 OSDs on 2 hosts, 2 on one host and 3 on the other.

The replication strategy has been fixed at primary copy for several years
now. At the expected replication levels (2-3) there just isn't a big
difference between the strategies, and limiting it to primary-copy
replication makes a lot of the bookkeeping for data safety much easier to
handle.
-Greg
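
For reference, the test described above amounts to roughly the following
sketch against the libcephfs C API. This is not Simon's actual program:
the file path, the 4 MB chunk size, and the use of the default ceph.conf
are illustrative choices, and the signatures shown are the current
libcephfs ones, which may differ slightly from the 0.26-era libceph.

/* Rough write/read throughput sketch over CephFS via libcephfs.
 * Assumptions: a reachable cluster, default ceph.conf, /repl_test as a
 * scratch path; signatures follow current <cephfs/libcephfs.h> and may
 * differ slightly from ceph-0.26's libceph. Link with -lcephfs.
 */
#include <cephfs/libcephfs.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define CHUNK  (4LL * 1024 * 1024)        /* 4 MB per I/O call (arbitrary) */
#define TOTAL  (1024LL * 1024 * 1024)     /* 1 GB total, as in the test    */

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    struct ceph_mount_info *cmount;
    char *buf = malloc(CHUNK);
    memset(buf, 'x', CHUNK);

    if (ceph_create(&cmount, NULL) < 0 ||
        ceph_conf_read_file(cmount, NULL) < 0 ||   /* default config search */
        ceph_mount(cmount, "/") < 0) {
        fprintf(stderr, "mount failed\n");
        return 1;
    }

    int fd = ceph_open(cmount, "/repl_test", O_CREAT | O_RDWR, 0644);

    /* Write 1 GB sequentially and time it. */
    double t0 = now_sec();
    for (int64_t off = 0; off < TOTAL; off += CHUNK)
        ceph_write(cmount, fd, buf, CHUNK, off);
    ceph_fsync(cmount, fd, 0);                     /* flush before stopping the clock */
    double wsec = now_sec() - t0;

    /* Read it back and time it. */
    t0 = now_sec();
    for (int64_t off = 0; off < TOTAL; off += CHUNK)
        ceph_read(cmount, fd, buf, CHUNK, off);
    double rsec = now_sec() - t0;

    printf("write: %.2f MB/s  read: %.2f MB/s\n",
           TOTAL / wsec / (1024 * 1024), TOTAL / rsec / (1024 * 1024));

    ceph_close(cmount, fd);
    ceph_unmount(cmount);
    ceph_shutdown(cmount);
    free(buf);
    return 0;
}

Note that with primary-copy replication the client's write is acknowledged
only after the primary has forwarded the data to the replica OSDs, so write
throughput falls as rep_size grows, while reads are served from the primary
alone and stay roughly flat, which is consistent with the numbers quoted
above.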