So, I want a speed curve like this, not a linear reduction as the replication size increases:

rep_size   1           2            3              4              5              ...   n
write:     78.8 MB/s   39.38 MB/s   (38.7 MB/s)    (37.90 MB/s)   (37.03 MB/s)   ...   (32.13 MB/s)
read:      85.3 MB/s   85.33 MB/s   78.77 MB/s     78.77 MB/s     (78.77 MB/s)   ...   (78.77 MB/s)

Hmm, so how can I get a result like this with primary-copy replication?
Thanks very much!

Simon

2011/5/9 Simon Tian <aixt2006@xxxxxxxxx>:
> 2011/5/9 Gregory Farnum <gregf@xxxxxxxxxxxxxxx>:
>> On Sun, May 8, 2011 at 7:43 PM, Simon Tian <aixt2006@xxxxxxxxx> wrote:
>>> Hi folks,
>>>
>>> I am testing the replication performance of ceph-0.26 with libceph:
>>> I write 1 GB of data with ceph_write() and read it back with
>>> ceph_read().
>>>
>>> rep_size   1           2            3            4
>>> write:     78.8 MB/s   39.38 MB/s   27.7 MB/s    20.90 MB/s
>>> read:      85.3 MB/s   85.33 MB/s   78.77 MB/s   78.77 MB/s
>>>
>>> I think that if the replication strategy is splay or primary-copy,
>>> not chain, as the thesis says, then the write speed for 3, 4 or even
>>> more replicas should be only a little worse than for 2 replicas,
>>> i.e. close to 39.38 MB/s. But the write performance I got is
>>> affected strongly by the replication size.
>>>
>>> What is the replication strategy in ceph-0.26? Is it not splay? If
>>> it is splay, why is the speed not close to 39.38 MB/s?
>>>
>>> There are 5 OSDs on 2 hosts: 2 on one and 3 on the other.
>>
>> The replication strategy has been fixed at primary copy for several
>> years now. At expected replication levels (2-3) there just isn't a big
>> difference between the strategies, and limiting it to primary-copy
>> replication makes a lot of the bookkeeping for data safety much easier
>> to handle.
>
> As you know, I am new to Ceph, haha.
>
> For primary copy, I think that when the replication size is 3, 4, or
> even more, the write speed should also be close to that with 2
> replicas, because the 2nd, 3rd, 4th, ... replicas are written in
> parallel. But the speed I got for 3 and 4 replicas is not close to
> the speed for 2; in fact, it falls off roughly linearly.
>
> Thanks very much!
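For concreteness, the measured write column above fits "raw bandwidth divided by number of copies" fairly closely (all figures MB/s):

    78.8 / 2 = 39.4    (measured: 39.38)
    78.8 / 3 = 26.3    (measured: 27.7)
    78.8 / 4 = 19.7    (measured: 20.90)

That is, every logical write seems to pay for n copies' worth of bytes on some shared resource. One plausible reading, given that the 5 OSDs sit on only 2 hosts, is that the "parallel" replica writes contend for the same disks and NICs, so the parallelism assumed in the expectation above is not actually available in this setup.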
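For anyone wanting to reproduce the measurement, here is a minimal sketch of the write half of such a benchmark. It assumes the handle-based libcephfs API (ceph_create/ceph_mount and friends; the 0.26-era libceph kept global state instead of a cmount handle, but the call sequence has the same shape). The file path /repl-bench and the 4 MB chunk size are arbitrary choices for illustration, not anything from the original test; the read side would mirror this loop with ceph_read().

/*
 * Sketch: time pushing 1 GB through ceph_write() in 4 MB chunks
 * and print MB/s. Error handling is kept minimal.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <sys/time.h>
#include <cephfs/libcephfs.h>

#define CHUNK (4 << 20)          /* 4 MB per ceph_write() call */
#define TOTAL (1024LL << 20)     /* 1 GB in total */

int main(void)
{
    struct ceph_mount_info *cmount;
    char *buf = malloc(CHUNK);
    struct timeval t0, t1;
    int64_t off;
    int fd;

    memset(buf, 'x', CHUNK);

    if (ceph_create(&cmount, NULL) < 0 ||
        ceph_conf_read_file(cmount, NULL) < 0 ||  /* default ceph.conf */
        ceph_mount(cmount, "/") < 0) {
        fprintf(stderr, "mount failed\n");
        return 1;
    }

    fd = ceph_open(cmount, "/repl-bench", O_CREAT | O_WRONLY | O_TRUNC, 0644);

    gettimeofday(&t0, NULL);
    for (off = 0; off < TOTAL; off += CHUNK)
        ceph_write(cmount, fd, buf, CHUNK, off);  /* explicit offsets */
    ceph_fsync(cmount, fd, 0);  /* flush to the OSDs before stopping the clock */
    gettimeofday(&t1, NULL);
    ceph_close(cmount, fd);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("write: %.2f MB/s\n", (double)(TOTAL / (1 << 20)) / sec);

    ceph_unmount(cmount);
    ceph_release(cmount);
    free(buf);
    return 0;
}

Note the ceph_fsync() before the clock stops: without it the figure can include data still buffered on the client side, which would flatter the numbers regardless of replication size.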