On Tue, Aug 28, 2012 at 12:47 AM, Sébastien Han <han.sebastien@xxxxxxxxx> wrote:
> Hi community,
>
> For those of you who are interested, I performed several benchmarks of
> RADOS and RBD on different types of hardware and use cases.
> You can find my results here:
> http://www.sebastien-han.fr/blog/2012/08/26/ceph-benchmarks/
>
> Hope it helps :)
>
> Feel free to comment, critique... :)
>
> Cheers!

My two cents: with an ultrafast journal (tmpfs), the TCP congestion
control algorithm in use matters. With the default CUBIC and its
delays, the aggregated write speed across sixteen OSDs is about
450 MB/s, but with DCTCP it rises to about 550 MB/s. For a device such
as an SLC disk (ext4, journal, commit=100) there is no observable
difference: both runs measured an aggregated speed of about 330 MB/s.
I have not yet tried H(S)TCP; it should behave like DCTCP. For
latencies lower than regular gigabit Ethernet, the different
congestion algorithms should show a bigger difference, though.

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
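P.S. For anyone who wants to reproduce the comparison above: the
congestion control algorithm can be switched at runtime with sysctl.
A minimal sketch (assumes a Linux kernel with DCTCP support and root
privileges; run on each OSD host before benchmarking):

```shell
# List the algorithms the running kernel can use
sysctl net.ipv4.tcp_available_congestion_control

# Load DCTCP if it is built as a module (assumes DCTCP is available
# for your kernel; it was a patch set on older kernels)
modprobe tcp_dctcp

# Switch the system-wide default; new TCP connections pick this up,
# so restart the OSD daemons afterwards
sysctl -w net.ipv4.tcp_congestion_control=dctcp

# Verify the active default
sysctl net.ipv4.tcp_congestion_control
```

Note that only connections opened after the change use the new
algorithm, which is why the OSDs need to be restarted between runs.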