Hi.

ceph osd pool set data size 1
dd if=/dev/zero of=aaa bs=1024000 count=4000
4096000000 bytes (4.1 GB) copied, 31.3153 s, 131 MB/s

ceph osd pool set data size 2
4096000000 bytes (4.1 GB) copied, 72.7146 s, 56.3 MB/s

ceph osd pool set data size 3
4096000000 bytes (4.1 GB) copied, 136.263 s, 30.1 MB/s

Why? I thought increasing the number of copies should improve performance (or, in the worst case, leave it unchanged).

WBR, Fyodor.
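As a sanity check on the numbers above, the MB/s figures dd prints can be re-derived from the byte count and elapsed time (dd counts 1 MB as 10^6 bytes). A minimal Python sketch, using only the figures quoted in the message:

```python
# Re-derive dd's reported throughput for each replication level.
# All figures are taken from the benchmark output above.
BYTES = 4096000000  # bs=1024000 * count=4000

runs = {1: 31.3153, 2: 72.7146, 3: 136.263}  # pool size -> seconds elapsed

for size, secs in runs.items():
    mbps = BYTES / secs / 1e6  # dd uses decimal megabytes
    print(f"size={size}: {mbps:.1f} MB/s")
# → size=1: 130.8 MB/s
# → size=2: 56.3 MB/s
# → size=3: 30.1 MB/s
```

The derived values match dd's output (131 vs 130.8 is dd's own rounding), and they fall roughly in proportion to the replica count, which is the pattern the question is about.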