Re: Erasure Coding performance for IO < stripe_width

> We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a
> pool that could do 3 GiB/s with 4M blocksize. So, yeah, well, that is
> rather harsh, even for EC.

4 KiB IO is slow in Ceph even without EC. Your 3 GB/s linear write number doesn't tell you anything here: Ceph adds significant per-operation overhead, so small IO is limited by how many operations the cluster can process, not by bandwidth.
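
A quick back-of-the-envelope check makes this concrete (a rough sketch, using only the figures quoted above):

    # Rough arithmetic with the figures quoted above.
    KiB, MiB, GiB = 1024, 1024**2, 1024**3

    print(5800 * 4 * KiB / MiB)   # ~22.7 MiB/s -- matches the ~23 MiB/s observed
    print(3 * GiB / (4 * MiB))    # ~768 large IOPS hiding behind "3 GiB/s"

The same pool does ~768 ops/s at 4M and ~5800 ops/s at 4 KiB, so per-operation cost, not raw bandwidth, is what limits small writes.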

From my observations, 4 KiB random write throughput with iodepth=128 on an all-flash cluster is only ~30% lower with EC 2+1 than with 3 replicas.

With iodepth=1 and in an HDD+SSD setup it's worse: I get 100-120 write IOPS with EC and 500+ write IOPS with 3 replicas. I guess this is because with a replicated pool Ceph can simply put the new block into the deferred write queue, while a sub-stripe EC write is a read-modify-write: the OSD must first read the old chunk from another OSD to recompute parity, and the SSD journal doesn't help reads. But I don't remember the exact results for iodepth=128...
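
The read-before-write is easy to see with plain XOR parity. A toy sketch, not Ceph's actual code path (XOR stands in for the real jerasure/ISA-L codes, the chunk layout is simplified, and k=2 with 4 KiB chunks -- i.e. stripe_width 8192 like the quoted pool -- is an assumption):

    # Toy model, not Ceph's actual code path.
    def replicated_write(osds, block):
        # Replication: just queue the new block on every replica;
        # no read needed, so a deferred-write/SSD journal absorbs it.
        for osd in osds:
            osd.append(block)

    def ec_substripe_write(chunks, parity, idx, new_chunk):
        # EC with XOR parity: updating one chunk of a stripe needs
        # the OLD chunk: new_parity = old_parity ^ old_chunk ^ new_chunk
        old = chunks[idx]  # <-- the extra read, possibly from another OSD
        parity = bytes(p ^ o ^ n for p, o, n in zip(parity, old, new_chunk))
        chunks[idx] = new_chunk
        return parity

    # A 4 KiB write touches one of the two 4 KiB data chunks:
    chunks = [bytes(4096), bytes([1]) * 4096]
    parity = bytes(a ^ b for a, b in zip(*chunks))
    parity = ec_substripe_write(chunks, parity, 0, bytes([2]) * 4096)

Since 4 KiB is always smaller than the 8192-byte stripe, every such write takes this path, and at iodepth=1 on HDDs that extra read can cost a seek per IO.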

--
With best regards,
  Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


