Re: Erasure Coding performance for IO < stripe_width

On 08/07/2019 13:02, Lars Marowsky-Bree wrote:
> On 2019-07-08T12:25:30, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>
>> Is there a specific bench result you're concerned about?
>
> We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a
> pool that could do 3 GiB/s with 4M blocksize. So, yeah, well, that is
> rather harsh, even for EC.
>
>> I would think that small write perf could be kept reasonable thanks to
>> bluestore's deferred writes.
>
> I believe we're being hit by the EC read-modify-write cycle on
> overwrites.
>
>> FWIW, our bench results (all flash cluster) didn't show a massive
>> performance difference between 3 replica and 4+2 EC.
>
> I'm guessing that this was not 4 KiB but a more reasonable blocksize
> that was a multiple of stripe_width?
>
> Regards,
>      Lars

Hi Lars,

Maybe not related, but we find with rbd that random 4k write iops start very low for a new image and then increase over time as we write. If we thick provision the image, it does not show this. It happens with random small-block writes, not with sequential or large ones, and is probably related to initial object/chunk creation.
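If you want to try the thick-provisioning workaround, a minimal sketch using the python-rbd bindings is below. The pool name "rbd", image name "testimg", and ceph.conf path are my assumptions for illustration; it just writes zeros across the whole image in 4 MiB chunks so every backing object exists before the random 4 KiB test.

```python
# Sketch: pre-write ("thick provision") an existing RBD image so its
# backing RADOS objects are allocated before a random small-block test.
# Assumptions: pool "rbd", image "testimg", /etc/ceph/ceph.conf.
import rados
import rbd

CHUNK = 4 * 1024 * 1024  # write in 4 MiB chunks
zeros = b"\0" * CHUNK

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")          # pool name is an assumption
    try:
        with rbd.Image(ioctx, "testimg") as image:   # image name is an assumption
            size = image.size()
            offset = 0
            while offset < size:
                length = min(CHUNK, size - offset)
                image.write(zeros[:length], offset)
                offset += length
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```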

Also, we use the default stripe width; maybe try a pool with the default width and see if it is a factor.
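On the stripe_width point, the quoted numbers already suggest the partial-stripe overwrite cost. A rough back-of-the-envelope model is below; the k=2, m=2, stripe_unit=4096 values (which would give the quoted stripe_width of 8192) are my assumptions, not taken from the pool in the thread, and the counting is deliberately naive rather than an exact description of what the EC/BlueStore write path does.

```python
# Naive model of EC read-modify-write amplification for writes smaller
# than stripe_width. Parameters are illustrative assumptions only
# (k=2, m=2, stripe_unit=4096 -> stripe_width=8192).
def rmw_cost(io_size, k=2, m=2, stripe_unit=4096):
    stripe_width = k * stripe_unit
    if io_size % stripe_width == 0:
        # Full-stripe write: no reads needed, just k data + m parity
        # chunk writes per stripe.
        stripes = io_size // stripe_width
        return {"chunk_reads": 0, "chunk_writes": stripes * (k + m)}
    # Partial-stripe overwrite: read the existing chunks of the touched
    # stripe, recompute parity, write everything back.
    return {"chunk_reads": k + m, "chunk_writes": k + m}

print(rmw_cost(4096))             # 4 KiB write: read-modify-write on one stripe
print(rmw_cost(4 * 1024 * 1024))  # 4 MiB write: full stripes, no reads
```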


/Maged



