Quoting Ilya Dryomov <idryomov@xxxxxxxxx>:
On Fri, Jun 29, 2018 at 8:08 PM Nick Fisk <nick@xxxxxxxxxx> wrote:
This is for us peeps using Ceph with VMWare.
My current favoured solution for consuming Ceph in VMWare is via
RBDs formatted with XFS and exported via NFS to ESXi. This seems
to perform better than iSCSI+VMFS, which doesn’t seem to play
nicely with Ceph’s PG contention issues, particularly when
working with thin-provisioned VMDKs.
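For anyone wanting to replicate the setup, the Ceph side is just
one large image per datastore. A minimal sketch using the
python-rbd bindings (the pool and image names here are made up):

import rados
import rbd

# Create the RBD image that then gets mapped on the NFS gateway,
# formatted with XFS and exported to ESXi. Names are placeholders.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vmware')               # assumed pool
    try:
        rbd.RBD().create(ioctx, 'esxi-ds01', 4 * 1024**4)   # 4 TiB
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

After that it is just rbd map, mkfs.xfs and a plain NFS export on
the gateway host.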
I’ve still been noticing some performance issues, however, mainly
when doing any form of storage migration. This is largely due to
the way vSphere transfers VMs in 64KB IOs at a queue depth of 32.
vSphere does this so arrays with QoS can balance the IO more
easily than if larger IOs were submitted. However, Ceph’s PG
locking means that only one or two of these IOs can be in flight
at a time, seriously lowering throughput. Typically you won’t be
able to push more than 20-25MB/s during a storage migration.
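As a rough sanity check on that figure (the per-IO latency and the
effective parallelism below are assumptions, not measurements):

# 64KB IOs at QD32, but PG locking leaves only ~1 IO making real
# progress at a time; ~3ms per 64KB write is an assumed figure.
io_size = 64 * 1024
effective_parallelism = 1
per_io_latency = 0.003            # seconds, assumed
throughput = io_size * effective_parallelism / per_io_latency
print(f"~{throughput / 1e6:.0f} MB/s")   # ~22 MB/s, in line with 20-25MB/s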
There is also another issue in that the IO needed for the XFS
journal on the RBD can cause contention and effectively means
every NFS write IO sends two IOs down to Ceph. This can have an
impact on latency as well. Due to possible PG contention caused
by the XFS journal updates when multiple IOs are in flight, you
normally end up creating more and more RBDs to try and spread the
load. That normally means you end up having to do storage
migrations... you can see what I’m getting at here.
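To put rough numbers on the journal point (the single-write
latency here is again just an assumption):

# Each NFS write becomes a data write plus an XFS journal write to
# the same RBD; if both land on a contended PG they serialise.
single_write_latency = 0.0015     # seconds, assumed
per_nfs_write = 2 * single_write_latency
print(f"~{per_nfs_write * 1000:.0f} ms per NFS write, "
      f"~{1 / per_nfs_write:.0f} IOPS per contended PG")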
I’ve been thinking for a while that CephFS works around a lot of
these limitations.
1. It supports fancy striping, which should mean there is less
per-object contention.
Hi Nick,
Fancy striping is supported since kernel 4.17. I think its primary
use case is small sequential I/Os, so I’m not sure it is going to
help much, but it might be worth doing some benchmarking.
Thanks Ilya, I will try to find some time to also investigate this.
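For reference, the sort of layout I would benchmark gets set per
directory via the CephFS layout xattrs. A rough sketch (the mount
point and the values are placeholders to experiment with, not
recommendations):

import os

# Files created under this directory inherit the layout: 64KB
# stripe unit across 8 objects with 4MB objects, so consecutive
# 64KB writes land on different RADOS objects instead of queueing
# on a single PG.
datastore_dir = '/mnt/cephfs/vmware-ds01'   # placeholder CephFS path
os.makedirs(datastore_dir, exist_ok=True)

os.setxattr(datastore_dir, 'ceph.dir.layout.stripe_unit', b'65536')
os.setxattr(datastore_dir, 'ceph.dir.layout.stripe_count', b'8')
os.setxattr(datastore_dir, 'ceph.dir.layout.object_size', b'4194304')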
Nick
Thanks,
Ilya