Re: VMware + Ceph using NFS sync/async ?


 



Hi Nick,

Thanks for replying! If Ceph is combined with OpenStack, does that mean that when OpenStack writes are happening, the data is not fully synced (i.e. written to disk) before more data is accepted, so it is effectively acting as async? In that scenario, is there a chance of data loss if things go bad, e.g. a power outage or something like that?

As for the slow operations: reads are quite fine when I compare them to a SAN storage system connected to VMware. It is writes, whether small chunks or big ones, that suffer when using the sync option with FIO for benchmarking.
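For reference, a minimal sketch of the kind of FIO sync-write test described above. The mount point, job size, and runtime are assumptions, not taken from the original test:

```shell
# Hypothetical FIO job: 4 KiB synchronous writes against an NFS mount.
# /mnt/ceph-nfs is an assumed path. --sync=1 opens the file with O_SYNC
# so every write must reach stable storage before the next is issued;
# --direct=1 bypasses the client page cache; iodepth=1 exposes the
# per-IO latency that sync NFS on Ceph pays.
fio --name=sync-write-test --directory=/mnt/ceph-nfs \
    --rw=write --bs=4k --size=1G --numjobs=1 \
    --sync=1 --direct=1 --ioengine=libaio --iodepth=1 \
    --runtime=60 --time_based
```

Comparing the same job with `--sync=0` shows how much of the gap comes from waiting on stable-storage acknowledgements rather than raw throughput.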

In that case, I wonder: is no one using Ceph with VMware in a production environment?

Cheers.

Regards,
Ossi



 

Hi Osama,

 

This is a known problem with many software defined storage stacks, but potentially slightly worse with Ceph due to extra overheads. Sync writes have to wait until all copies of the data are written to disk by the OSD and acknowledged back to the client. The extra network hops for replication and NFS gateways add significant latency which impacts the time it takes to carry out small writes. The Ceph code also takes time to process each IO request.

 

What particular operations are you finding slow? Storage vMotions are just bad, and I don't think there is much that can be done about them, as they are split into lots of 64 KB IOs.

 

One thing you can try is to force the CPUs on your OSD nodes to stay in the C1 C-state and pin their minimum frequency to 100%. This can have quite a large impact on latency. Also, you don't specify your network, but 10G is a must.
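A rough sketch of what that tuning can look like on a Linux OSD node. Exact parameters depend on the distro and CPU; this assumes an Intel system with the `cpupower` utility available:

```shell
# Hypothetical OSD-node tuning, per the advice above: keep cores out of
# deep C-states and run them at full frequency.

# Limit C-states to C1 via kernel boot parameters (requires a reboot):
#   intel_idle.max_cstate=1 processor.max_cstate=1

# Set the performance governor so cores stay at their maximum frequency:
cpupower frequency-set --governor performance

# Verify the resulting idle-state and frequency policy:
cpupower idle-info
cpupower frequency-info
```

The latency benefit comes from avoiding C-state exit delays and frequency ramp-up on the short, bursty IO path of small sync writes.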

 

Nick



From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Osama Hasebou
Sent: 14 August 2017 12:27
To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: VMware + Ceph using NFS sync/async ?

 

Hi Everyone,

 

We started testing the idea of using Ceph storage with VMware. The idea is to provide Ceph storage to VMware through OpenStack: a virtual machine backed by Ceph + OpenStack acts as an NFS gateway, and that storage is then mounted on the VMware cluster.

 

When mounting the NFS exports with the sync option, we noticed a huge degradation in performance, which makes it too slow to use in production. The async option performs much better, but carries the risk that, in case of a failure, some data might be lost.
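For illustration, the two modes as Linux-style mount commands (ESXi configures NFS datastores through its own tooling, but the trade-off is the same). The gateway hostname and export path here are assumptions:

```shell
# sync: writes are committed to stable storage before being acknowledged.
# Safe, but every write pays the full Ceph replication + gateway latency.
mount -t nfs -o sync,vers=3 nfs-gw.example.com:/export/ceph /mnt/ceph-sync

# async: writes may be acknowledged before they reach disk. Much faster,
# but data still buffered on the gateway can be lost in a crash or
# power outage.
mount -t nfs -o async,vers=3 nfs-gw.example.com:/export/ceph /mnt/ceph-async
```

Note that `sync`/`async` also exist as server-side options in `/etc/exports`; the server-side setting governs when the NFS server itself acknowledges writes, which is the part that matters for gateway crashes.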

 

Now, I understand that some people in the Ceph community are using Ceph with VMware via NFS gateways. If you could kindly shed some light on your experience, and whether you use it in production, that would be great. How did you handle the sync/async trade-off while keeping write performance?

 

 

Thank you!

 

Regards,
Ossi



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
