Hi Nick,

Yes, our application is doing small random IO, and I did not realize that the snapshotting feature could degrade performance so much in that case. We have just deactivated it and deleted all snapshots. I will let you know if this drastically reduces the blocked ops and, consequently, the IO freezes on the client side.

Thanks,
Thomas

From: Nick Fisk [mailto:nick@xxxxxxxxxx]
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx]
On Behalf Of Thomas Danan

Very interesting... Any idea why the optimal tunables would help here? On our cluster we have 500TB of data, and I am a bit concerned about changing them without taking a lot of precautions. I am curious to know how much time it took you to change the tunables, the size of your cluster, and the observed impact on client IO.

Yes, we do have daily RBD snapshots from 16 different Ceph RBD clients. Snapshotting the RBD image is quite immediate, while we are seeing the issue continuously during the day...

Just to point out that when you take a snapshot, any write to the original RBD means the full 4MB object is copied into the snapshot.
If you have a lot of small random IO going to the original RBD, this can lead to massive write amplification across the cluster and may cause issues such as the ones you describe. Also be aware that deleting large snapshots puts significant strain on the OSDs as they try to delete hundreds of thousands of objects.

Will check all of this tomorrow... Thanks again,
Thomas
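For what it's worth, here is a minimal sketch along those lines using the standard rados/rbd Python bindings; the pool name 'rbd' and image name 'myimage' are placeholders. It prints the image's object size (4 MiB by default, which is why every small random write after a snapshot copies a whole object) and lists the snapshots so you can see how much data a purge would have to trim:

    #!/usr/bin/env python
    # Sketch: inspect an RBD image's object size and its snapshots before purging.
    # Pool and image names below are placeholders, not the ones from this thread.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')                    # placeholder pool name
        try:
            with rbd.Image(ioctx, 'myimage') as image:       # placeholder image name
                info = image.stat()
                # obj_size is the RADOS object size; with the 4 MiB default, each
                # small random write after a snapshot copies a full 4 MiB object.
                print('object size: %d bytes' % info['obj_size'])
                for snap in image.list_snaps():
                    print('snapshot %s, size %d bytes' % (snap['name'], snap['size']))
                    # image.remove_snap(snap['name'])  # uncomment to actually delete
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

Removing snapshots one at a time like this, rather than purging everything in one go, at least spreads the snap-trimming load on the OSDs over time.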
On 11/15/16 14:05, Thomas Danan wrote: