Hi,
Can you try modifying osd_snap_trim_sleep? The default value is 0 (no throttling between snap trim operations); I have had good results with 0.25 on a Ceph cluster using SATA disks:
ceph tell osd.* injectargs -- --osd_snap_trim_sleep 0.25
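If it helps, injectargs only changes the running daemons, so here is a minimal ceph.conf sketch (my own addition, using the same 0.25 value) to persist the throttle across OSD restarts:

[osd]
osd snap trim sleep = 0.25    # seconds to sleep between snap trim operations (0 = no throttle)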
Best regards,
----- On Dec 10, 2015, at 7:52, Wukongming <wu.kongming@xxxxxxx> wrote:
Hi, All
I used an rbd command to create a 6 TB image, and then created a snapshot of this image. After that, I kept writing to the image (modifying files, for example), so that the snapshotted objects would be cloned one by one (copy-on-write).
At that point, I performed the following two operations simultaneously:
1. Kept client I/O running to this image.
2. Executed an rbd snap rm command to delete the snapshot.
Finally, I found that client I/O was blocked for quite a long time. I tested on SATA disks, and it felt as if Ceph makes removing the snapshot a priority.
We also used the iostat tool to watch the disk state, and the disks were running at full utilization.
So, should client I/O be prioritized over snapshot removal?
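For reference, a minimal shell sketch of the steps above; the image and snapshot names (test-img, snap1) are hypothetical placeholders, and the I/O generator is left abstract:

rbd create test-img --size 6291456    # ~6 TB image (size given in MB)
rbd snap create test-img@snap1        # take a snapshot of the image
rbd map test-img                      # map via krbd, e.g. as /dev/rbd0
# generate sustained client I/O to the mapped device (e.g. with fio or dd)
rbd snap rm test-img@snap1            # delete the snapshot while I/O continues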
---------------------------------------------
wukongming ID: 12019
Tel:0571-86760239
Dept:2014 UIS2 ONEStor
--
Florent Manens
BeeZim