Hello everyone,

I'm running a Cuttlefish cluster that hosts a lot of RBDs. I recently removed a snapshot of a large one ("rbd snap rm" on a 12 TB image) and noticed that all of the clients suffered markedly decreased performance. Looking at iostat on the OSD nodes showed most disks pegged at 100% util.

I know there are thread priorities that can be set for client vs. recovery operations, but I'm not sure which category deleting a snapshot falls under, and I couldn't really find anything relevant. Is there anything I can tweak to lower the priority of such an operation? I don't need it to complete quickly -- "rbd snap rm" returns immediately and the actual deletion is done asynchronously -- so I'd be fine with it taking longer at a lower priority. As it stands now, it brings my cluster to a crawl and is causing issues with several VMs.

I see an "osd snap trim thread timeout" option in the docs. Is the operation occurring here what you would call snap trimming? If so, any chance of adding an "osd snap trim priority" option, just like there is for osd client op and osd recovery op?

Hope what I am saying makes sense...

- Travis

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com