snaptrim blocks IO on Ceph Pacific even on fast NVMes


 



I upgraded my Ceph cluster to Pacific in August and updated to Pacific 16.2.6 in September without problems.

I had no performance issues at all. The cluster has 3 nodes with 64 cores each, 15 blazing fast Samsung PM1733 NVMe OSDs, a 25 GBit/s network and around 100 VMs. The cluster was really fast, and I never saw anything like "snaptrim" in the ceph status output.

But the cluster seemed to slowly "eat" storage space, so yesterday I decided to add 3 more NVMes, one per node. The second I added the first NVMe as a Ceph OSD, the cluster started crashing: I had high load on all OSDs, and the OSDs kept dying again and again until I set the nodown, noout, noscrub and nodeep-scrub flags and removed the new OSD. The cluster then recovered, but had slow IO and lots of PGs in snaptrim and snaptrim_wait state.
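(For the record, I set the flags the usual way, roughly like this; osd.45 below is just a placeholder for the new OSD that I took out again:)

    ceph osd set nodown
    ceph osd set noout
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    ceph osd out 45
    ceph osd purge 45 --yes-i-really-mean-it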

I made this smoother by setting --osd_snap_trim_sleep=3.0.
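(For completeness, I applied the setting at runtime roughly like this; the config database form should be equivalent:)

    ceph tell 'osd.*' injectargs '--osd_snap_trim_sleep=3.0'
    # or persistently:
    ceph config set osd osd_snap_trim_sleep 3.0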

Overnight the number of snaptrim_wait PGs went down to 0 and I had 15% more free space in the Ceph cluster. But during the day the snaptrim_wait count kept increasing and increasing.

I then set osd_snap_trim_sleep back to 0.0, and most VMs had extremely high iowait or crashed.

Now I have run "ceph osd set nosnaptrim" and the cluster is flying again: iowait is 0 on all VMs, but the count of snaptrim_wait PGs is slowly increasing.
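(I am counting the snaptrim_wait PGs with a quick check like this, nothing more sophisticated:)

    ceph status | grep snaptrim
    ceph pg dump pgs_brief 2>/dev/null | grep -c snaptrim_wait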

How can I get the snaptrims running fast without affecting Ceph IO performance?
My theory is that until yesterday, for some reason, the snaptrims were not running at all, and that is why the cluster was "eating" storage space. After yesterday's crash the snaptrims started again.

In the logs I cannot find any info about what is going on. From what I read on the mailing lists and in forums, I suppose the problem might have something to do with the OSDs' omaps, RocksDB compaction and the RocksDB format, or maybe with the OSD on-disk format?
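If it really is a RocksDB/omap compaction issue, would a manual compaction of the OSDs be a sensible next step? Something like this is what I found suggested (untested on my side; osd.0 and the path are placeholders):

    # online, per OSD:
    ceph tell osd.0 compact
    # or offline, with the OSD stopped:
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact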

Any ideas what the next steps could be?

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


