osd_memory_target=level0 ?

Hi,

Still suffering from spilled-over disks and stability issues in 3 of my clusters after uploading 6-900 million objects to them (Octopus 15.2.10).

I’ve set osd_memory_target to around 31-32 GB; could the spillover be coming from that?
With a memory target of 31 GB the next level would be 310 GB, and the level after that would already go to the underlying SSD, so the fourth level has no space on the NVMe.
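For reference, this is how I’m checking the value that is actually in effect (a sketch; osd.1 is just an example id, and I’m assuming the setting was applied via ceph config):

    # value stored in the cluster config (osd.1 as an example)
    ceph config get osd.1 osd_memory_target
    # value the running daemon is actually using (run on the host carrying osd.1)
    ceph daemon osd.1 config show | grep osd_memory_target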

Let’s say I set it back to the default 4 GB: levels 0-3 would come to about 444 GB, so it should fit on the 600 GB LV assigned to the DB and WAL on the NVMe.
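Rough arithmetic I’m assuming here (a 10x size multiplier per level), plus the command I’d use to go back to the default:

    # assumed level sizing, 10x per level:
    #   base 31 GB -> 31 + 310 + 3100 ... GB, which no longer fits on a 596 GiB db device
    #   base  4 GB ->  4 +  40 +  400 = 444 GB, which fits
    # set the target back to the 4 GiB default for all OSDs:
    ceph config set osd osd_memory_target 4294967296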

This is how it looks; e.g. osd.27 is still spilled over even after two manual compactions :(

     osd.1 spilled over 198 GiB metadata from 'db' device (303 GiB used of 596 GiB) to slow device
     osd.5 spilled over 251 GiB metadata from 'db' device (163 GiB used of 596 GiB) to slow device
     osd.8 spilled over 61 GiB metadata from 'db' device (264 GiB used of 596 GiB) to slow device
     osd.11 spilled over 260 GiB metadata from 'db' device (242 GiB used of 596 GiB) to slow device
     osd.12 spilled over 149 GiB metadata from 'db' device (238 GiB used of 596 GiB) to slow device
     osd.15 spilled over 259 GiB metadata from 'db' device (195 GiB used of 596 GiB) to slow device
     osd.17 spilled over 10 GiB metadata from 'db' device (314 GiB used of 596 GiB) to slow device
     osd.21 spilled over 324 MiB metadata from 'db' device (346 GiB used of 596 GiB) to slow device
     osd.27 spilled over 12 GiB metadata from 'db' device (486 GiB used of 596 GiB) to slow device
     osd.29 spilled over 61 GiB metadata from 'db' device (306 GiB used of 596 GiB) to slow device
     osd.31 spilled over 59 GiB metadata from 'db' device (308 GiB used of 596 GiB) to slow device
     osd.46 spilled over 69 GiB metadata from 'db' device (308 GiB used of 596 GiB) to slow device
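To cross-check those numbers I’m also looking at the BlueFS counters on the OSD itself (osd.27 as an example, run on the host carrying it):

    # per-OSD BlueFS usage; relevant fields are db_total_bytes, db_used_bytes and slow_used_bytes
    ceph daemon osd.27 perf dump bluefs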

Also, is there a way to speed up compaction? It takes 1-1.5 hours per OSD.
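For reference, a sketch of the online compaction I assume counts as a "manual compact", plus an offline variant with the OSD stopped that I’m considering (assuming a non-containerized deployment and the default data path; osd.27 as the example):

    # online compaction (per OSD):
    ceph tell osd.27 compact
    # offline compaction with the OSD stopped:
    systemctl stop ceph-osd@27
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-27 compact
    systemctl start ceph-osd@27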

Thank you



