Re: Memory footprint of increased PG number

Hi,

I don't think increasing the number of PGs has a significant impact on OSD memory; at least I'm not aware of such reports and haven't seen it myself. But your cluster could get into trouble as it is: only 24 GB of RAM for 16 OSDs is too low. It can work (and apparently does) while everything is calm, but memory usage spikes during recovery. The default osd_memory_target is 4 GB per OSD, and over the years there have been several reports from users who couldn't get their OSDs back up after a failure because of low memory settings. I'd recommend increasing the RAM.
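
To put the numbers from your mail into perspective, here is a quick back-of-the-envelope sketch (plain Python, assuming the default osd_memory_target of 4 GiB and treating your 24 GB as roughly 24 GiB; actual usage varies, especially during recovery):

# Memory budget sketch using the numbers from this thread:
# 24 GB of RAM shared by 16 OSDs vs. the default osd_memory_target of 4 GiB.
# This is only a budgeting estimate, not a measurement of real OSD usage.

GIB = 1024 ** 3

total_ram = 24 * GIB                 # RAM on the node (from the thread)
osd_count = 16                       # OSDs sharing that RAM (from the thread)
osd_memory_target_default = 4 * GIB  # Ceph's default per-OSD memory target

per_osd_budget = total_ram / osd_count
ram_needed_at_default = osd_count * osd_memory_target_default

print(f"Per-OSD budget:            {per_osd_budget / GIB:.2f} GiB")
print(f"Default target per OSD:    {osd_memory_target_default / GIB:.0f} GiB")
print(f"RAM needed at the default: {ram_needed_at_default / GIB:.0f} GiB (installed: {total_ram / GIB:.0f} GiB)")

That works out to roughly 1.5 GiB per OSD, while the defaults assume about 64 GiB for a node running 16 OSDs.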

Regards,
Eugen

Quoting Nicola Mori <mori@xxxxxxxxxx>:

Dear Ceph user,

I'm wondering how much an increase in the number of PGs would impact the memory usage of the OSD daemons. My cluster currently has 512 PGs and I would like to increase that to 1024 to mitigate some disk usage issues, but since my machines have little memory (down to 24 GB for 16 OSDs) I fear this could kill my cluster. Is it possible to estimate the relative increase in the OSD memory footprint when doubling the number of PGs (hopefully it doesn't scale linearly)? Or is there a way to experiment without crashing everything?
Thank you.


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


