RE: Walkthrough of the Ceph release process by David Galloway

Hi,
I'm suffering from an issue: we went to production without any limit on osd_max_pg_log_entries and osd_min_pg_log_entries (Nautilus version).
The problem is that OSD memory usage got very high, with many objects and a large number of PGs.
I saw a recommendation for a similar case, shown below,
but I do not understand which parameters it is based on, e.g. OSD device size? block.db size?
osd_min_pg_log_entries=500
osd_max_pg_log_entries=500
bluefs_buffered_io=false

https://tracker.ceph.com/issues/53729
Can someone shed more light on what the parameters are for deciding those values?
osd_min_pg_log_entries=500
osd_max_pg_log_entries=500
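For reference, a hedged sketch of how these limits could be applied cluster-wide via the monitors' central config store (the values are the ones from the tracker issue above, not a verified recommendation; the OSD id in the last command is a placeholder):

```shell
# Sketch: push the pg log limits into the central config store so all
# OSDs pick them up (values taken from the thread, adjust as needed).
ceph config set osd osd_min_pg_log_entries 500
ceph config set osd osd_max_pg_log_entries 500
ceph config set osd bluefs_buffered_io false

# Confirm what a given OSD (osd.0 here, as an example) actually sees:
ceph config show osd.0 | grep -E 'pg_log_entries|bluefs_buffered_io'
```

Note that on an already-bloated cluster this only caps future growth; existing oversized pg logs are a separate problem, as discussed below.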

Side note: we understand that once we have a large number of PG log entries, lowering the limit does not trim them on the fly (on a running system), and not even on boot,
and a manual log trim procedure is required - am I right?
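If a manual trim is indeed required, my understanding is that it is done offline per PG with ceph-objectstore-tool while the OSD is stopped. A sketch, where the OSD id, data path, and pgid are all placeholders for illustration:

```shell
# Hypothetical offline trim of one PG's log on a stopped OSD.
# osd.0, the data path, and pgid 1.0 are examples only.
systemctl stop ceph-osd@0

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --pgid 1.0 --op trim-pg-log

systemctl start ceph-osd@0
```

This would have to be repeated for each affected PG on each OSD, so confirmation that it is actually necessary (versus the limits taking effect on new writes) would be appreciated.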

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
