Re: Questions re mon_osd_cache_size increase

The osd_map_cache_size setting controls the OSD's cache of maps; the change in 13.2.3 is to the default for the monitors' cache, mon_osd_cache_size.
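
In ceph.conf terms, a minimal sketch of where each option lives (the values shown are just the defaults discussed in this thread, not a tuning recommendation):

    [mon]
    # monitor-side cache of OSDMaps; the 13.2.3 release notes raise this
    # default from 10 maps to 500
    mon_osd_cache_size = 500

    [osd]
    # OSD-side cache of OSDMaps; a separate option with its own default
    # (50 maps, per the discussion below)
    osd_map_cache_size = 50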
On Mon, Jan 7, 2019 at 8:24 AM Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:


> * The default memory utilization for the mons has been increased
>  somewhat.  Rocksdb now uses 512 MB of RAM by default, which should
>  be sufficient for small to medium-sized clusters; large clusters
>  should tune this up.  Also, the `mon_osd_cache_size` has been
>  increased from 10 OSDMaps to 500, which will translate to an
>  additional 500 MB to 1 GB of RAM for large clusters, and much less
>  for small clusters.


Just so I don't perseverate on this: mon_osd_cache_size is a [mon] setting for ceph-mon only?  Does it relate to osd_map_cache_size?  ISTR that in the past the latter defaulted to 500; I had seen a presentation (I think from Dan) at an OpenStack Summit advising its decrease, and it defaults to 50 now.

I like to be very clear about where additional memory is needed, especially for dense systems.

-- Anthony
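
One quick way to confirm which daemon a given option actually applies to on a running cluster is the admin socket; a sketch below, with placeholder daemon names (mon.$(hostname -s), osd.0) that you would adjust to your own deployment:

    # read the monitor-side value from a mon's admin socket
    ceph daemon mon.$(hostname -s) config get mon_osd_cache_size

    # read the OSD-side value from an OSD's admin socket
    ceph daemon osd.0 config get osd_map_cache_size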

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com