Re: Number of OSD map versions

Thanks Dan,

I'll use these ones from Infernalis:


[global]
osd map message max = 100

[osd]
osd map cache size = 200
osd map max advance = 150
osd map share max epochs = 100
osd pg epoch persisted max stale = 150
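To double-check what an OSD is actually running with after the change, the values can be read back over the admin socket. A minimal sketch (the OSD id is an example; adjust for your deployment, and note the admin socket uses underscores rather than spaces in key names):

# query individual settings on a running OSD
ceph daemon osd.0 config get osd_map_cache_size
ceph daemon osd.0 config get osd_map_max_advance

# or dump everything map-related at once
ceph daemon osd.0 config show | grep osd_map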


George

On Mon, Nov 30, 2015 at 4:20 PM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:

I wouldn't run with those settings in production. That was a test to squeeze too many OSDs into too little RAM.

Check the values from infernalis/master. Those should be safe.

--
Dan

On 30 Nov 2015 21:45, "George Mihaiescu" <lmihaiescu@xxxxxxxxx> wrote:
Hi,

I've read the recommendation from CERN about the number of OSD maps (https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf, page 3) and I would like to know if there is any negative impact from these changes:

[global]
osd map message max = 10

[osd]
osd map cache size = 20
osd map max advance = 10
osd map share max epochs = 10
osd pg epoch persisted max stale = 10


We are running Hammer with nowhere close to 7000 OSDs, but I don't want to waste memory on OSD maps that are not needed.
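For reference, a rough way to gauge the cost is to dump a full map and look at its size; the cache holds on the order of "osd map cache size" epochs, so per-OSD memory is at most roughly map size times cache size (a back-of-envelope sketch only, since most cached epochs are small incrementals rather than full maps):

# write the current full OSDMap to a file and check its size
ceph osd getmap -o /tmp/osdmap
ls -lh /tmp/osdmap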

Are there any large production deployments running with these or similar settings?

Thank you,
George


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

