Re: RAM recommendation with large OSDs?

Thanks Paul. I was speaking more about total OSDs and RAM across the cluster, rather than a single node. However, I am considering building a cluster with a high OSD count per node. This would be for archival use, with reduced performance and availability requirements. What issues would you anticipate with a high per-node OSD count? Is the concern just the large rebalance if a node fails and takes out a large portion of the OSDs at once?

-----Original Message-----
From: Paul Emmerich <paul.emmerich@xxxxxxxx> 
Sent: Tuesday, October 01, 2019 3:00 PM
To: Darrell Enns <darrelle@xxxxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re:  RAM recommendation with large OSDs?

On Tue, Oct 1, 2019 at 6:12 PM Darrell Enns <darrelle@xxxxxxxxxxxx> wrote:
>
> The standard advice is “1GB RAM per 1TB of OSD”. Does this actually still hold with large OSDs on bluestore?

No

> Can it be reasonably reduced with tuning?

Yes


> From the docs, it looks like bluestore should target the “osd_memory_target” value by default. This is a fixed value (4GB by default), which does not depend on OSD size. So shouldn’t the advice really be “4GB per OSD”, rather than “1GB per TB”? Would it also be reasonable to reduce osd_memory_target for further RAM savings?

Yes
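
For example, on a release with the centralized config (Mimic or later) the target can be lowered at runtime. A minimal sketch, with the 2 GB value only illustrative (taken from your example below):

    # value is in bytes; 2147483648 = 2 GiB per OSD daemon
    ceph config set osd osd_memory_target 2147483648

    # or override it for a single OSD, e.g. osd.12
    ceph config set osd.12 osd_memory_target 2147483648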

> For example, suppose we have 90 12TB OSD drives:

Please don't put 90 drives in one node; that's not a good idea in 99.9% of use cases.

>
> “1GB per TB” rule: 1080GB RAM
> “4GB per OSD” rule: 360GB RAM
> “2GB per OSD” (osd_memory_target reduced to 2GB): 180GB RAM
>
>
>
> Those are some massively different RAM values. Perhaps the old advice was for filestore? Or is there something to consider beyond the bluestore memory target? What about very dense nodes (for example, 60 12TB OSDs on a single node)?

Keep in mind that it's only a target value; an OSD will use more memory during recovery if you set a low value.
We usually set a target of 3 GB per OSD and recommend 4 GB of RAM per OSD.
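
In ceph.conf, the 3 GB target would look something like this (a sketch; the value is in bytes):

    [osd]
    osd_memory_target = 3221225472   # ~3 GiB per OSD daemon

For the 60 x 12TB example above, 4 GB per OSD works out to roughly 240 GB of RAM for the node, plus headroom for the OS and any other daemons running on it.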

RAM saving trick: use fewer PGs than recommended.
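
The usual guideline is around 100 PGs per OSD; going lower means fewer in-memory PG structures and PG log entries per OSD. A pool's PG count can be set with something like this (a sketch; the pool name and number are placeholders, and lowering pg_num on an existing pool requires Nautilus or later):

    ceph osd pool set mypool pg_num 512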


Paul



--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



