Re: OSD RAM recommendations

The truth of the matter is that folks try to boil this down to some kind of hard-and-fast rule, but it's often not that simple. With our current default settings for the pglog, rocksdb WAL buffers, etc., the OSD needs about 1GB of RAM for bare-bones operation (i.e. not under recovery or an extreme write workload), and any additional memory is used for various caches. How much memory you need for those caches depends on a variety of factors, but the big ones are how often you miss on bluestore onode and omap reads, how big the bloom filters and indexes in rocksdb are (these scale with the total number of objects on the OSD), and whether cached object data matters for your workload.
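To put rough numbers on that (a back-of-the-envelope sketch of the model above, not exact accounting):

    # Per-OSD memory, split per the description above: a fixed baseline
    # plus whatever is left over for caches (onodes, omap, rocksdb block
    # cache, object data).
    baseline_gb = 1.0    # pglog, rocksdb WAL buffers, etc.
    target_gb = 4.0      # a typical per-OSD memory target (see below)
    cache_gb = target_gb - baseline_gb
    print(f"~{cache_gb:.0f}GB left for caches at a {target_gb:.0f}GB target")

The more of your active onodes/omap/data that fits in that leftover space, the fewer reads have to go to disk.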

So basically the answer is that how much memory you need depends largely on how much you care about performance, how many objects are present on each OSD, and how many objects (and how much data) are in your active data set. 4GB is our current default memory target per OSD, but as someone else mentioned, bumping that up to 8-12GB per OSD might make sense for OSDs on large NVMe drives. You can also lower it to about 2GB before you start having real problems, but doing so can definitely impact OSD performance.
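If you want to change that budget, the knob (on releases with the memory autotuner) is osd_memory_target, set in bytes, e.g. "ceph config set osd osd_memory_target 8589934592" for an 8GB target (that command syntax assumes a Mimic-or-later cluster; on older releases it would go in ceph.conf). For the 12-OSD machines in your example, the per-OSD targets above work out roughly as follows (a sketch; leave extra headroom for the OS and recovery spikes):

    # RAM per machine at the per-OSD targets discussed above.
    osds_per_machine = 12
    for target_gb in (2, 4, 8, 12):  # floor / default / large-NVMe territory
        print(f"{target_gb:>2}GB per OSD -> {osds_per_machine * target_gb}GB per machine")
    # -> 24GB, 48GB, 96GB and 144GB respectively

So for the question below: with the 4GB default, something like 48GB plus OS headroom per machine, i.e. closest to your "3-5 GB per OSD" option.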


Mark


On 6/7/19 12:00 PM, Jorge Garcia wrote:
I'm a bit confused by the RAM recommendations for OSD servers. I have seen conflicting information on the lists (1 GB RAM per OSD, 1 GB RAM per TB, 3-5 GB RAM per OSD, etc.). I think I'd understand better with a concrete example:

Say this is your cluster (using Bluestore):

8 machines serving OSDs, each configured identically:

12 x 10 TB disks for data, 120 TB total per machine (1 disk per OSD)

Each machine is running 12 OSD daemons. The whole cluster has 96 OSDs (8 x 12) and a total of 960 TB of space.

What is the recommended amount of RAM for each of the 8 machines serving OSDs?

- 12 GB (1 GB per OSD)
- 10 GB (1 GB per TB of each OSD)
- 120 GB (1 GB per TB per machine)
- 960 GB (1 GB per TB for the whole cluster)
- 36 to 60 GB (3-5 GB per OSD)
- None of the above (then what is the answer?)

Thanks!

Jorge

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



