RAM recommendation with large OSDs?

The standard advice is “1GB RAM per 1TB of OSD”. Does this actually still hold with large OSDs on bluestore? Can it be reasonably reduced with tuning?

 

From the docs, it looks like bluestore should target the “osd_memory_target” value by default. This is a fixed value (4GB by default), which does not depend on OSD size. So shouldn’t the advice really be “4GB per OSD”, rather than “1GB per TB”? Would it also be reasonable to reduce osd_memory_target for further RAM savings?
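For reference, if one did want to lower the target, it takes a value in bytes and can be set in ceph.conf (2GB shown here purely as an illustration, not a recommendation):

```
[osd]
osd_memory_target = 2147483648
```

On recent releases it can also be changed at runtime with `ceph config set osd osd_memory_target 2147483648`.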

 

For example, suppose we have 90 12TB OSD drives:

  • “1GB per TB” rule: 1080GB RAM
  • “4GB per OSD” rule: 360GB RAM
  • “2GB per OSD” (osd_memory_target reduced to 2GB): 180GB RAM
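The arithmetic behind those three figures can be sketched as follows (a back-of-envelope calculation only; it assumes osd_memory_target dominates per-OSD usage and ignores OS overhead and recovery spikes):

```python
# RAM sizing for 90 x 12TB OSDs under each rule of thumb (values in GB).
num_osds = 90
tb_per_osd = 12

ram_1gb_per_tb = num_osds * tb_per_osd * 1   # "1GB per TB" rule
ram_4gb_per_osd = num_osds * 4               # default osd_memory_target (4GB)
ram_2gb_per_osd = num_osds * 2               # osd_memory_target reduced to 2GB

print(ram_1gb_per_tb, ram_4gb_per_osd, ram_2gb_per_osd)  # 1080 360 180
```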

 

Those are massively different RAM values. Perhaps the old advice was for filestore? Or is there something to consider beyond the bluestore memory target? What about very dense nodes (for example, 60 12TB OSDs in a single node)?

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
