Hi, I am contemplating using an NVRAM card for OSD journals in place of SSD drives in our Ceph cluster. Configuration:
· 4 Ceph servers
· Each server has 24 OSDs (each OSD is a 1TB SAS drive)
· 1 PCIe NVRAM card of 16GB capacity per Ceph server
· Both the client and cluster networks are 10Gbps

As per the Ceph documentation:
The expected throughput number should include the expected disk throughput (i.e., sustained data transfer rate), and network throughput. For example, a 7200 RPM disk will likely
have approximately 100 MB/s. Taking the min() of the disk and
network throughput should provide a reasonable expected throughput. Some users just start off with a 10GB journal size. For example:
    osd journal size = 10000

Given that I have a single 16GB card per server that has to be carved up among all 24 OSDs, I will have to configure each OSD journal to be much smaller, roughly 680MB (16GB / 24 drives).
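To make that concrete, here is a rough sketch of what I have in mind in ceph.conf, assuming the NVRAM card is split into one partition per OSD. The /dev/nvram* device names below are just placeholders, not real device paths, and the per-OSD sections are only there to show the idea; the sizing is what I am really asking about.

    [osd]
        # ~680MB available per OSD (16GB NVRAM card / 24 OSDs), value is in MB;
        # setting it slightly lower to leave headroom for partition overhead
        osd journal size = 650

    [osd.0]
        # hypothetical NVRAM partition used as this OSD's journal device
        osd journal = /dev/nvram0p1

    [osd.1]
        osd journal = /dev/nvram0p2

    # ... and so on for the remaining 22 OSDs on each host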
This value is much smaller than the 10GB/OSD journal that is generally used, so I am wondering if this configuration and journal size is valid. Is there a performance benefit to having a journal this small? Also, do I have to reduce the default "filestore max sync interval"
from 5 seconds to a smaller value, say 2 seconds, to match the smaller journal size (my rough check against the docs' sizing formula is below)? Have people used NVRAM cards as journals in their Ceph clusters? What has their experience been?
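For what it is worth, here is my back-of-the-envelope check using the sizing formula from the Ceph docs, osd journal size = 2 * (expected throughput * filestore max sync interval). The 100 MB/s figure is just the docs' example number for a spinning disk, not a measurement of our SAS drives:

    expected throughput = min(disk, network) = min(100 MB/s, ~1250 MB/s for 10Gbps) = 100 MB/s
    default 5 s interval:  2 * 100 MB/s * 5 s = 1000 MB  -> larger than my ~680MB journal
    reduced 2 s interval:  2 * 100 MB/s * 2 s =  400 MB  -> fits within ~680MB

If I am reading that correctly, anything up to roughly 3 seconds should keep the journal from filling between syncs, but I would like confirmation that this is the right way to reason about it.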
Any thoughts?