I am using Intel DC P3700 400GB cards in a similar configuration (two per host); perhaps you could look at cards of that capacity to meet your needs. Journals that small will mean you are constantly blocking on journal flushes, which will hurt write performance and latency; you would be better off with journals large enough to accommodate the throughput you are after. Also, for redundancy I would suggest more than a single journal device: if you lose the journal you will need to rebuild all the OSDs on that host, which is a significant performance impact and, depending on your replication level, opens up the risk of data loss should another OSD fail for whatever reason.
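To put rough numbers on the flush-blocking point (a back-of-envelope sketch using the sizing rule from the Ceph docs, journal size = 2 * expected throughput * filestore max sync interval): a ~600MB journal at the default 5-second sync interval only covers about 600 / (2 * 5) = 60 MB/s of sustained writes per OSD, which is below even a single 7200 RPM disk's ~100 MB/s, so under load the journal fills and writes stall waiting on flushes.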
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of EP Komarla

Hi,

I am contemplating using an NVRAM card for OSD journals in place of SSD drives in our Ceph cluster.

Configuration:
· 4 Ceph servers
· Each server has 24 OSDs (each OSD is a 1TB SAS drive)
· 1 PCIe NVRAM card of 16GB capacity per Ceph server
· Both client & cluster networks are 10Gbps

As per the Ceph documentation:
The expected throughput number should include the expected disk throughput (i.e., sustained data transfer rate), and network throughput. For example, a 7200 RPM disk will likely have approximately 100 MB/s. Taking the min() of the disk and network throughput should provide a reasonable expected throughput. Some users just start off with a 10GB journal size. For example:

osd journal size = 10000

Given that I have a single 16GB card per server that has to be carved up among all 24 OSDs, each OSD journal will have to be much smaller, around 600MB (i.e., 16GB / 24 drives). This is much smaller than the 10GB/OSD journal that is generally used. So, I am wondering whether this configuration and journal size is valid. Is there a performance benefit to having a journal this small? Also, do I have to reduce the default “filestore max sync interval” from 5 seconds to a smaller value, say 2 seconds, to match the smaller journal size?

Have people used NVRAM cards as journals in their Ceph clusters? What was their experience?
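For concreteness, a minimal ceph.conf sketch of what that combination would look like (the 600MB figure is the 16GB / 24 split described above, and the 2-second interval is the value being asked about, not a recommendation):

    [osd]
    # 16384 MB / 24 OSDs ~= 682 MB per journal; 600 leaves some headroom
    osd journal size = 600
    # shorter sync interval so the filestore flushes before the small journal fills;
    # by the docs' rule (journal = 2 * throughput * sync interval), a 600 MB
    # journal at 2 s covers up to ~150 MB/s of sustained writes per OSD
    filestore max sync interval = 2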
Any thoughts?