Steven,

I've recently done some performance testing on Dell hardware. Here are some of my messy results. I was mainly testing the effect of R0 stripe size on the PERC card. Each disk gets its own R0 virtual disk so that write-back caching is enabled. VDs were created like this, just with different stripe sizes: `omconfig storage controller controller=1 action=createvdisk raid=r0 size=max pdisk=0:0:0 name=sdb readpolicy=ra writepolicy=wb stripesize=1mb`.

I have a few generations of PERC cards in my cluster, and it seems to me that a single-disk R0 with at least a 64k stripe size works well. R0 is better for writes than the non-RAID JBOD option some PERC cards offer because it uses the write-back cache, especially in my situation where there are no SSD journals in place. Stripe size does make a difference; larger seems better up to a point for mixed cluster use. There are a ton of different configurations to test, but I only did a few focused on writes.

Kevin

R440, PERC H840 with two MD1400s attached, 12 x 10TB NL-SAS drives per MD1400. XFS filestore with a 10GB journal LV on each 10TB disk. The Ceph cluster is a single mon/mgr/osd server set up for testing. These tables pasted well in my email client, hopefully they stay that way.
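If you want to repeat the setup, the per-disk R0 creation is basically that one omconfig line in a loop. This is only a rough sketch; the controller number, enclosure:slot IDs, VD names, and stripe size below are examples and will differ on your hardware:

#!/bin/bash
# Sketch: one single-disk R0 VD per physical disk so the controller's
# write-back cache gets used. Adjust CTRL, slot range, and STRIPE for
# your enclosure layout -- these values are just placeholders.
CTRL=1
STRIPE=1mb
for slot in $(seq 0 11); do
    omconfig storage controller controller=$CTRL action=createvdisk \
        raid=r0 size=max pdisk=0:0:$slot name=osd$slot \
        readpolicy=ra writepolicy=wb stripesize=$STRIPE
done

# Check that the VDs actually picked up write-back and the intended stripe size
omreport storage vdisk controller=$CTRL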
Another set of testing used an R740xd, PERC H740P, and 24 x 1.2TB 10K SAS drives. I tested both filestore and bluestore; filestore had a 10GB journal LV. The cluster is again a single-node mon/mgr/osd server. This hardware was being tested for a small RBD pool, so rbd bench was used.
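For reference, the write runs were along these lines. The pool name, image name, and IO parameters here are placeholders rather than the exact values behind the tables:

# Create a throwaway image and run a write benchmark against it
rbd create bench/testimg --size 100G
rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 10G bench/testimg

# Clean up afterwards
rbd rm bench/testimg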
On 01/31/2018 09:39 AM, Steven Vacaroaia wrote: