On 6/26/2013 4:49 PM, Gregory Farnum wrote:
On Wednesday, June 26, 2013, Oliver Fuckner wrote:
Hi,
I am fairly new to ceph and just built my first 4 systems.
I use:
Supermicro X9SCL-Board with E3-1240 (4*3.4GHz) CPU and 32GB RAM
LSI 9211-4i SAS HBA with 24 SATA disks and 2 SSDs (Intel 3700,
100GB), all connected through a 6GBit-SAS expander
CentOS 6.4 with Kernel 2.6.32-358.11.1, 64bit
ceph 0.61.4
Intel 10GigEthernet NICs are used to connect the nodes together
xfs is used on journal and osds
The SSDs are configured in a mdadm raid1 and used for journals.
The SSDs can write 400MBytes/sec each, but the aggregate write
throughput across all disks is exactly half of that, 200MBytes/sec.
So there are 2 journal writes for every write to the osd?
No.
Is this expected behaviour? Why?
No, but at a guess your expanders aren't behaving properly.
Alternatively, your SSDs don't handle twelve write streams so
well -- that's quite a lot of oversubscription.
How do I debug expander behaviour? I know lsiutil, but is there
something like iostat for SAS lanes/phys? Talking about
oversubscription: what I'm really attempting is 24 streams to one
SSD mirror. So I will probably need more SSDs, okay...
I would test the write behavior of your disks independently
of Ceph (but simultaneously!) and see what happens.
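One way to do that is to launch several dd writers in parallel so every stream hits the device at the same time, then compare the per-stream rates with the single-stream number. A minimal sketch (the TARGET path and stream count are placeholders; point TARGET at a scratch directory on the device under test):

```shell
#!/bin/sh
# Launch several concurrent sequential write streams and report each one's
# throughput, so the device sees simultaneous writers rather than one at
# a time. TARGET and STREAMS are assumptions -- adjust for your setup.
TARGET=${1:-/tmp/ddtest}
STREAMS=${2:-4}
mkdir -p "$TARGET"
for i in $(seq 1 "$STREAMS"); do
    # conv=fsync forces data to stable storage before dd reports a rate;
    # add oflag=direct to bypass the page cache on a real block device.
    dd if=/dev/zero of="$TARGET/stream.$i" bs=4M count=16 conv=fsync \
        2> "$TARGET/stream.$i.log" &
done
wait
# The last line of each log shows bytes written, elapsed time, and MB/s.
grep -h copied "$TARGET"/stream.*.log
```

Watching iostat -x on the SSDs and the expander-attached disks while this runs should show whether the aggregate rate collapses under concurrency.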
Well, dd to the SSDs also shows 400MBytes/sec with 4MByte blocks.
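A single large-block dd stream is close to the best case for an SSD, though; the journal workload here is two dozen writers doing synchronous writes. A fio job file along these lines (the directory path is a placeholder, and fio is assumed to be installed) would model that more closely than one dd:

```ini
; journal-sim.fio -- rough model of 24 OSD journals sharing one SSD mirror.
; 'directory' is an assumption: point it at a filesystem on the md RAID1.
[global]
directory=/mnt/ssd-journal
rw=write
bs=1M
size=256m
direct=1
sync=1
group_reporting

[journal-writers]
numjobs=24
```

Running it with `fio journal-sim.fio` and comparing the aggregate bandwidth against the 400MBytes/sec single-stream figure would show how much the mirror loses under 24 synchronous streams.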
Thanks,
Oliver
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com