Hi Sumit,
A couple questions:
What brand/model SSD?
What brand/model HDD?
Also, how are they connected to the controller/motherboard? Are they sharing a bus (i.e. a SATA expander)?
RAM?
Also look at the output of "iostat -x" or similar while the benchmark is running - are the SSDs hitting 100% utilisation?
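For example, something along these lines (5-second samples; device names will differ on your boxes):

    iostat -x 5

and keep an eye on the %util and write-throughput columns for the journal SSDs while the test runs.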
I suspect that the 5:1 ratio of HDDs to SSDs is not ideal: you now have 5x the write IO trying to fit onto a single SSD. I'll take a punt on it being a SATA-connected SSD (the most common); 5 x ~130 megabytes/second gets very close to most SATA bus limits. If it's a shared bus, you possibly hit that limit even earlier, since all of that data is now being written twice out over the bus (once to the journal SSD, once to the data HDD).
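Back-of-envelope sketch of that maths (Python; the per-disk and link numbers are my assumptions, substitute what you actually measure):

    # Rough journal-bandwidth sanity check; all figures below are assumptions.
    hdd_count = 5              # HDDs journaling onto one SSD
    hdd_write_mb_s = 130       # rough sustained sequential write per HDD
    sata3_limit_mb_s = 550     # practical ceiling of a 6 Gb/s SATA link

    journal_mb_s = hdd_count * hdd_write_mb_s
    print("journal traffic ~%d MB/s vs SATA link ~%d MB/s"
          % (journal_mb_s, sata3_limit_mb_s))
    if journal_mb_s >= sata3_limit_mb_s:
        print("the SSD's SATA link is likely the bottleneck before the SSD itself is")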
cheers;
\Chris
From: "Sumit Gaur" <sumitkgaur@xxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Thursday, 12 February, 2015 9:23:35 AM
Subject: ceph Performance with SSD journal
Hi Ceph-Experts,
I have a small ceph architecture-related question.
Blogs and documents suggest that ceph performs much better if we put the journal on SSD.
I have built a ceph cluster with 30 HDDs + 6 SSDs across 6 OSD nodes: 5 HDDs + 1 SSD on each node, with each SSD split into 5 partitions journaling the 5 OSDs on that node.
Now I ran the same tests that I ran for the all-HDD setup.
The two readings below go in the wrong direction from what I expected:
1) 4K write IOPS are lower for the SSD setup; not a major difference, but lower.
2) 1024K read IOPS are lower for the SSD setup than for the HDD setup.
On the other hand, 4K read and 1024K write both have much better numbers for the SSD setup.
Let me know if I am missing some obvious concept.
Thanks
sumit
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com