Yes.
On my lab setup (not production yet) with nine 7200 RPM SATA drives (one OSD each) and one Intel
SSDSC2BB800G4 (800 GB, holding all 9 journals), random writes gave me ~90%
utilization on the 9 HDDs with only ~5% utilization of the SSD (2.4k IOPS). Sequential
writes were somewhat worse: I got 250 MB/s on the SSD, which translated
into about 240 MB/s across all OSDs combined.
Cold random reads were poor too, as expected.
Just for comparison, here are my baseline benchmarks (fio/librbd, 4k,
iodepth=32, randwrite) for a single OSD in a pool with size=1:
Intel 53x and Pro 2500 Series SSDs - 600 IOPS
Intel 730 and DC S35x0/3610/3700 Series SSDs - 6605 IOPS
Samsung SSD 840 Series - 739 IOPS
EDGE Boost Pro Plus 7mm - 1000 IOPS
(so the DC S3500 is the clear winner)
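
For reference, the baseline job I run looks roughly like this (the pool and image names here are placeholders, not the ones I actually used, and runtime/direct are from memory, so treat it as a sketch rather than the exact command):

  fio --name=baseline-randwrite --ioengine=rbd \
      --clientname=admin --pool=testpool --rbdname=testimage \
      --rw=randwrite --bs=4k --iodepth=32 \
      --direct=1 --time_based --runtime=60

The pool it targets is created with size=1 so only a single OSD is exercised.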
On 07/06/2016 03:22 PM, Alwin Antreich wrote:
Hi George,
Interesting benchmark result. Could you please supply some more numbers? We didn't get results
that good in our tests.
Thanks.
Cheers,
Alwin
On 07/06/2016 02:03 PM, George Shuklin wrote:
Hello.
I've been testing an Intel S3500 as a journal store for a few HDD-based OSDs. I stumbled on issues with multiple partitions (>4)
and udev (sda5, sda6, etc. sometimes do not appear after partition creation). I'm also starting to think partitions are not that
useful for OSD management, because Linux does not allow rereading the partition table while it contains in-use volumes.
So my question: how do you store many journals on one SSD? My initial thoughts:
1) a filesystem with file-based journals
2) LVM with one volume per journal (rough sketch below)
Anything else? Best practice?
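
For option 2, something like this is what I have in mind (device name, volume sizes and names are just examples, not a tested recipe):

  # one VG on the SSD, one small LV per OSD journal
  pvcreate /dev/sdb
  vgcreate ssd-journals /dev/sdb
  lvcreate -L 10G -n journal-osd0 ssd-journals
  lvcreate -L 10G -n journal-osd1 ssd-journals
  # then point each OSD at its LV, e.g. in ceph.conf:
  #   [osd.0]
  #   osd journal = /dev/ssd-journals/journal-osd0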
P.S. I've done some benchmarking: the S3500 can support journals for up to 16 10k-RPM HDDs.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com