On 2013-09-02 05:19, Fuchs, Andreas (SwissTXT) wrote:
Reading through the documentation and talking to several people leads to the conclusion that it's a best practice to place the journal of an OSD instance on a separate SSD to speed up writes.
But is this true? I have 3 new Dell servers available for testing, each with 12 x 4 TB SATA disks and 2 x 100 GB SSDs. I don't have the exact specs at hand, but tests show:
The SATA disks' sequential write speed is 300 MB/s.
The SSDs, in a RAID1 config, manage only 270 MB/s! They were probably not the most expensive model.
When we put the journals on the OSDs, I can expect a sequential write speed of 12 x 150 MB/s (one write to the journal, one to disk), which is 1800 MB/s per node.
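To spell out the arithmetic in the question, here's a rough sketch using the quoted figures; the halving for co-located journals assumes every client write hits the disk twice, once for the journal and once for the data:

    # Rough throughput sketch using the figures quoted above.
    # Assumption: every client write is written twice, once to the
    # journal and once to the data area.

    NUM_OSDS = 12          # spinning disks per node (from the post)
    DISK_SEQ_MB_S = 300    # sequential write rate per SATA disk (quoted)
    SSD_RAID1_MB_S = 270   # sequential write rate of the SSD RAID1 (quoted)

    # Journals co-located on the OSD disks: each disk writes everything twice.
    colocated_total = NUM_OSDS * DISK_SEQ_MB_S / 2      # ~1800 MB/s per node

    # Journals on the shared SSD RAID1: every write must land on the SSDs
    # first, so the node can't accept writes faster than the SSDs journal them.
    ssd_bound_total = min(SSD_RAID1_MB_S, NUM_OSDS * DISK_SEQ_MB_S)

    print(f"journals on OSD disks: ~{colocated_total:.0f} MB/s per node")
    print(f"journals on SSD RAID1: ~{ssd_bound_total:.0f} MB/s per node")

That 1800 MB/s is a sequential best case, though.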
The thing is that, unless you've got a magical workload, you're not
going to see sequential write speeds from your spinning disks. At a
minimum, a write to the journal at the beginning of the disk followed
by a write to the data at a different portion of the disk performs the
same as random I/O, because the disk has to seek, on average, half-way
across the platter each time it commits a new transaction. This gets
worse when you also take into account random reads, which cause yet
more seeks.
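To see why, here's a rough, hedged model of what a co-located journal costs in seeks; the seek time and write sizes below are assumptions for illustration, not measurements from this thread, and only the 300 MB/s sequential rate comes from the figures quoted above:

    # Effective per-disk throughput when every committed write costs a seek
    # (journal at one end of the platter, data somewhere else).
    # Seek time and write sizes are assumed values for illustration only.

    avg_seek_s = 0.008       # ~8 ms average seek, typical 7200 rpm SATA (assumed)
    seq_rate_mb_s = 300      # sequential write rate quoted above

    for write_mb in (0.064, 0.5, 4.0):
        transfer_s = write_mb / seq_rate_mb_s
        effective = write_mb / (avg_seek_s + transfer_s)
        print(f"{write_mb:>5} MB per write -> ~{effective:5.1f} MB/s per disk")

With small writes the seek time dominates, which is roughly where the
~10 MB/s-per-disk random figure below comes from.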
Sequential read on the disks I've got is about 180 MB/s (they're
cheap, slow disks); random read/write on the array seems to peak
around 10 MB/s per disk.
I'd benchmark your random I/O performance and use that to decide how
many SSDs you need, and how fast they have to be.
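For what it's worth, a minimal sketch of that sizing check; all three inputs are placeholders to be replaced with your own benchmark results:

    # Sizing sketch: does one journal SSD keep up with the spinners behind it?
    # All three inputs are placeholders; substitute your own benchmark results.

    random_write_mb_s = 10     # measured random-write rate per spinning disk
    osds_per_ssd = 6           # spinners whose journals share one SSD
    ssd_seq_write_mb_s = 270   # sequential write rate of the journal SSD

    journal_load = random_write_mb_s * osds_per_ssd
    verdict = "fits within" if journal_load <= ssd_seq_write_mb_s else "exceeds"
    print(f"~{journal_load} MB/s of journal traffic {verdict} "
          f"the {ssd_seq_write_mb_s} MB/s the SSD can sustain")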
I've actually got a 4-disk external hot-swap SATA cage on order that
connects over a USB 3 or eSATA link. Sequential read/write, even with
the slow disks I've got, will saturate the link, but filled with
spinning disks doing random I/O there should be plenty of headroom.
It'll be interesting to see whether it's a worthwhile investment,
compared to having to open a computer up to change disks.
--
Martin Rudat