Another option is to run the journals on individually presented SSDs, at
a 5:1 ratio (spinning disks to SSD), and keep the OS somewhere else. That
keeps the failure domain smaller.
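As a rough sketch of that layout (all device names below are just
examples), one host might map five spinners to five journal partitions on
a single SSD:

# Hypothetical 5:1 layout: five data disks, one journal SSD split into
# five partitions, OS on a separate device. Device names are examples.
DATA_DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"]
JOURNAL_SSD = "/dev/sdg"

for i, disk in enumerate(DATA_DISKS, start=1):
    # one journal partition per OSD, e.g. /dev/sdg1 .. /dev/sdg5
    print("osd data: {}  journal: {}{}".format(disk, JOURNAL_SSD, i))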
Ideally, implement some way to monitor the SSDs' write-life SMART data -
at least that gives a guide to device condition compared with its rated
life. It can be done with smartmontools, but it would be nice to have it
on the Inktank dashboard, for example.
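As a minimal sketch of the smartmontools side (SMART attribute names vary
by SSD vendor, and the device name below is just an example), something
like this could be run periodically:

# Pull the wear-related SMART attributes from an SSD via smartctl.
# "Media_Wearout_Indicator" and "Wear_Leveling_Count" are vendor-specific
# names; adjust for the drives actually in use.
import subprocess

WEAR_ATTRS = ("Media_Wearout_Indicator", "Wear_Leveling_Count")

def ssd_wear(device):
    out = subprocess.check_output(["smartctl", "-A", device]).decode()
    return [line for line in out.splitlines()
            if any(attr in line for attr in WEAR_ATTRS)]

print("\n".join(ssd_wear("/dev/sdg")))  # journal SSD, name assumed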
On 2013-12-05 14:26, Sebastien Han wrote:
Hi guys,
I won’t do a RAID 1 with SSDs since they both write the same data.
Thus, they are more likely to “almost” die at the same time.
What I will try to do instead is to use both disks in JBOD mode (or as a
degraded RAID 0).
Then I will create a tiny root partition for the OS.
Then I’ll still have something like /dev/sda2 and /dev/sdb2, and I can
take advantage of the two disks independently.
The good thing with that is that you can balance your journals across
both SSDs.
From a performance perspective this is really good.
The bad thing, as always, is that if you lose an SSD you lose all the
journals attached to it.
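Purely as an illustration (device names and OSD count are made up), the
balancing could look something like this, alternating journal partitions
between the two SSDs:

# Two JBOD SSDs: partition 1 of each holds the OS, the remaining
# partitions hold OSD journals, alternated so both SSDs carry the load.
SSDS = ["/dev/sda", "/dev/sdb"]
NUM_OSDS = 6  # assumed number of OSDs on the host

for osd in range(NUM_OSDS):
    ssd = SSDS[osd % len(SSDS)]   # alternate between the two SSDs
    part = 2 + osd // len(SSDS)   # partitions 2, 3, 4, ... on each SSD
    print("osd.{} journal -> {}{}".format(osd, ssd, part))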
Cheers.
––––
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien.han@xxxxxxxxxxxx
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On 05 Dec 2013, at 10:53, Gandalf Corvotempesta
<gandalf.corvotempesta@xxxxxxxxx> wrote:
2013/12/4 Simon Leinen <simon.leinen@xxxxxxxxx>:
I think this is a fine configuration - you won't be writing to the root
partition too much, outside journals. We also put journals on the same
SSDs as root partitions (not that we're very ambitious about
performance...).
Do you suggest a RAID1 for the OS partitions on SSDs? Is this safe, or
will a RAID1 decrease SSD life?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com