Once you've replaced an OSD, you'll see it is quite simple, and doing
it for a few is not much more work (you've scripted it, right?). I
don't see RAID as giving any benefit here at all. It's not tricky;
it's perfectly normal operation. Get used to ceph and it'll be as
routine as replacing a RAID disk. As for performance degradation
during a rebuild, it could go either way, or ceph could come out
ahead if you don't mind throttling recovery to the lowest rate (a
rough sketch of the commands is below); and once the QoS
functionality is ready, ceph will probably be much better. RAID will
also cost you more in hardware.
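In case it's useful, this is roughly what such a replacement script
looks like on a Luminous-era cluster; the OSD id 12 and /dev/sdX are
placeholders, and on pre-Luminous releases you'd use ceph-disk
instead of ceph-volume:

    # take the failed OSD out and remove it from the cluster
    ceph osd out 12
    systemctl stop ceph-osd@12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12

    # create a new OSD on the replacement disk
    ceph-volume lvm create --data /dev/sdX

    # optionally throttle backfill/recovery so clients barely notice it
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'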
And raid5 is really bad for IOPS. And ceph already replicates, so you
would have 2 layers of redundancy, and ceph does it cluster wide, not
just within one machine. Using ceph with replication is like having
all your free space as hot spares: you could lose 2 disks on every
one of your machines and it could still run (assuming it had time to
recover in between, and enough free space).

And you don't want min_size=1, but with 2 layers of redundancy you'll
probably be tempted to set that (the pool settings are shown below).

For some workloads, like RBD, ceph doesn't balance the load very
evenly for a single client, only across many clients at once. RAID
might help with that, but I don't see it as worth it. I would just
software RAID1 the OS, mons and mds, not the OSDs.
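For reference, the pool replication settings I mean are set like this
("rbd" is just a placeholder pool name; size 3 / min_size 2 is the
usual safe combination, and min_size 1 is the one to avoid):

    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2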