On 12.08.2016 13:41, Félix Barbeira wrote:
Hi,
I'm planning to build a Ceph cluster, but I have a serious doubt. At the moment we have ~10 DELL R730xd servers with 12x 4TB SATA disks each. The official Ceph docs say:
"We
recommend using a dedicated drive for the operating system
and software, and one drive for each Ceph OSD Daemon you run
on the host."
I could use, for example, 1 disk for the OS and 11 for OSD data, and run the 11 daemons that control those OSDs on that host. But what happens to the cluster if the disk with the OS fails? Maybe the cluster thinks that 11 OSDs failed and tries to replicate all that data across the cluster... that doesn't sound good.
Should I use 2 disks for the OS in a RAID1? In that case I'm "wasting" 8TB for the ~10GB the OS actually needs. All the docs I've been reading say Ceph has no single point of failure, so I think this scenario must have an optimal solution; maybe somebody could help me.
Thanks in advance.
--
Félix Barbeira.
If you do not have dedicated slots on the back for OS disks, then I would recommend using SATADOM flash modules plugged directly into an internal SATA port in the machine. That saves you two slots for OSDs, and they are quite reliable. You could even use two SD cards if your machine has the internal SD module:
http://www.dell.com/downloads/global/products/pedge/en/poweredge-idsdm-whitepaper-en.pdf
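
As for what happens when the OS disk itself dies: the 11 OSDs on that host will be reported down, and after the usual timeout the monitors will mark them out and start re-replicating their data. A minimal sketch of how to blunt that, assuming a reasonably recent Ceph release:

    # ceph.conf on the monitors: do not automatically mark OSDs "out"
    # when an entire host (or larger CRUSH subtree) goes down at once
    [mon]
    mon osd down out subtree limit = host

    # for a planned OS reinstall, set the noout flag first so nothing
    # is rebalanced while the host is down:
    ceph osd set noout
    # ...reinstall the OS and bring the OSDs back up (the data disks are intact)...
    ceph osd unset noout

The OSD data disks survive an OS disk failure, so once the OS is reinstalled and the OSDs come back up, only the writes missed during the outage need to be recovered, not a full re-replication.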
kind regards
Ronny Aasen
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com