Re: Large numbers of OSD per node

2012/11/6 Wido den Hollander <wido@xxxxxxxxx>:
> It works for them. There is no journaling though, but this setup is only
> being used for the RADOS Gateway, not for RBD.
>
> You might want to insert a couple of SSDs in there to do journaling for you.

Yes, I'll add 2 SSDs in each server, probably in RAID1, since I'll
also install the OS on them.
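
For what it's worth, a minimal ceph.conf sketch of that layout (the
partition label and the 10 GB journal size are assumptions for
illustration, not recommendations):

    [osd]
        ; journal size in MB (assumed; size it to absorb a few seconds of writes)
        osd journal size = 10240

    [osd.0]
        host = node1
        ; journal on a partition of the SSD mirror (assumed partition label)
        osd journal = /dev/disk/by-partlabel/osd0-journal

One thing to weigh: with the journals on the same RAID1 set as the OS,
every journal write hits both SSDs, so you pay the mirror's write
overhead. Some people instead give each SSD its own unmirrored journal
partitions and accept that losing an SSD takes its OSDs down with it.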

> But the rule also applies here: when you use 3 of these machines,
> losing one means losing 33% of your cluster.
>
> The setup described on that page has 90 nodes, so one node failing is a
> little over 1% of the cluster which fails.

Actually, I'll start with 2 or 3 3TB disks per node, with 3 nodes as a
testbed and 5 in production.
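
One thing worth checking before going to production: make sure the
CRUSH rule for your pools separates replicas by host, so losing a node
costs capacity and triggers recovery but never takes out all copies of
an object. A sketch of the usual rule (the rule name and ruleset number
are placeholders):

    rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

Rough numbers for the production side: with 3 disks per node, 5 nodes
x 3 x 3TB is 45TB raw, so one node failing removes 20% of the raw
capacity and re-replicates roughly 9TB onto the remaining four nodes.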