Re: Large numbers of OSD per node

On 06-11-12 10:36, Gandalf Corvotempesta wrote:
2012/11/6 Wido den Hollander <wido@xxxxxxxxx>:
You shouldn't think only about a complete-failure scenario. The distributed
architecture of Ceph also gives you the freedom to take out a node whenever
you want to do maintenance, or when you just don't trust the node and want
to investigate it.

The scenario is still the same: use smaller nodes, so that taking out one node
(for whatever reason) doesn't impact your cluster that much.

Here:
http://ceph.com/docs/master/install/hardware-recommendations/
it is written that a production cluster has been built with many R515s with
12 disks of 3TB each, which gives 36TB of storage per node.

Is this configuration considered good? I'm planning to use the same server.
--

It works for them. There is no journaling, but then that setup is only being used for the RADOS Gateway, not for RBD.

You might want to insert a couple of SSDs in there to do journaling for you.
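
For reference, a minimal sketch of what that could look like in ceph.conf, assuming one SSD partition per OSD journal (the device paths and OSD ids below are made up for illustration, adjust them to your own layout):

    [osd]
        ; journal size in MB, 10 GB here as an example
        osd journal size = 10240

    [osd.0]
        ; put the journal on an SSD partition instead of the data disk
        osd journal = /dev/sdm1

    [osd.1]
        osd journal = /dev/sdm2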

But the rule also applies here: if you use 3 of these machines, losing one means losing 33% of your cluster.

The setup described on that page has 90 nodes, so a single node failing takes out only a little over 1% of the cluster.
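
As a side note, when you take one of those nodes out on purpose (for maintenance, like mentioned above) rather than losing it, you can tell the cluster not to start re-replicating its data while it is down. A rough sketch, assuming a version recent enough to have the noout flag and that the node carries osd.0 and osd.1 (placeholder ids):

    # prevent the OSDs on this node from being marked out while we work on it
    ceph osd set noout

    # stop the OSD daemons on the node (init script syntax may differ per distro)
    service ceph stop osd.0
    service ceph stop osd.1

    # ... do the maintenance and bring the node back ...

    # let the cluster mark OSDs out again afterwards
    ceph osd unset noout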

Wido

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

