Re: Large numbers of OSD per node

2012/11/6 Wido den Hollander <wido@xxxxxxxxx>:
> You shouldn't only think about a complete failure solution. The distributed
> architecture of Ceph also gives you the freedom to take out a node whenever
> you want to do maintenance or just don't trust the node and you want to
> investigate.
>
> The scenario is still the same. Use smaller nodes so taking out one node
> (for what reason) doesn't impact your cluster that much.

Here:
http://ceph.com/docs/master/install/hardware-recommendations/
it is written that a production cluster was built with many R515s, each
with 12 disks of 3 TB. That gives 36 TB of raw storage per node.

Is this configuration considered good? I'm planning to use the same servers.
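
To put Wido's point about smaller nodes in rough numbers: when a node leaves the cluster, roughly its share of the raw capacity has to re-replicate across the remaining nodes. The sketch below compares a 36 TB node (like the R515 above) against a 12 TB node at the same assumed total capacity; the 432 TB cluster size is an illustrative assumption, not from the thread, and it assumes CRUSH spreads data uniformly.

```python
# Rough sketch: fraction of cluster data that must re-replicate when
# one full node is taken out, for two node sizes at the same total.
# The 432 TB total is an assumed example figure.

def recovery_fraction(node_capacity_tb, total_capacity_tb):
    """Fraction of the cluster's data that must move when one node leaves."""
    return node_capacity_tb / total_capacity_tb

total = 432  # assumed total raw capacity in TB, for illustration only

big = recovery_fraction(36, total)    # one R515-style node: 12 x 3 TB
small = recovery_fraction(12, total)  # a hypothetical smaller node

print(f"36 TB nodes: {big:.1%} of data re-replicates")
print(f"12 TB nodes: {small:.1%} of data re-replicates")
```

So losing one big node touches about three times as much data as losing one small node, which is the maintenance argument for more, smaller nodes.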
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

