Re: Large numbers of OSD per node


 



2012/11/6 Wido den Hollander <wido@xxxxxxxxx>:
> The setup described on that page has 90 nodes, so one node failing is a
> little over 1% of the cluster which fails.

I think I'm missing something.
In case of a failure, they will always have to resync 36 TB of data,
no matter whether they have 90 servers.
Each server is 36 TB, so every time they need to resync the whole server.
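
A quick back-of-the-envelope sketch of the numbers in this thread (90 nodes,
36 TB per node; how the recovery load is distributed across the survivors is
an assumption about Ceph's re-replication behaviour, not something stated
here):

```python
# Recovery arithmetic for the setup discussed above.
nodes = 90
node_capacity_tb = 36

# When one node fails, all of its data must be re-replicated,
# regardless of cluster size:
total_resync_tb = node_capacity_tb  # always 36 TB

# Assuming the work is spread evenly across the surviving nodes,
# the per-node share shrinks as the cluster grows:
per_node_share_tb = total_resync_tb / (nodes - 1)

print(f"Total to resync: {total_resync_tb} TB")
print(f"Share per surviving node: {per_node_share_tb:.2f} TB")
```

So the total resync volume is indeed 36 TB either way; what the larger
cluster changes is only how thinly that work is spread.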
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

