Re: Large numbers of OSD per node

2012/11/6 Stefan Kleijkers <stefan@xxxxxxxxxxxxxxxxxxxx>:
> True, but it's a huge difference whether you have to redistribute the 36T
> between 2 remaining nodes or between 89 remaining nodes. And with so few
> nodes you probably hit a couple of other bottlenecks, like CPU power per
> node, networking bandwidth per node, etc. I learned this the hard way with
> 3 nodes and 24 disks/OSDs per node.

Ok, now it's clear.
In a 90-node cluster, the failed node's 36TB is redistributed across the
89 remaining nodes (about 400GB each), while in a 10-node cluster the
same 36TB lands on only 9 remaining nodes (4TB each).
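For a quick back-of-the-envelope check, here is a minimal Python sketch of
that calculation. It assumes the failed node's data spreads evenly across
the surviving nodes; real CRUSH placement also depends on OSD weights and
failure domains, so treat the numbers as rough estimates.

def per_node_rebalance(total_nodes, failed_capacity_tb):
    # Data each surviving node must absorb when one node fails,
    # assuming an even redistribution across the remaining nodes.
    survivors = total_nodes - 1
    return failed_capacity_tb / survivors

# A 36TB node failing in clusters of different sizes:
for n in (90, 10, 3):
    print("%3d nodes: %.2f TB per surviving node"
          % (n, per_node_rebalance(n, 36.0)))
# -> 90 nodes: 0.40 TB, 10 nodes: 4.00 TB, 3 nodes: 18.00 TB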

