Yes, rebuild in case of a whole-chassis failure is indeed an issue. That
depends on what the failure domain looks like.
I'm currently thinking of initially not running fully equipped nodes.
Let's say four of these machines with 60x 6TB drives each, so loaded to
only 2/3 capacity.
That's 1440TB raw, distributed over eight OSD nodes.
Each individual OSD node would therefore host "only" 30 OSDs but still
allow for fast expansion.
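For reference, the capacity math above works out like this (drive count, drive size and node split are taken from this thread; the rebuild figure assumes full drives and is purely illustrative):

```python
# Planned initial deployment: 4x DSS 7000 chassis, each with 2 server
# sleds and 60 of 90 drive bays populated (2/3 loaded).
chassis = 4
sleds_per_chassis = 2
drives_per_chassis = 60
drive_tb = 6

osd_nodes = chassis * sleds_per_chassis                  # 8 OSD nodes
osds_per_node = drives_per_chassis // sleds_per_chassis  # 30 OSDs each
raw_tb = chassis * drives_per_chassis * drive_tb         # 1440 TB raw

# Losing one OSD node means re-replicating up to its whole raw share
# (worst case, drives completely full):
rebuild_tb = osds_per_node * drive_tb                    # 180 TB

print(osd_nodes, osds_per_node, raw_tb, rebuild_tb)  # 8 30 1440 180
```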
Delivery and installation of a bunch of HDDs is usually much faster
than for whole servers.
I really wonder how easy it is to add additional disks and whether the
chance of node or even chassis failure increases.
Cheers, Bastian
On 2016-03-21 10:33, David wrote:
Sounds like you'll have a field day waiting for rebuild in case of a
node failure or an update of the CRUSH map ;)
David
On 21 March 2016 at 09:55, Bastian Rosner <bro@xxxxxxxx> wrote:
Hi,
any chance that somebody here already got hands on Dell DSS 7000
machines?
It's a 4U chassis containing 90x 3.5" drives and 2x dual-socket server
sleds (DSS7500). Sounds ideal for high-capacity, high-density clusters,
since each of the server sleds would run 45 drives, which I believe is
a suitable number of OSDs per node.
When searching for this model there's not much detailed information
out there.
Sadly, I could not find a review from somebody who actually owns a
bunch of them and runs a decent PB-scale cluster with them.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com