Re: DSS 7000 for large scale object storage

From my experience you’ll be better off planning exactly how many OSDs and nodes you’re going to have and, if possible, equipping them fully from the start.

Whenever you add a new drive to the same pool, Ceph will start to rearrange data across the whole cluster, which can reduce client IO depending on how much recovery load you’re comfortable with. In the worst case your clients won’t get enough IO and your services might be "down" until the cluster is healthy again.
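
To soften that impact, recovery and backfill can be throttled. As a minimal sketch only (the values below are illustrative, not a recommendation; tune them for your own hardware and client load), the relevant knobs in ceph.conf look roughly like this:

    [osd]
        osd max backfills = 1
        osd recovery max active = 1
        osd recovery op priority = 1
        osd client op priority = 63

The same options can also be changed at runtime with "ceph tell osd.* injectargs" if a rebalance is already in flight.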

Rebuilding 60 x 6TB drives will take quite some time. Each SATA drive delivers about 75-125MB/s of throughput at best, so a rebuild of one such drive would take approx. 16-17 hours. In practice it usually takes 2-3 times as long, and even longer if your controllers or network are the bottleneck.
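
As a quick back-of-the-envelope check, here’s a small Python sketch (the 100MB/s figure is just an assumption, roughly the middle of that range, and it ignores any other load):

    # rough best-case rebuild-time estimate for a single 6TB drive
    drive_size_mb = 6 * 1000 * 1000       # 6TB expressed in MB (decimal)
    throughput_mb_s = 100                 # assumed sustained SATA throughput
    hours = drive_size_mb / throughput_mb_s / 3600
    print(round(hours, 1))                # ~16.7 hours per drive, best case

That lines up with the 16-17 hour best-case figure above, before the real-world x2-x3 factor.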

// david


> On 21 March 2016 at 13:13, Bastian Rosner <bro@xxxxxxxx> wrote:
> 
> Yes, rebuild in case of a whole chassis failure is indeed an issue. That depends on what the failure domain looks like.
> 
> I'm currently thinking of initially not running fully equipped nodes.
> Let's say four of these machines with 60x 6TB drives each, so each chassis is only 2/3 loaded.
> That's 1440TB raw distributed over eight OSD nodes.
> Each individual OSD-node would therefore host "only" 30 OSDs but still allow for fast expansion.
> 
> Usually delivery and installation of a bunch of HDDs is much faster than for whole servers.
> 
> I really wonder how easy it is to add additional disks and whether the chance of node or even chassis failure increases.
> 
> Cheers, Bastian
> 
> On 2016-03-21 10:33, David wrote:
>> Sounds like you’ll have a field day waiting for the rebuild in case of a
>> node failure or a change to the CRUSH map ;)
>> David
>>> On 21 March 2016 at 09:55, Bastian Rosner <bro@xxxxxxxx> wrote:
>>> Hi,
>>> Any chance that somebody here has already got their hands on Dell DSS 7000 machines?
>>> 4U chassis containing 90x 3.5" drives and 2x dual-socket server sleds (DSS7500). Sounds ideal for high-capacity, high-density clusters, since each of the server sleds would run 45 drives, which I believe is a suitable number of OSDs per node.
>>> When searching for this model there's not much detailed information out there.
>>> Sadly I could not find a review from somebody who actually owns a bunch of them and runs a decent PB-size cluster with them.
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



