Re: What a maximum theoretical and practical capacity in ceph cluster?

I've been looking at various categories of disks and how the
performance/reliability/cost varies.

There seem to be five main SATA categories (WD disks given as examples), plus SAS:


Budget (WD Green - 5400RPM, no TLER)
Desktop Drives (WD Blue - 7200RPM, no TLER)
NAS Drives (WD Red - 5400RPM, TLER)
Enterprise Capacity (WD SE - 7200RPM, TLER)
Enterprise Performance (WD RE - 7200RPM, TLER)
SAS Enterprise Performance

I would definitely not use the Green drives, as they seem to park their heads
very frequently and to suffer high failure rates in enterprise workloads.

The Blue drives I'm not sure about: they definitely can't be used in RAID as
they have very long error timeouts, but I don't know how CEPH handles this and
whether it's worth the risk for the cheaper cost.

The Red drives are interesting: they are very cheap, and if performance is not
of top importance (cold storage/archive) they would seem to be a good choice,
as they are designed for 24x7 use and support a 7-second error timeout (TLER).
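
For what it's worth, the error-recovery timeout (SCT ERC) can be queried and
set from Linux with smartctl. Here's a rough sketch, assuming smartmontools is
installed and using placeholder device names; drives without TLER/ERC support
will simply refuse the set command:

    #!/usr/bin/env python3
    # Rough sketch: query and (if supported) set a 7s SCT ERC timeout on a
    # list of drives via smartctl. Assumes smartmontools is installed and
    # that it runs with enough privilege; device names are placeholders.
    import subprocess

    DEVICES = ["/dev/sda", "/dev/sdb"]  # example devices only

    for dev in DEVICES:
        # Show the current SCT ERC (error recovery) setting, if the drive
        # reports one at all.
        query = subprocess.run(["smartctl", "-l", "scterc", dev],
                               capture_output=True, text=True)
        print(f"--- {dev} ---")
        print(query.stdout.strip())

        # Try to set a 7.0s read/write recovery timeout (values are in
        # 100ms units). Non-TLER drives will refuse this.
        subprocess.run(["smartctl", "-l", "scterc,70,70", dev], check=False)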

The two enterprise drives differ by performance, with the latter also costing
more. To be honest, I don't see the point of the capacity version; if you
don't need the extra performance, you would be better off going with the Red
drive.

And finally the SAS drives. For CEPH I don't see them making much sense. Most
manufacturers' enterprise SATA drives are identical to the SAS versions apart
from the interface. Performance seems identical in every comparison I have
seen, except that SATA can only queue up to 32 IOs (I'm not sure how important
that is), and the SAS drives also command a price premium.
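
On the queueing point, the effective per-device queue depth the kernel is
using is easy to check from sysfs; a minimal sketch (standard Linux sysfs
paths, nothing Ceph-specific):

    #!/usr/bin/env python3
    # Minimal sketch: print the command queue depth the kernel is using for
    # each sd* block device. NCQ-capable SATA drives typically show 31/32,
    # while SAS drives can report much deeper queues.
    from pathlib import Path

    for qd_file in sorted(Path("/sys/block").glob("sd*/device/queue_depth")):
        dev = qd_file.parent.parent.name  # e.g. "sda"
        print(f"{dev}: queue_depth={qd_file.read_text().strip()}")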

In terms of the 72-disk chassis, if I were to use one (which I probably
wouldn't), I would design the cluster to tolerate a high number of failures
before requiring replacement and then do large batch replacements every few
months. This would probably involve setting noout and shutting down each
server in turn to replace the disks, to work around the two-disks-per-tray
design.
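
Roughly, for each batch, something like the following (standard ceph CLI
commands; the actual disk swap is deliberately left as a manual step, and the
whole thing is just a sketch of the procedure, not a polished tool):

    #!/usr/bin/env python3
    # Sketch of a batch-replacement helper: set noout so the cluster doesn't
    # start backfilling while a node is down, swap the disks, then unset it.
    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and fail loudly if it errors."""
        subprocess.run(["ceph", *args], check=True)

    # 1. Stop CRUSH from marking the stopped OSDs out while we work.
    ceph("osd", "set", "noout")

    # 2. Shut the storage node down, swap the failed trays, power it back
    #    up and let its OSDs rejoin (done by hand / via IPMI).
    input("Replace the disks, bring the node back up, then press Enter...")

    # 3. Allow normal out-marking and recovery again.
    ceph("osd", "unset", "noout")

    # 4. Quick sanity check that the cluster is heading back to HEALTH_OK.
    ceph("status")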

Nick

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
Mariusz Gronczewski
Sent: 28 October 2014 09:34
To: Christian Balzer
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  What a maximum theoretical and practical capacity
in ceph cluster?

On Tue, 28 Oct 2014 11:32:34 +0900, Christian Balzer <chibi@xxxxxxx>
wrote:

> On Mon, 27 Oct 2014 19:30:23 +0400 Mike wrote:

> The fact that they make you buy the complete system with IT mode 
> controllers also means that if you would want to do something like 
> RAID6, you'd be forced to do it in software.

If you are using cheap consumer drives, you definitely DO want an IT-mode
controller (and software RAID if needed); drives not designed for RAID perform
very poorly behind a hardware RAID abstraction. We had an LSI SAS 2208 (no
IT-mode flash available) and it just turned disks off: it had problems with
disk timeouts (the disks were shitty Seagate *DM001s, no TLER), so it dropped
whole drives from the RAID. And using MegaCli for everything is not exactly
ergonomic.

But yeah, 72 drives in 4U only makes sense if you use it for bulk storage.


--
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczewski@xxxxxxxxxxxx




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




