Re: 1256 OSD/21 server ceph cluster performance issues.

Hello,

On Thu, 18 Dec 2014 23:45:57 -0600 Sean Sullivan wrote:

> Wow Christian,
> 
> Sorry I missed these in-line replies. Give me a minute to gather some
> data. Thanks a million for the in-depth responses!
> 
No worries.

> I thought about RAIDing it, but unfortunately I needed the space. I had a
> 3x60-OSD-node test cluster that we tried before this, and it didn't have
> this flapping issue or the RGW issue I am seeing.
>
I think I remember that...

You do realize that the RAID6 configuration option I mentioned would
actually give you MORE space than what you have now (a replication factor
of 2 is sufficient with reliable OSDs)?
Albeit probably at reduced performance; how much depends on the
controllers used, but at worst a RAID6 OSD's performance would be
equivalent to that of a single disk.
So, performance-wise, a cluster of 21 nodes with 8 disks each.
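
For illustration, a minimal sketch of what that looks like at the pool
level (the pool name "rbd" is just a placeholder, adjust to taste):

    # With RAID6-backed OSDs each OSD is already redundant internally,
    # so dropping the pool from 3 replicas to 2 recovers capacity.
    ceph osd pool set rbd size 2
    # Verify the change took effect
    ceph osd pool get rbd size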
 
> I can quickly answer the case/make questions; the model will need to
> wait till I get home :)
> 
> Case is a 72-disk Supermicro chassis, I'll grab the exact model in my
> next reply.
>
No need; now that strange monitor configuration makes sense. You (or
whoever spec'ed this) went for the Supermicro Ceph solution, right?

In my not so humble opinion, this is the worst storage chassis ever
designed by a long shot, and totally unsuitable for Ceph.
I told the Supermicro GM for Japan as much. ^o^

Every time an HDD dies, you will have to go and shut down the other OSD
that resides on the same tray (and set the cluster to noout first).
Even worse, of course, if an SSD should fail.
And if somebody should just go and hot-swap things w/o that step first,
hello data movement storm (2 or 10 OSDs affected instead of 1 or 5,
respectively).
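
As a rough sketch of that maintenance dance (osd.42 standing in for
whatever healthy OSD shares the tray, and the stop/start syntax depending
on your init system):

    # Keep the cluster from rebalancing while OSDs are down
    ceph osd set noout
    # Stop the healthy OSD that sits in the same tray as the dead disk
    systemctl stop ceph-osd@42     # or: stop ceph-osd id=42 (upstart)
    # ... pull the tray, swap the failed drive, reinsert ...
    systemctl start ceph-osd@42
    # Once things have settled, allow rebalancing again
    ceph osd unset noout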

Christian
 
> Drives are HGST 4TB drives, I'll grab the model once I get home as well.
> 
> The 300 was completely incorrect and it can push more; it was just meant
> for a quick comparison, but I agree it should be higher.
> 
> Thank you so much. Please hold up and I'll grab the extra info ^~^
> 
> 
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



