Re: Again: full ssd ceph cluster

Hello,
On 12/11/2014 11:35 AM, Christian Balzer wrote:
> 
> Hello,
> 
> On Wed, 10 Dec 2014 18:08:23 +0300 Mike wrote:
> 
>> Hello all!
>> One of our customers has asked for SSD-only storage.
>> For now we are looking at a 2027R-AR24NV w/ 3 x HBA controllers (LSI
>> 3008 chip, 8 internal 12Gb/s ports each), 24 x Intel DC S3700 800GB SSD
>> drives, 2 x Mellanox 40Gbit ConnectX-3 (maybe the newer ConnectX-4
>> 100Gbit) and a Xeon E5-2660 v2 with 64GB RAM.
> 
> A bit skimpy on the RAM given the amount of money you're willing to spend
> otherwise.
I think more RAM would help with the rebalance process in the cluster
when a node fails.
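
As a rough back-of-the-envelope check (using the old rule of thumb of
about 1GB of RAM per 1TB of OSD data, plus extra per OSD during
recovery/backfill -- these are assumptions, not measurements):

  24 x 800GB SSDs ~= 19TB raw   -> ~19GB baseline for the OSD daemons
  recovery/backfill peaks       -> ~1-2GB per OSD, i.e. 24-48GB in total
  64GB per node                 -> not much headroom left for page cache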

> And while you're giving it 20 2.2GHz cores, that's not going to cut it,
> not by a long shot. 
> I did some brief tests with a machine having 8 DC S3700 100GB for OSDs
> (replica 1) under 0.80.6 and the right (make that wrong) type of load
> (small, 4k I/Os) did melt all of the 8 3.5GHz cores in that box.
We can choose something more powerful from the E5-266x v3 family.
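
A quick estimate, assuming Christian's observation that small-IO SSD
OSDs want roughly 2-3GHz of CPU each (SKU specs quoted from memory, so
please double-check them):

  24 OSDs x ~2-3GHz small-IO load  ~= 48-72GHz needed per node
  2 x E5-2660 v2 (10C @ 2.2GHz)    ~= 44GHz available
  2 x E5-2660 v3 (10C @ 2.6GHz)    ~= 52GHz
  2 x E5-2667 v3 (8C @ 3.2GHz)     ~= 51GHz, fewer but faster cores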

> The suggested 1GHz per OSD by the Ceph team is for pure HDD based OSDs, the
> moment you add journals on SSDs it already becomes barely enough with 3GHz
> cores when dealing with many small I/Os.
> 
>> Replica count is 2.
>> Or something like that, but in 1U w/ 8 SSDs.
>>
> The potential CPU power to OSD ratio will be much better with this.
> 
Yes, that looks more reasonable.

>> We see a small bottleneck on the network cards, but the biggest question
>> is: can Ceph (Giant release), with sharded IO and the other new features,
>> unlock this potential?
>>
> You shouldn't worry too much about network bandwidth unless you're going
> to use this super expensive setup for streaming backups. ^o^ 
> I'm certain you'll run out of IOPS long before you'll run out of network
> bandwidth.
> 
I am also thinking about a possible bottleneck in the kernel IO subsystem.
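
If I understand the Giant sharded op queue correctly, the relevant knobs
are something like the following (option names and defaults as I recall
them from the Giant release notes -- a sketch to verify, not tuning
advice):

  [osd]
      # number of shards the OSD op queue is split into
      osd_op_num_shards = 5
      # worker threads servicing each shard
      osd_op_num_threads_per_shard = 2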

> From what I recall of the last SSD cluster discussion, most of the
> Giant benefits were for read operations and the write improvement was
> about double that of Firefly. While nice, given my limited tests that is
> still a far cry from what those SSDs can do, see above.
> 
I have also read all these threads about Giant read performance. So on
writes it is still only about a twofold improvement over Firefly?
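
When the hardware arrives we could measure the small-write behaviour
ourselves with something along these lines (a minimal sketch, assuming a
throwaway pool named "bench"; block size and concurrency are arbitrary):

  # 60s of 4k writes with 32 concurrent ops, keep objects for read tests
  rados bench -p bench 60 write -b 4096 -t 32 --no-cleanup
  # then sequential reads against the same objects
  rados bench -p bench 60 seq -t 32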

>> Any ideas?
>>
> Somebody who actually has upgraded an SSD cluster from Firefly to Giant
> would be in the correct position to answer that.
> 
> Christian
> 

Thank you for the useful opinion, Christian!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



