Re: Again: full ssd ceph cluster

Hi

Is it possible to share performance results with this kind of config? How many IOPS? What bandwidth? What latency?
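
For comparison, even a simple small-block fio run against one of the SSDs or
an RBD image would be a useful data point; a minimal sketch (the device path
and job parameters are only placeholders, and a raw-device run is destructive):

    fio --name=4k-randwrite --filename=/dev/sdX --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --runtime=60 \
        --time_based --group_reporting

A single run like that reports IOPS, bandwidth and latency together.
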
Thanks

Sent from my iPhone

> On 11 Dec 2014, at 09:35, Christian Balzer <chibi@xxxxxxx> wrote:
> 
> 
> Hello,
> 
>> On Wed, 10 Dec 2014 18:08:23 +0300 Mike wrote:
>> 
>> Hello all!
>> One of our customers has asked for SSD-only storage.
>> Right now we are looking at a 2027R-AR24NV with 3 x HBA controllers
>> (LSI3008 chip, 8 internal 12Gb/s ports each), 24 x Intel DC S3700 800GB
>> SSDs, 2 x Mellanox 40Gbit ConnectX-3 (or maybe the newer 100Gbit
>> ConnectX-4) and Xeon E5-2660 v2 with 64GB RAM.
> 
> A bit skimpy on the RAM given the amount of money you're willing to spend
> otherwise.
> And while you're giving it 20 2.2GHz cores, that's not going to cut it,
> not by a long shot.
> I did some brief tests with a machine that has 8 DC S3700 100GB SSDs as
> OSDs (replica 1) under 0.80.6, and the right (make that wrong) type of
> load (small, 4k I/Os) melted all 8 of the 3.5GHz cores in that box.
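
A load of that sort can be reproduced with something like rados bench using
a 4k write size; a rough sketch, with the pool name and concurrency purely
as placeholders:

    rados bench -p <pool> 60 write -b 4096 -t 64 --no-cleanup
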
> 
> The suggested 1GHz per OSD from the Ceph team is for pure HDD-based OSDs;
> the moment you add journals on SSDs it already becomes barely enough even
> with 3GHz cores when dealing with many small I/Os.
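
To put rough numbers on that (the 1GHz/OSD figure is the HDD guideline above;
the 2-3GHz per OSD for small-I/O SSD work is only illustrative):

    2 x E5-2660 v2 = 20 cores x 2.2GHz     ~ 44GHz of CPU per node
    24 OSDs x 1GHz (HDD guideline)         = 24GHz    -> fits
    24 OSDs x 2-3GHz (small-I/O SSD OSDs)  = 48-72GHz -> does not fit
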
> 
>> Replica is 2.
>> Or something like that, but in 1U with 8 SSDs.
> The potential CPU power to OSD ratio will be much better with this.
> 
>> We see a slight bottleneck in the network cards, but the bigger question
>> is: can Ceph (the Giant release), with I/O sharding and the other new
>> features, unlock this potential?
> You shouldn't worry too much about network bandwidth unless you're going
> to use this super expensive setup for streaming backups. ^o^ 
> I'm certain you'll run out of IOPS long before you'll run out of network
> bandwidth.
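
A quick back-of-the-envelope supports that, assuming a worst case of pure
4k I/O:

    40Gbit/s               ~ 5GB/s of line rate
    5GB/s / 4KB per I/O    ~ 1.2 million IOPS to saturate a single link
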
> 
> From what I recall of the last SSD cluster discussion, most of the Giant
> benefits were for read operations, while writes improved to about double
> what Firefly delivered. While nice, judging by my limited tests that is
> still a far cry from what those SSDs can do; see above.
> 
>> Any ideas?
> Somebody who has actually upgraded an SSD cluster from Firefly to Giant
> would be in the best position to answer that.
> 
> Christian
> -- 
> Christian Balzer        Network/Systems Engineer                
> chibi@xxxxxxx       Global OnLine Japan/Fusion Communications
> http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




