Re: features of the next stable release

Hi,

From my tests with Giant, the CPU was what limited performance on the OSDs.


I'm going to run some benchmarks next month with 2x 10-core 3.1GHz CPUs for 6 SSDs.

I'll post results on the mailing list.
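
For reference, here is roughly the kind of test I have in mind. This is only a sketch: it assumes fio built with the rbd ioengine, a throwaway image called "bench-img" in a pool called "rbd", and client.admin access (those names are placeholders, not anything from this thread).

#!/usr/bin/env python
# Sketch of a benchmark driver: sweep queue depth against one RBD image
# and report 4k random-write IOPS. Pool/image names are placeholders.
import json
import subprocess

def run_fio(iodepth, runtime=60):
    cmd = [
        "fio",
        "--name=rbd-4k-randwrite",
        "--ioengine=rbd",
        "--clientname=admin",
        "--pool=rbd",
        "--rbdname=bench-img",
        "--rw=randwrite",
        "--bs=4k",
        "--iodepth=%d" % iodepth,
        "--runtime=%d" % runtime,
        "--time_based",
        "--output-format=json",
    ]
    out = subprocess.check_output(cmd)
    return json.loads(out)["jobs"][0]["write"]["iops"]

if __name__ == "__main__":
    # Where the curve flattens is where the cluster (or the CPU) tops out.
    for qd in (1, 4, 16, 32, 64):
        print("iodepth=%-3d  ~%.0f IOPS" % (qd, run_fio(qd)))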



----- Original Message -----
From: "mad Engineer" <themadengin33r@xxxxxxxxx>
To: "Gregory Farnum" <greg@xxxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Tuesday, 3 February 2015 09:05:20
Subject: Re: features of the next stable release

I am also planning to create an SSD-only cluster using multiple OSDs on 
a few hosts. What's the best way to get maximum performance out of 
SSD disks? 
I don't have the cluster running yet, but seeing this thread makes me worry 
that RBD will not be able to extract the full capability of SSD disks. I am 
a beginner in Ceph and still learning. 

On Tue, Feb 3, 2015 at 1:00 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote: 
> On Mon, Feb 2, 2015 at 11:28 AM, Andrei Mikhailovsky <andrei@xxxxxxxxxx> wrote: 
>> 
>> 
>> 
>> I'm not sure what you mean about improvements for SSD disks, but the 
>> OSD should be generally a bit faster. There are several cache tier 
>> improvements included that should improve performance on most 
>> workloads that others can speak about in more detail than I. 
>> 
>> 
>> What I mean by the SSD disk improvement is that currently a cluster made 
>> of all SSD disks is pretty slow. You will not get the IO throughput of the 
>> SSDs. My tests show the limit seems to be around 3k IOPS even though the 
>> SSDs can easily do 50k+ IOPS. This makes it impossible to run a decent 
>> database workload on the ceph cluster. 
> 
> 
> Yeah, Hammer is notably faster than Firefly (and I think better than 
> Giant), but it's incremental work. You're not suddenly going to get 
> 50k IOPS in a single-threaded RBD workload against one OSD. 
> -Greg 
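
To put rough numbers on that last point: a single-threaded client has only one I/O in flight, so its IOPS are capped by per-op latency no matter how fast the SSDs are. The latencies below are made-up round figures, not measurements from anyone's cluster.

def single_thread_iops(latency_ms):
    # One outstanding I/O at a time: each write has to complete (journal
    # plus replication round trips) before the next one can start.
    return 1000.0 / latency_ms

for lat_ms in (0.2, 0.5, 1.0):
    print("%.1f ms per op -> ~%d IOPS from one thread"
          % (lat_ms, single_thread_iops(lat_ms)))

# ~0.5 ms per write already caps a single thread near 2000 IOPS, so getting
# anywhere near the 50k+ IOPS the raw SSDs can do takes more parallelism
# (higher iodepth and/or more clients), not a faster disk.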

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




