SSD IO performance

What queue depth are you testing at?

 

You will struggle to get much more than about 500 IOPS for a single-threaded write, no matter what the backing disk is.
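
For what it's worth, a quick way to see the effect is to run fio against the mapped RBD device at two different queue depths (the device path and job names below are just examples; adjust to your setup):

  fio --name=qd1 --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=1 --runtime=60 --time_based

  fio --name=qd32 --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

With iodepth=1 every write has to wait out the full network plus journal round trip, which is roughly where the ~500 IOPS ceiling comes from; with iodepth=32 the OSDs can service many requests in parallel and the numbers should look much better.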

 

Nick

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of lixuehui555 at 126.com
Sent: 27 May 2015 00:55
To: Vasiliy Angapov; Karsten Heymann
Cc: ceph-users
Subject: Re: SSD IO performance

 

Hi,
Sorry, everyone: the network is 1000Mbit/s; I stated it incorrectly before. It is not 100Mbit/s.

 

  _____  

lixuehui555 at 126.com

 

From: Vasiliy Angapov <angapov@xxxxxxxxx>

Date: 2015-05-26 22:36

To: Karsten Heymann <karsten.heymann at gmail.com>; lixuehui555 at 126.com

CC: ceph-users <ceph-users at lists.ceph.com>

Subject: Re: SSD IO performance

Hi,

 

I guess the author here means that for random loads, even a 100Mb network should be able to carry 2500-3000 IOPS with 4k blocks.

So the complaint is reasonable, I suppose.
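
Back-of-the-envelope check of that number: 100 Mbit/s is about 12.5 MB/s of raw bandwidth, and

  12.5 MB/s / 4 KB per request ~= 3000 requests/s

which is where the 2500-3000 figure comes from; real throughput will be somewhat lower once protocol and replication overhead are taken into account.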

 

Regards, Vasily.  

 

On Tue, May 26, 2015 at 5:27 PM, Karsten Heymann <karsten.heymann at gmail.com> wrote:

Hi,

you should definitely increase the speed of the network. 100Mbit/s is
way too slow for any use case I can think of, as it results in a
maximum data transfer rate of less than 10 MByte per second, which is
slower than a USB 2.0 thumb drive.
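
If there is any doubt about what the links actually deliver, it may be worth measuring them directly, for example with iperf between two of the nodes (the hostname below is a placeholder):

  on one node:      iperf -s
  on another node:  iperf -c <osd-node-hostname>

and checking the negotiated link speed with "ethtool eth0" (interface name may differ on your hosts).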

Best,
Karsten


2015-05-26 15:53 GMT+02:00 lixuehui555 at 126.com <lixuehui555 at 126.com>:
>
> Hi all:
>     I've built a Ceph 0.8 cluster of 2 nodes, each containing 5
> OSDs (SSD), on a 100Mbit/s network. Testing an RBD device with the default
> configuration, the result is not ideal. To get better performance, apart from
> the random r/w capability of the SSDs, what should we change?
>
>     2 nodes, 5 OSDs (SSD) x 2, 1 mon, 32GB RAM
>     100Mbit/s network
> and right now the overall IOPS is just 500. Should we change the filestore or
> journal settings? Thanks for any help!
>
> ________________________________
> lixuehui555 at 126.com
>

_______________________________________________
ceph-users mailing list
ceph-users at lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

 





