Re: Having trouble getting good performance

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> J David
> Sent: 24 April 2015 15:40
> To: Nick Fisk
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Having trouble getting good performance
> 
> On Fri, Apr 24, 2015 at 6:39 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> > From the Fio runs, I see you are getting around 200 iops at 128kb
> > write io size. I would imagine you should be getting somewhere around
> > 200-300 iops for the cluster you posted in the initial post, so it
> > looks like its performing about right.
> 
> Why is that?  Why would it not be:
> 
> 28 drives * 100 IOPs/drive / 2 copies * %efficiency
> 
> How did you arrive at this estimate?  Or is the above more or less correct
> and %efficiency really is limited to the 15-20% range?
> 

7.2k drives tend to do about 80 iops at a 4kb IO size; as the IO size
increases, the number of iops starts to fall, so you will probably get
around 70 iops at 128kb. Please benchmark your raw disks if you need
accurate numbers, for example with fio run directly against the device.
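
Something like the following gives a rough baseline (device name is just a
placeholder, adjust to suit; it writes to the raw disk, so only run it
against a drive that is not in use):

fio --name=raw-randwrite --filename=/dev/sdX --direct=1 --rw=randwrite --bs=128k --ioengine=libaio --iodepth=1 --runtime=60 --time_based --group_reporting

Run it again with --bs=4k to see the difference the IO size makes.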

Next, when you use on-disk journals every write goes first to the journal
and is then written again as the actual data. There is also a small levelDB
write which stores Ceph metadata, so depending on IO size each client write
costs roughly 2.5 disk writes and you get slightly less than half the
native disk performance.

You then have 2 copies, and as Ceph won't ACK until both copies have been
written, the average latency will tend to drift upwards.

There is also latency in Ceph itself and in the network, which adds a few ms.

So the calculation will look something like:

(28 disks * 70 iops) / 2.5 disk writes per client write / 2 copies ≈ 392

which works out to about 400 iops of possible raw disk performance. Throw
in the networking and Ceph overheads, and the fact that IO won't be
completely evenly distributed over all the disks all the time, and this
gets you nearer to the 200-300 figure I first quoted.
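
If you want to plug your own numbers in, a quick back-of-the-envelope
sketch in Python (the figures are the assumptions from above, not measured
values) would be:

# Rough estimate of cluster write iops - all inputs are assumptions
disks = 28              # number of OSD disks
iops_per_disk = 70      # 128kb random write iops per 7.2k drive
write_amp = 2.5         # journal + levelDB disk writes per client write
replicas = 2            # copies written before the ACK

estimate = disks * iops_per_disk / write_amp / replicas
print(round(estimate))  # ~392, before network/Ceph overhead and uneven IO distribution

Swap in the iops figure you measure with fio to get a number for your own
hardware.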

> In any case, this is all very interesting and informative, and I really
> appreciate the feedback.
> 
> Thanks!




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



