Re: Having trouble getting good performance

Hi David,

I suspect you are hitting problems with sync writes, which are not one of
Ceph's strong points.

I'm not a big expert on ZFS, but I do know that an SSD ZIL is normally
recommended to allow fast sync writes. Without one, every sync write waits
for Ceph to acknowledge the write across both OSD copies. The fact that you
are getting 50-75 IOPS suggests you are doing single-threaded sync writes to
7.2k drives, with ZFS waiting on each write before issuing the next.
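
As a rough sanity check, queue-depth-1 latency on a 7.2k drive plus a
replication hop lands right in that range. The seek and network figures here
are illustrative assumptions, not measurements from your cluster:

```python
# Rough, illustrative arithmetic only -- the seek and network figures
# below are assumptions, not measurements from David's cluster.

rpm = 7200
rotational_ms = (60_000 / rpm) / 2   # average half-rotation: ~4.17 ms
seek_ms = 8.0                        # typical 7.2k drive average seek (assumed)
replication_ms = 0.5                 # hop to the second OSD copy (assumed)

# With queue depth 1 and sync writes, each op waits for the full path
# before the next is issued, so IOPS is the reciprocal of the latency.
per_op_ms = rotational_ms + seek_ms + replication_ms
print(f"~{1000 / per_op_ms:.0f} IOPS at queue depth 1")
```

Take an SSD journal's sub-millisecond commit instead of the disk service
time and the same arithmetic gives a few hundred IOPS, which is roughly
where the figure below comes from.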

SSD Ceph journals may get you to around 200-300 IOPS.
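
If you want to confirm the pattern before changing anything, a
queue-depth-1 sync-write fio run inside the VM should reproduce those
numbers. The file path, size, and runtime below are placeholders:

```shell
# Single-threaded 4k sync random writes, mimicking the suspected ZFS
# pattern. Point --filename at a path on the Ceph-backed disk in the VM.
fio --name=syncwrite --filename=/mnt/test/fio.dat --size=1G \
    --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
    --sync=1 --runtime=60 --time_based --group_reporting
```

If this also lands at 50-75 IOPS, the bottleneck is the sync-write path
rather than anything ZFS is doing wrong.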

The other option is to speed up the sync writes on the ZFS side. Can you
provision a ZIL SSD, or does this VM need to be able to move around?
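
For reference, adding a dedicated log device is a one-liner on the ZFS
side. Pool name and device paths here are placeholders, and mirroring the
SLOG is advisable since in-flight sync writes live on it:

```shell
# Attach a mirrored SSD SLOG to an existing pool named "tank" (placeholder).
zpool add tank log mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2

# Confirm the log vdev appears in the pool layout.
zpool status tank
```

That only helps if the SSD stays with the VM, hence the question about
mobility.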

Nick

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Somnath Roy
> Sent: 22 April 2015 21:08
> To: J David
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Having trouble getting good performance
> 
> So, it seems you are not limited by anything.
> 
> I am suggesting a synthetic workload like fio, run on top of the VM, to
> identify where the bottleneck is. For example, if fio gives decent enough
> output, I would guess the Ceph layer is doing fine and it is your client
> that is not driving enough I/O.
> 
> Thanks & Regards
> Somnath
> 
> -----Original Message-----
> From: jdavidlists@xxxxxxxxx [mailto:jdavidlists@xxxxxxxxx] On Behalf Of J
> David
> Sent: Wednesday, April 22, 2015 12:14 PM
> To: Somnath Roy
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Having trouble getting good performance
> 
> On Wed, Apr 22, 2015 at 2:54 PM, Somnath Roy
> <Somnath.Roy@xxxxxxxxxxx> wrote:
> > What ceph version are you using ?
> 
> Firefly, 0.80.9.
> 
> > Could you try with rbd_cache=false or true and see if behavior changes ?
> 
> As this is ZFS, running a cache layer below it that it is not aware of
> violates data integrity and can cause corruption, so that type of caching
> is not an option.
> 
> > What is the client side cpu util ?
> 
> Inside the VM, it appears fairly low: 0-2% user, 10-25% of one core in
> system, and the rest in I/O wait.  On the KVM host it shows as 80-90% of
> one CPU core.
> 
> > Performance also depends on the QD you are driving with.
> 
> How is this set for a real-world virtual machine client (i.e. not a
> synthetic workload like fio)?  In practice, we have very little control
> over what ZFS does in terms of I/O.
> 
> Thanks!
> 
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



