Re: Having trouble getting good performance

So it seems you are not limited by anything obvious.

I suggest running a synthetic workload like fio on top of the VM to identify where the bottleneck is. If fio gives decent enough output, the Ceph layer is probably doing fine; it is your client that is not driving enough I/O.
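Something along these lines would be a reasonable starting point (a rough sketch; the file name is a placeholder, block size and iodepth should be tuned to resemble your workload, and --direct=1 assumes the target filesystem supports O_DIRECT):

    # 4k random writes, queue depth 32, direct I/O, run for 60 seconds
    fio --name=randwrite --filename=/mnt/test/fio.dat --size=4g \
        --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
        --iodepth=32 --runtime=60 --time_based

If fio can push well past what ZFS is achieving, the bottleneck is above the rbd layer.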

Thanks & Regards
Somnath

-----Original Message-----
From: jdavidlists@xxxxxxxxx [mailto:jdavidlists@xxxxxxxxx] On Behalf Of J David
Sent: Wednesday, April 22, 2015 12:14 PM
To: Somnath Roy
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Having trouble getting good performance

On Wed, Apr 22, 2015 at 2:54 PM, Somnath Roy <Somnath.Roy@xxxxxxxxxxx> wrote:
> What Ceph version are you using?

Firefly, 0.80.9.

> Could you try with rbd_cache=false or true and see if the behavior changes?

As this is ZFS, running a cache layer below it that it is not aware of violates its data-integrity guarantees and can cause corruption, so that type of caching is not an option.
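(For reference, the cache in question is the librbd client-side cache; it is toggled with rbd_cache, either in ceph.conf or per drive on the QEMU command line. A sketch, with a made-up pool/image name:)

    # in ceph.conf on the client
    [client]
    rbd cache = false

    # or per drive on the QEMU command line ("rbd/vm-disk" is hypothetical)
    -drive format=raw,file=rbd:rbd/vm-disk:rbd_cache=false,cache=none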

> What is the client-side CPU utilization?

Inside the VM, it appears fairly low: 0-2% user, 10-25% of one core in system, and the rest in I/O wait.  On the KVM host it shows as 80-90% of one CPU core.

> Performance also depends on the queue depth (QD) you are driving with.

How is the queue depth set for a real-world virtual machine client (i.e. not a synthetic workload like fio)?  In practice, we have very little control over what ZFS does in terms of I/O.
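(The closest thing we have to a knob is observing the queue depth the guest actually generates, e.g. from inside the VM; the avgqu-sz column approximates the average queue depth ZFS is driving:)

    # extended device stats every 5 seconds; watch avgqu-sz and await
    iostat -x 5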

Thanks!

