Re: Old vs New pool on same OSDs - Performance Difference

Nick,

Did you pre-initialize the new RBD volume?

If not, do a sequential write to fill up the entire volume first and then do a random read.
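For example, a prefill pass with fio's rbd engine along the lines of the following should do it (the pool, image and client names here are only placeholders; substitute your own):

    fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=test-image \
        --rw=write --bs=4M --iodepth=16 --name=prefill

Once every backing object of the image has been written, repeat the random read test.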

 

Thanks & Regards

Somnath

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Nick Fisk
Sent: Thursday, June 04, 2015 6:32 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Old vs New pool on same OSDs - Performance Difference

 

Hi All,

 

I have two pools, both on the same set of OSDs. The first is the default rbd pool created at installation three months ago; the other has just recently been created to verify the performance problems.

 

As mentioned, both pools are on the same set of OSDs with the same CRUSH ruleset, and the RBDs on both are identical in size, version and order. The only real difference I can think of is that the existing pool has around 5 million objects on it.

 

Testing with RBD-enabled fio, I see the newly created pool get the expected random read performance of around 60 IOPS, while the existing pool only gets around half of this. For random reads, new pool latency = ~15ms, old pool latency = ~35ms.
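For reference, the job is a plain rbd-engine random read along these lines (the pool/image names and queue depth shown are illustrative rather than the exact job file):

    fio --ioengine=rbd --clientname=admin --pool=<pool> --rbdname=<image> \
        --rw=randread --bs=4k --iodepth=1 --runtime=60 --time_based --name=randread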

 

There is no other IO going on in the cluster while these tests are running.

 

XFS fragmentation is low, somewhere around 1-2% on most of the disks. The only other difference I can think of is that the existing pool has data on it, whereas the new one is empty apart from the test RBD; should this make a difference?
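For anyone wanting to reproduce the check, the fragmentation factor per disk can be read with something like:

    xfs_db -r -c frag /dev/sdX

where /dev/sdX is a placeholder for each OSD's data device.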

 

Any ideas?

 

Any hints on what I can check to see why latency is so high for the existing pool?
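For example, would the per-OSD latency counters be the right place to start, i.e. something like:

    ceph osd perf                    # per-OSD commit/apply latency summary
    ceph daemon osd.<id> perf dump   # full counter dump for one OSD, run on its host (<id> is a placeholder)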

 

Nick






_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
