Performance Testing Setup Tricks?

Hi all-

 

I’m creating some scripted performance testing for my Ceph cluster.  The part relevant to my questions works like this:

1. Create some pools

2. Create and map some RBDs

3. Prefill the RBDs using dd or fio

4. Run fio testing on the RBDs (small-block random and large-block sequential, with varying queue depths and worker counts)

5. Delete the pools and make some new pools

6. Populate the pools with objects using Cosbench

7. Run Cosbench to measure object read and write performance

8. (Repeat for various object sizes)

9. Delete the pools
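For context, steps 1-4 look roughly like this in my script (a minimal sketch: the pool/image names, PG counts, and sizes are placeholder examples, not recommendations):

```shell
# Step 1: create a pool (PG count is an example; size it for your cluster)
ceph osd pool create perftest 128 128

# Step 2: create and map an RBD (100 GiB here as an example)
rbd create perftest/test01 --size 102400
DEV=$(rbd map perftest/test01)

# Step 3: prefill the image so later reads hit real data,
# not unallocated extents -- this is the slow part
fio --name=prefill --filename="$DEV" --rw=write --bs=4M \
    --direct=1 --ioengine=libaio --iodepth=16

# Step 4: one example test point -- small-block random read
fio --name=randread --filename="$DEV" --rw=randread --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
    --runtime=300 --time_based --group_reporting
```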

 

The whole thing works pretty well as far as generating results.  The part I'm hoping to improve is steps 3 and 6, where I'm prefilling the RBDs and populating objects into the pools, respectively.  For any significant amount of data relative to the size of the cluster (16TB now, but it will probably grow), this takes hours and hours.  I'm wondering if there is any way to shortcut these preparation steps.  For example, for a new RBD, is there any way to tell Ceph to treat it as already written-in or thickly provisioned, and just serve up whatever junk data is in there when I read from it?  Since an RBD sits on objects rather than blocks I'm guessing not, but it doesn't hurt to ask.  Similarly, are there any tricks I might investigate for populating junk objects into a pool, which I can then read and write, other than actually writing all the objects in with a tool like Cosbench?
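One idea I've been looking at for the object side is rados bench with --no-cleanup, which leaves its benchmark objects in the pool and writes with many threads in parallel.  This is an untested assumption on my part -- I don't know whether objects written by rados bench are acceptable stand-ins for Cosbench-written ones for read testing -- but the shape would be something like:

```shell
# Populate a pool with junk objects: write for 300 seconds with
# 64 threads, 4 MiB objects, and leave the objects behind
rados bench -p objtest 300 write -b 4194304 -t 64 --no-cleanup

# The leftover objects can then be read back sequentially
rados bench -p objtest 60 seq -t 64
```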

 

There may not be a better approach, but any thoughts are appreciated.  Thanks!

 

-Joe

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
