Re: odd performance graph


 



On 12/02/2013 05:06 PM, Gruher, Joseph R wrote:
I don't know how RBD works internally, but I think Ceph RBD here returns zeros
without a real OSD disk read if the block/sector of the RBD disk is unused. That
would explain the graph you see. You can try adding a second RBD image,
benchmark that disk without formatting or using it, then make a filesystem on
it, write some data, and benchmark it again...


When performance testing RBDs I generally write to the whole area before doing any testing to avoid this problem.  It would be interesting to have confirmation that this is a real concern with Ceph.  I know it is in other thin-provisioned storage, for example VMware.  Perhaps someone more expert can comment.

Also, is there any way to shortcut the write-in process?  Writing in TBs of RBD image can really extend the length of our performance test cycle.  It would be great if there were some shortcut to make Ceph treat the whole RBD as already written, or to fetch data from disk on all reads regardless of whether that area had been written, just for testing purposes.
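A minimal sketch of the write-in step itself: in practice TARGET would be the mapped RBD device (e.g. /dev/rbd0, an assumption about your setup); a scratch file stands in here so the sketch runs anywhere.

```shell
# Pre-fill a thin-provisioned device so later read benchmarks fetch real
# data from disk instead of synthesized zeros.
# TARGET would normally be the mapped RBD device (e.g. /dev/rbd0, an
# assumption); a scratch file is used here so the sketch runs anywhere.
TARGET=$(mktemp)

# Write non-zero data across the whole area. On a real block device you
# would add oflag=direct to bypass the page cache; conv=fsync flushes
# everything before dd exits. 16 MiB here stands in for the full image size.
dd if=/dev/urandom of="$TARGET" bs=1M count=16 conv=fsync 2>/dev/null

# Confirm the target is now fully written.
wc -c < "$TARGET"
```

Using /dev/urandom rather than /dev/zero matters on some storage stacks that detect and deduplicate or skip all-zero writes; random data forces real allocation.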

For our internal testing, we always write the data out in its entirety before doing reads as well. Not doing so will show inaccurate results, as you've noticed.

Mark

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





