Sequential write performance on cloned images

Hi Experts,

I was testing the performance of rbd images. I find that the
sequential write throughput on a cloned image is only half of that on
the original one. I have enabled the object-map feature on both images,
and the cloned image was fully overwritten before the performance test.
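For reference, the feature state can be confirmed with `rbd info` (the pool name "rbd" and image names are placeholders):

```shell
# Confirm the object-map feature is listed for both images
# (pool and image names here are placeholders).
rbd info rbd/ori-vol2 | grep features
rbd info rbd/clone-vol1 | grep features

# If the object map was ever invalidated, "rbd info" also prints
# "flags: object map invalid"; it can be rebuilt with:
#   rbd object-map rebuild rbd/clone-vol1
```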

Here are my steps:
1. create two images called "ori-vol1" and "ori-vol2"
2. run a "dd" on both to fill the images with data
3. create a snap on "ori-vol1", called "snap-vol1"
4. clone an image from "snap-vol1", called "clone-vol1"
5. run a "dd" on "clone-vol1" so that every object is fully "copied up"
6. finally, test performance on "ori-vol2" and "clone-vol1" --
sequential write, 64K iosize, fio
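The steps above can be sketched roughly as follows (image size, pool name, device paths, and the exact dd/fio parameters are assumptions, not taken from the original test):

```shell
POOL=rbd
SIZE=10G

# 1. create two base images with object-map enabled
rbd create --size $SIZE --image-feature layering,exclusive-lock,object-map $POOL/ori-vol1
rbd create --size $SIZE --image-feature layering,exclusive-lock,object-map $POOL/ori-vol2

# 2. fill both images with data (device paths below are illustrative)
rbd map $POOL/ori-vol1    # e.g. /dev/rbd0
rbd map $POOL/ori-vol2    # e.g. /dev/rbd1
dd if=/dev/zero of=/dev/rbd0 bs=4M oflag=direct
dd if=/dev/zero of=/dev/rbd1 bs=4M oflag=direct

# 3+4. snapshot ori-vol1, protect it, and clone it
rbd snap create $POOL/ori-vol1@snap-vol1
rbd snap protect $POOL/ori-vol1@snap-vol1
rbd clone $POOL/ori-vol1@snap-vol1 $POOL/clone-vol1

# 5. overwrite the whole clone so every object is copied up
rbd map $POOL/clone-vol1  # e.g. /dev/rbd2
dd if=/dev/zero of=/dev/rbd2 bs=4M oflag=direct

# 6. sequential-write test with fio's rbd engine, 64K iosize
fio --name=seqwrite --ioengine=rbd --pool=$POOL --rbdname=ori-vol2 \
    --rw=write --bs=64k --iodepth=32 --direct=1
fio --name=seqwrite --ioengine=rbd --pool=$POOL --rbdname=clone-vol1 \
    --rw=write --bs=64k --iodepth=32 --direct=1
```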

I have tested on both hammer and jewel; the results are similar. Is this expected?

I know there may be a performance drop on a cloned image, since an
issued object may still reside in the parent image and require an extra
"copy-up" operation. However, in my case all the objects already exist
in the clone's own image, and librbd should know this from the object map.
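One possible diagnostic (my suggestion, not part of the original test) is to flatten the clone and re-run the same fio job; `rbd flatten` copies any remaining parent data into the clone and severs the parent link entirely, so if throughput recovers afterwards, the overhead comes from the parent relationship itself rather than from actual copy-up I/O (pool/image names are placeholders):

```shell
# Flatten the clone so it no longer has a parent, then re-test.
rbd flatten rbd/clone-vol1
rbd info rbd/clone-vol1   # the "parent:" line should be gone
fio --name=seqwrite --ioengine=rbd --pool=rbd --rbdname=clone-vol1 \
    --rw=write --bs=64k --iodepth=32 --direct=1
```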

Regards
Ridge
--