Re: rbd cp copies of sparse files become fully allocated

On 09/09/2013 04:57 AM, Andrey Korolyov wrote:
May I also suggest the same for the export/import mechanism? Say, if an
image was created by fallocate, we may also want to leave the holes in
place on import, and vice versa for export.

Import and export already omit runs of zeroes. They could detect
smaller runs (currently they look at object-size chunks), and export
might be more efficient if it used diff_iterate() instead of
read_iterate(). Have you observed them misbehaving with sparse images?
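
Something like this with the Python librbd bindings shows the idea: copy
only the extents that diff_iterate() reports as allocated, so holes stay
holes. A rough, untested sketch (the pool and image names are just
placeholders):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    src = rbd.Image(ioctx, 'test1')
    size = src.size()

    # The destination starts out fully sparse; we only ever write the
    # allocated extents of the source.
    rbd.RBD().create(ioctx, 'test1-copy', size)
    dst = rbd.Image(ioctx, 'test1-copy')

    def copy_extent(offset, length, exists):
        # A robust version would chunk large extents instead of reading
        # each one with a single call.
        if exists:
            dst.write(src.read(offset, length), offset)

    # With from_snapshot=None, diff_iterate() walks every allocated
    # extent of the image and skips unallocated ranges entirely.
    src.diff_iterate(0, size, None, copy_extent)

    dst.close()
    src.close()
    ioctx.close()
    cluster.shutdown()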

On Mon, Sep 9, 2013 at 8:45 AM, Sage Weil <sage@xxxxxxxxxxx> wrote:
On Sat, 7 Sep 2013, Oliver Daudey wrote:
Hey all,

This topic has been partly discussed here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-March/000799.html

Tested on Ceph version 0.67.2.

If you create a fresh empty image of, say, 100GB in size on RBD and then
use "rbd cp" to make a copy of it, even though the image is sparse, the
command will attempt to read every part of it and take far more time
than expected.

After reading the above thread, I understand why the copy of an
essentially empty sparse image on RBD would take so long, but it doesn't
explain why the copy isn't sparse itself.  If I use "rbd cp" to copy an
image, the copy takes up its full allocated size on disk, even if the
original was empty.  If I use the "convert" option of QEMU's qemu-img
tool to convert the original image to the copy without changing the
format, essentially only making a copy, it takes its time as well, but
it is faster than "rbd cp" and the resulting copy is sparse.

Example commands:
rbd create --size 102400 test1
rbd cp test1 test2
qemu-img convert -p -f rbd -O rbd rbd:rbd/test1 rbd:rbd/test3
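
To check how much of each copy is actually allocated, summing the extents
that "rbd diff" prints seems to work (the awk one-liner is just one way to
total the length column):

rbd diff test2 | awk '{ sum += $2 } END { print sum/1024/1024 " MB allocated" }'
rbd diff test3 | awk '{ sum += $2 } END { print sum/1024/1024 " MB allocated" }'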

Shouldn't "rbd cp" at least have an option to attempt to sparsify the
copy, or copy the sparse parts as sparse?  Same goes for "rbd clone",
BTW.
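
A sparsify option wouldn't even need allocation metadata: the copy loop
could simply skip writing any chunk that reads back as all zeroes, which
is roughly what qemu-img's zero detection does. A hypothetical helper
(untested):

    def write_skipping_zeroes(dst, data, offset):
        # Leave a hole in the destination rather than writing zeroes.
        if data.count('\0') != len(data):
            dst.write(data, offset)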

Yep, this is in fact a bug.  Opened http://tracker.ceph.com/issues/6257.

Thanks!
sage

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



