On 12/11/2016 06:08 AM, Tom Horsley wrote:
Last night I was "optimizing" a qcow2 virtual image
(after zero filling the free space), running (as root):
qemu-img convert -f qcow2 -O qcow2 old.img new.img
The copying and scanning for zero blocks pretty
much took over the disk. Anything else I tried to run
which needed access to something on that disk would
hang for minutes at a time before getting through.
Is that normal behavior? I thought folks were supposed
to take turns and play nice together :-).
So "qcow2 old.img" is on the file system you want other apps to access,
and so is "new.img" ? this sounds like "as if" qemu-img wants to make sure
that the two images, old and new, will not be modified during the conversion
process. Since you ARE running as root, you have the privs to run the
conversion
at a very high priority that, for most of the time, it is picked first
off the run queue
and placed on cpu, and "might" even be inhibiting access to the
filesystem hosting
the 2 images during the conversion.
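If priority really is the issue, one thing worth trying (just the usual approach,
I have not verified it against qemu-img specifically) is to re-run the conversion
at a lower CPU and I/O priority:

  ionice -c3 nice -n 19 qemu-img convert -f qcow2 -O qcow2 old.img new.img

ionice -c3 puts the process in the "idle" I/O class, so it only gets the disk when
nothing else is asking for it (this only has an effect with an I/O scheduler such as
CFQ/BFQ that honours it), and nice -n 19 does the same for CPU time.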
Now this is all conjecture on my part, as I have never used it. Other issues might be:
1. The drive is a very slow drive, and/or
2. You do not have sufficient RAM for buffering large amounts of data, and/or
3. You do not have multiple cores for other threads to run on.
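A quick way to see which of these (if any) is biting you is to watch the machine
while the conversion runs, for example:

  vmstat 1      # free/buff/cache and the si/so columns show memory pressure and swapping
  iostat -x 1   # %util and await show whether the drive itself is saturated

(iostat is in the sysstat package on Fedora.)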
One of the ways to really debug this is to scan the source code of qemu-img to see
what priority it tries to give itself and what read/write/ioctl operations it performs,
then look up the man pages for those operations to see whether they are done in a way
that makes the process spin idly in the kernel while waiting for the operation to
finish, and also to check whether the kernel holds a filesystem-wide lock for the
entire conversion.
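Short of reading the source, you can get much of the same information from a running
conversion. For example (12345 below is a made-up PID for illustration; substitute the
real one from pgrep qemu-img):

  ps -o pid,cls,ni,pri,cmd -p 12345             # scheduling class, nice value, priority
  ionice -p 12345                               # I/O scheduling class and priority
  strace -f -e trace=read,write,ioctl -p 12345  # the read/write/ioctl calls it is making

If ps shows an ordinary TS (SCHED_OTHER) class at nice 0, the stalls are more likely
plain I/O saturation than scheduling priority.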
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx