Stefan Hajnoczi wrote:
2010/9/16 Andre Przywara <andre.przywara@xxxxxxx>:
TOURNIER Frédéric wrote:
Ok, thanks for taking the time.
I'll dig into your answers.
So as I run relative.img on diskless systems with original.img on NFS,
what are the best practices/tips I can use?
I think it is "-snapshot" you are looking for.
This will put the backing store into "normal" RAM, and you can later commit
it to the original image if needed. See the qemu manpage for more details.
In a nutshell you just specify the original image and add -snapshot to the
command line.
-snapshot creates a temporary qcow2 image in /tmp whose backing file
is your original image. I'm not sure what you mean by "This will put
the backing store into "normal" RAM"?
Stefan, you are right. I never looked into the code, and because the file
in /tmp is deleted right after creation, there was no sign of it.
For some reason I thought the buffer would just be allocated in
memory. Sorry, my mistake, and thanks for pointing this out.
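For anyone following along, what -snapshot does is roughly equivalent to the
manual overlay setup below. The paths and the qemu binary name are only
illustrative (your distro may call it qemu-kvm or kvm), and newer qemu-img
versions may also want -F to name the backing format:

  # throwaway qcow2 overlay whose backing file is the original image
  qemu-img create -f qcow2 -b /nfs/original.img /tmp/overlay.qcow2

  # boot from the overlay; writes land in /tmp, the NFS image stays untouched
  qemu-system-x86_64 -hda /tmp/overlay.qcow2

With -snapshot, QEMU creates (and immediately unlinks) such a file for you,
and the monitor "commit" command can later merge the changes back into the
original image if you want to keep them.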
So Fred, unfortunately this does not solve your problem. I guess you are
running into a general problem: if the guest actually changes so much of the
disk that the changes cannot be held on the host side (in /tmp, which is
often a RAM-backed tmpfs), you are out of luck.
One solution could be to just make (at least parts of) the disk
read-only (a write-protected /usr partition works quite well).
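Assuming /usr lives on its own image file (just a hypothetical layout), the
read-only part could also be enforced from the host side, roughly like this
(untested, and the exact -drive options depend on your QEMU version):

  # attach the shared /usr image read-only, so it never consumes overlay
  # space; only the root image goes through the snapshot/overlay
  qemu-system-x86_64 \
      -hda /tmp/overlay.qcow2 \
      -drive file=/nfs/usr.img,if=virtio,readonly=on

The guest still has to mount that partition read-only, of course.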
If you are sure that writes are not that frequent, you could consider
putting the overlay file on the remote storage (NFS) as well. Although this
is rather slow, it shouldn't matter if there aren't many writes, and the
local page cache should catch most of the accesses (while still being
nice to other RAM users).
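A sketch of that setup, with purely illustrative paths (and again, newer
qemu-img versions may also want -F to name the backing format):

  # persistent per-client overlay on the NFS export, backed by the
  # shared, read-only original image
  qemu-img create -f qcow2 -b /nfs/original.img /nfs/client1-overlay.qcow2

  # boot from the overlay instead of using -snapshot
  qemu-system-x86_64 -hda /nfs/client1-overlay.qcow2

Each diskless client needs its own overlay file; a qcow2 overlay must never
be shared between running guests, while the read-only backing file can be.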
Regards,
Andre.
--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448-3567-12