Re: .img on nfs, relative on ram, consuming mass ram

On Mon, 20 Sep 2010 16:00:53 +0200
Andre Przywara <andre.przywara@xxxxxxx> wrote:

> TOURNIER Frédéric wrote:
> > Here are my benchmarks, done over two days, so the dates are odd and the results are rather approximate.
> > What surprises me is Part 2, where swap occurred.
> I don't know exactly why, but I have seen a small usage of swap 
> occasionally without real memory pressure. So I'd consider this normal.
Mmm, I don't like strange "normal" things... Anyway, my current setting is no. 1,
and my target is no. 3 or 4. ^^

> > In 3 and 4, the RAM is eaten up, even though the VM has just booted.
> Where is the RAM eaten up? I always see about 800 MB free, even a bit more 
> after the download:
> You have to look at the second line of the free output ("-/+ buffers/cache"), 
> not the first one. As you can see, the OS still has enough RAM to afford a 
> large cache, so it uses it. Unused RAM is just a waste of resources (it is 
> there anyway, and there is no reason not to use it). If the 'cached' value 
> consists mostly of clean pages, the OS can simply reclaim them should an 
> application request more memory. If you want proof of this, try:
> # echo 3 > /proc/sys/vm/drop_caches
> This should free the cache and give you a high "real" free value back.
OK, I'll take a closer look. But I see no reason why so much cache is used.
I think there is some kind of page duplication between NFS and qemu-kvm.
Maybe that's an idea for a future enhancement, some kind of "-nfs-image" switch?
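I'll check it roughly like this on the host (as root; just a minimal sketch, with a sync first so dirty pages are written back before dropping the caches):

free
sync
echo 3 > /proc/sys/vm/drop_caches
free

If the second free shows a much higher "free" value on the first line, the cache really was reclaimable.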
 
> Have you tried cache=none with the tmpfs scenario?
Oh yes, I tried and tried. Unfortunately it's impossible: "Invalid argument".
Same for shm and ramfs.
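For example, this (the Part 3 command line plus cache=none) fails right away with "Invalid argument", presumably because cache=none opens the image with O_DIRECT, which tmpfs/ramfs/shm do not support:

qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -name qlio -drive file=/tmp/relqlio.img,cache=none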

> That should save you 
> some of the host's cached memory (note the difference between Part 1 and 
> Part 2 in that respect), maybe at the expense of the guest's memory being 
> used more heavily. Your choice here; it depends on the actual memory 
> utilization in the guest.
> 
> As I said before, it is not a very good idea to use such a setup (with 
> the relative image on tmpfs) if you are doing actual disk I/O, 
> especially large writes. AFAIK QCOW[2] does not really shrink, it only 
> grows, so you will run out of memory at some point.
> But if you can restrict the amount of written data, this may work.
Well, I'm aware this is a "dangerous" setting, but I really tried to make it work because it's so comfortable.
If any readers have some spare time and two machines (2 GB of RAM on each is a good start), try this setup:
read from NFS, write locally, to RAM if possible. The guest's performance is awesome, especially if the original .img is pre-cached in the server's RAM.
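For anyone who wants to reproduce it, here is a rough sketch of what I mean (server name, export path and image names are just examples): the base image stays read-only on the NFS server, and the guest runs from a small qcow2 overlay on tmpfs that uses it as a backing file.

mount -o ro server:/export/images /mnt/nfs
qemu-img create -f qcow2 -b /mnt/nfs/qlio.img /tmp/relqlio.img
qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -name qlio -drive file=/tmp/relqlio.img

Since the overlay only grows, recreating it with the same qemu-img command throws away the written data and gives you a (nearly) empty image again.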

> 
> Regards,
> Andre.
> 
> P.S. Sorry for the confusion about tmpfs vs. ramfs in last week's mail.

No problem.
Thank you for taking the time.
And being answered by someone@xxxxxxx is a must. ^^

Fred.

> 
> > 
> > ------------------------------------
> > Part 0
> > 
> > End of boot :
> > 
> > bash-3.1$ free
> >              total       used       free     shared    buffers     cached
> > Mem:       2056840     500836    1556004          0       2244     359504
> > -/+ buffers/cache:     139088    1917752
> > Swap:      3903784          0    3903784
> > 
> > ------------------------------------
> > 
> > Part 1
> > 
> > qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -name qlio -drive file=/mnt/hd/sda/sda1/tmp/relqlio.img,cache=none
> > 
> > bash-3.1$ free
> >              total       used       free     shared    buffers     cached
> > Mem:       2056840    1656280     400560          0      34884     378332
> > -/+ buffers/cache:    1243064     813776
> > Swap:      3903784          0    3903784
> > 
> > bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
> > 58946 -rw-r--r-- 1 ftournier info 60424192 2010-09-16 17:49 /mnt/hd/sda/sda1/tmp/relqlio.img
> > 
> > 650M download inside the vm
> > 
> > bash-3.1$ free
> >              total       used       free     shared    buffers     cached
> > Mem:       2056840    1677648     379192          0      33860     397716
> > -/+ buffers/cache:    1246072     810768
> > Swap:      3903784          0    3903784
> > 
> > bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
> > 914564 -rw-r--r-- 1 ftournier info 935723008 2010-09-20 14:07 /mnt/hd/sda/sda1/tmp/relqlio.img
> > 
> > ------------------------------------
> > 
> > Part 2
> > 
> > reboot
> > 
> > qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -name qlio -drive file=/mnt/hd/sda/sda1/tmp/relqlio.img
> > 
> > bash-3.1$ free
> >              total       used       free     shared    buffers     cached
> > Mem:       2056840    2040172      16668          0      32952     758948
> > -/+ buffers/cache:    1248272     808568
> > Swap:      3903784          0    3903784
> > 
> > bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
> > 60739 -rw-r--r-- 1 ftournier info 62259200 2010-09-16 17:57 /mnt/hd/sda/sda1/tmp/relqlio.img
> > 
> > 650M download inside the vm
> > 
> > bash-3.1$ free
> >              total       used       free     shared    buffers     cached
> > Mem:       2056840    2040540      16300          0      34412     765208
> > -/+ buffers/cache:    1240920     815920
> > Swap:      3903784       8160    3895624
> > 
> > bash-3.1$ ls -lsa /mnt/hd/sda/sda1/tmp/relqlio.img
> > 842430 -rw-r--r-- 1 ftournier info 861929472 2010-09-20 14:20 /mnt/hd/sda/sda1/tmp/relqlio.img
> > 
> > ------------------------------------
> > 
> > Part 3
> > 
> > reboot
> > 
> > qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -name qlio -drive file=/tmp/relqlio.img
> > 
> > note : /tmp is a tmpfs filesystem
> > 
> > bash-3.1$ free
> >              total       used       free     shared    buffers     cached
> > Mem:       2056840    2009688      47152          0        248     766328
> > -/+ buffers/cache:    1243112     813728
> > Swap:      3903784          0    3903784
> > 
> > bash-3.1$ ls -lsa /tmp/relqlio.img
> > 59848 -rw-r--r-- 1 ftournier info 61407232 2010-09-16 18:04 /tmp/relqlio.img
> > 
> > 650M download inside the vm
> > 
> > bash-3.1$ free
> >              total       used       free     shared    buffers     cached
> > Mem:       2056840    2041404      15436          0        128     921276
> > -/+ buffers/cache:    1120000     936840
> > Swap:      3903784     248804    3654980
> > 
> > bash-3.1$ ls -lsa /tmp/relqlio.img
> > 885448 -rw-r--r-- 1 ftournier info 906821632 2010-09-20 14:40 /tmp/relqlio.img
> > 
> > ------------------------------------
> > 
> > Part 4
> > 
> > reboot
> > 
> > qemu-system-x86_64 -m 1024 -vga std -usb -usbdevice tablet -localtime -soundhw es1370 -name qlio -drive file=/dev/shm/relqlio.img
> > 
> > bash-3.1$ free
> >              total       used       free     shared    buffers     cached
> > Mem:       2056840    2009980      46860          0        172     767328
> > -/+ buffers/cache:    1242480     814360
> > Swap:      3903784          0    3903784
> > 
> > bash-3.1$ ls -lsa /dev/shm/relqlio.img
> > 58496 -rw-r--r-- 1 ftournier info 59899904 2010-09-16 18:11 /dev/shm/relqlio.img
> > 
> > 650M download inside the vm
> > 
> > bash-3.1$ free
> >              total       used       free     shared    buffers     cached
> > Mem:       2056840    2041576      15264          0         92     938976
> > -/+ buffers/cache:    1102508     954332
> > Swap:      3903784     266232    3637552
> > 
> > bash-3.1$ ls -lsa /dev/shm/relqlio.img
> > 1016912 -rw-r--r-- 1 ftournier info 1039400960 2010-09-20 15:15 /dev/shm/relqlio.img
> > 
> 
> 
> -- 
> Andre Przywara
> AMD-OSRC (Dresden)
> Tel: x29712
> 

