[Question] KVM memory usage (cache and buffers): LVM or Disk files

Hi all,

At home I've been using Qemu/KVM for years on a host with 64 GB of memory,
running around 20 VMs that all use files (qcow2 or vmdk) for their disks.
On this host, all free memory ends up used by "buffers", according to the "free" command.
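For reference, this is how I'm looking at it, nothing KVM-specific, just the standard Linux tools:

```shell
# Show how much host memory is in buffers/cache (reclaimable)
# versus truly unavailable:
free -h
# The same figures, straight from the kernel:
grep -E '^(MemTotal|MemAvailable|Buffers|Cached):' /proc/meminfo
```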

Now I'm about to deploy a bunch of VMs on some new physical hosts for
production.
Since I'm building this VM platform from scratch, I'm free to choose how I create my VMs.

I'm wondering whether using LVM, i.e. LVs as disks, rather than files
as disks would lower the amount of memory used for buffering on the
host side.

My newbie idea is the following:
With a file as a disk, the host must read through the whole disk file
to locate any data inside it.
With an LV as a disk, the host might be able to access the VM's file
system directly, so it should need less memory, because caching would
apply only to files on the VM's file system rather than to the whole
disk file.
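From what I've read (and I may have it wrong), the drive's cache mode may matter more here than the backing store: with cache=none, qemu opens the file or device with O_DIRECT and bypasses the host page cache either way. Hypothetical command lines, with paths and options invented by me:

```shell
# File-backed disk, host page cache bypassed via O_DIRECT:
qemu-system-x86_64 -m 2048 \
  -drive file=/var/lib/libvirt/images/vm1.qcow2,format=qcow2,if=virtio,cache=none

# LV-backed disk, same cache mode:
qemu-system-x86_64 -m 2048 \
  -drive file=/dev/vg_guests/vm1-root,format=raw,if=virtio,cache=none
```

If that's right, the "buffers" growth I see would come from the default cache mode rather than from file vs. LV backing, but I'd like confirmation.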

If the host needs less memory for caching/buffering when using LVs
rather than disk files, do we need to make the LVs used as disks
accessible from the host?
I ask this second question because the VMs will be copies of another
VM, so they will all have the same VG name and the same LV names. I'm
afraid Qemu/KVM might get a bit lost in the middle of all that mess...
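On the name clash: as I understand it, identically named guest VGs only become a problem if the host's LVM scans inside the devices handed to the guests. I believe that can be avoided with a filter in /etc/lvm/lvm.conf; a sketch, with device names made up by me:

```shell
# /etc/lvm/lvm.conf fragment (hypothetical VG name "vg_guests"):
# reject the LVs given to guests, accept everything else, so the
# host never scans or activates the guests' inner VGs/LVs.
# global_filter = [ "r|^/dev/vg_guests/.*|", "a|.*|" ]
```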

Any feedback from experience would be really welcome : )

Best regards,

mathias


