Re: Guests using more RAM than specified

On Sat, Jan 17, 2015 at 3:00 AM, Dennis Jacobfeuerborn
<dennisml@xxxxxxxxxxxx> wrote:
> On 16.01.2015 15:14, Michal Privoznik wrote:
>> On 16.01.2015 13:33, Dennis Jacobfeuerborn wrote:
>>> Hi,
>>> today I noticed that one of my HVs started swapping aggressively and
>>> that the two guests running on it use quite a bit more RAM than I
>>> assigned to them. They were assigned 124G and 60G respectively, with
>>> the idea that the 192G system then has 8G left for other purposes. In
>>> top I see the VMs using about 128G and 64G, which means there is
>>> nothing left for the system. This is on a CentOS 7 system.
>>> Any ideas what causes this, or how I can calculate the actual maximum
>>> amount of RAM I can assign to the guests on an HV without overcommitting RAM?
>>
>> Well, this is an undecidable problem.
>> One thing that may help is to use hugepages to back the memory for your
>> guests. If you use ordinary system pages, the translation tables for
>> ~200G are going to be huge. Remember that those tables count towards
>> memory usage as well.
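
To put rough numbers on that (my own back-of-the-envelope, not
Michal's): with 4 KiB pages, a ~200G guest needs about 52 million
last-level entries, i.e. roughly 400 MiB of tables per address space,
before counting upper levels or KVM's EPT/shadow structures; 2 MiB
hugepages cut that by a factor of 512. A quick sketch:

    # Back-of-the-envelope page-table sizing. Assumptions: 8 bytes per
    # last-level entry; upper-level tables and KVM's EPT/shadow
    # structures add more on top of this.
    GIB = 1024 ** 3
    guest_ram = 200 * GIB

    for page_size, label in ((4 * 1024, "4 KiB"), (2 * 1024 ** 2, "2 MiB")):
        entries = guest_ram // page_size
        table_bytes = entries * 8  # one 8-byte entry per mapped page
        print(f"{label} pages: {entries:,} entries ~ {table_bytes / 2**20:.1f} MiB of tables")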
>
> According to the system information the qemu processes use transparent
> hugepages, and most of the memory for a VM is reported under
> AnonHugePages, so that looks ok.
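
For what it's worth, you can double-check that from the host side by
summing the AnonHugePages fields in the qemu process's smaps. A
minimal sketch (the PID below is a placeholder):

    # Sum AnonHugePages over one qemu process's mappings.
    # Assumptions: Linux host with /proc mounted; substitute the real PID.
    def anon_hugepages_kib(pid):
        total = 0
        with open(f"/proc/{pid}/smaps") as smaps:
            for line in smaps:
                if line.startswith("AnonHugePages:"):
                    total += int(line.split()[1])  # field is in kB
        return total

    print(anon_hugepages_kib(12345), "kB backed by THP")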
>
>> Then, qemu itself consumes some memory besides the guest memory. How
>> much? Nobody is able to tell.
>
> Yes, and that worries me. The recommendation is not to over-commit
> memory, but if the system uses the specified RAM + X, where X is unknown
> and can be tens of gigabytes, then I don't even know how to avoid
> over-committing the system. I already reserved 8G for overhead, but that
> doesn't seem to be enough, and now I don't know how to calculate safe
> values for the guests at all.
> One of the HVs actually crashed and rebooted itself. Not a pretty picture.
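
Since nobody can predict X up front, one pragmatic option is to
measure it per guest: compare each qemu process's resident set with
the RAM assigned to its domain and watch the delta over time. A
minimal sketch, with a hypothetical pid-to-size mapping you would
fill in from your own domain definitions:

    # Report qemu RSS overhead versus assigned guest RAM.
    # Assumption: the pid -> GiB mapping below is illustrative only.
    assigned_gib = {12345: 124, 23456: 60}

    def rss_gib(pid):
        with open(f"/proc/{pid}/status") as status:
            for line in status:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1]) / 1024 ** 2  # kB -> GiB
        return 0.0

    for pid, gib in assigned_gib.items():
        overhead = rss_gib(pid) - gib
        print(f"pid {pid}: assigned {gib} GiB, overhead {overhead:+.2f} GiB")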
>
> Regards,
>   Dennis
>

It may be interesting to check how the various memory-deduplication
technologies (KSM/PKSM) behave with such large VMs. If you are running
those VMs on a host where a single NUMA node has less memory than a
single VM's allocation, you may also run into mm-related issues and
NUMA-related allocation failures.
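
A quick first check is whether each guest fits inside a single node at
all: compare the per-node totals against the VM sizes. A small sketch
reading the sysfs node info (assumes a Linux host exposing
/sys/devices/system/node/):

    # Print MemTotal/MemFree for every NUMA node.
    import glob

    for path in sorted(glob.glob("/sys/devices/system/node/node*/meminfo")):
        with open(path) as meminfo:
            for line in meminfo:
                if "MemTotal" in line or "MemFree" in line:
                    print(line.strip())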

_______________________________________________
libvirt-users mailing list
libvirt-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvirt-users


