Re: [RFC] how the ballooned memory should be accounted by the drivers inside the guests? (was:[PATCH v6 1/2] Create debugfs file with virtio balloon usage information)

>>
>> In case of Hyper-V, I remember a customer BUG report that requested that
>> exact behavior; however, I'm not able to locate the BZ quickly.
>> [1] https://lists.linuxfoundation.org/pipermail/virtualization/2021-November/057767.html
>> (note that I can't easily find the original mail in the archives)
> 
> VMware does not, Xen does, HV does (though it originally didn't); Virtio does both.
> 
> For me the confusion comes from mixing ballooning and hot plug.

For example, QEMU (and even libvirt) doesn't even have built-in support
for any kind of automatic balloon resizing on guest memory pressure (and
I'm happy that we don't implement any such heuristics). As a user/admin,
all you can do is manually adjust the logical VM size by requesting to
inflate/deflate the balloon. For virtio-balloon, we cannot derive what
the hypervisor/admin might or might not do -- and whether the admin
intends to use memory ballooning for memory hotunplug or for optimizing
memory overcommit.
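
As an illustration of what that manual adjustment looks like from the
admin side, here is a sketch using libvirt's C API (error handling
trimmed; "mydomain" is a made-up domain name):

#include <libvirt/libvirt.h>

int main(void)
{
	virConnectPtr conn = virConnectOpen("qemu:///system");
	virDomainPtr dom = virDomainLookupByName(conn, "mydomain");

	/* Request a balloon target of 4 GiB (the argument is in KiB). */
	virDomainSetMemory(dom, 4UL * 1024 * 1024);

	virDomainFree(dom);
	virConnectClose(conn);
	return 0;
}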

As another example, HV dynamic memory actually combines memory hotplug
with memory ballooning: use memory hotplug to add more memory on demand
and use memory ballooning to logically unplug memory again.

The VMware balloon is a bit special, because it usually
implements deflate-on-oom semantics in the hypervisor. IIRC, the
hypervisor will actually adjust the balloon size based on guest memory
stats, and there isn't really an interface to manually set the balloon
size for an admin. But I might be wrong regarding the latter.

> 
> Ballooning is like a heap inside the guest from which the host can
> allocate/deallocate pages. Whether there is a mechanism for the guest to
> ask the host for more pages (or to free some), or the host has a
> heuristic to monitor the guest and inflate/deflate it, is a matter of
> implementation.

Please don't assume that the only use case for memory ballooning is
optimizing memory overcommit in the hypervisor under memory pressure.

> 
> Hot plug is adding to MemTotal, and it is not a random event, either in a
> real or a virtual environment - so you can act upon it. MemTotal goes
> down on hot unplug and when pages get marked as faulty RAM.

"not a random event either" -- sure, with ppc dlpar, xen balloon, hv
balloon or virtio-mem ... which all are able to hotplug memory fairly
randomly based on hypervisor decisions.

In physical environments, it's not really a random event, I agree.

> 
> Historically, MemTotal is a stable value (I agree with most of David
> Stevens' points), and user space expects it to be stable: initialized at
> startup, it is not expected to change.

Just like some apps are not prepared for memory hot(un)plug, some apps
simply don't work in environments with variable physical memory sizes:
examples include databases, where memory ballooning might be essentially
useless (there is a paper about application-aware memory ballooning for
that exact use case).

> 
> Used is what changes and that is what user space expects to change.
> 
> Deflate-on-oom might have been a mistake, but it is there, and if
> anything depends on changing MemTotal, it will be broken by that option.
> How can that be fixed?

I didn't quite get your concern here. Deflate-on-oom in virtio-balloon
won't adjust MemTotal, so under which condition would something be broken?
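
Roughly, a sketch of that accounting decision in a virtio-balloon-like
driver (simplified, made-up names, not the literal upstream code; only
adjust_managed_page_count() is the real kernel helper):

#include <linux/list.h>
#include <linux/mm.h>

struct balloon_like_dev {
	struct list_head inflated_pages;
	bool deflate_on_oom;	/* e.g., VIRTIO_BALLOON_F_DEFLATE_ON_OOM */
};

static void balloon_inflate_one(struct balloon_like_dev *vb,
				struct page *page)
{
	list_add(&page->lru, &vb->inflated_pages);

	if (!vb->deflate_on_oom) {
		/* Fake hotunplug: the page disappears from MemTotal. */
		adjust_managed_page_count(page, -1);
	}
	/*
	 * With deflate-on-oom, the page stays in MemTotal and simply
	 * shows up as used; it may come back under memory pressure, so
	 * nothing that relies on a stable MemTotal should break.
	 */
}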

> 
> I agree that the host cannot reclaim what is marked as used, but should
> it be able to? Maybe it would be good to teach the OOM killer that there
> can be such RAM that cannot be reclaimed.
> 
>> Note: I suggested under [1] to expose inflated pages via /proc/meminfo
>> directly. We could do that consistently over all balloon drivers ...
>> doesn't sound too crazy.
> 
> Initially I wanted to do exactly this, BUT:
> - some drivers prefer to expose some more internal information in the file.

They can always have an extended debugfs interface in addition.
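
A minimal sketch of such an interface (the names are illustrative, not
an existing file):

#include <linux/debugfs.h>

static u64 inflated_pages;	/* hypothetical per-driver counter */

static void my_balloon_debugfs_init(void)
{
	struct dentry *dir = debugfs_create_dir("my_balloon", NULL);

	/* Read-only file exposing the driver's internal page count. */
	debugfs_create_u64("inflated_pages", 0444, dir, &inflated_pages);
}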

> - a lot of user space is using meminfo, so better to keep it as is to
> avoid breaking something; ballooning is not used very frequently.

We can always extend. Just recently, we exposed Zswap data:

commit f6498b776d280b30a4614d8261840961e993c2c8
Author: Johannes Weiner <hannes@xxxxxxxxxxx>
Date:   Thu May 19 14:08:53 2022 -0700

    mm: zswap: add basic meminfo and vmstat coverage


Exposing information about inflated pages in a generic way doesn't sound
completely wrong to me, but there might be people who object.
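
A sketch of what such a generic extension could look like, modeled on
the zswap change above ("balloon_inflated_pages" is a hypothetical
counter that all balloon drivers would have to maintain consistently;
nothing like it exists today):

/* fs/proc/meminfo.c, inside meminfo_proc_show() */
#ifdef CONFIG_MEMORY_BALLOON
	show_val_kb(m, "Ballooned:      ",
		    atomic_long_read(&balloon_inflated_pages));
#endif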

> 
> 
> Please share your view on how the ballooned memory should be accounted
> by the drivers inside the guests, so we can work towards consistent
> behaviour:
> 
> Should the inflated memory be accounted as Used, or should MemTotal be
> adjusted?

I hope I was able to make it clear that it completely depends on how
memory ballooning is actually intended to be used. It's not uncommon to
use it as a form of fake memory hotunplug, where that memory is gone for
good and won't simply come back under memory pressure.

> 
> Should the inflated memory be added to /proc/meminfo ?

My gut feeling is yes. The interesting question remains how to
distinguish the two use cases (inflated memory subtracted from MemTotal
or subtracted from MemFree).

I'm not sure if we even want to unify balloon handling regarding
adjusting managed pages. IMHO, there are good reasons to do it either way.

-- 
Thanks,

David / dhildenb
