Re: Balloon pressuring page cache

On Wed, Feb 5, 2020 at 11:22 AM Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx> wrote:
On Wed, 2020-02-05 at 11:01 -0800, Tyler Sanderson wrote:


On Tue, Feb 4, 2020 at 10:57 PM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
On Tue, Feb 04, 2020 at 03:58:51PM -0800, Tyler Sanderson wrote:
>     >     >
>     >     >  1. It is last-resort, which means the system has already gone through
>     >     >     heroics to prevent OOM. Those heroic reclaim efforts are expensive
>     >     >     and impact application performance.
>     >
>     >     That's *exactly* what "deflate on OOM" suggests.
>     >
>     >
>     > It seems there are some use cases where "deflate on OOM" is desired and
>     > others where "deflate on pressure" is desired.
>     > This suggests adding a new feature bit "DEFLATE_ON_PRESSURE" that
>     > registers the shrinker, and reverting DEFLATE_ON_OOM to use the OOM
>     > notifier callback.
>     >
>     > This lets users configure the balloon for their use case.
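>     >
>     > For concreteness, a rough sketch of what the probe-time split could
>     > look like (hypothetical feature-bit name; fields and error handling
>     > are illustrative, not real driver code):
>     >
>     >     /* Pick exactly one deflate policy based on negotiated features. */
>     >     if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_PRESSURE))
>     >         err = register_shrinker(&vb->shrinker);    /* deflate under pressure */
>     >     else if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
>     >         err = register_oom_notifier(&vb->oom_nb);  /* deflate only at OOM */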
>
>     You want the old behavior back, so why should we introduce a new one? Or
>     am I missing something? (you did want us to revert to old handling, no?)
>
> Reverting actually doesn't help me because this has been the behavior since
> Linux 4.19, which is already widely in use. So my device implementation needs
> to handle the shrinker behavior anyway. I started this conversation to ask
> what the intended device implementation was.
>
> I think there are reasonable device implementations that would prefer the
> shrinker behavior (it turns out that mine doesn't).
> For example, an implementation that slowly inflates the balloon for the purpose
> of memory overcommit. It might leave the balloon inflated and expect any memory
> pressure (including page cache usage) to deflate the balloon as a way to
> dynamically right-size the balloon.

So just to make sure we understand, what exactly does your
implementation do?
My implementation is for the purpose of opportunistic memory overcommit. We always want to give balloon memory back to the guest rather than causing an OOM, so we use DEFLATE_ON_OOM.
We leave the balloon at size 0 while monitoring memory statistics reported on the stats queue. When we see an opportunity for significant savings, we inflate the balloon to the desired size (possibly including pressuring the page cache), and then immediately deflate back to size 0.
The host pages backing the guest pages are unbacked during the inflation process, so the memory footprint of the guest is smaller after this inflate/deflate cycle.
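
In pseudocode, the device-side loop is roughly the following (all helper names here are illustrative, not our actual code):

    /* Illustrative device-side control loop for opportunistic overcommit. */
    for (;;) {
        struct guest_stats s = read_stats_queue();   /* stats vq samples */
        uint64_t target = estimate_savings(&s);      /* free + reclaimable */
        if (target > SAVINGS_THRESHOLD) {
            set_balloon_target(target);              /* guest frees pages */
            wait_for_balloon_to_reach(target);
            unback_ballooned_ranges();               /* e.g. MADV_DONTNEED */
            set_balloon_target(0);                   /* deflate right away */
        }
        sleep_for_sampling_interval();
    }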

This sounds a lot like free page reporting, except I haven't decided on the best way to exert the pressure yet.
As you mention below, the advantage of free page reporting is that it doesn't trigger the OOM path. So I'd strongly advocate that the corresponding mechanism to shrink page cache should also not trigger the OOM path. That suggests something like the drop_caches API we talked about earlier in the thread.
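
(For reference, the guest-local knob that exists today never touches the OOM path; a device-triggered mechanism would do roughly what this does from inside the guest. Minimal sketch, error handling mostly omitted:)

    #include <fcntl.h>
    #include <unistd.h>

    /* Drop clean page cache only ("1"); "2" also drops dentries/inodes,
     * "3" drops both. Dirty pages are untouched and the OOM killer
     * never runs. */
    int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
    if (fd >= 0) {
        write(fd, "1", 1);
        close(fd);
    }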

You might want to take a look at my patch set here:
Yes, I'm strongly in favor of your patch set's goals.
 

Instead of inflating a balloon, all it is doing is identifying what pages are currently free and have not already been reported to the host, and reporting those via the balloon driver. The advantage is that we can do the reporting without causing any sort of OOM errors in most cases, since we are just pulling and reporting a small set of pages at a time.
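
In rough outline the reporting pass looks like this (simplified pseudocode, not the actual patch code; the helper names are made up):

    /* Walk a zone's free lists, reporting only what hasn't been seen yet. */
    while ((batch = isolate_unreported_free_pages(zone, BATCH_SIZE))) {
        report_pages_to_host(batch);            /* host can then unback them */
        mark_reported_and_return(batch, zone);  /* back onto the free lists */
    }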



> Two reasons I didn't go with the above implementation:
> 1. I need to support guests before Linux 4.19 which don't have the shrinker
> behavior.
> 2. Memory in the balloon does not appear as "available" in /proc/meminfo even
> though it is freeable. This is confusing to users, but isn't a deal breaker.
>
> If we added a DEFLATE_ON_PRESSURE feature bit that indicated shrinker API
> support then that would resolve reason #1 (ideally we would backport the bit to
> 4.19).

We could declare the lack of page cache pressure with DEFLATE_ON_OOM a
regression and backport the revert, but not, I think, the new
DEFLATE_ON_PRESSURE.
To be clear, the page cache can still be pressured. When the balloon driver allocates memory and causes reclaim, some of that memory comes from the balloon (bad) but some of that comes from the page cache (good).

I think the issue is that you aren't able to maintain the page cache pressure
Right. My implementation can shrink the page cache to whatever size is desired. It just takes a lot more (10x) time and CPU on guests using the shrinker API because of this back and forth.
 
because your balloon is deflating as well, which in turn is relieving the pressure. Ideally we would want some way of putting pressure on the page cache without stressing memory to the point of encountering OOM. I suspect that is one reason the balloon driver does its allocation the way it does: it stops when it cannot fulfill an allocation and is willing to wait on other threads to trigger the reclaim.
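
Roughly speaking, the allocation pattern being described looks like this (a simplified sketch of the flags, not the exact driver code):

    /* Fail fast rather than trigger the OOM killer, leaving it to reclaim
     * already in flight on other threads to free memory for a later try. */
    struct page *page = alloc_page(GFP_HIGHUSER_MOVABLE |
                                   __GFP_NOMEMALLOC |  /* no emergency reserves */
                                   __GFP_NORETRY |     /* give up, don't OOM */
                                   __GFP_NOWARN);
    if (!page)
        break;  /* stop inflating for now; retry on a later cycle */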



> In any case, the shrinker behavior when pressuring page cache is more of an
> inefficiency than a bug. It's not clear to me that it necessitates reverting.
> If there were/are reasons to be on the shrinker interface, then I think those
> carry similar weight to the problem itself.
>  
>
>
>     I consider virtio-balloon, to this very day, a big hack. And I don't see
>     it getting better with new config knobs. That said, the
>     technologies that are candidates to replace it (free page reporting,
>     taming the guest page cache, etc.) are still not ready - so we'll have
>     to stick with it for now :(.
>
>     >
>     > I'm actually not sure how you would safely do memory overcommit without
>     > DEFLATE_ON_OOM. So I think it unlocks a huge use case.
>
>     Using better-suited technologies that are not ready yet (well, some form
>     of free page reporting is available under IBM z already, but in a
>     proprietary form) ;) Anyhow, I remember that DEFLATE_ON_OOM only makes
>     it less likely to crash your guest; it does not mean you can safely
>     squeeze the last bit out of your guest VM.
>
> Can you elaborate on the danger of DEFLATE_ON_OOM? I haven't seen any problems
> in testing but I'd really like to know about the dangers.
> Is there a difference in safety between the OOM notifier callback and the
> shrinker API?

It's not about dangers as such. It's just that when Linux hits OOM,
all kinds of error paths get hit, latent bugs start triggering, and
latency goes up drastically.
Doesn't this suggest that the shrinker is preferable to the OOM notifier in the case that we're actually OOMing (with DEFLATE_ON_OOM)?

I think it all depends on the use case. For the use case you describe going to the shrinker might be preferable as you are wanting to exert just a light bit of pressure to start some page cache reclaim. However if you are wanting to make the deflation a last resort sort of thing then I would think the OOM would make more sense.
I agree that the desired behavior depends on the use case. But even when deflation is a last resort, it seems like we'd want the shrinker API rather than the OOM notifier, since the OOM path is more likely to have bugs/errors. The shrinker API doesn't support this yet, but you could imagine configuring it so that the balloon is reclaimed from less frequently, or only when shrinking other sources is becoming difficult. That way we're never actually in the error-prone OOM path.
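
With the existing shrinker API, the closest knob I can see is the .seeks weight; a sketch (2020-era shrinker API; the multiplier is arbitrary):

    /* A higher .seeks value shrinks the scan delta computed by reclaim,
     * so the balloon gives up pages less aggressively than other caches
     * at the same reclaim priority. */
    vb->shrinker.count_objects = virtio_balloon_shrinker_count;
    vb->shrinker.scan_objects  = virtio_balloon_shrinker_scan;
    vb->shrinker.seeks         = DEFAULT_SEEKS * 8;  /* illustrative weight */
    err = register_shrinker(&vb->shrinker);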


At a minimum I would think that the code needs to be reworked so that you either have the balloon inflating or deflating, not both at the same time.
DEFLATE_ON_OOM necessarily causes deflate activity regardless of whether the device wants to continue inflating the balloon. Blocking the deflate activity would cause an OOM in the guest.
 
I think that is really what is at the heart of the issue for the current shrinker-based approach: you can end up with the balloon driver essentially cycling pages, allocating them and freeing them at the same time.
