On Thu, 30 Mar 2023 at 10:15, Michael Chapman <mike@xxxxxxxxxxxxxxxxx> wrote:
>
> On Thu, 30 Mar 2023, Lennart Poettering wrote:
> > On Mi, 29.03.23 13:53, Christoph Anton Mitterer (calestyo@xxxxxxxxxxxx) wrote:
> >
> > > > > That's a bad idea btw. I'd advise you not to do that: on modern
> > > > > systems you want swap, since it makes anonymous memory
> > > > > reclaimable. I am not sure where you are getting this idea from
> > > > > that swap was bad.
> > >
> > > Well I haven't said it's bad, but I guess it depends on the use case
> > > and any available RAM.
> >
> > In almost all scenarios you want swap, regardless of whether you have
> > little RAM or a lot. For specialist cases where you run everything
> > from memory, and not even programs are backed by disk, there might be
> > exceptions. But that's almost never the case.
>
> One specific case where I deliberately chose _not_ to use swap: large
> hypervisors with local storage.
>
> With swap on the host enabled, all that ended up happening was that local
> IO activity caused idle guest memory to be gradually swapped out.
> Eventually all of the swap space filled up, and the system was exactly
> where it would have been had it not had any swap space configured in the
> first place -- except that it was now _a lot_ slower to migrate those
> swapped-out guests to other hypervisors.
>
> - Michael

The solution there is to ensure that the cgroup configuration for the
slices where the guests run sets memory.swap.max=0, rather than disabling
swap for the whole system.
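With systemd, that can be done with a drop-in for the slice containing
the guests. A minimal sketch, assuming the guests run under
machine.slice (the systemd-machined/libvirt default; adjust the slice
name to wherever your guests actually land):

    # /etc/systemd/system/machine.slice.d/50-no-swap.conf
    # Assumption: guests are placed in machine.slice; use a different
    # slice name if your hypervisor setup puts them elsewhere.
    [Slice]
    # systemd writes this value to the cgroup v2 attribute memory.swap.max
    MemorySwapMax=0

The same limit can also be applied immediately, without editing files:

    systemctl set-property machine.slice MemorySwapMax=0

(set-property persists the setting by default; add --runtime if it
should not survive a reboot). Note that MemorySwapMax= only takes effect
on cgroup v2, the unified hierarchy.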