On Fri, 31 Mar 2023, Lennart Poettering wrote:
> On Do, 30.03.23 18:56, Michael Chapman (mike@xxxxxxxxxxxxxxxxx) wrote:
>
> > On Thu, 30 Mar 2023, Lennart Poettering wrote:
> > > On Mi, 29.03.23 13:53, Christoph Anton Mitterer (calestyo@xxxxxxxxxxxx) wrote:
> > >
> > > > > > That's a bad idea btw. I'd advise you not to do that: on modern
> > > > > > systems you want swap, since it makes anonymous memory reclaimable. I
> > > > > > am not sure where you are getting this idea from that swap was
> > > > > > bad.
> > > >
> > > > Well I haven't said it's bad, but I guess it depends on the use case
> >
> > any available RAM.
>
> > > In almost all scenarios you want swap, regardless of whether you have
> > > little RAM or a lot. For specialist cases where you run everything from
> > > memory, and not even programs are backed by disk, there might be
> > > exceptions. But that's almost never the case.
>
> > One specific case where I deliberately chose _not_ to use swap: large
> > hypervisors with local storage.
> >
> > With swap on the host enabled, all that ended up happening was that local
> > IO activity caused idle guest memory to be gradually swapped out.
> > Eventually all of the swap space filled up, and the system was exactly
> > where it would have been had it not had any swap space configured in the
> > first place -- except that it was now _a lot_ slower to migrate those
> > swapped-out guests to other hypervisors.
>
> Linux will swap out stuff only if it has better uses for the RAM. So
> yeah, apparently your VMs were mostly idle, and the RAM was better
> used for other stuff, and ultimately helped speed up things for that
> other more frequently used stuff. Which is an overall win, not a loss.
>
> If the key requirement you have is to make VMs migrate quickly, then
> yeah, then allowing them to be written to disk is of course a
> problem. But frankly, if the ability to migrate VMs quickly is your
> top priority and general performance irrelevant, then you might have
> weird priorities? Also, are you sure your network is faster than your
> local disk?

Certainly faster than the swap-in path! 10 GigE networking really helps.

Migrating VMs quickly was not a "key requirement" at all. But it was
important, and not having swap meant that it could be achieved _without_
causing any other problems.

Think about it: instead of 50 GB of RAM usable as buffer and page cache,
let's say I had added swap and allowed that to increase to 100 GB. Would
that really make much of a difference to IO performance in guests?
Probably not. Sure, I could _engineer_ a test where it made a difference,
but in _real-world_ usage it doesn't change things too much.

> Generally though: I am not doubting that sometimes latency matters for
> certain jobs, and paging stuff back in is slow and thus makes
> latencies worse. But the way to address that is not to turn off swap
> for everything, but just for the jobs where the latency matters, via
> the appropriate cgroup settings.

I get that. But why would I spend time doing that rather than just
hitting the big `swapoff` button, when that effectively yields the same
result? The only difference would be about a GB: the size of all the
processes that weren't in guests.
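
(For reference, my understanding is that the per-job approach would look
roughly like the following -- a sketch only, with a made-up unit name,
and it needs the unified cgroup hierarchy, since MemorySwapMax= maps to
the cgroup v2 memory.swap.max attribute:)

    # /etc/systemd/system/my-guest.service.d/50-no-swap.conf
    # Hypothetical drop-in: forbid swap use for this one service, i.e.
    # keep its anonymous memory in RAM, while leaving swap enabled
    # system-wide for everything else.
    [Service]
    MemorySwapMax=0

followed by `systemctl daemon-reload` and a restart of the unit, or the
same thing at runtime with `systemctl set-property my-guest.service
MemorySwapMax=0`. Repeated for every guest on the box, that is exactly
the per-unit bookkeeping I didn't want to take on.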

> The thing is, anonymous memory is just one kind of memory, and if you
> turn off swap then you force that to remain in RAM -- but at the same
> time you still allow file-based stuff to be reclaimed, so that it must
> be reread later from disk. If you use the right resource management
> settings you have much better control over that, too, and can
> comprehensively solve the issue and get the latencies you want.
>
> Or to turn this around: if you are concerned about the latencies swap
> is supposed to "introduce", but you do not run your whole OS from an
> in-memory image too, then you are doing things wrong and not actually
> solving what you want to solve.
>
> Lennart
>
> --
> Lennart Poettering, Berlin