> > This does seem to be getting down into the weeds. How would a user
> > know (or even suspect) that these things are happening to them? Perhaps
> > it would be helpful to tell people where to go look to determine this.

When I test this feature during its development, I primarily just look at
the swapin/major fault counters to see if I'm experiencing swapping IO,
and, when writeback is disabled, whether that IO is still there. We can
also poll these counters over time and plot them or compute their rate of
change (see the rough sketch at the end of this mail). I assumed this is
fairly standard practice and not very zswap-specific, so I did not spell
it out in the zswap documentation.

> > Also, it would be quite helpful if the changelog were to give us some
> > idea of how important this tunable is. What sort of throughput
> > differences might it cause and under what circumstances?

For the most part, this feature is motivated by internal parties who have
already established their opinions regarding swapping - workloads that are
highly sensitive to IO, especially those running on servers with really
slow disks (for instance, massive but slow HDDs). For these folks, it is
impossible to convince them to even entertain zswap if swapping comes as
part of the package. Writeback disabling is quite useful in these
situations - on a mixed-workload deployment, they can disable writeback
for the more IO-sensitive workloads and enable it for the other background
workloads.

(Maybe we should include the paragraph above as part of the changelog?)

I don't have any concrete numbers, though - any numbers I can pull out are
from highly artificial tasks that only serve to test the correctness of
the implementation. Disabling zswap.writeback is of course faster in these
situations (up to 33%!) - but that is basically just saying HDDs are slow,
which is neither informative nor surprising, so I did not include it in
the changelog.
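
P.S. By "polling these counters" I mean something as simple as the sketch
below - it reads a few of the relevant global counters from /proc/vmstat
(pswpin, pswpout, pgmajfault) and prints their per-second rate of change;
per-cgroup, memory.stat exposes similar counters (pgmajfault, plus
zswpin/zswpout on newer kernels). The script itself is just an
illustration, not something that ships anywhere:

#!/usr/bin/env python3
# Rough sketch: poll swap-in/swap-out/major-fault counters from
# /proc/vmstat and print their per-second rate of change, to see
# whether swapping IO is actually happening (and whether it goes
# away once zswap writeback is disabled).
import time

COUNTERS = ("pswpin", "pswpout", "pgmajfault")

def read_vmstat():
    stats = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, _, val = line.partition(" ")
            if key in COUNTERS:
                stats[key] = int(val)
    return stats

def main(interval=5):
    prev = read_vmstat()
    while True:
        time.sleep(interval)
        cur = read_vmstat()
        print("  ".join(
            f"{k}/s={(cur[k] - prev[k]) / interval:.1f}" for k in COUNTERS))
        prev = cur

if __name__ == "__main__":
    main()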