On Fri, Jun 14, 2024 at 2:22 AM Usama Arif <usamaarif642@xxxxxxxxx> wrote:
>
>
> On 13/06/2024 22:21, Yosry Ahmed wrote:
> > On Mon, Jun 10, 2024 at 5:18 AM Usama Arif <usamaarif642@xxxxxxxxx> wrote:
> >> Going back to the v1 implementation of the patchseries. The main reason
> >> is that a correct version of the v2 implementation requires another rmap
> >> walk in shrink_folio_list to change the ptes from swap entries to zero
> >> pages (i.e. more CPU used) [1], is more complex to implement than v1,
> >> and is harder to verify for correctness than v1, where everything is
> >> handled by swap.
> >>
> >> ---
> >> As shown in the patchseries that introduced the zswap same-filled
> >> optimization [2], 10-20% of the pages stored in zswap are same-filled.
> >> This is also observed across Meta's server fleet.
> >> By using VM counters in swap_writepage (not included in this
> >> patchseries) it was found that less than 1% of the same-filled
> >> pages being swapped out are non-zero pages.
> >>
> >> For a conventional swap setup (without zswap), rather than reading/writing
> >> these pages to flash, resulting in increased I/O and flash wear, a bitmap
> >> can be used to mark these pages as zero at write time, and the pages can
> >> be filled at read time if the bit corresponding to the page is set.
> >>
> >> When using zswap with swap, this also means that a zswap_entry does not
> >> need to be allocated for zero-filled pages, resulting in memory savings
> >> which would offset the memory used for the bitmap.
> >>
> >> A similar attempt was made earlier in [3], where zswap would only track
> >> zero-filled pages instead of same-filled pages.
> >> This patchseries adds the zero-filled page optimization to swap
> >> (hence it can be used even if zswap is disabled) and removes the
> >> same-filled code from zswap (as only 1% of the same-filled pages are
> >> non-zero), simplifying the code.
> >>
> >> This patchseries is based on mm-unstable.
> >
> > Aside from saving swap/zswap space and simplifying the zswap code
> > (thanks for that!), did you observe any performance benefits from not
> > having to go into zswap code for zero-filled pages?
> >
> > In [3], I observed a ~1.5% improvement in kernbench just by optimizing
> > zswap's handling of zero-filled pages, and that benchmark only
> > produced around 1.5% zero-filled pages. I imagine that by avoiding the
> > zswap code entirely, and for workloads that have 10-20% zero-filled
> > pages, the performance improvement should be more pronounced.
> >
> > When zswap is not being used and all swap activity translates to IO, I
> > imagine the benefits will be much more significant.
> >
> > I am curious if you have any numbers with or without zswap :)
>
> Apart from tracking zero-filled pages (using inaccurate counters not
> included in this series), which showed the same pattern as
> zswap_same_filled_pages, the nvme writes went down by around 5-10%
> during stable points in the production experiment. The performance
> improved by 2-3% at some points, but this is comparing 2 sets of
> machines running production workloads (which can vary between machine
> sets), so I would take those numbers cautiously, which is why I didn't
> include them in the cover letter.
>

Yeah, this makes sense, thanks. It would have been great if we had
comparable numbers with and without this series. But this shouldn't be
a big deal; the advantage of the series should be self-explanatory.
It's just a shame you don't get to brag about it :)
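
For reference, below is a minimal userspace sketch of the zero-page
bitmap idea from the cover letter quoted above. Everything in it
(zero_bitmap, slot_set_zero, swap_out/swap_in, NR_SLOTS) is a
hypothetical illustration rather than the interfaces added by the
patchseries: swap-out sets a per-slot bit instead of writing an
all-zero page, and swap-in consults the bitmap and memsets the page
instead of issuing a read.

/*
 * Hypothetical userspace sketch of the zero-page bitmap idea; the names
 * and structure here are illustrative only, not the kernel code from
 * the patchseries.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE	4096
#define NR_SLOTS	1024	/* swap slots covered by the bitmap */
#define BITS_PER_LONG	(8 * sizeof(unsigned long))

/* One bit per swap slot: set means "this slot holds an all-zero page". */
static unsigned long zero_bitmap[NR_SLOTS / (8 * sizeof(unsigned long))];

static bool page_is_zero(const uint8_t *page)
{
	for (size_t i = 0; i < PAGE_SIZE; i++)
		if (page[i])
			return false;
	return true;
}

static void slot_set_zero(unsigned int slot)
{
	zero_bitmap[slot / BITS_PER_LONG] |= 1UL << (slot % BITS_PER_LONG);
}

static bool slot_is_zero(unsigned int slot)
{
	return zero_bitmap[slot / BITS_PER_LONG] &
	       (1UL << (slot % BITS_PER_LONG));
}

/* Swap-out path: mark all-zero pages in the bitmap and skip the write. */
static bool swap_out(unsigned int slot, const uint8_t *page)
{
	if (page_is_zero(page)) {
		slot_set_zero(slot);
		return true;	/* no I/O issued, no zswap_entry needed */
	}
	/* ...otherwise fall through to the normal writeback path... */
	return false;
}

/* Swap-in path: if the bit is set, fill the page with zeroes, no read. */
static bool swap_in(unsigned int slot, uint8_t *page)
{
	if (slot_is_zero(slot)) {
		memset(page, 0, PAGE_SIZE);
		return true;	/* satisfied without touching the device */
	}
	/* ...otherwise fall through to the normal read path... */
	return false;
}

int main(void)
{
	uint8_t page[PAGE_SIZE] = { 0 };

	swap_out(42, page);
	printf("slot 42 served from bitmap: %d\n", swap_in(42, page));
	return 0;
}

The bitmap costs one bit per swap slot, which is the overhead the cover
letter argues is offset by no longer allocating a zswap_entry for each
zero-filled page.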