On Thu, Feb 27, 2025 at 1:55 PM Yosry Ahmed <yosry.ahmed@xxxxxxxxx> wrote:
>
> On Thu, Feb 27, 2025 at 01:46:29PM -0800, Nhat Pham wrote:
> > On Wed, Feb 26, 2025 at 5:19 PM Yosry Ahmed <yosry.ahmed@xxxxxxxxx> wrote:
> > >
> > > On Wed, Feb 26, 2025 at 04:14:45PM -0800, Nhat Pham wrote:
> > > > Currently, we crash the kernel when a decompression failure occurs in
> > > > zswap (either because of memory corruption, or a bug in the compression
> > > > algorithm). This is overkill. We should only SIGBUS the unfortunate
> > > > process asking for the zswap entry on zswap load, and skip the corrupted
> > > > entry in zswap writeback. The former is accomplished by returning true
> > > > from zswap_load(), indicating that zswap owns the swapped out content,
> > > > but without flagging the folio as up-to-date. The process trying to swap
> > > > in the page will check for the uptodate folio flag and SIGBUS (see
> > > > do_swap_page() in mm/memory.c for more details).
> > >
> > > We should call out the extra xarray walks and their perf impact (if
> > > any).
> >
> > Lemme throw this in a quick and dirty test. I doubt there's any
> > impact, but since I'm reworking this patch for a third version anyway
> > might as well.
>
> It's likely everything is cache hot and the impact is minimal, but let's
> do the due diligence.

Yeah, I ran the kernel build test 5 times for each scheme and found
basically no difference:

With the new scheme:
real: mean: 125.1s, stdev: 0.12s
user: mean: 3265.23s, stdev: 9.62s
sys: mean: 2156.41s, stdev: 13.98s

The old scheme:
real: mean: 125.78s, stdev: 0.45s
user: mean: 3287.18s, stdev: 5.95s
sys: mean: 2177.08s, stdev: 26.52s

Honestly, eyeballing the results, the mean difference is probably smaller
than the between-run variance. :)
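
For readers following along, the load-path change under discussion looks
roughly like the sketch below. This is a simplified illustration, not the
actual patch: zswap_find_entry() and zswap_decompress_entry() are
hypothetical condensed helpers standing in for the real lookup and
decompression code, and locking, refcounting, and the writeback-skip side
are omitted entirely.

/*
 * Sketch of the new zswap_load() behavior: on decompression failure,
 * still return true (zswap owns this swap entry, so the caller must not
 * fall back to reading from the backing swap device), but do NOT mark
 * the folio up-to-date.
 */
bool zswap_load(struct folio *folio)
{
	swp_entry_t swp = folio->swap;
	struct zswap_entry *entry;

	entry = zswap_find_entry(swp);	/* hypothetical lookup helper */
	if (!entry)
		return false;		/* entry not owned by zswap */

	if (!zswap_decompress_entry(entry, folio)) {	/* hypothetical */
		/*
		 * Decompression failed (memory corruption or a compressor
		 * bug). Leave the folio !uptodate instead of crashing the
		 * kernel; only the faulting process gets hurt.
		 */
		return true;
	}

	folio_mark_uptodate(folio);
	return true;
}

On the fault side, do_swap_page() in mm/memory.c has roughly this check,
which is what converts the !uptodate folio into a SIGBUS for just the
faulting task:

	if (unlikely(!folio_test_uptodate(folio))) {
		ret = VM_FAULT_SIGBUS;
		goto out_nomap;
	}

Returning true from zswap_load() either way is what signals that zswap
owns the swapped-out content, so the swap layer never reads stale or
garbage data from the swapfile for this entry.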