Re: [RFC 1/4] mm/zswap: skip swapcache for swapping in zswap pages

[..]
> >> diff --git a/mm/zswap.c b/mm/zswap.c
> >> index 7f00cc918e7c..f4b03071b2fb 100644
> >> --- a/mm/zswap.c
> >> +++ b/mm/zswap.c
> >> @@ -1576,6 +1576,52 @@ bool zswap_store(struct folio *folio)
> >>         return ret;
> >>  }
> >>
> >> +static bool swp_offset_in_zswap(unsigned int type, pgoff_t offset)
> >> +{
> >> +       return (offset >> SWAP_ADDRESS_SPACE_SHIFT) <  nr_zswap_trees[type];
> >> +}
> >> +
> >> +/* Returns true if the entire folio is in zswap */
> >> +bool zswap_present_test(swp_entry_t swp, int nr_pages)
> >
> > Also, did you check how the performance changes if we bring back the
> > bitmap of present entries (i.e. what used to be frontswap's bitmap)
> > instead of the tree lookups here?
> >
>
> I think the cost of the tree lookup is small and, compared to
> zswap_decompress, can probably be ignored. zswap_present_test is
> essentially just an xa_load for the first entry, and then an
> xas_next_entry for each subsequent entry, which is even cheaper
> than xa_load.

Maybe it's worth measuring if it's not too much work. IIUC there is a
regression that we don't fully understand with this series, and the
extra lookup may be contributing to that. I think it could be just
fine, but I can't tell without numbers :)
