> -----Original Message-----
> From: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> Sent: Thursday, August 29, 2024 4:06 PM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@xxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> hannes@xxxxxxxxxxx; nphamcs@xxxxxxxxx; chengming.zhou@xxxxxxxxx;
> usamaarif642@xxxxxxxxx; ryan.roberts@xxxxxxx; Huang, Ying
> <ying.huang@xxxxxxxxx>; 21cnbao@xxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx;
> Zou, Nanhai <nanhai.zou@xxxxxxxxx>; Feghali, Wajdi K
> <wajdi.k.feghali@xxxxxxxxx>; Gopal, Vinodh <vinodh.gopal@xxxxxxxxx>
> Subject: Re: [PATCH v6 2/3] mm: zswap: zswap_store() extended to handle
> mTHP folios.
>
> On Thu, Aug 29, 2024 at 2:27 PM Kanchana P Sridhar
> <kanchana.p.sridhar@xxxxxxxxx> wrote:
> >
>
> I think "mm: zswap: support mTHP swapout in zswap_store()" is a better
> subject. We usually use imperative tone for commit logs as much as
> possible.

Sure, this is a much better subject, thanks! I will make this change in v7.

>
> > zswap_store() will now process and store mTHP and PMD-size THP folios.
> >
> > A new config variable CONFIG_ZSWAP_STORE_THP_DEFAULT_ON (off by
> > default) will enable/disable zswap storing of (m)THP.
> >
> > This change reuses and adapts the functionality in Ryan Roberts' RFC
> > patch [1]:
> >
> > "[RFC,v1] mm: zswap: Store large folios without splitting"
> >
> > [1] https://lore.kernel.org/linux-mm/20231019110543.3284654-1-ryan.roberts@xxxxxxx/T/#u
> >
> > This patch provides a sequential implementation of storing an mTHP in
> > zswap_store() by iterating through each page in the folio to compress
> > and store it in the zswap zpool.
> >
> > Towards this goal, zswap_compress() is modified to take a page instead
> > of a folio as input.
> >
> > Each page's swap offset is stored as a separate zswap entry.
> >
> > If an error is encountered during the store of any page in the mTHP,
> > all previous pages/entries stored will be invalidated. Thus, an mTHP
> > is either entirely stored in ZSWAP, or entirely not stored in ZSWAP.
> >
> > This forms the basis for building batching of pages during zswap store
> > of large folios, by compressing batches of up to say, 8 pages in an
> > mTHP in parallel in hardware, with the Intel In-Memory Analytics
> > Accelerator (Intel IAA).
> >
> > Also, addressed some of the RFC comments from the discussion in [1].
> >
> > Made a minor edit in the comments for "struct zswap_entry" to delete
> > the comments related to "value" since same-filled page handling has
> > been removed from zswap.
>
> This commit log is not ordered clearly. Please start by describing
> what we are doing, which is basically making zswap_store() support
> large folios by compressing them page by page. Then mention important
> implementation details and the tunable and config options added at the
> end. After that, refer to the RFC that this is based on.

Thanks for these comments. Sure, I will incorporate them in v7.

> >
> > Co-developed-by: Ryan Roberts
> > Signed-off-by:
>
> This is probably supposed to be "Signed-off-by: Ryan Roberts". I am
> not sure what the policy is for reusing patches sent earlier on the
> mailing list. Did you talk to Ryan about this?

You're right, this is intended to be "Signed-off-by: Ryan Roberts" once Ryan
has had a chance to review and approve attribution as co-author. I have been
following the documentation guidelines for submitting patches as they pertain
to co-development. Ryan is on the recipients list, and I am hoping he can
indicate his approval for the reuse of his original RFC.
Ryan, I would greatly appreciate your inputs on the reuse of your RFC, and
also any code review comments for improving the patchset!

>
> > Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@xxxxxxxxx>
>
> The diff is hard to follow because there is a lot of refactoring mixed
> in with the functional changes. Could you please break this down into
> purely refactoring patches doing the groundwork, and then the actual
> functional change patch(es) on top of them?

Sure, I will do this and submit a v7. Appreciate your comments!

Thanks,
Kanchana

>
> Thanks!
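
As context for the discussion above, here is a minimal sketch (kernel-style C)
of the all-or-nothing, page-by-page store flow described in the quoted commit
message. store_one_page() and invalidate_stored_entries() are hypothetical
placeholders, not functions from the actual patch; only the folio accessors
folio_nr_pages() and folio_page() are standard kernel helpers.

    /*
     * Sketch only: illustrates the flow described in the commit message.
     * store_one_page() and invalidate_stored_entries() are hypothetical
     * placeholders, not code from the actual patch.
     */
    static bool zswap_store_large_folio_sketch(struct folio *folio)
    {
            long nr_pages = folio_nr_pages(folio);
            long i;

            for (i = 0; i < nr_pages; i++) {
                    /* Each subpage is compressed and gets its own zswap entry. */
                    struct page *page = folio_page(folio, i);

                    if (!store_one_page(folio, page, i))
                            goto fail;
            }
            return true;

    fail:
            /*
             * A failure on any page invalidates every entry stored so far,
             * so the folio ends up either entirely in zswap or not at all.
             */
            invalidate_stored_entries(folio, i);
            return false;
    }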