On Thu, Jun 21, 2018 at 8:55 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>
> From: Huang Ying <ying.huang@xxxxxxxxx>
>
> To support swapping in a THP as a whole, we need to create PMD swap
> mappings during swapout and maintain the PMD swap mapping count.  This
> patch implements the support to increase the PMD swap mapping count
> (for swapout, fork, etc.) and to set the SWAP_HAS_CACHE flag (for
> swapin, etc.) for a huge swap cluster in the swap_duplicate() function
> family.  Although it implements only a part of the design of the swap
> reference count with PMD swap mappings, the whole design is described
> below to make it easier to understand the patch and the whole picture.
>
> A huge swap cluster is used to hold the contents of a swapped-out THP.
> After swapout, a PMD page mapping to the THP becomes a PMD swap
> mapping to the huge swap cluster via a swap entry in the PMD, while a
> PTE page mapping to a subpage of the THP becomes a PTE swap mapping
> to a swap slot in the huge swap cluster via a swap entry in the PTE.
>
> If there is no PMD swap mapping and the corresponding THP is removed
> from the page cache (reclaimed), the huge swap cluster will be split
> and become a normal swap cluster.
>
> The count (cluster_count()) of the huge swap cluster is
> SWAPFILE_CLUSTER (= HPAGE_PMD_NR) + the PMD swap mapping count.
> Because all swap slots in the huge swap cluster are mapped by PTE or
> PMD, or have the SWAP_HAS_CACHE bit set, the usage count of the swap
> cluster is HPAGE_PMD_NR.  The PMD swap mapping count is recorded too,
> to make it easy to determine whether there are remaining PMD swap
> mappings.
>
> The count in swap_map[offset] is the sum of the PTE and PMD swap
> mapping counts.  This means that when we increase the PMD swap mapping
> count, we need to increase swap_map[offset] for all swap slots inside
> the swap cluster.  An alternative choice is to make swap_map[offset]
> record the PTE swap map count only, given that we have recorded the
> PMD swap mapping count in the count of the huge swap cluster.  But
> this would require increasing swap_map[offset] when splitting the PMD
> swap mapping, which may fail because of memory allocation for swap
> count continuation.  That is hard to deal with, so we chose the
> current solution.
>
> The PMD swap mapping to a huge swap cluster may be split when
> unmapping part of the PMD mapping, etc.  That is easy because only the
> count of the huge swap cluster needs to be changed.  When the last PMD
> swap mapping is gone and SWAP_HAS_CACHE is unset, we will split the
> huge swap cluster (clear the huge flag).  This makes it easy to reason
> about the cluster state.
>
> A huge swap cluster will be split when splitting the THP in the swap
> cache, or when failing to allocate a THP during swapin, etc.  But when
> splitting the huge swap cluster, we will not try to split all PMD swap
> mappings, because sometimes we don't have enough information available
> for that.  Later, when the PMD swap mapping is duplicated or swapped
> in, etc., the PMD swap mapping will be split and fall back to the PTE
> operation.
>
> When a THP is added into the swap cache, the SWAP_HAS_CACHE flag will
> be set in swap_map[offset] of all swap slots inside the huge swap
> cluster backing the THP.  This huge swap cluster will not be split
> unless the THP is split, even if its PMD swap mapping count drops to
> 0.  Later, when the THP is removed from the swap cache, the
> SWAP_HAS_CACHE flag will be cleared in swap_map[offset] of all swap
> slots inside the huge swap cluster.
> And this huge swap cluster will be split if its PMD swap mapping
> count is 0.
>
> Signed-off-by: "Huang, Ying" <ying.huang@xxxxxxxxx>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Shaohua Li <shli@xxxxxxxxxx>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> Cc: Minchan Kim <minchan@xxxxxxxxxx>
> Cc: Rik van Riel <riel@xxxxxxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
> Cc: Zi Yan <zi.yan@xxxxxxxxxxxxxx>
> Cc: Daniel Jordan <daniel.m.jordan@xxxxxxxxxx>
> ---
>  include/linux/huge_mm.h |   5 +
>  include/linux/swap.h    |   9 +-
>  mm/memory.c             |   2 +-
>  mm/rmap.c               |   2 +-
>  mm/swap_state.c         |   2 +-
>  mm/swapfile.c           | 287 +++++++++++++++++++++++++++++++++---------------
>  6 files changed, 214 insertions(+), 93 deletions(-)

I'm probably missing some background, but I find the patch hard to
read.  Can you disseminate some of this patch changelog into kernel-doc
commentary so it's easier to follow which helpers do what relative to
THP swap?

> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index d3bbf6bea9e9..213d32e57c39 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -80,6 +80,11 @@ extern struct kobj_attribute shmem_enabled_attr;
>  #define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
>  #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
>
> +static inline bool thp_swap_supported(void)
> +{
> +	return IS_ENABLED(CONFIG_THP_SWAP);
> +}
> +
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define HPAGE_PMD_SHIFT PMD_SHIFT
>  #define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index f73eafcaf4e9..57aa655ab27d 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -451,8 +451,8 @@ extern swp_entry_t get_swap_page_of_type(int);
>  extern int get_swap_pages(int n, bool cluster, swp_entry_t swp_entries[]);
>  extern int add_swap_count_continuation(swp_entry_t, gfp_t);
>  extern void swap_shmem_alloc(swp_entry_t);
> -extern int swap_duplicate(swp_entry_t);
> -extern int swapcache_prepare(swp_entry_t);
> +extern int swap_duplicate(swp_entry_t *entry, bool cluster);

This patch introduces a new flag to swap_duplicate(), but all its
callers still pass 'false', so why does this patch change the
argument?  It seems this change belongs in another patch.

> +extern int swapcache_prepare(swp_entry_t entry, bool cluster);

Rather than add a cluster flag to these helpers, can the swp_entry_t
carry the cluster flag directly?
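For example -- this is only a userspace toy, not the kernel's real
swp_entry_t encoding, and the bit position plus the
swp_entry_mkcluster()/swp_entry_is_cluster() names are invented here --
something along these lines would let swap_duplicate() and
swapcache_prepare() keep a single-argument signature:

/*
 * Standalone sketch: pack a "huge cluster" software bit into the swap
 * entry value itself.  Bit layout and helper names are hypothetical.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { unsigned long val; } swp_entry_t;

#define SWP_OFFSET_BITS		24
#define SWP_TYPE_BITS		5
#define SWP_CLUSTER_FLAG	(1UL << (SWP_OFFSET_BITS + SWP_TYPE_BITS))

static swp_entry_t swp_entry(unsigned long type, unsigned long offset)
{
	swp_entry_t e = { .val = (type << SWP_OFFSET_BITS) | offset };
	return e;
}

static unsigned long swp_type(swp_entry_t e)
{
	return (e.val >> SWP_OFFSET_BITS) & ((1UL << SWP_TYPE_BITS) - 1);
}

static unsigned long swp_offset(swp_entry_t e)
{
	return e.val & ((1UL << SWP_OFFSET_BITS) - 1);
}

/* Mark/query the hypothetical "refers to a huge swap cluster" bit. */
static swp_entry_t swp_entry_mkcluster(swp_entry_t e)
{
	e.val |= SWP_CLUSTER_FLAG;
	return e;
}

static bool swp_entry_is_cluster(swp_entry_t e)
{
	return e.val & SWP_CLUSTER_FLAG;
}

/* The duplicate helper no longer needs an extra 'cluster' argument. */
static int swap_duplicate(swp_entry_t entry)
{
	printf("duplicate %s mapping: type=%lu offset=%lu\n",
	       swp_entry_is_cluster(entry) ? "PMD" : "PTE",
	       swp_type(entry), swp_offset(entry));
	return 0;
}

int main(void)
{
	swp_entry_t pte = swp_entry(1, 12345);
	swp_entry_t pmd = swp_entry_mkcluster(pte);

	assert(swp_type(pmd) == 1 && swp_offset(pmd) == 12345);
	swap_duplicate(pte);
	swap_duplicate(pmd);
	return 0;
}

Whether a spare software bit is actually available in the
arch-independent encoding is a separate question, of course.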
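Separately, to check that I'm reading the reference-counting scheme in
the changelog correctly, here is how I model it (again a userspace toy
with my own names, and SWAP_HAS_CACHE left out):

/*
 * Toy model of the huge swap cluster accounting described above:
 * cluster count = SWAPFILE_CLUSTER + PMD swap mapping count, and
 * swap_map[i] = PTE map count + PMD map count for each slot.
 */
#include <assert.h>
#include <stdio.h>

#define HPAGE_PMD_NR		512	/* assuming 2M THP with 4K pages */
#define SWAPFILE_CLUSTER	HPAGE_PMD_NR

struct huge_cluster {
	unsigned int count;			/* cluster_count() */
	unsigned char swap_map[SWAPFILE_CLUSTER];
};

/* A freshly swapped-out THP: one PMD swap mapping, no PTE mappings. */
static void huge_cluster_init(struct huge_cluster *c)
{
	c->count = SWAPFILE_CLUSTER + 1;
	for (int i = 0; i < SWAPFILE_CLUSTER; i++)
		c->swap_map[i] = 1;
}

/*
 * Duplicating the PMD swap mapping (e.g. at fork) bumps the cluster
 * count once and swap_map[] of every slot in the cluster.
 */
static void pmd_swap_duplicate(struct huge_cluster *c)
{
	c->count++;
	for (int i = 0; i < SWAPFILE_CLUSTER; i++)
		c->swap_map[i]++;
}

static unsigned int pmd_map_count(struct huge_cluster *c)
{
	return c->count - SWAPFILE_CLUSTER;
}

int main(void)
{
	struct huge_cluster c;

	huge_cluster_init(&c);
	pmd_swap_duplicate(&c);			/* e.g. fork() */
	assert(pmd_map_count(&c) == 2);
	assert(c.swap_map[0] == 2 && c.swap_map[HPAGE_PMD_NR - 1] == 2);
	printf("cluster count %u, PMD mappings %u\n",
	       c.count, pmd_map_count(&c));
	return 0;
}

If that matches your intent, having roughly that much explanation in
kernel-doc next to the helpers would go a long way.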