Re: [merged mm-stable] mm-zswap-add-zswap_never_enabled.patch removed from -mm tree

On Tue, Jun 25, 2024 at 5:01 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>
> The quilt patch titled
>      Subject: mm: zswap: add zswap_never_enabled()
> has been removed from the -mm tree.  Its filename was
>      mm-zswap-add-zswap_never_enabled.patch
>
> This patch was dropped because it was merged into the mm-stable branch
> of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
>
> ------------------------------------------------------
> From: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> Subject: mm: zswap: add zswap_never_enabled()
> Date: Tue, 11 Jun 2024 02:45:15 +0000
>
> Add zswap_never_enabled() to skip the xarray lookup in zswap_load() if
> zswap was never enabled on the system.  It is implemented using static
> branches for efficiency, as enabling zswap should be a rare event.  This
> could shave some cycles off zswap_load() when CONFIG_ZSWAP is used but
> zswap is never enabled.
>
> However, the real motivation behind this patch is two-fold:
> - Incoming large folio swapin work will need to fallback to order-0
>   folios if zswap was ever enabled, because any part of the folio could be
>   in zswap, until proper handling of large folios with zswap is added.
>
> - A warning and recovery attempt will be added in a following change in
>   case the above is not done correctly.  Zswap will fail the read if
>   the folio is large and zswap was ever enabled.
>
> Expose zswap_never_enabled() in the header for the swapin work to use
> it later.
>
> [yosryahmed@xxxxxxxxxx: expose zswap_never_enabled() in the header]
>   Link: https://lkml.kernel.org/r/Zmjf0Dr8s9xSW41X@xxxxxxxxxx
> Link: https://lkml.kernel.org/r/20240611024516.1375191-2-yosryahmed@xxxxxxxxxx
> Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> Reviewed-by: Nhat Pham <nphamcs@xxxxxxxxx>
> Cc: Barry Song <baohua@xxxxxxxxxx>
> Cc: Chengming Zhou <chengming.zhou@xxxxxxxxx>
> Cc: Chris Li <chrisl@xxxxxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
>  include/linux/zswap.h |    6 ++++++
>  mm/zswap.c            |   10 ++++++++++
>  2 files changed, 16 insertions(+)
>
> --- a/include/linux/zswap.h~mm-zswap-add-zswap_never_enabled
> +++ a/include/linux/zswap.h
> @@ -36,6 +36,7 @@ void zswap_memcg_offline_cleanup(struct
>  void zswap_lruvec_state_init(struct lruvec *lruvec);
>  void zswap_folio_swapin(struct folio *folio);
>  bool zswap_is_enabled(void);
> +bool zswap_never_enabled(void);
>  #else
>
>  struct zswap_lruvec_state {};
> @@ -64,6 +65,11 @@ static inline bool zswap_is_enabled(void
>  {
>         return false;
>  }
> +
> +static inline bool zswap_never_enabled(void)
> +{
> +       return false;

Hi Yosry & Andrew,

Sorry for reporting this error so late: the !CONFIG_ZSWAP stub of
zswap_never_enabled() above should "return true", not "return false".

Since the patch is already in mm-stable, is there still a way to fix it?


> +}
>
>  #endif
>
> --- a/mm/zswap.c~mm-zswap-add-zswap_never_enabled
> +++ a/mm/zswap.c
> @@ -83,6 +83,7 @@ static bool zswap_pool_reached_full;
>  static int zswap_setup(void);
>
>  /* Enable/disable zswap */
> +static DEFINE_STATIC_KEY_MAYBE(CONFIG_ZSWAP_DEFAULT_ON, zswap_ever_enabled);
>  static bool zswap_enabled = IS_ENABLED(CONFIG_ZSWAP_DEFAULT_ON);
>  static int zswap_enabled_param_set(const char *,
>                                    const struct kernel_param *);
> @@ -136,6 +137,11 @@ bool zswap_is_enabled(void)
>         return zswap_enabled;
>  }
>
> +bool zswap_never_enabled(void)
> +{
> +       return !static_branch_maybe(CONFIG_ZSWAP_DEFAULT_ON, &zswap_ever_enabled);
> +}
> +
>  /*********************************
>  * data structures
>  **********************************/
> @@ -1557,6 +1563,9 @@ bool zswap_load(struct folio *folio)
>
>         VM_WARN_ON_ONCE(!folio_test_locked(folio));
>
> +       if (zswap_never_enabled())
> +               return false;
> +
>         /*
>          * When reading into the swapcache, invalidate our entry. The
>          * swapcache can be the authoritative owner of the page and
> @@ -1735,6 +1744,7 @@ static int zswap_setup(void)
>                         zpool_get_type(pool->zpools[0]));
>                 list_add(&pool->list, &zswap_pools);
>                 zswap_has_pool = true;
> +               static_branch_enable(&zswap_ever_enabled);
>         } else {
>                 pr_err("pool creation failed\n");
>                 zswap_enabled = false;
> _
>
> Patches currently in -mm which might be from yosryahmed@xxxxxxxxxx are
>
>




