On 2021/4/14 9:17, Huang, Ying wrote:
> Miaohe Lin <linmiaohe@xxxxxxxxxx> writes:
> 
>> On 2021/4/12 15:24, Huang, Ying wrote:
>>> "Huang, Ying" <ying.huang@xxxxxxxxx> writes:
>>> 
>>>> Miaohe Lin <linmiaohe@xxxxxxxxxx> writes:
>>>> 
>>>>> We will use percpu-refcount to serialize against concurrent swapoff. This
>>>>> patch adds the percpu_ref support for later fixup.
>>>>> 
>>>>> Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
>>>>> ---
>>>>>  include/linux/swap.h |  2 ++
>>>>>  mm/swapfile.c        | 25 ++++++++++++++++++++++---
>>>>>  2 files changed, 24 insertions(+), 3 deletions(-)
>>>>> 
>>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>>>> index 144727041e78..849ba5265c11 100644
>>>>> --- a/include/linux/swap.h
>>>>> +++ b/include/linux/swap.h
>>>>> @@ -240,6 +240,7 @@ struct swap_cluster_list {
>>>>>   * The in-memory structure used to track swap areas.
>>>>>   */
>>>>>  struct swap_info_struct {
>>>>> +	struct percpu_ref users;	/* serialization against concurrent swapoff */
>>>>>  	unsigned long	flags;		/* SWP_USED etc: see above */
>>>>>  	signed short	prio;		/* swap priority of this type */
>>>>>  	struct plist_node list;		/* entry in swap_active_head */
>>>>> @@ -260,6 +261,7 @@ struct swap_info_struct {
>>>>>  	struct block_device *bdev;	/* swap device or bdev of swap file */
>>>>>  	struct file *swap_file;		/* seldom referenced */
>>>>>  	unsigned int old_block_size;	/* seldom referenced */
>>>>> +	struct completion comp;		/* seldom referenced */
>>>>>  #ifdef CONFIG_FRONTSWAP
>>>>>  	unsigned long *frontswap_map;	/* frontswap in-use, one bit per page */
>>>>>  	atomic_t frontswap_pages;	/* frontswap pages in-use counter */
>>>>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>>>>> index 149e77454e3c..724173cd7d0c 100644
>>>>> --- a/mm/swapfile.c
>>>>> +++ b/mm/swapfile.c
>>>>> @@ -39,6 +39,7 @@
>>>>>  #include <linux/export.h>
>>>>>  #include <linux/swap_slots.h>
>>>>>  #include <linux/sort.h>
>>>>> +#include <linux/completion.h>
>>>>> 
>>>>>  #include <asm/tlbflush.h>
>>>>>  #include <linux/swapops.h>
>>>>> @@ -511,6 +512,15 @@ static void swap_discard_work(struct work_struct *work)
>>>>>  	spin_unlock(&si->lock);
>>>>>  }
>>>>> 
>>>>> +static void swap_users_ref_free(struct percpu_ref *ref)
>>>>> +{
>>>>> +	struct swap_info_struct *si;
>>>>> +
>>>>> +	si = container_of(ref, struct swap_info_struct, users);
>>>>> +	complete(&si->comp);
>>>>> +	percpu_ref_exit(&si->users);
>>>> 
>>>> Because percpu_ref_exit() is used, we cannot use percpu_ref_tryget() in
>>>> get_swap_device(), better to add comments there.
>>> 
>>> I just noticed that the comments of percpu_ref_tryget_live() says,
>>> 
>>>  * This function is safe to call as long as @ref is between init and exit.
>>> 
>>> While we need to call get_swap_device() almost at any time, so it's
>>> better to avoid to call percpu_ref_exit() at all.  This will waste some
>>> memory, but we need to follow the API definition to avoid potential
>>> issues in the long term.
>> 
>> I have to admit that I'am not really familiar with percpu_ref. So I read the
>> implementation code of the percpu_ref and found percpu_ref_tryget_live() could
>> be called after exit now. But you're right we need to follow the API definition
>> to avoid potential issues in the long term.
>> 
>>> 
>>> And we need to call percpu_ref_init() before insert the swap_info_struct
>>> into the swap_info[].
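(Just to make sure I follow the "between init and exit" rule: the get/put pair
would then look roughly like the sketch below, where si->users is initialized
once when the swap_info_struct is created and never percpu_ref_exit()'ed while
it is reachable through swap_info[].  Only a sketch, not the final code -- the
bad-entry/offset checks are omitted and the memory ordering question is
discussed further below.)

struct swap_info_struct *get_swap_device(swp_entry_t entry)
{
	struct swap_info_struct *si = swp_swap_info(entry);

	if (!si || !percpu_ref_tryget_live(&si->users))
		return NULL;
	/*
	 * An smp_rmb() may still be needed here if percpu_ref_tryget_live()
	 * cannot be given ACQUIRE semantics -- see the ordering discussion
	 * below.
	 */
	return si;
}

static inline void put_swap_device(struct swap_info_struct *si)
{
	percpu_ref_put(&si->users);
}
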
>> 
>> If we remove the call to percpu_ref_exit(), we should not use percpu_ref_init()
>> here because *percpu_ref->data is assumed to be NULL* in percpu_ref_init() while
>> this is not the case as we do not call percpu_ref_exit(). Maybe percpu_ref_reinit()
>> or percpu_ref_resurrect() will do the work.
>> 
>> One more thing, how could I distinguish the killed percpu_ref from newly allocated one?
>> It seems percpu_ref_is_dying is only safe to call when @ref is between init and exit.
>> Maybe I could do this in alloc_swap_info()?
> 
> Yes.  In alloc_swap_info(), you can distinguish newly allocated and
> reused swap_info_struct.
> 
>>> 
>>>>> +}
>>>>> +
>>>>>  static void alloc_cluster(struct swap_info_struct *si, unsigned long idx)
>>>>>  {
>>>>>  	struct swap_cluster_info *ci = si->cluster_info;
>>>>> @@ -2500,7 +2510,7 @@ static void enable_swap_info(struct swap_info_struct *p, int prio,
>>>>>  	 * Guarantee swap_map, cluster_info, etc. fields are valid
>>>>>  	 * between get/put_swap_device() if SWP_VALID bit is set
>>>>>  	 */
>>>>> -	synchronize_rcu();
>>>>> +	percpu_ref_reinit(&p->users);
>>>> 
>>>> Although the effect is same, I think it's better to use
>>>> percpu_ref_resurrect() here to improve code readability.
>>> 
>>> Check the original commit description for commit eb085574a752 "mm, swap:
>>> fix race between swapoff and some swap operations" and discussion email
>>> thread as follows again,
>>> 
>>> https://lore.kernel.org/linux-mm/20171219053650.GB7829@xxxxxxxxxxxxxxxxxx/
>>> 
>>> I found that the synchronize_rcu() here is to avoid to call smp_rmb() or
>>> smp_load_acquire() in get_swap_device().  Now we will use
>>> percpu_ref_tryget_live() in get_swap_device(), so we will need to add
>>> the necessary memory barrier, or make sure percpu_ref_tryget_live() has
>>> ACQUIRE semantics.  Per my understanding, we need to change
>>> percpu_ref_tryget_live() for that.
>>> 
>> 
>> Do you mean the below scene is possible?
>> 
>> cpu1
>> swapon()
>>   ...
>>   percpu_ref_init
>>   ...
>>   setup_swap_info
>>   /* smp_store_release() is inside percpu_ref_reinit */
>>   percpu_ref_reinit
> 
> spin_unlock() has RELEASE semantics already.
> 
>>   ...
>> 
>> cpu2
>> get_swap_device()
>>   /* ignored smp_rmb() */
>>   percpu_ref_tryget_live
> 
> Some kind of ACQUIRE is required here to guarantee the refcount is
> checked before fetching the other fields of swap_info_struct.  I have
> sent out a RFC patch to mailing list to discuss this.

Many thanks. But we may still need to add a smp_rmb() in get_swap_device()
in case we can't add ACQUIRE semantics to the refcount operation.

> 
>> ...
>> 
>> There is indeed missing smp_rmb() in percpu_ref_tryget_live. So I think the above
>> scene possible and we should fix this.
>> 
>>>>>  	spin_lock(&swap_lock);
>>>>>  	spin_lock(&p->lock);
>>>>>  	_enable_swap_info(p);
>>>>> @@ -2621,11 +2631,13 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
>>>>>  	p->flags &= ~SWP_VALID;	/* mark swap device as invalid */
>>>>>  	spin_unlock(&p->lock);
>>>>>  	spin_unlock(&swap_lock);
>>>>> +
>>>>> +	percpu_ref_kill(&p->users);
>>>>>  	/*
>>>>>  	 * wait for swap operations protected by get/put_swap_device()
>>>>>  	 * to complete
>>>>>  	 */
>>>>> -	synchronize_rcu();
>>>>> +	wait_for_completion(&p->comp);
>>>> 
>>>> Better to move percpu_ref_kill() after the comments. And maybe revise
>>>> the comments.
>>> 
>>> After reading the original commit description as above, I found that we
>>> need synchronize_rcu() here to protect the accessing to the swap cache
>>> data structure.
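(To keep the two different waits straight, the swapoff-side sequence being
discussed is roughly the sketch below; whether the synchronize_rcu() stays
next to the wait_for_completion() is exactly the open point, and the comments
are only my current understanding.)

	percpu_ref_kill(&p->users);	/* no new percpu_ref_tryget_live() can succeed */
	synchronize_rcu();		/* existing readers of the swap cache data
					 * structures? (under discussion) */
	wait_for_completion(&p->comp);	/* the last reference has been dropped and
					 * swap_users_ref_free() has called complete() */
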
>>> Because there's call_rcu() during percpu_ref_kill(), it
>>> appears OK to keep the synchronize_rcu() here.  And we need to revise
>>> the comments to make it clear what is protected by which operation.
>>> 
>> 
>> Per my understanding, percpu_ref->data->release is called only after the refcnt
>> reaches 0, including a full grace period has elapsed or refcnt won't be 0.
>> wait_for_completion() is used for waiting the last refcnt being released. So
>> synchronize_rcu() is not necessary here?
> 
> Then we will depends on the implementation of percpu_ref. If it changed
> its implementation, it may take long to find out we need to change the
> code here. I guess in most cases, even adding a synchronize_rcu() here,
> we still only need to wait for one grace period. So the overhead to
> call synchronize_rcu() is low here. And the code is easier to be
> maintained.
> 

Sounds reasonable. Will do. Thanks.

> Best Regards,
> Huang, Ying
> .
> 
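Following up on the alloc_swap_info() point above, the direction I currently
have in mind for telling a newly allocated swap_info_struct apart from a
reused one is roughly the sketch below. This is only a rough idea, not tested;
details such as starting the ref dead with PERCPU_REF_INIT_DEAD and where
exactly init_completion() ends up may well change.

	/*
	 * In alloc_swap_info(), only for a brand new entry, before it becomes
	 * visible in swap_info[].  A reused (previously swapped-off) entry keeps
	 * its killed ref and is revived later instead of being re-initialized.
	 */
	p = kvzalloc(struct_size(p, avail_lists, nr_node_ids), GFP_KERNEL);
	if (!p)
		return ERR_PTR(-ENOMEM);
	if (percpu_ref_init(&p->users, swap_users_ref_free,
			    PERCPU_REF_INIT_DEAD, GFP_KERNEL)) {
		kvfree(p);
		return ERR_PTR(-ENOMEM);
	}

	/*
	 * On the swapon path, p->comp must be (re)initialized before the device
	 * can be swapped off again; exact spot to be decided.
	 */
	init_completion(&p->comp);

	/*
	 * In enable_swap_info(), for both new and reused entries, after
	 * setup_swap_info() has made swap_map, cluster_info, etc. valid.
	 */
	percpu_ref_resurrect(&p->users);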