Re: [RFC PATCH V1 04/13] mm: Create a separate kernel thread for migration

On Wed, 19 Mar 2025 19:30:19 +0000
Raghavendra K T <raghavendra.kt@xxxxxxx> wrote:

> Having independent thread helps in:
>  - Alleviating the need for multiple scanning threads
>  - Aids to control batch migration (TBD)
>  - Migration throttling (TBD)
> 
A few comments on things I noticed whilst reading through.

Jonathan

> Signed-off-by: Raghavendra K T <raghavendra.kt@xxxxxxx>
> ---
>  mm/kmmscand.c | 157 +++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 154 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/kmmscand.c b/mm/kmmscand.c
> index a76a58bf37b2..6e96cfab5b85 100644
> --- a/mm/kmmscand.c
> +++ b/mm/kmmscand.c

>  /* Per folio information used for migration */
>  struct kmmscand_migrate_info {
>  	struct list_head migrate_node;
> @@ -101,6 +126,13 @@ static int kmmscand_has_work(void)
>  	return !list_empty(&kmmscand_scan.mm_head);
>  }
>  
> +static int kmmmigrated_has_work(void)
> +{
> +	if (!list_empty(&kmmscand_migrate_list.migrate_head))
> +		return true;
> +	return false;
If it isn't getting more complex later, this can just be
	return !list_empty(&kmmscand_migrate_list.migrate_head);
or indeed, that condition could go directly at the caller (see the
sketch below).

> +}
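
As a minimal sketch (untested; also switching the return type to bool,
which the true/false returns suggest was intended anyway):

	static bool kmmmigrated_has_work(void)
	{
		return !list_empty(&kmmscand_migrate_list.migrate_head);
	}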


>  static inline bool is_valid_folio(struct folio *folio)
>  {
> @@ -238,7 +293,6 @@ static int hot_vma_idle_pte_entry(pte_t *pte,
>  			folio_put(folio);
>  			return 0;
>  		}
> -		/* XXX: Leaking memory. TBD: consume info */
>  		info = kzalloc(sizeof(struct kmmscand_migrate_info), GFP_NOWAIT);
>  		if (info && scanctrl) {
>  
> @@ -282,6 +336,28 @@ static inline int kmmscand_test_exit(struct mm_struct *mm)
>  	return atomic_read(&mm->mm_users) == 0;
>  }
>  
> +static void kmmscand_cleanup_migration_list(struct mm_struct *mm)
> +{
> +	struct kmmscand_migrate_info *info, *tmp;
> +
> +	spin_lock(&kmmscand_migrate_lock);

Could scatter some guard() magic in here.

> +	if (!list_empty(&kmmscand_migrate_list.migrate_head)) {

Maybe flip the logic of this, unless it is going to get more complex in
future patches.  That way, with guard() handling the spinlock, you can
just return early when there is nothing to do; see the sketch after the
quoted function.

> +		if (mm == READ_ONCE(kmmscand_cur_migrate_mm)) {
> +			/* A folio in this mm is being migrated. wait */
> +			WRITE_ONCE(kmmscand_migration_list_dirty, true);
> +		}
> +
> +		list_for_each_entry_safe(info, tmp, &kmmscand_migrate_list.migrate_head,
> +			migrate_node) {
> +			if (info && (info->mm == mm)) {
> +				info->mm = NULL;
> +				WRITE_ONCE(kmmscand_migration_list_dirty, true);
> +			}
> +		}
> +	}
> +	spin_unlock(&kmmscand_migrate_lock);
> +}
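
Something like this, with the check flipped to give an early return
(untested sketch; guard() is from <linux/cleanup.h>, and I have also
dropped the NULL check on info, since list_for_each_entry_safe() never
yields a NULL entry):

	static void kmmscand_cleanup_migration_list(struct mm_struct *mm)
	{
		struct kmmscand_migrate_info *info, *tmp;

		guard(spinlock)(&kmmscand_migrate_lock);

		if (list_empty(&kmmscand_migrate_list.migrate_head))
			return;

		if (mm == READ_ONCE(kmmscand_cur_migrate_mm)) {
			/* A folio in this mm is being migrated. Wait. */
			WRITE_ONCE(kmmscand_migration_list_dirty, true);
		}

		list_for_each_entry_safe(info, tmp,
					 &kmmscand_migrate_list.migrate_head,
					 migrate_node) {
			if (info->mm == mm) {
				info->mm = NULL;
				WRITE_ONCE(kmmscand_migration_list_dirty,
					   true);
			}
		}
	}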

>  static unsigned long kmmscand_scan_mm_slot(void)
>  {
>  	bool next_mm = false;
> @@ -347,9 +429,17 @@ static unsigned long kmmscand_scan_mm_slot(void)
>  
>  		if (vma_scanned_size >= kmmscand_scan_size) {
>  			next_mm = true;
> -			/* TBD: Add scanned folios to migration list */
> +			/* Add scanned folios to migration list */
> +			spin_lock(&kmmscand_migrate_lock);
> +			list_splice_tail_init(&kmmscand_scanctrl.scan_list,
> +						&kmmscand_migrate_list.migrate_head);
> +			spin_unlock(&kmmscand_migrate_lock);
>  			break;
>  		}
> +		spin_lock(&kmmscand_migrate_lock);
> +		list_splice_tail_init(&kmmscand_scanctrl.scan_list,
> +					&kmmscand_migrate_list.migrate_head);
> +		spin_unlock(&kmmscand_migrate_lock);

I've stared at this for a while, but if we have entered the conditional
block above, do we splice the now-empty list?
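
If both splices are staying, the duplicated lock/splice/unlock pattern
could perhaps be pulled out into a small helper; a minimal sketch
(helper name made up, the rest taken from the quoted hunk):

	static void kmmscand_splice_to_migrate_list(void)
	{
		/* Hand folios collected by the scanner over to kmmmigrated. */
		spin_lock(&kmmscand_migrate_lock);
		list_splice_tail_init(&kmmscand_scanctrl.scan_list,
				      &kmmscand_migrate_list.migrate_head);
		spin_unlock(&kmmscand_migrate_lock);
	}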

>  	}
>  
>  	if (!vma)



