The patch titled
     Subject: z3fold: remove preempt disabled sections for RT
has been added to the -mm tree.  Its filename is
     z3fold-remove-preempt-disabled-sections-for-rt.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/z3fold-remove-preempt-disabled-sections-for-rt.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/z3fold-remove-preempt-disabled-sections-for-rt.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Vitaly Wool <vitaly.wool@xxxxxxxxxxxx>
Subject: z3fold: remove preempt disabled sections for RT

Replace get_cpu_ptr() with migrate_disable()+this_cpu_ptr().
get_cpu_ptr() disables preemption, but on PREEMPT_RT spinlocks are
sleeping locks that must not be taken with preemption disabled;
migrate_disable() merely pins the task to the current CPU while
leaving it preemptible, so those locks can be taken.

Signed-off-by: Mike Galbraith <efault@xxxxxx>
Link: https://lkml.kernel.org/r/20201209145151.18994-3-vitaly.wool@xxxxxxxxxxxx
Signed-off-by: Vitaly Wool <vitaly.wool@xxxxxxxxxxxx>
Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/z3fold.c |   17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

--- a/mm/z3fold.c~z3fold-remove-preempt-disabled-sections-for-rt
+++ a/mm/z3fold.c
@@ -623,14 +623,16 @@ static inline void add_to_unbuddied(stru
 {
 	if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0 ||
 			zhdr->middle_chunks == 0) {
-		struct list_head *unbuddied = get_cpu_ptr(pool->unbuddied);
-
+		struct list_head *unbuddied;
 		int freechunks = num_free_chunks(zhdr);
+
+		migrate_disable();
+		unbuddied = this_cpu_ptr(pool->unbuddied);
 		spin_lock(&pool->lock);
 		list_add(&zhdr->buddy, &unbuddied[freechunks]);
 		spin_unlock(&pool->lock);
 		zhdr->cpu = smp_processor_id();
-		put_cpu_ptr(pool->unbuddied);
+		migrate_enable();
 	}
 }
 
@@ -880,8 +882,9 @@ static inline struct z3fold_header *__z3
 	int chunks = size_to_chunks(size), i;
 
 lookup:
+	migrate_disable();
 	/* First, try to find an unbuddied z3fold page. */
-	unbuddied = get_cpu_ptr(pool->unbuddied);
+	unbuddied = this_cpu_ptr(pool->unbuddied);
 	for_each_unbuddied_list(i, chunks) {
 		struct list_head *l = &unbuddied[i];
 
@@ -899,7 +902,7 @@ lookup:
 		    !z3fold_page_trylock(zhdr)) {
 			spin_unlock(&pool->lock);
 			zhdr = NULL;
-			put_cpu_ptr(pool->unbuddied);
+			migrate_enable();
 			if (can_sleep)
 				cond_resched();
 			goto lookup;
@@ -913,7 +916,7 @@ lookup:
 		    test_bit(PAGE_CLAIMED, &page->private)) {
 			z3fold_page_unlock(zhdr);
 			zhdr = NULL;
-			put_cpu_ptr(pool->unbuddied);
+			migrate_enable();
 			if (can_sleep)
 				cond_resched();
 			goto lookup;
@@ -928,7 +931,7 @@ lookup:
 		kref_get(&zhdr->refcount);
 		break;
 	}
-	put_cpu_ptr(pool->unbuddied);
+	migrate_enable();
 
 	if (!zhdr) {
 		int cpu;
_

Patches currently in -mm which might be from vitaly.wool@xxxxxxxxxxxx are

z3fold-simplify-freeing-slots.patch
z3fold-stricter-locking-and-more-careful-reclaim.patch
z3fold-remove-preempt-disabled-sections-for-rt.patch
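
As background to the change above: get_cpu_ptr() returns a per-CPU
pointer with preemption disabled for the whole section, while
migrate_disable() only pins the task to the current CPU and leaves it
preemptible, which is what makes the RT-converted (sleeping) spinlocks
legal inside the section.  The sketch below shows the pattern in
isolation; struct foo_pool, its percpu_lists member and foo_add() are
hypothetical stand-ins rather than z3fold code, and only the kernel
APIs already named in the patch are real:

	#include <linux/list.h>
	#include <linux/percpu.h>
	#include <linux/preempt.h>
	#include <linux/spinlock.h>

	/* Hypothetical pool with one per-CPU array of list heads. */
	struct foo_pool {
		spinlock_t lock;
		struct list_head __percpu *percpu_lists;
	};

	/*
	 * Old pattern, broken on PREEMPT_RT because spin_lock() maps to
	 * a sleeping lock there and preemption is disabled throughout:
	 *
	 *	struct list_head *l = get_cpu_ptr(pool->percpu_lists);
	 *	spin_lock(&pool->lock);
	 *	...
	 *	spin_unlock(&pool->lock);
	 *	put_cpu_ptr(pool->percpu_lists);
	 */
	static void foo_add(struct foo_pool *pool, struct list_head *node)
	{
		struct list_head *l;

		migrate_disable();	/* pin to this CPU, stay preemptible */
		l = this_cpu_ptr(pool->percpu_lists);
		spin_lock(&pool->lock);	/* may sleep on RT -- now allowed */
		list_add(node, l);
		spin_unlock(&pool->lock);
		migrate_enable();
	}

Note that migrate_disable() is cheaper than it looks for this use:
the per-CPU pointer stays stable because the task cannot change CPUs,
even though other tasks may still run on (and touch the per-CPU data
of) the same CPU -- which is why the list is still guarded by
pool->lock rather than by the section itself.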