Re: [RFC][PATCH] sched: Use lightweight hazard pointers to grab lazy mms

Excerpts from Peter Zijlstra's message of June 17, 2021 7:10 pm:
> On Thu, Jun 17, 2021 at 11:08:03AM +0200, Peter Zijlstra wrote:
>> On Wed, Jun 16, 2021 at 10:32:15PM -0700, Andy Lutomirski wrote:
> 
>> --- a/arch/x86/include/asm/mmu.h
>> +++ b/arch/x86/include/asm/mmu.h
>> @@ -66,4 +66,9 @@ typedef struct {
>>  void leave_mm(int cpu);
>>  #define leave_mm leave_mm
>>  
>> +/* On x86, mm_cpumask(mm) contains all CPUs that might be lazily using mm */
>> +#define for_each_possible_lazymm_cpu(cpu, mm) \
>> +	for_each_cpu((cpu), mm_cpumask((mm)))
>> +
>> +
>>  #endif /* _ASM_X86_MMU_H */
> 
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 8ac693d542f6..e102ec53c2f6 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -19,6 +19,7 @@
>>  
> 
>> +
>> +#ifndef for_each_possible_lazymm_cpu
>> +#define for_each_possible_lazymm_cpu(cpu, mm) for_each_online_cpu((cpu))
>> +#endif
>> +
> 
> Why can't the x86 implementation be the default? IIRC the problem with
> mm_cpumask() is that (some) architectures don't clear bits, but IIRC
> they all should be setting bits, or were there archs that didn't even do
> that?

There are: alpha, arm64, and hexagon (of the SMP-supporting ones), AFAICT.

I have a patch for alpha though (it's 2 lines :))
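
For illustration only, a minimal sketch of what an arch needs in order to
be eligible for the mm_cpumask() variant of for_each_possible_lazymm_cpu():
the bit just has to be set whenever a CPU starts using the mm, typically
from switch_mm(). This is not the actual alpha patch, just the shape of the
idea, and the function name below is a placeholder:

	#include <linux/cpumask.h>
	#include <linux/mm_types.h>
	#include <linux/smp.h>

	/*
	 * Illustrative sketch, not the real alpha change: mark the current
	 * CPU in mm_cpumask() when it switches to @next, so that a
	 * mm_cpumask()-based for_each_possible_lazymm_cpu() cannot miss a
	 * lazy user of the mm.
	 */
	static inline void example_switch_mm(struct mm_struct *prev,
					     struct mm_struct *next,
					     struct task_struct *tsk)
	{
		unsigned int cpu = smp_processor_id();

		if (!cpumask_test_cpu(cpu, mm_cpumask(next)))
			cpumask_set_cpu(cpu, mm_cpumask(next));

		/* arch-specific context switch work would follow here */
	}

The comment in the x86 hunk above spells out the invariant this relies on:
mm_cpumask(mm) must contain (at least) all CPUs that might be lazily using mm.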

Thanks,
Nick




