On 2022-11-03 09:36, Frederic Weisbecker wrote:
On Thu, Nov 03, 2022 at 09:13:13PM +0800, Pingfan Liu wrote:
To clarify up front: this issue was found purely by code inspection; it has not been observed in practice.
Scene:
__srcu_read_(un)lock() uses the percpu variables srcu_(un)lock_count[2].
Normally, percpu variables avoid the non-atomic read-modify-write (RMW)
problem, but in some rare cases they might not.
Suppose __srcu_read_lock() runs on cpuX. The statement
this_cpu_inc(ssp->sda->srcu_lock_count[idx]);
can be decomposed into two steps:
-1. compute the address of this_cpu_ptr(ssp->sda)->srcu_lock_count[idx],
denoted addressX, and let unsigned long *pX = addressX;
-2. *pX = *pX + 1;
That is not supposed to happen:
* The weak (generic) version of this_cpu_inc() disables interrupts around the whole operation.
* x86 adds directly to gs/fs-based percpu memory in a single instruction.
* arm64, loongarch and s390 disable preemption.
This is a fundamental constraint that every this_cpu_*() ops implementation must honor.
I concur with Frederic: this is guaranteed by the this_cpu_*() API.
There is no issue there.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com