On 2019/7/4 6:17 PM, xiaoqiang.zhao wrote:
Resend as plain-text to linux-rt-users list.
On 2019/7/4 1:50 PM, xiaoqiang.zhao wrote:
On 2019/7/3 7:42 PM, Sebastian Andrzej Siewior wrote:
On 2019-06-26 15:35:04 [+0800], xiaoqiang.zhao wrote:
Hi, guys:
Hi,
Thanks for your reply ;-)
2) -> __schedule_bug (leads to a kernel page-fault exception, OOPS!!)
Before schedule, we have called preempt_disable twice; this will
definitely bump preempt_count to 2, and
something probably disabled preemption before that
I feel this does not make sense. In my opinion, preempt_count must
be zero before we call 'schedule()'; otherwise, in_atomic_preempt_off
will return true and trigger __schedule_bug. If we have already
called preempt_disable, we may be in atomic context and should not
call schedule, right?
in_atomic_preempt_off will fail.
I did not figure out WHY we call schedule inside
rt_spin_lock_slowlock,
and under what condition this call is correct?
If the lock is already acquired, you schedule out and wait until it is
available again.
Got it.
Finally, this issue was resolved by reverting commit
80127a39681bd68c959f0953f84a830cbd7c3b1c ("locking/percpu-rwsem: Optimize
readers and reduce global impact"). That commit introduces a
preempt_disable() call in the percpu_up_read() path and can NOT
coexist with the 4.4.38-rt49 preempt-rt patch set.
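For reference, the reader-side unlock that commit adds looks roughly like the sketch below (paraphrased from memory, not an exact quote of the commit; treat the details as an approximation):

```
static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
{
        preempt_disable();
        if (likely(rcu_sync_is_idle(&sem->rss)))
                __this_cpu_dec(*sem->read_count);   /* fast path */
        else
                __percpu_up_read(sem);              /* slow path: may take a lock */
        preempt_enable();
}
```

On a preempt-rt kernel, locks reached from the slow path can be converted to sleeping rt-mutexes, so rt_spin_lock_slowlock() may end up in schedule() while preempt_count is still elevated by the preempt_disable() above, which is presumably how the __schedule_bug splat in this thread was triggered.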
Hope this information is useful to anyone who encounters the same
problem ;-)
Thanks Sebastian !