On 6/19/20 10:24 AM, ebiederm@xxxxxxxxxxxx wrote:
Junxiao Bi <junxiao.bi@xxxxxxxxxx> writes:
Hi Eric,
The patch didn't improve lock contention.
Which raises the question of where the lock contention is coming from,
especially with my first variant, where only the last thread to be reaped
would free up anything in the cache.
Can you comment out the call to proc_flush_pid entirely?
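(Concretely, assuming a tree around v5.7 where release_task() in
kernel/exit.c calls proc_flush_pid() right after dropping tasklist_lock,
the experiment is just a sketch like the hunk below; the surrounding
context lines may differ on your kernel.)

--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ void release_task(struct task_struct *p)
 	write_unlock_irq(&tasklist_lock);
-	proc_flush_pid(thread_pid);
+	/* proc_flush_pid(thread_pid); */	/* disabled only for this contention test */
 	put_pid(thread_pid);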
Still high lock contention. I collected the following hot path:
    74.90%     0.01%  proc_race  [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe
            |
             --74.89%--entry_SYSCALL_64_after_hwframe
                       |
                        --74.88%--do_syscall_64
                                  |
                                  |--69.70%--exit_to_usermode_loop
                                  |          |
                                  |           --69.70%--do_signal
                                  |                     |
                                  |                      --69.69%--get_signal
                                  |                                |
                                  |                                |--56.30%--do_group_exit
                                  |                                |          |
                                  |                                |           --56.30%--do_exit
                                  |                                |                     |
                                  |                                |                     |--27.50%--_raw_write_lock_irq
                                  |                                |                     |          |
                                  |                                |                     |           --27.47%--queued_write_lock_slowpath
                                  |                                |                     |                     |
                                  |                                |                     |                      --27.18%--native_queued_spin_lock_slowpath
                                  |                                |                     |
                                  |                                |                     |--26.10%--release_task.part.20
                                  |                                |                     |          |
                                  |                                |                     |           --25.60%--_raw_write_lock_irq
                                  |                                |                     |                     |
                                  |                                |                     |                      --25.56%--queued_write_lock_slowpath
                                  |                                |                     |                                |
                                  |                                |                     |                                 --25.23%--native_queued_spin_lock_slowpath
                                  |                                |                     |
                                  |                                |                      --0.56%--mmput
                                  |                                |                                |
                                  |                                |                                 --0.55%--exit_mmap
                                  |                                |
                                  |                                 --13.31%--_raw_spin_lock_irq
                                  |                                           |
                                  |                                            --13.28%--native_queued_spin_lock_slowpath
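For what it's worth, the two _raw_write_lock_irq entries above very likely
correspond to the write_lock_irq(&tasklist_lock) calls in the exit path
(exit_notify()/do_exit and release_task()), and the _raw_spin_lock_irq
under get_signal() is most likely sighand->siglock, so this remaining
contention looks like tasklist_lock rather than the proc dentry locks.
The test binary itself is not included in this mail; a minimal,
hypothetical reproducer sketch in the same spirit (many threads in one
process, all killed at once so they exit together) would look roughly
like:

/*
 * Hypothetical reproducer sketch (the real proc_race test is not shown
 * here): fork a child with many idle threads, then SIGKILL the whole
 * thread group so every thread runs get_signal()/do_group_exit() and is
 * reaped via release_task() at roughly the same time.
 *
 * Build: gcc -O2 -pthread proc_race_sketch.c -o proc_race_sketch
 */
#include <pthread.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NTHREADS 1000

static void *idle_thread(void *arg)
{
	pause();		/* park until the fatal signal arrives */
	return NULL;
}

int main(void)
{
	for (;;) {
		pid_t pid = fork();

		if (pid < 0)
			exit(1);
		if (pid == 0) {
			pthread_t tid;

			for (int i = 0; i < NTHREADS; i++)
				pthread_create(&tid, NULL, idle_thread, NULL);
			pause();
			_exit(0);
		}
		sleep(1);		/* let the child spawn its threads */
		kill(pid, SIGKILL);	/* group exit: all threads die together */
		waitpid(pid, NULL, 0);	/* reap, then repeat */
	}
	return 0;
}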
Thanks,
Junxiao.
That will rule out the d_invalidate in proc_flush_pid entirely.
The only candidate I can think of is d_invalidate (aka proc_flush_pid) vs ps.
Eric
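(To exercise that candidate, one option is to run a ps-style /proc walker
alongside the exit storm and see whether the dentry-side contention
reappears in the profile. A minimal, hypothetical scanner sketch, not
anything taken from this thread:

/*
 * Hypothetical ps-like /proc scanner (illustration only): repeatedly
 * walk /proc and read each task's stat file, which performs the same
 * dentry/inode lookups that ps does.
 *
 * Build: gcc -O2 proc_scan_sketch.c -o proc_scan_sketch
 */
#include <ctype.h>
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char path[64], buf[4096];

	for (;;) {
		DIR *d = opendir("/proc");
		struct dirent *de;

		if (!d)
			return 1;
		while ((de = readdir(d)) != NULL) {
			if (!isdigit((unsigned char)de->d_name[0]))
				continue;	/* only the pid directories */
			snprintf(path, sizeof(path), "/proc/%s/stat", de->d_name);

			int fd = open(path, O_RDONLY);
			if (fd < 0)
				continue;	/* task may already have exited */
			ssize_t n = read(fd, buf, sizeof(buf));
			(void)n;		/* contents are not interesting here */
			close(fd);
		}
		closedir(d);
	}
	return 0;
}

If the profile stays dominated by tasklist_lock even with a scanner like
this running, the d_invalidate-vs-ps candidate would be ruled out as well.)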