On busy container servers, reading /proc/locks shows all the locks created by all clients. This can cause large latency spikes. In my case I observed lsof taking up to 5-10 seconds while processing around 50k locks. Fix this by limiting the locks shown to only those created in the same pid namespace as the one in which that instance of proc was mounted. When reading /proc/locks from the init_pid_ns, show everything.

Signed-off-by: Nikolay Borisov <kernel@xxxxxxxx>
---
 fs/locks.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/locks.c b/fs/locks.c
index ee1b15f6fc13..751673d7f7fc 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2648,9 +2648,15 @@ static int locks_show(struct seq_file *f, void *v)
 {
 	struct locks_iterator *iter = f->private;
 	struct file_lock *fl, *bfl;
+	struct pid_namespace *proc_pidns = file_inode(f->file)->i_sb->s_fs_info;
+	struct pid_namespace *current_pidns = task_active_pid_ns(current);
 
 	fl = hlist_entry(v, struct file_lock, fl_link);
 
+	if ((current_pidns != &init_pid_ns) && fl->fl_nspid
+	    && (proc_pidns != ns_of_pid(fl->fl_nspid)))
+		return 0;
+
 	lock_get_status(f, fl, iter->li_pos, "");
 
 	list_for_each_entry(bfl, &fl->fl_block, fl_block)
-- 
2.5.0
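
Not part of the patch, just a minimal userspace sketch for observing the behaviour (the file path and demo setup below are arbitrary assumptions): it takes a POSIX record lock so an entry appears in /proc/locks, and while it is held you can compare the output of "cat /proc/locks" on the host with the output inside a separate pid namespace, e.g. "unshare --pid --fork --mount-proc cat /proc/locks".

/*
 * Illustrative demo only: hold a write lock on a scratch file so that
 * one entry shows up in /proc/locks, then sleep until interrupted.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/tmp/lock-demo", O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	struct flock fl = {
		.l_type   = F_WRLCK,	/* exclusive write lock */
		.l_whence = SEEK_SET,
		.l_start  = 0,
		.l_len    = 0,		/* 0 means lock the whole file */
	};

	if (fcntl(fd, F_SETLK, &fl) < 0) {
		perror("fcntl(F_SETLK)");
		return 1;
	}

	printf("holding lock; inspect /proc/locks (pid %d)\n", getpid());
	pause();	/* keep the lock until a signal arrives */
	return 0;
}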