Re: [RFC PATCH] locks: Show only file_locks created in the same pidns as current process

On 08/02/2016 06:05 PM, J. Bruce Fields wrote:
> On Tue, Aug 02, 2016 at 05:42:23PM +0300, Nikolay Borisov wrote:
>> Currently, when /proc/locks is read it shows all the file locks
>> currently held on the machine. For containers hosted
>> on busy servers this means that running lsof can be very slow. I
>> observed stalls of up to 5 seconds when reading 50k locks,
> 
> Do you mean just that the reading process itself was blocked, or that
> others were getting stuck on blocked_lock_lock?

I mean the listing process. Here is a simplified function_graph trace from cat:

cat-15084 [010] 3394000.190341: funcgraph_entry:      # 6156.641 us |  vfs_read();
cat-15084 [010] 3394000.196568: funcgraph_entry:      # 6096.618 us |  vfs_read();
cat-15084 [010] 3394000.202743: funcgraph_entry:      # 6060.097 us |  vfs_read();
cat-15084 [010] 3394000.208937: funcgraph_entry:      # 6111.374 us |  vfs_read();


> 
> (And what process was actually reading /proc/locks, out of curiosity?)

lsof, in my case.

> 
>> while the container
>> itself had only a small number of relevant entries. Fix this by
>> filtering the listed locks by the pidns of the current process
>> and the pidns of the process which created the lock.
> 
> Thanks, that's interesting.  So you show a lock if it was created by
> someone in the current pid namespace.  With a special exception for the
> init namespace so that everything is still visible from the init pid namespace.

I admit this is a rather naive approach. Something else I was pondering was
checking whether the user_ns of the lock creator's pidns is the same as the
reader's user_ns. That should potentially address your concern about
shared filesystems, no? Or whether the reader's user_ns is an ancestor
of the user_ns of the creator's pidns? Maybe Eric can elaborate on whether
this would make sense?
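
For concreteness, here is a rough, untested sketch of the ancestry variant I
have in mind (the helper name is made up, and it assumes fs/locks.c pulls in
<linux/user_namespace.h> and <linux/pid_namespace.h>); it simply walks the
creator's user_ns chain towards the root:

static bool locks_visible_in_userns(struct file_lock *fl)
{
	struct user_namespace *reader_ns = current_user_ns();
	struct user_namespace *creator_ns;

	/* Locks without an owning pid (e.g. kernel-internal ones) stay visible. */
	if (!fl->fl_nspid)
		return true;

	creator_ns = ns_of_pid(fl->fl_nspid)->user_ns;

	/*
	 * Visible when the reader's user_ns owns the creator's pidns or is
	 * an ancestor of the one that does (init_user_ns matches everything).
	 */
	for (; creator_ns; creator_ns = creator_ns->parent)
		if (creator_ns == reader_ns)
			return true;

	return false;
}

locks_show() would then bail out early (return 0) when this returns false,
in place of the init_pid_ns check in the patch below.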

> 
> If a filesystem is shared between containers that means you won't
> necessarily be able to figure out from within a container which lock is
> conflicting with your lock.  (I don't know if that's really a problem.
> I'm unfortunately short on evidence about what people actually use
> /proc/locks for....)
> 
> --b.
> 
>>
>> Signed-off-by: Nikolay Borisov <kernel@xxxxxxxx>
>> ---
>>  fs/locks.c | 8 ++++++++
>>  1 file changed, 8 insertions(+)
>>
>> diff --git a/fs/locks.c b/fs/locks.c
>> index 6333263b7bc8..53e96df4c583 100644
>> --- a/fs/locks.c
>> +++ b/fs/locks.c
>> @@ -2615,9 +2615,17 @@ static int locks_show(struct seq_file *f, void *v)
>>  {
>>  	struct locks_iterator *iter = f->private;
>>  	struct file_lock *fl, *bfl;
>> +	struct pid_namespace *pid_ns = task_active_pid_ns(current);
>> +
>>  
>>  	fl = hlist_entry(v, struct file_lock, fl_link);
>>  
>> +	pr_info ("Current pid_ns: %p init_pid_ns: %p, fl->fl_nspid: %p nspidof:%p\n", pid_ns, &init_pid_ns,
>> +		 fl->fl_nspid, ns_of_pid(fl->fl_nspid));
>> +	if ((pid_ns != &init_pid_ns) && fl->fl_nspid &&
>> +		(pid_ns != ns_of_pid(fl->fl_nspid)))
>> +		    return 0;
>> +
>>  	lock_get_status(f, fl, iter->li_pos, "");
>>  
>>  	list_for_each_entry(bfl, &fl->fl_block, fl_block)
>> -- 
>> 2.5.0