Re: [RFC PATCH] locks: Show only file_locks created in the same pidns as current process

On Tue, Aug 02, 2016 at 06:20:32PM +0300, Nikolay Borisov wrote:
> On 08/02/2016 06:05 PM, J. Bruce Fields wrote:
> > (And what process was actually reading /proc/locks, out of curiosity?)
> 
> lsof in my case

Oh, thanks, and you said that at the start, and I overlooked
it--apologies.

> >> while the container
> >> itself had only a small number of relevant entries. Fix it by
> >> filtering the locks listed by the pidns of the current process
> >> and the process which created the lock.
> > 
> > Thanks, that's interesting.  So you show a lock if it was created by
> > someone in the current pid namespace.  With a special exception for the
> > init namespace so that everything remains visible from the init
> > namespace?
> 
> I admit this is a rather naive approach. Something else I was pondering
> was checking whether the user_ns of the lock creator's pidns is the
> same as the reader's user_ns. That should potentially solve your
> concerns regarding shared filesystems, no? Or whether the reader's
> user_ns is an ancestor of the user_ns of the creator's pidns? Maybe
> Eric can elaborate on whether this would make sense?
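
For concreteness, I read that second idea as something like the
following (a sketch only: it assumes fl->fl_nspid is populated, the
helper name is made up, and the ancestor test is a hand-rolled walk of
user_ns->parent):

static bool lock_visible_in_userns(struct file_lock *fl)
{
	struct user_namespace *reader_ns = current_user_ns();
	struct user_namespace *creator_ns;

	/* locks with no creator pid recorded stay visible */
	if (!fl->fl_nspid)
		return true;

	creator_ns = ns_of_pid(fl->fl_nspid)->user_ns;

	/* visible if the reader's userns is the creator's userns
	 * or an ancestor of it */
	for (; creator_ns; creator_ns = creator_ns->parent)
		if (creator_ns == reader_ns)
			return true;

	return false;
}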

If I could just imagine myself king of the world for a moment--I wish I
could have an interface that took a path or a filehandle and gave back a
list of locks on the associated filesystem.  Then if lsof wanted a
global list, it would go through /proc/mounts and request the list of
locks for each filesystem.
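
In lsof terms, and with an entirely imaginary fs_list_locks() standing
in for that wished-for interface, the loop would look something like:

#include <mntent.h>
#include <stdio.h>

/* imaginary: stands in for the per-filesystem lock-listing
 * interface described above, which does not exist */
extern int fs_list_locks(const char *path, FILE *out);

int main(void)
{
	FILE *mounts = setmntent("/proc/mounts", "r");
	struct mntent *m;

	if (!mounts)
		return 1;

	/* one lock-listing request per mounted filesystem */
	while ((m = getmntent(mounts)) != NULL)
		fs_list_locks(m->mnt_dir, stdout);

	endmntent(mounts);
	return 0;
}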

For /proc/locks it might be nice if we could restrict the listing to
locks on filesystems that are somehow visible to the current process,
but I don't know if there's a simple way to do that.
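
The obvious check -- walking the reader's mount namespace looking for
the lock's superblock -- needs internals (struct mount, namespace_sem)
that are private to fs/namespace.c, which is roughly what I mean by
"no simple way".  A sketch, not something fs/locks.c could use as-is:

static bool sb_visible_to_current(struct super_block *sb)
{
	struct mnt_namespace *ns = current->nsproxy->mnt_ns;
	struct mount *mnt;
	bool visible = false;

	/* namespace_sem is static to fs/namespace.c */
	down_read(&namespace_sem);
	list_for_each_entry(mnt, &ns->list, mnt_list) {
		if (mnt->mnt.mnt_sb == sb) {
			visible = true;
			break;
		}
	}
	up_read(&namespace_sem);

	return visible;
}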

--b.

> 
> > 
> > If a filesystem is shared between containers that means you won't
> > necessarily be able to figure out from within a container which lock is
> > conflicting with your lock.  (I don't know if that's really a problem.
> > I'm unfortunately short on evidence about what people actually use
> > /proc/locks for....)
> > 
> > --b.
> > 
> >>
> >> Signed-off-by: Nikolay Borisov <kernel@xxxxxxxx>
> >> ---
> >>  fs/locks.c | 8 ++++++++
> >>  1 file changed, 8 insertions(+)
> >>
> >> diff --git a/fs/locks.c b/fs/locks.c
> >> index 6333263b7bc8..53e96df4c583 100644
> >> --- a/fs/locks.c
> >> +++ b/fs/locks.c
> >> @@ -2615,9 +2615,17 @@ static int locks_show(struct seq_file *f, void *v)
> >>  {
> >>  	struct locks_iterator *iter = f->private;
> >>  	struct file_lock *fl, *bfl;
> >> +	struct pid_namespace *pid_ns = task_active_pid_ns(current);
> >> +
> >>  
> >>  	fl = hlist_entry(v, struct file_lock, fl_link);
> >>  
> >> +	pr_info("Current pid_ns: %p init_pid_ns: %p, fl->fl_nspid: %p nspidof:%p\n",
> >> +		pid_ns, &init_pid_ns, fl->fl_nspid, ns_of_pid(fl->fl_nspid));
> >> +	if ((pid_ns != &init_pid_ns) && fl->fl_nspid &&
> >> +	    (pid_ns != ns_of_pid(fl->fl_nspid)))
> >> +		return 0;
> >> +
> >>  	lock_get_status(f, fl, iter->li_pos, "");
> >>  
> >>  	list_for_each_entry(bfl, &fl->fl_block, fl_block)
> >> -- 
> >> 2.5.0