Re: [PATCH v2 RFC] userns: Convert xfs to use kuid/kgid where appropriate

Quoting Dave Chinner (david@xxxxxxxxxxxxx):
> On Thu, Jun 27, 2013 at 08:02:05AM -0500, Serge Hallyn wrote:
> > Quoting Dave Chinner (david@xxxxxxxxxxxxx):
> > > On Wed, Jun 26, 2013 at 05:30:17PM -0400, Dwight Engen wrote:
> > > > On Wed, 26 Jun 2013 12:09:24 +1000
> > > > Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > > > > > We do need to decide on the di_uid that comes back from bulkstat.
> > > > > > Right now it is returning on disk (== init_user_ns) uids. It looks
> > > > > > to me like xfsrestore is using the normal vfs routines (chown,
> > 
> > I might not be helpful here (as despite having used xfs for years
> > I've not used these features), but feel like I should try based on
> > what I see in the manpages.  Here is my understanding:
> > 
> > Assume you're a task in a child userns, where you have host uids
> > 100000-110000 mapped to container uids 0-10000,
> > 
> > 1. bulkstat is an xfs_ioctl command, right?  It should return the mapped
> > uids (0-10000).
> > 
> > 2. xfsdump should store the uids as seen in the caller's namespace.  If
> > xfsdump is done from the container, the dump should show uids 0-10000.
> 
> So when run from within a namespace, it should filter and return
> only inodes that match the uids/gids mapped into the namespace?

I would think they should all be returned, with uid/gid reported as -1
for owners that don't map into the caller's namespace.
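
To make that concrete, here is a minimal kernel-side sketch (my reading
of the existing kuid helpers, not the code in this patch) of how a
bulkstat fill routine could report an on-disk owner relative to the
caller's namespace:

    #include <linux/types.h>
    #include <linux/uidgid.h>
    #include <linux/user_namespace.h>
    #include <linux/cred.h>

    /* Sketch only: on-disk ids are init_user_ns ids; from_kuid()
     * returns (uid_t)-1 for owners with no mapping in the caller's
     * namespace, which is the "return everything, unmapped owners
     * show up as -1" behaviour suggested above. */
    static uid_t bulkstat_uid_for_caller(u32 di_uid)
    {
            kuid_t kuid = make_kuid(&init_user_ns, di_uid);

            return from_kuid(current_user_ns(), kuid);
    }

The gid side would look the same with make_kgid()/from_kgid().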

> That can be done, it's just a rather inefficient use of bulkstat
> (which is primarily there for efficiency reasons).
> 
> Here's a corner case. Say I download a tarball from somewhere that
> has uids/gids inside it, and when I untar it it creates uids/gids
> outside the namespace's mapped range of [0-10000]. What happens then?

The chown will fail, so the files should end up owned by the
fsuid/fsgid of the calling task.
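
A quick userspace illustration of that corner case (hypothetical file
name, run inside a namespace that only maps container uids 0-10000):

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            /* tar has just created "payload", owned by our fsuid/fsgid,
             * and now tries to chown it to uid 20000, which has no
             * mapping in this namespace.  The kernel can't convert
             * 20000 to a kuid, so the call fails (typically EINVAL)
             * and the file keeps the creating task's fsuid/fsgid. */
            if (chown("payload", 20000, 20000) != 0)
                    fprintf(stderr, "chown: %s\n", strerror(errno));
            return 0;
    }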

> What uids do we end up on disk, and how do we ensure that the
> bulkstat filter still returns those inodes?
> 
> > 3. xfsrestore should be run from the desired namespace.  If you did
> > xfsdump from the host ns, you should then xfsrestore from the host ns.
> > Then inside the container those uids (100000-110000) will be mapped
> > to your uids (0-10000).
> > 
> > 4. If you xfsdump in this container, then xfsrestore in another
> > container where you have 200000-210000 mapped to 0-10000, the dump
> > image will have uids 0-10000.  The restored image will have container
> > uids 0-10000, while on the underlying host media it will be uids
> > 200000-210000.
> > 
> > 5. If you xfsdump in this container then xfsrestore on the host, then
> > the host uids 0-10000 will be used on the underlying media.  The
> > container would be unable to read these files as the uids do not map
> > into the container.
> 
> Yes, that follows from 1+2. We'll need some documentation in
> the dump/restore man pages for this, and I'd suggest that the
> namespace documentation/man pages get this sort of treatment, too.
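
To put numbers on scenario 4 above (file name and the uid_map layout
are assumptions, not from the patch): suppose xfsrestore runs in the
second container, whose uid_map is "0 200000 10000", and the dump image
recorded container uid 5:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* xfsrestore asks for the uid recorded in the dump image ... */
            if (chown("restored_file", 5, 5) != 0)
                    perror("chown");

            /* ... the kernel translates 5 through this namespace's
             * mapping (conceptually make_kuid(current_user_ns(), 5)),
             * so the owner written to disk in init_user_ns terms is
             * 200005, while a stat() inside the container still
             * reports uid 5. */
            return 0;
    }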

There is a user_namespaces(7) man page which Michael Kerrisk had been
working on with Eric back in March.  I don't see it at
http://man7.org/linux/man-pages/dir_section_7.html
though, so it may still be in development or in a staging tree.

-serge
