Re: [RFC PATCH 0/4] namespacefs: Proof-of-Concept

On 19.11.21 at 18:42, James Bottomley wrote:
[resend due to header mangling causing loss on the lists]
On Fri, 2021-11-19 at 09:27 -0500, Steven Rostedt wrote:
On Fri, 19 Nov 2021 07:45:01 -0500
James Bottomley <James.Bottomley@xxxxxxxxxxxxxxxxxxxxx> wrote:

On Thu, 2021-11-18 at 14:24 -0500, Steven Rostedt wrote:
On Thu, 18 Nov 2021 12:55:07 -0600
ebiederm@xxxxxxxxxxxx (Eric W. Biederman) wrote:
It is not correct to use inode numbers as the actual names for
namespaces.

I cannot see anything else you could possibly use as names for
namespaces.

This is why we used inode numbers.
To allow container migration between machines and similar
things you wind up needing a namespace for your names of
namespaces.

Is this why you say inode numbers are incorrect?

The problem is you seem to have picked on one orchestration system
without considering all the uses of namespaces and how this would
impact them.  So let me explain why inode numbers are incorrect and
it will possibly illuminate some of the cans of worms you're
opening.

We have a container checkpoint/restore system called CRIU that can
be used to snapshot the state of a pid subtree and restore it.  It
can be used for the entire system or a piece of it.  It is also used
by some orchestration systems to live migrate containers.  Any
property of a container system that has meaning must be saved and
restored by CRIU.

The inode number is simply a semi-random number assigned to the
namespace.  It shows up in /proc/<pid>/ns but nowhere else and
isn't used by anything.  When CRIU migrates or restores containers,
all the namespaces that compose them get different inode values on
the restore.  If you want to make the inode number equivalent to
the container name, it would have to be restored to the previous
number, because you've made it a property of the namespace.  The way
everything is set up now, that's just not possible and never will
be.  Inode numbers are a 32-bit space and can't be globally
unique.  If you want a container name, it will have to be something
like a new UUID, and that's the first problem you should tackle.
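
(For illustration: the number in question is just what the
/proc/<pid>/ns magic symlinks report.  A minimal C sketch that prints
it for the current process, error handling kept to a minimum:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[64];
            ssize_t n;

            /* The link target looks like "pid:[4026531836]"; the
             * bracketed value is the pid namespace's inode number. */
            n = readlink("/proc/self/ns/pid", buf, sizeof(buf) - 1);
            if (n < 0)
                    return 1;
            buf[n] = '\0';
            printf("%s\n", buf);
            return 0;
    }

After a CRIU restore that bracketed value will, in general, be
different, which is the point made above.)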

So everyone seems to be all upset about using inode numbers. We could
do what Kirill suggested and just create some random UUID and use
that. We could have a file in the directory called "inode" that has
the inode number (as that's what both docker and podman use to
identify their containers, and it's nice to have something to map back
to them).

On checkpoint/restore, only the directories that represent the
migrated container matter; so, as Kirill said, make sure they get the
old UUID name, and expose that as the directory.

If a container is looking at the directories of other containers on
the system and then gets migrated to another system, it should be
treated as though those directories were deleted out from under it.

I still do not see what the issue is here.

The issue is that you're introducing a new core property for
namespaces that they didn't have before.  Everyone has different use
cases for containers and we need to make sure the new property works
with all of them.

Having a "name" for a namespace has been discussed before, which is
the landmine you stepped on when you advocated using the inode number
as the name, because that's already known to be unworkable.

Can we back up and ask what problem you're trying to solve before we
start introducing new objects like a namespace name?  The problem
statement just seems to be "Being able to see the structure of the
namespaces can be very useful in the context of the containerized
workloads.", which you later expanded on as "trying to add more
visibility into the working of things like kubernetes".  If you just
want to see the namespace "tree", you can script that (as root) by
matching the process tree against the /proc/<pid>/ns links, without
actually needing to construct it in the kernel.  This can also be done
without introducing the concept of a namespace name.  However, there
is a subtlety to doing the matching in the way I described: you don't
get proper parenting to the user namespace ownership ... but that
seems to be something you don't want anyway?
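
(A rough sketch of that userspace approach, assuming root: group every
pid by the inode of its pid namespace, then reconstruct the tree from
the PPid/NSpid fields in /proc/<pid>/status.  Grouping part only, with
error handling kept minimal:

    #include <dirent.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    int main(void)
    {
            DIR *proc = opendir("/proc");
            struct dirent *de;

            if (!proc)
                    return 1;

            while ((de = readdir(proc)) != NULL) {
                    char path[64];
                    struct stat st;
                    char *end;
                    long pid = strtol(de->d_name, &end, 10);

                    if (*end != '\0')         /* not a pid directory */
                            continue;

                    snprintf(path, sizeof(path), "/proc/%ld/ns/pid", pid);
                    if (stat(path, &st) < 0)  /* stat() follows the link */
                            continue;

                    /* st.st_ino is the pid namespace inode; pids that
                     * print the same inode share a pid namespace. */
                    printf("pid %ld  pidns %lu\n", pid,
                           (unsigned long)st.st_ino);
            }

            closedir(proc);
            return 0;
    }

As noted, this gives the grouping but not the parenting to the owning
user namespace.)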



The major motivation is to be able to hook tracing to individual
containers. We want to be able to quickly discover the PIDs of all
containers running on a system, and by "all" we mean not only Docker,
but really all sorts of containers that exist now or may exist in the
future. We also considered the solution of brute-forcing all processes
in /proc/*/ns/, but we are afraid that such a solution does not scale.

As I stated in the cover letter, the problem was discussed at Plumbers
(links at the bottom of the cover letter) and the conclusion was that
the one distinguishing feature that anything that can be called a
'container' must have is a separate PID namespace. This is why the PoC
starts with the implementation of this namespace. You can see in the
example script that discovering the names and all PIDs of all
containers becomes quick and trivial with the help of this new
filesystem, and you need to add just a few more lines of code to make
it start tracing a selected container.
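
(A purely hypothetical sketch of what that discovery could look like.
The layout assumed below, a "tasks" file under a per-namespace
directory such as /sys/fs/namespacefs/pid/<name>/tasks, is invented
for illustration; the real layout is whatever the PoC patches define:

    #include <stdio.h>

    int main(void)
    {
            /* Hypothetical path: assumes namespacefs is mounted at
             * /sys/fs/namespacefs and exposes one directory per pid
             * namespace with a "tasks" file listing member pids. */
            FILE *f = fopen("/sys/fs/namespacefs/pid/mycontainer/tasks", "r");
            long pid;

            if (!f)
                    return 1;
            while (fscanf(f, "%ld", &pid) == 1)
                    printf("would trace pid %ld\n", pid);
            fclose(f);
            return 0;
    }

Feeding those pids to a tracer, e.g. via the tracefs set_event_pid
file, is the "few more lines" mentioned above.)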

Thanks!
Yordan

James





