On Fri, Sep 17, 2021 at 12:38 PM Jan Kara <jack@xxxxxxx> wrote:
>
> On Fri 17-09-21 10:36:08, Jan Kara wrote:
> > Let me also post Amir's thoughts on this from a private thread:
>
> And now I'm actually replying to Amir :-p
>
> > On Fri 17-09-21 10:30:43, Jan Kara wrote:
> > > We did a small update to the schedule:
> > >
> > > > Christian Brauner will run the second session, discussing what idmapped
> > > > filesystem mounts are for and the current status of supporting more
> > > > filesystems.
> > >
> > > We have extended this session as we'd like to discuss and get some
> > > feedback from users about project quotas and project ids:
> > >
> > > Project quotas were originally mostly a collaborative feature and later
> > > got used by some container runtimes to limit the space used on a
> > > filesystem shared by multiple containers. As a result, the current
> > > semantics of project quotas are somewhat surprising and the handling of
> > > project ids is not consistent among filesystems. The two main contending
> > > points are:
> > >
> > > 1) Currently the inode owner can set the project id of the inode to any
> > > arbitrary number if they are in init_user_ns. They cannot change the
> > > project id at all in other user namespaces.
> > >
> > > 2) Should project IDs be mapped in user namespaces or not? User namespace
> > > code does implement the mapping, and VFS quota code maps project ids when
> > > using them. However, e.g. XFS does not map project IDs in its calls
> > > setting them in the inode. Among other things this results in some funny
> > > errors if you set the project ID to (unsigned)-1.
> > >
> > > In the session we'd like to get feedback on how project quotas / ids get
> > > used / could be used, so that we can define common semantics and make the
> > > code consistently follow these rules.
> >
> > I think that legacy projid semantics might not be a perfect fit for
> > container isolation requirements. I added project quota support to docker
> > at the time because it was handy and it did the job of limiting and
> > querying disk usage of containers with an overlayfs storage driver.
> >
> > With the btrfs storage driver, subvolumes are used to create that
> > isolation. The TREE_ID proposal [1] got me thinking that it is not so hard
> > to implement "tree id" as an extension of, or in addition to, project id.
> >
> > The semantics of "tree id" would be:
> > 1. tree id is a quota entity accounting inodes and blocks
> > 2. tree id can be changed only on an empty directory
> > 3. tree id can be set to TID only if quota inode usage of TID is 0
> > 4. tree id is always inherited from the parent
> > 5. No rename() or link() across tree ids (clone should be possible)
> >
> > AFAIK a btrfs subvol meets all the requirements of "tree id".
> >
> > Implementing tree id in ext4/xfs could be done by adding a new field to
> > the inode on-disk format and a new quota entity to the quota on-disk
> > format and quota tools.
> >
> > An alternative, simpler way is to repurpose project id and project quota:
> > * Add a filesystem feature projid-is-treeid
> > * The feature can be enabled on a fresh mkfs or after fsck verifies that
> >   the "tree id" rules are followed for all existing usage of projid
> > * Once the feature is enabled, the filesystem enforces the new semantics
> >   for setting projid and projid_inherit
> >
> > This might be a good option if there is little intersection between
> > systems that need to use the old project semantics and systems
> > that would rather have the tree id semantics.
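
(To make the current usage concrete, here is a minimal sketch of what that
docker-style project quota setup amounts to on the host today. It assumes a
filesystem with project quota support enabled - e.g. ext4/xfs mounted with
prjquota/pquota - and root privileges; the directory, device and project id
below are made-up examples, not anything docker actually hardcodes.)

/* Tag a container directory with a project id and cap its disk usage. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/quota.h>
#include <linux/fs.h>      /* struct fsxattr, FS_IOC_FS[GS]ETXATTR */
#include <linux/quota.h>   /* PRJQUOTA, struct if_dqblk, QIF_BLIMITS */
#include <unistd.h>

int main(void)
{
        const char *dir = "/var/lib/containers/ctr1";  /* hypothetical */
        const char *dev = "/dev/sda1";                 /* hypothetical */
        unsigned int projid = 1001;                    /* hypothetical */
        struct fsxattr fsx;
        struct if_dqblk dq;
        int fd;

        /* Set the project id on the directory and mark it PROJINHERIT so
         * that new children inherit it - this is the step whose semantics
         * (who may change projid, and in which userns) is under discussion. */
        fd = open(dir, O_RDONLY | O_DIRECTORY);
        if (fd < 0 || ioctl(fd, FS_IOC_FSGETXATTR, &fsx) < 0)
                return 1;
        fsx.fsx_projid = projid;
        fsx.fsx_xflags |= FS_XFLAG_PROJINHERIT;
        if (ioctl(fd, FS_IOC_FSSETXATTR, &fsx) < 0)
                return 1;
        close(fd);

        /* Limit the space accounted to this project id to 1 GiB; the
         * generic quota interface counts dqb_bhardlimit in 1 KiB blocks. */
        memset(&dq, 0, sizeof(dq));
        dq.dqb_bhardlimit = 1024 * 1024;
        dq.dqb_bsoftlimit = 1024 * 1024;
        dq.dqb_valid = QIF_BLIMITS;
        if (quotactl(QCMD(Q_SETQUOTA, PRJQUOTA), dev, projid, (void *)&dq) < 0)
                return 1;
        return 0;
}

(Querying the usage back is the matching Q_GETQUOTA call, which is roughly
how the overlayfs storage driver reports per-container disk usage.)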
>
> Yes, I actually think that having both tree-id and project-id on a
> filesystem would be too confusing. And I'm not aware of realistic use
> cases. I've heard only of people wanting the current semantics (although
> these were more of the kind: "sometime in the past people used the feature
> like this") and of people complaining that the current semantics are not
> useful for them. This was discussed e.g. on the ext4 list [2].
>
> > I think that with the "tree id" semantics, the user_ns/idmapped
> > questions become easier to answer.
> > Allocating tree id ranges per userns to avoid exhausting the tree id
> > namespace is a very similar problem to allocating uids per userns.
>
> It still depends on how exactly tree ids get used - if you want to use
> them to limit the space usage of a container, you still have to forbid
> changing of tree ids inside the container, don't you?
>

Yes. This is where my view of userns becomes hazy (so pulling Christian
into the discussion), but in general I think that this use case would be
similar to the concept of a single-uid container - the range of allowed
tree ids that is allocated for the container in that case is a single
tree id.

I understand that the next question would be about nesting subtree quotas
and I don't have a good answer to that question.
Are btrfs subvolumes nested w.r.t. capacity limits?
I don't think that they are.

Thanks,
Amir.