Re: [RFC][PATCH] fanotify: allow to set errno in FAN_DENY permission response



> > The Windows Cloud Sync Engine API:
> >
> > Does allow registering different "Storage namespace providers".
> > AFAICT, the persistence of "placeholder" files is based on NTFS
> > "reparse points",
> > which are a long-standing native concept that allows registering a persistent
> > hook on a file to be handled by a specific Windows driver.
> >
> > So for example, a Dropbox placeholder file is a file with a "reparse point"
> > that has some label to direct the read/write calls to the Windows
> > Cloud Sync Engine
> > driver and a sub-label to direct the handling of the upcall by the Dropbox
> > Cloud Sync Engine service.
> OK, so AFAIU they implement HSM directly in the filesystem, which is a
> somewhat different situation from what we are trying to do.

Technically, I think that reparse point driver hooks are a
generic Win32 API which NTFS implements, but that doesn't matter.
IIUC, it is equivalent to having support for xattr "security.hsm.dropbox"
that fanotify would know how to intercept as a persistent mark to
be handled by "dropbox" hsm group or return EPERM.

> > I do not want to deal with "persistent fanotify marks" at this time, so
> > maybe something like:
> >
> > fsconfig(ffd, FSCONFIG_SET_STRING, "hsmid", "dropbox", 0)
> > fsconfig(ffd, FSCONFIG_SET_STRING, "hsmver", "1", 0)
> >
> > Add support ioctls in fanotify_ioctl():
> What would these do? Set HSMID & HSMVER for fsnotify_group identified by
> 'file'? BTW I'm not so convinced about the 'version' thing. I have a hard
> time remembering an example where versioning in the API actually ended
> up being useful. I also expect tight coupling between userspace mounting
> the filesystem and setting up HSM so it is hard to imagine some wrong
> version of HSM provider would be "accidentally" started for the
> filesystem.

OK. Worst case, we can always switch to hsmid "dropboxv2".

> > And require that a group with matching hsmid and recent hsmver has a live
> > filesystem mark on the sb.
> I'm not quite following here. We'd require matching fsnotify group for
> what? For mounting the fs? For not returning EPERM from all pre-op
> handlers? Either way that doesn't make sense to me as it's unclear how
> userspace would be able to place the mark... But there's a way around that
> - since the HSM app will have its private non-HSM mount for filling in
> contents, it can first create that mount, place filesystem marks through
> it and then remount the superblock with hsmid mount option and create the
> public mount. But I'm not sure if you meant this or something else...

I haven't thought through the mechanics yet, just the definition:
- An sb with hsm="XXX" returns EPERM for pre-content events
  unless there is an sb mark from a group that is identified as hsm "XXX"

I don't see a problem with mounting the fs first and only then
setting up the sb mark on the root of the fs (which does not require
a pre-lookup event). When the hsm service is restarted, it is going to
need to re-set the sb mark on the hsm="XXX" sb anyway.

> > If this is an acceptable API for a single crash-safe HSM provider, then the
> > question becomes:
> > How would we extend this to multiple crash-safe HSM providers in the future?
> Something like:
> fsconfig(ffd, FSCONFIG_SET_STRING, "hsmid", "dropbox,cloudsync,httpfs", 0)
> means all of them are required to have a filesystem mark?

Yeah, it's an option.
I have a trauma from comma-separated values in overlayfs
mount options, but maybe it's fine.
The main API question would be, regardless of single or multiple HSMs,
whether the hsm="" value should be reconfigurable (probably yes).

> > Or maybe we do not need to support multiple HSM groups per sb?
> > Maybe in the future a generic service could be implemented to
> > delegate different HSM modules, e.g.:
> >
> > fsconfig(ffd, FSCONFIG_SET_STRING, "hsmid", "cloudsync", 0)
> > fsconfig(ffd, FSCONFIG_SET_STRING, "hsmver", "1", 0)
> >
> > And a generic "cloudsync" service could be in charge of
> > registration of "cloudsync" engines and dispatching the pre-content
> > event to the appropriate module based on path (i.e. under the dropbox folder).
> >
> > Once this gets past NACKs from fs developers I'd like to pull in
> > some distro people to the discussion and maybe bring this up as
> > a topic for discussion at LSFMM if we feel that there is something to discuss.
> I guess a short talk (lightning talk?) about what we are planning to do would be
> interesting so that people are aware. At this point I don't think we have
> particular problems to discuss that would be interesting for the whole fs
> crowd for a full slot...


