On Thu, Jul 5, 2012 at 1:25 PM, Florian Haas <florian@xxxxxxxxxxx> wrote:
> On Thu, Jul 5, 2012 at 10:01 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>>> Also, going down the rabbit hole, how would this behavior change if I
>>> used cephfs to set the default layout on some directory to use a
>>> different pool?
>>
>> I'm not sure what you're asking here — if you have access to the
>> metadata server, you can change the pool that new files go into, and I
>> think you can set the pool to be whatever you like (and we should
>> probably harden all this, too). So you can fix it if it's a problem,
>> but you can also turn it into a problem.
>
> I am aware that I would be able to do this.
>
> My question was more along the lines of: if the pool that data is
> written to can be set on a per-file or per-directory basis, and we can
> also set read and write permissions per pool, how should the filesystem
> behave? Hide files the mounting user doesn't have read access to?
> Return -EIO or -EPERM on writes to files stored in pools we can't write
> to? Fail the mount if we're missing some permission on any file or
> directory in the fs? All of these sound painful in one way or another,
> so I'm having trouble envisioning what the "correct" behavior would
> look like.

Ah, yes. My feeling would be that we want to treat it like a local file
they aren't allowed to access — i.e., return EPERM. I *think* that is
what will actually happen if they try to read those files, but the write
path works a bit differently (since the writes are flushed out
asynchronously), so we would need to introduce some smarts into the
client to check the pool permissions and proactively apply them on any
attempted access.
-Greg
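
To make the write-path point concrete, here is a minimal sketch of the
kind of up-front pool-capability check described above. This is plain
Python, not Ceph client code; the capability structure and function
names are hypothetical and only illustrate the idea of rejecting a write
with EPERM at the time it is attempted, rather than when the buffered
data is later flushed to the OSDs.

    # Hypothetical sketch: check per-pool capabilities before any data
    # is buffered for asynchronous writeback, so the caller sees EPERM
    # immediately, matching local-filesystem semantics.
    import errno
    import os

    class PoolPermissionError(OSError):
        """Raised before any data is queued for async writeback."""

    def check_pool_access(osd_caps, pool, write=False):
        """osd_caps maps pool name -> set of allowed ops, e.g. {"data": {"r", "w"}}."""
        allowed = osd_caps.get(pool, set())
        needed = "w" if write else "r"
        if needed not in allowed:
            raise PoolPermissionError(errno.EPERM, os.strerror(errno.EPERM), pool)

    # Example: a client allowed to read both pools but write only to "data".
    caps = {"data": {"r", "w"}, "restricted": {"r"}}
    check_pool_access(caps, "data", write=True)        # permitted
    try:
        check_pool_access(caps, "restricted", write=True)
    except PoolPermissionError as e:
        print("write rejected with EPERM for pool:", e.filename)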