On Sun, Jun 10, 2012 at 04:49:21AM +0100, Al Viro wrote:
> So the only remaining reason for having that thing is this: what if we
> call ->atomic_open(), but it doesn't call finish_open()? Then we need
> to free that unused struct file. If finish_open() failed, we wouldn't.
> Same if it succeeded and something *after* it in ->atomic_open() failed
> (then we need to fput() that file - your code in ceph leaks it, BTW).
> Fair enough. So we need to add one more helper that would discard that
> half-set-up struct file as we want it to be discarded. That's all.

Actually, I take that back - that code in ceph is unreachable when
finish_open() succeeds.

Anyway, see vfs.git#atomic_open; it's a port of your queue + COMPLETELY
UNTESTED followups massaging it along the following lines:

* ->atomic_open() takes struct file * instead of struct opendata *.

* It returns an int instead of struct file * - 0 for success, -E... for
  an error, 1 for "here's your sodding dentry, do it yourself". Said
  dentry is returned via file->f_path.dentry.

* The same has been done to atomic_open()/lookup_open()/do_last().

* finish_open() takes struct file and returns an int.

* It *also* takes an int * - used to keep track of whether we'd done a
  successful do_dentry_open(), instead of "has opendata->filp been
  cleared?" as in your variant. That int * is what the bool *created of
  ->atomic_open() and friends has been turned into. So the check in
  path_openat() is

	if (!(opened & FILE_OPENED)) {
		BUG_ON(!error);
		put_filp(file);
	}

  which is as explicit as it gets, IMO.

The forest of failure exits in do_last() got cleaned up a bit, BTW.
It can probably be cleaned up some more...

WARNING: I haven't even tried to boot it. It builds, but that's all I
can promise at the moment. I'm about to fall down (it's 7am here
already ;-/); I'll give it some beating when I get up. It almost
certainly has bugs, so consider this a call for review and not much
more.
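
FWIW, here's a rough sketch of what a filesystem's ->atomic_open()
instance would look like under that calling convention. Take it with a
grain of salt: only "takes struct file *", the 0/-E.../1 return
convention, file->f_path.dentry for the "do it yourself" case and the
int * threaded through to finish_open() come from the description
above; the rest of the argument list, the exact finish_open() signature
and all the examplefs_* helpers are made up for illustration:

	/* hypothetical instance, for illustration only */
	static int examplefs_atomic_open(struct inode *dir,
					 struct dentry *dentry,
					 struct file *file,
					 unsigned open_flag,
					 umode_t mode, int *opened)
	{
		int err;

		/* made-up helper: look the name up, creating if needed */
		err = examplefs_lookup_or_create(dir, dentry, open_flag, mode);
		if (err)
			return err;

		if (examplefs_wants_ordinary_open(dentry)) {	/* made up */
			/* "here's your sodding dentry, do it yourself" */
			file->f_path.dentry = dentry;
			return 1;
		}

		/*
		 * On success do_dentry_open() has been done and *opened
		 * has FILE_OPENED set; on failure it hasn't, and
		 * path_openat() will see !(opened & FILE_OPENED) and
		 * put_filp() the thing.
		 */
		err = finish_open(file, dentry, opened);
		if (err)
			return err;

		/*
		 * Anything that fails past this point has a fully opened
		 * file on its hands and must fput(file) itself.
		 */
		return 0;
	}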