On Fri, Nov 11, 2016 at 1:54 AM, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
> On Fri, Nov 11, 2016 at 01:33:08AM +0200, Amir Goldstein wrote:
>
>> I am certainly looking for this feedback, because there is no other means for
>> me to sanity test if relaxing lock_rename() is safe.
>>
>> When I write "any change of parent" I mean a change between 2 different
>> connected parents. Both work dir and upper dir are connected and with
>> reference held after mount.
>> Are the d_splice_alias() and __d_unalias() cases a real concern for moving
>> work dir around after the overlay mount?
>
> Why not? Again, it's really up to you to provide an analysis of the call
> chains. There's nothing in d_splice_alias() to prohibit an existing alias
> being attached - in fact, __d_unalias() is called exactly in that case.
> It's a rare case, all right, but it is not impossible.
>
> BTW, your analysis would better be simple and explicit - anything subtle
> will be flat-out rejected, since it would have to be stepped around very
> carefully in any later work in VFS.
>

Sure, I get that. I wanted to post early, before I have a full proof that
this is sane (if I am able to produce one), so people can shout "what the
hell?" at me soon enough.

> I really wonder what it is that you are getting contention on - what are
> you doing, besides the actual renames? And that needs serialization anyway
> (on inode lock of workdir, if nothing else), so any contention would not
> disappear from dropped ->s_vfs_rename_mutex...

Yeah, I wrote in the cover letter that I have not generated performance
numbers yet, which is a must for this sort of work, and that I am hoping
to get some feedback from testers.

The serialization I am trying to avoid is between copy-ups and whiteouts
of different overlay mounts, all on the same fs, which is the case with
docker/rocket containers.

Not ruling out that I am barking up the wrong tree. The burden of proof
is on me.

Thanks,
Amir.
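
For reference, the cross-directory serialization being discussed is the one
taken in lock_rename(). The sketch below is a from-memory paraphrase of its
shape in fs/namei.c around this time (roughly v4.9) and is not the
authoritative source; the point it illustrates is that as soon as the two
parent directories differ, the rename serializes on the per-superblock
->s_vfs_rename_mutex, which is shared by every overlay mount backed by that
filesystem.

/*
 * From-memory paraphrase of lock_rename() (fs/namei.c, ~v4.9); see the
 * kernel tree for the authoritative version.
 */
#include <linux/fs.h>
#include <linux/dcache.h>
#include <linux/namei.h>

struct dentry *lock_rename(struct dentry *p1, struct dentry *p2)
{
	struct dentry *p;

	/* Same parent directory: only its inode lock is needed. */
	if (p1 == p2) {
		inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
		return NULL;
	}

	/*
	 * Different parents: every cross-directory rename on this
	 * superblock funnels through one mutex, no matter which
	 * overlay mount (or which pair of directories) is involved.
	 */
	mutex_lock(&p1->d_sb->s_vfs_rename_mutex);

	/* If p2 is an ancestor of p1, lock the ancestor first. */
	p = d_ancestor(p2, p1);
	if (p) {
		inode_lock_nested(p2->d_inode, I_MUTEX_PARENT);
		inode_lock_nested(p1->d_inode, I_MUTEX_CHILD);
		return p;
	}

	/* ...and likewise if p1 is an ancestor of p2. */
	p = d_ancestor(p1, p2);
	if (p) {
		inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
		inode_lock_nested(p2->d_inode, I_MUTEX_CHILD);
		return p;
	}

	/* Unrelated directories: lock both, in distinct lockdep classes. */
	inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
	inode_lock_nested(p2->d_inode, I_MUTEX_PARENT2);
	return NULL;
}

So copy-ups and whiteouts of unrelated overlay mounts contend on that mutex
only because each involves a rename between two different directories
(workdir and upperdir) on the same underlying superblock, which is the
contention described in the mail above.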