On Mon, Apr 20, 2015 at 02:04:53PM -0700, Linus Torvalds wrote:
> That said, you then introduce a stack-allocated "struct saved stack[]"
> in path_mountpoint[] instead, *and* nameidata is saved on stack, so
> this all ends up being very stack-intensive anyway.
>
> I might have missed some patch here, but would it be possible to just
> allocate a single per-thread nameidata, and just leave it at that?
> Because allocating that thing on the stack when it contains what is
> now one kilobyte of array data is *not* acceptable.

What kilobyte?  It's 9*4 pointers, IOW, 288 bytes total (assuming a 64bit
box).  And nd->saved_names[] goes away, so scratch the 9 pointers we used
to have.  Sure, we can allocate that dynamically (or hold a couple of
elements on stack and allocate when/if we outgrow that), but it's not a
particularly large win.  The breakeven point is circa the second level of
nesting - a symlink met while traversing a symlink...  That's on amd64; on
something with fatter stack frames I would expect the comparison to be
even worse for mainline...

We need to preserve 4 pointers on stack per level of nesting.  Seeing that
a single link_path_walk() stack frame in mainline is about 5-6 times
bigger than that, "just put enough for all levels into an auto array" is
an obvious approach - a couple of link_path_walk() stack frames will be
heavier than that.  For renameat() (the worst-case user of
link_path_walk() - there are two struct nameidata on stack) we end up
with breakeven at *one* level of nesting, what with getting rid of the
2*9 pointers in ->saved_names[] of those nameidata.  And yes, I've
measured the actual stack use before and after...

A kilobyte would suffice for 32 levels.  _IF_ we go for "lift the
restrictions on nesting completely", sure, we want to switch to
(on-demand) dynamic allocation.  It's not particularly hard to add and it
might be worth doing, but it's a separate story.  This series leaves the
set of accepted pathnames as-is...
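
To make the "hold a couple of elements on stack and allocate when/if we
outgrow that" option concrete, here is a minimal userspace sketch of that
pattern.  It is not the actual patch; the names (struct saved_link,
struct walk_state, MAX_NESTED), the particular 4 fields per level and the
double-on-overflow growth policy are all made up for illustration:

/*
 * Sketch only: keep a small fixed array of per-level saved state inline,
 * and fall back to dynamic allocation only if nesting ever exceeds it.
 */
#include <stdlib.h>
#include <string.h>

#define MAX_NESTED 9			/* current nesting limit */

struct saved_link {			/* 4 pointers per level of nesting */
	void *link;
	void *cookie;
	const char *name;
	void *base;
};

struct walk_state {
	struct saved_link internal[MAX_NESTED];	/* 9 * 4 * 8 = 288 bytes on a 64bit box */
	struct saved_link *stack;		/* points at internal[] or at a heap copy */
	int depth;
	int capacity;
};

static void walk_init(struct walk_state *w)
{
	w->stack = w->internal;
	w->capacity = MAX_NESTED;
	w->depth = 0;
}

/* push the saved state for one more level of symlink nesting */
static int walk_push(struct walk_state *w, const struct saved_link *s)
{
	if (w->depth == w->capacity) {
		/* the on-demand dynamic allocation mentioned above */
		int newcap = w->capacity * 2;
		struct saved_link *p;

		if (w->stack == w->internal) {
			p = malloc(newcap * sizeof(*p));
			if (p)
				memcpy(p, w->internal, sizeof(w->internal));
		} else {
			p = realloc(w->stack, newcap * sizeof(*p));
		}
		if (!p)
			return -1;
		w->stack = p;
		w->capacity = newcap;
	}
	w->stack[w->depth++] = *s;
	return 0;
}

static void walk_fini(struct walk_state *w)
{
	if (w->stack != w->internal)
		free(w->stack);
}

The point of the shape is that the common case (nesting that fits the
inline array) never touches the allocator at all; the fallback would only
matter if the nesting limit were lifted.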