On Mon, Sep 03, 2012 at 10:49:27AM +1000, Dave Chinner wrote:
> Given that we are working around stack depth issues in the
> filesystems already in several places, and now it seems like there's
> a reason to work around it in the block layers as well, shouldn't we
> simply increase the default stack size rather than introduce
> complexity and performance regressions to try and work around not
> having enough stack?
>
> I mean, we can deal with it like the ia32 4k stack issue was dealt
> with (i.e. ignore those stupid XFS people, that's an XFS bug), or
> we can face the reality that storage stacks have become so complex
> that 8k is no longer a big enough stack for a modern system....

I'm not arguing against increasing the default stack size (I really
don't have an opinion there) - but it's not a solution for the block
layer, as stacking block devices can require an unbounded amount of
stack without the recursion-to-iteration conversion done in
generic_make_request().

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel