On Tue, Feb 11, 2014 at 10:31 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>
> FYI, just creating lots of files with open(O_CREAT):
>
> [ 348.718357] fs_mark (4828) used greatest stack depth: 2968 bytes left
> [ 348.769846] fs_mark (4814) used greatest stack depth: 2312 bytes left
> [ 349.777717] fs_mark (4826) used greatest stack depth: 2280 bytes left
> [ 418.139415] fs_mark (4928) used greatest stack depth: 1936 bytes left
> [ 460.492282] fs_mark (4993) used greatest stack depth: 1336 bytes left
> [ 544.825418] fs_mark (5104) used greatest stack depth: 1112 bytes left
> [ 689.503970] fs_mark (5265) used greatest stack depth: 1000 bytes left
>
> We've got absolutely no spare stack space anymore in the IO path.
> And the IO path can't get much simpler than filesystem -> virtio
> block device.

Ugh, that's bad. A thousand bytes of stack space left is much too close
to the limit for comfort.

Do you have the stack traces for these so that we can look at the worst
offenders? If the new blk-mq code is to blame, it needs to be fixed.

__virtblk_add_req() has a 300-byte stack frame, it seems. Looking
elsewhere, blkdev_issue_discard() has 350 bytes of stack frame, but is
hopefully not in any normal path - online discard is moronic, and I'm
assuming XFS doesn't do that.

There are a lot of 200+ byte stack frames in block/blk-core.s, and they
all seem to be of the type perf_trace_block_buffer() - things created
with DECLARE_EVENT_CLASS(), afaik. Why they all have 200+ bytes of
frame, I have no idea. That sounds like a potential disaster too,
although hopefully it's mostly leaf functions - but leaf functions
*deep* in the callchain. Tejun? Steven, why _do_ they end up with such
huge frames?

And if the stack use comes from the VFS layer, we can probably work on
that too. But I don't think that has really changed much lately..

               Linus
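
For context on Dave's numbers: the "used greatest stack depth" lines are
printed at task exit by check_stack_usage() in kernel/exit.c when
CONFIG_DEBUG_STACK_USAGE is set. The kernel zero-fills each thread stack
at allocation and, when a task exits, scans from the far end of the
stack for the deepest word the task ever dirtied. A minimal,
userspace-runnable sketch of that scan (the helper name and the
fake-stack setup here are illustrative, not kernel code):

#include <stdio.h>

#define THREAD_SIZE 8192	/* x86-64 kernel stacks were 8K in 3.13 */

/* Mimics stack_not_used(): walk up from the unused end of the stack
 * until we hit a word some call chain actually dirtied. */
static unsigned long stack_not_used_sketch(const unsigned long *stack_end)
{
	const unsigned long *n = stack_end;

	do {
		n++;
	} while (!*n);

	return (unsigned long)((const char *)n - (const char *)stack_end);
}

int main(void)
{
	unsigned long fake_stack[THREAD_SIZE / sizeof(unsigned long)] = { 0 };

	/* Pretend a deep call chain dirtied everything except the last
	 * 1000 bytes, as in the worst fs_mark line above. */
	fake_stack[1000 / sizeof(unsigned long)] = 0xdeadbeef;

	printf("used greatest stack depth: %lu bytes left\n",
	       stack_not_used_sketch(fake_stack));
	return 0;
}

So "1000 bytes left" means only 1000 bytes remained between the deepest
frame reached and the end of an 8K stack - everything else was consumed.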
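
One plausible reading of that __virtblk_add_req() number: the 3.13-era
function in drivers/block/virtio_blk.c keeps five struct scatterlist
locals plus a six-entry pointer array on the stack, and those alone get
close to 300 bytes. A compilable sizing sketch - the scatterlist layout
below is a stand-in with assumed field names, sized like the x86-64
kernel struct without debug options:

#include <stdio.h>

/* Stand-in for the kernel's struct scatterlist (the real one is in
 * include/linux/scatterlist.h; CONFIG_DEBUG_SG adds a magic field). */
struct scatterlist {
	unsigned long page_link;
	unsigned int  offset;
	unsigned int  length;
	unsigned long dma_address;
};

int main(void)
{
	/* Locals shaped like 3.13-era __virtblk_add_req(): one sg entry
	 * per request segment type, plus the sgs[] pointer table. */
	struct scatterlist hdr, status, cmd, sense, inhdr, *sgs[6];

	printf("five sg entries: %zu bytes\n",
	       sizeof(hdr) + sizeof(status) + sizeof(cmd) +
	       sizeof(sense) + sizeof(inhdr));
	printf("sgs[6] pointer table: %zu bytes\n", sizeof(sgs));
	return 0;
}

On x86-64 that prints 120 + 48 bytes; add CONFIG_DEBUG_SG, a dma_length
field, register spills and alignment, and the frame plausibly lands near
the 300 bytes observed.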
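
On the DECLARE_EVENT_CLASS() question - a likely contributor, going by
the 3.13-era include/trace/ftrace.h, is that every generated
perf_trace_<event>() probe keeps a struct pt_regs on its own stack
(168 bytes on x86-64) and fills it with perf_fetch_caller_regs(). A
heavily simplified, userspace-compilable sketch of that generated shape
(the stubs below stand in for kernel types; only the on-stack pt_regs
is the point):

#include <stdio.h>
#include <string.h>

/* Same size as x86-64's struct pt_regs: 21 unsigned-long slots. */
struct pt_regs { unsigned long regs[21]; };

static void perf_fetch_caller_regs(struct pt_regs *regs)
{
	memset(regs, 0, sizeof(*regs));	/* stub */
}

/* Rough shape of the perf probe DECLARE_EVENT_CLASS() expands to: */
static void perf_trace_block_buffer_sketch(void *__data)
{
	struct pt_regs __regs;		/* 168 bytes of stack by itself */
	unsigned long long __addr = 0, __count = 1;
	int __entry_size = 0, __data_size = 0, rctx = 0;

	perf_fetch_caller_regs(&__regs);
	/* ... the real expansion then does perf_trace_buf_prepare(),
	 * assigns the event fields, and submits the buffer ... */
	printf("at least %zu bytes of locals\n",
	       sizeof(__regs) + sizeof(__addr) + sizeof(__count) +
	       sizeof(__entry_size) + sizeof(__data_size) + sizeof(rctx));
	(void)__data;
}

int main(void)
{
	perf_trace_block_buffer_sketch(NULL);
	return 0;
}

That pt_regs alone accounts for most of a 200+ byte frame even in an
otherwise trivial leaf function.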