On Fri, Jun 28, 2013 at 09:30:17AM +0900, OGAWA Hirofumi wrote:
> Theodore Ts'o <tytso@xxxxxxx> writes:
>
> > On Fri, Jun 28, 2013 at 08:37:40AM +0900, OGAWA Hirofumi wrote:
> >>
> >> Well, anyway, it is simple. This issue came up as a performance
> >> regression when I was experimenting with using the kernel bdi
> >> flusher from our own flusher. The issue was sync(2), like I said.
> >> And this was one issue I couldn't solve on the tux3 side, unlike
> >> the other optimizations.
> >
> > A performance regression using fsstress? That's not a program
> > intended to be a useful benchmark for measuring performance.
>
> Right. fsstress is used as a stress tool for me too, as part of CI,
> with a background vmstat 1. Anyway, that is why I noticed this.
>
> I agree it would not be high priority. But I don't think we should
> stop optimizing it.

But you're not proposing any sort of optimisation at all - you're
simply proposing to hack around the problem so you don't have to care
about it. The VFS is a shared resource - it has to work well for
everyone - and that means we need to fix problems and not ignore them.

As I said, wait_sb_inodes() is fixable. I'm not fixing it for tux3,
though - I'm fixing it because it's causing soft lockups on XFS and
ext4 in 3.10-rc6:

https://lkml.org/lkml/2013/6/27/772

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
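
For context on why wait_sb_inodes() is a problem, here is a minimal
C sketch of the 3.10-era function from fs/fs-writeback.c. It is a
simplification of that kernel's code, not a verbatim quote from this
thread; details such as the s_umount lock assertion are omitted, and
names like inode_sb_list_lock reflect that era's tree:

    /*
     * Simplified sketch of the 3.10-era wait_sb_inodes().
     * sync(2) ends up here after queueing writeback: it walks
     * *every* cached inode on the superblock under the global
     * inode_sb_list_lock and waits for writeback on each inode
     * that still has pagecache pages. The walk is O(cached
     * inodes) even when almost all of them are clean, which is
     * why a sync(2) on a loaded machine can run long enough to
     * trigger soft lockup warnings.
     */
    static void wait_sb_inodes(struct super_block *sb)
    {
    	struct inode *inode, *old_inode = NULL;

    	spin_lock(&inode_sb_list_lock);
    	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
    		struct address_space *mapping = inode->i_mapping;

    		spin_lock(&inode->i_lock);
    		/* Skip inodes being freed or with no pagecache. */
    		if ((inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) ||
    		    mapping->nrpages == 0) {
    			spin_unlock(&inode->i_lock);
    			continue;
    		}
    		__iget(inode);		/* pin it across the wait */
    		spin_unlock(&inode->i_lock);
    		spin_unlock(&inode_sb_list_lock);

    		/*
    		 * Drop the previous inode's reference only after the
    		 * list lock is released, since iput() can sleep.
    		 */
    		iput(old_inode);
    		old_inode = inode;

    		/* Wait for any in-flight writeback on this inode. */
    		filemap_fdatawait(mapping);

    		cond_resched();
    		spin_lock(&inode_sb_list_lock);
    	}
    	spin_unlock(&inode_sb_list_lock);
    	iput(old_inode);
    }

The entire walk serialises on one global spinlock and visits every
cached inode, clean or dirty. That is what makes it "fixable" in the
sense above: one plausible direction is to keep a separate
per-superblock list of only the inodes that currently have writeback
in flight, so that sync(2) never has to visit clean inodes at all.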