On Sat, Nov 16, 2019 at 10:40:05AM +1100, Dave Chinner wrote:
> On Fri, Nov 15, 2019 at 03:08:43PM +0800, Ming Lei wrote:
> > On Fri, Nov 15, 2019 at 03:56:34PM +1100, Dave Chinner wrote:
> > > On Fri, Nov 15, 2019 at 09:08:24AM +0800, Ming Lei wrote:
> > I can reproduce the issue with 4k block size on another RH system, and
> > the login info of that system has been shared to you in RH BZ.
> >
> > 1)
> > sysctl kernel.sched_min_granularity_ns=10000000
> > sysctl kernel.sched_wakeup_granularity_ns=15000000
>
> So, these settings definitely influence behaviour.
>
> If these are set to kernel defaults (4ms and 3ms each):
>
> sysctl kernel.sched_min_granularity_ns=4000000
> sysctl kernel.sched_wakeup_granularity_ns=3000000
>
> The migration problem largely goes away - the fio task migration
> event count goes from ~2,000 a run down to 200/run.
>
> That indicates that the migration trigger is likely load/timing
> based. The analysis below is based on the 10/15ms numbers above,
> because it makes it so much easier to reproduce.

On another machine, './xfs_complete 512' can be migrated 11~12K times/sec
without changing the above two kernel sched defaults; however, the fio IO
thread only takes 40% CPU.

With './xfs_complete 4k' on this machine, the fio IO CPU utilization is
>= 98%.

Thanks,
Ming
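
For anyone else trying to reproduce: a minimal sketch of the tuning plus a
way to count the migration events. The sysctl values are the ones quoted in
the thread; using the sched:sched_migrate_task tracepoint via perf to count
migrations is my assumption about how to observe them (the thread does not
say how the counts were gathered). Needs root.

```shell
# Make the migrations easy to reproduce (values from this thread):
sysctl kernel.sched_min_granularity_ns=10000000
sysctl kernel.sched_wakeup_granularity_ns=15000000

# Assumption: count task migrations system-wide for 10s while the
# workload (e.g. the xfs_complete script) runs in another shell:
perf stat -e sched:sched_migrate_task -a -- sleep 10

# Kernel defaults (4ms min / 3ms wakeup), under which the fio task
# migration count reportedly drops from ~2,000/run to ~200/run:
sysctl kernel.sched_min_granularity_ns=4000000
sysctl kernel.sched_wakeup_granularity_ns=3000000
```

These are privileged configuration commands, so no self-contained test is
given; `perf stat` can be narrowed to the fio task with `-p <pid>` instead
of `-a` if only that thread's migrations are of interest.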