On Thu, 8 Oct 2009 09:58:22 +0800 Wu Fengguang <fengguang.wu@xxxxxxxxx> wrote:

> On Thu, Oct 08, 2009 at 09:01:59AM +0800, KAMEZAWA Hiroyuki wrote:
> > IIUC, "iowait" cpustat data was calculated by runqueue->nr_iowait as
> > == kernel/sched.c
> > void account_idle_time(cputime_t cputime)
> > {
> >         struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
> >         cputime64_t cputime64 = cputime_to_cputime64(cputime);
> >         struct rq *rq = this_rq();
> >
> >         if (atomic_read(&rq->nr_iowait) > 0)
> >                 cpustat->iowait = cputime64_add(cpustat->iowait, cputime64);
> >         else
> >                 cpustat->idle = cputime64_add(cpustat->idle, cputime64);
> > }
> > ==
> > Then, for showing "cpu is in iowait", runqueue->nr_iowait should be modified
> > at some places. In old kernels, congestion_wait() et al. did that by calling
> > io_schedule_timeout().
> >
> > How is this runqueue->nr_iowait handled now?
>
> Good question. io_schedule() has an old comment for throttling IO wait:
>
>  * But don't do that if it is a deliberate, throttling IO wait (this task
>  * has set its backing_dev_info: the queue against which it should throttle)
>  */
> void __sched io_schedule(void)
>
> So it looks like both Jens' patch and this patch behave right in ignoring
> the iowait accounting for balance_dirty_pages() :)

Thank you for the clarification. Then, hmm, %iowait (which 'top' shows) didn't
work as designed, and we need to update throttle_vm_writeout() and some code
in vmscan.c.

Thanks for the input. BTW, I'd be glad to know "how many threads/IOs are
throttled now" per bdi.

Regards,
-Kame
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
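
For reference, a simplified sketch of the blocking side that pairs with the
accounting quoted above: io_schedule() raises rq->nr_iowait around the context
switch, which is what makes account_idle_time() charge idle ticks to iowait
instead of idle. This is condensed from the 2.6.31-era kernel/sched.c (delay
accounting and the current->in_iowait flag are left out), not the verbatim
source.

==
void __sched io_schedule(void)
{
        struct rq *rq = raw_rq();

        /* While nr_iowait > 0, idle ticks on this CPU count as iowait. */
        atomic_inc(&rq->nr_iowait);
        schedule();                     /* give up the CPU while the IO is in flight */
        atomic_dec(&rq->nr_iowait);     /* back to plain idle accounting */
}
==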