Hi...

On Tue, Mar 31, 2009 at 12:34 AM, David Neiss <davidaneiss@xxxxxxxxx> wrote:
> Thanks for the response.
>
> OK, so it seems like if you call io_schedule, then the kernel takes
> idle time and accumulates that into io_wait if the count of # of
> threads currently blocked in that mode is >= 1. So, to get total idle
> time you need to add io_wait and idle (which I see being done in
> Bootchart java code). Presumably then, large io_wait times could
> indicate that you are IO bound - good to know.

Your explanation makes sense to me... and it is good to know for me as well.

> But, it would seem then that any code that has this kind of blocking
> IO operations then would need to use io_schedule and not any of the
> other kernel synchronization primitives then because the regular
> primitives (completions) wouldn't use io_schedule(), they would just
> use schedule()?

I have never dug that deep, but I guess completions still use the usual
schedule(). Up to this point, it depends entirely on the developer picking
the right API for the job. Maybe the kernel needs further janitorial work
in this area... It is quite similar to the days when developers still used
the BKL and were then introduced to finer-grained primitives such as wait
queues.

> If so, that seems like a drag since its not likely
> that all such blocking time would really be accounted for?

Yeah, for sure... Unless scheduler_tick() also accounted for the I/O-wait
condition, "idle time" would carry a double meaning: it could mean pure
idle time, or mixed idle plus I/O-wait time.

We face a fairly similar issue in memory-usage accounting, right? For
example, RSS (resident set size) actually includes shared memory areas,
while most people think it is the size of the allocated memory minus the
size of the loaded libraries.

regards,

Mulyadi.

--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ