On Thu, May 28, 2009 at 06:27:40PM +0900, Ryo Tsuruta wrote:
> Hi Vivek,
>
> > +#ifdef CONFIG_TRACK_ASYNC_CONTEXT
> > +	if (elv_bio_sync(bio)) {
> > +		/* sync io. Determine cgroup from submitting task context. */
> > +		cgroup = task_cgroup(current, io_subsys_id);
> > +		return cgroup;
> > +	}
> > +
> > +	/* Async io. Determine cgroup from with cgroup id stored in page */
> > +	bio_cgroup_id = get_blkio_cgroup_id(bio);
> > +
> > +	if (!bio_cgroup_id)
> > +		return NULL;
> > +
> > +	cgroup = blkio_cgroup_lookup(bio_cgroup_id);
> > +#else
> > +	cgroup = task_cgroup(current, io_subsys_id);
> > +#endif
> > +	return cgroup;
> > +}
>
> There is a case where a kernel thread (such as device-mapper drivers)
> submits a sync IO instead of a task which originates the IO. I think
> you should always use get_blkio_cgroup_id() to determine cgroup.
>

Hi Ryo,

Ok. Can you give some examples of drivers which submit reads in a
different context altogether? You mentioned in the past that dm-crypt
looks like one.

How does the current CFQ take care of that? If a BE prio 7 or an RT
prio 0 task is submitting a READ, CFQ will not know it and will put that
READ in the queue of the READ-submitting device-mapper thread (maybe BE
prio 3 or 4)?

Always determining the cgroup from the bio will make things slower and,
at the same time, more complicated from the CFQ point of view. Right now
CFQ creates and caches the queue pointer in the io context of the
bio-submitting task and assumes that sync requests are coming from that
task/io context. Currently there can only be one sync queue associated
with one io context. So if a single thread (say, a worker thread) is
submitting reads on behalf of other processes, then we lose the io
context information. (A rough sketch of resolving the cgroup purely from
the bio is appended at the end of this mail, for reference.)

In fact, currently we don't even carry ioprio and io class information in
the bio. So it looks like we also need to carry task io context
information in the bio to be able to associate the bio with the right
queue at the CFQ level. That makes it a bit more complicated.

For the time being I will keep it on my TODO list and handle it once
other, more severe problems have been taken care of.

Thanks
Vivek

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
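
Appendix: a rough sketch of the approach Ryo suggests above, i.e. always
resolving the owning cgroup from the bio via get_blkio_cgroup_id(), so
that IO submitted from a different context (e.g. a device-mapper worker
thread) is still charged to the task that originated it. Only the helpers
quoted in the hunk above are assumed to exist; the wrapper name
bio_to_cgroup() and the id type are illustrative guesses, and this is
untested.

/*
 * Illustrative sketch only: helper names are taken from the quoted
 * hunk; the wrapper name and the id type are guesses.
 */
static struct cgroup *bio_to_cgroup(struct bio *bio)
{
	unsigned long bio_cgroup_id;

	/*
	 * Look up the owner from the id stored with the page for both
	 * sync and async IO, so that IO submitted on behalf of another
	 * task (e.g. by a dm worker thread) is charged to the originator.
	 */
	bio_cgroup_id = get_blkio_cgroup_id(bio);
	if (bio_cgroup_id)
		return blkio_cgroup_lookup(bio_cgroup_id);

	/* No id attached to the bio: fall back to the submitter's cgroup. */
	return task_cgroup(current, io_subsys_id);
}

Note that this only covers the cgroup lookup; as discussed above, CFQ
would still need ioprio/io class (io context) information carried in the
bio to map such a request onto the right sync queue.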