On 2012-12-04 21:23, Jeff Moyer wrote:
> Jens Axboe <jaxboe@xxxxxxxxxxxx> writes:
>
>> On 2012-12-03 19:53, Jeff Moyer wrote:
>>> Hi,
>>>
>>> In realtime environments, it may be desirable to keep the per-bdi
>>> flusher threads from running on certain cpus.  This patch adds a
>>> cpu_list file to /sys/class/bdi/* to enable this.  The default is to
>>> tie the flusher threads to the same numa node as the backing device
>>> (though I could be convinced to make it a mask of all cpus to avoid
>>> a change in behaviour).
>>
>> Looks sane, and I think defaulting to the home node is a sane default.
>> One comment:
>>
>>> +	ret = cpulist_parse(buf, newmask);
>>> +	if (!ret) {
>>> +		spin_lock(&bdi->wb_lock);
>>> +		task = wb->task;
>>> +		if (task)
>>> +			get_task_struct(task);
>>> +		spin_unlock(&bdi->wb_lock);
>>
>> bdi->wb_lock needs to be bh safe. The above should have caused lockdep
>> warnings for you.
>
> No lockdep complaints.  I'll double check that's enabled (but I usually
> have it enabled...).
>
>>> @@ -437,6 +488,14 @@ static int bdi_forker_thread(void *ptr)
>>>  			spin_lock_bh(&bdi->wb_lock);
>>>  			bdi->wb.task = task;
>>>  			spin_unlock_bh(&bdi->wb_lock);
>>> +			mutex_lock(&bdi->flusher_cpumask_mutex);
>>> +			ret = set_cpus_allowed_ptr(task,
>>> +						   bdi->flusher_cpumask);
>>> +			mutex_unlock(&bdi->flusher_cpumask_mutex);
>>
>> It'd be very useful if we had a kthread_create_cpu_on_cpumask() instead
>> of a _node() variant, since the latter could easily be implemented on
>> top of the former. But not really a show stopper for the patch...
>
> Hmm, if it isn't too scary, I might give this a try.

Should not be; it's pretty much just a matter of removing the node part
of the create struct passed in and making it a cpumask. And for the
on_node() case, cpumask_of_node() will do the trick.

-- 
Jens Axboe
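
For reference, a minimal, untested sketch of what the store-side hunk
quoted above might look like with the bh-safe lock variants; the bdi,
wb, and newmask names come from the quoted patch and the surrounding
sysfs store function is not shown:

	/*
	 * Sketch only: same logic as the quoted hunk, but taking
	 * wb_lock with the _bh variants so it matches the
	 * spin_lock_bh() usage in the forker thread.
	 */
	ret = cpulist_parse(buf, newmask);
	if (!ret) {
		spin_lock_bh(&bdi->wb_lock);
		task = wb->task;
		if (task)
			get_task_struct(task);	/* pin before dropping lock */
		spin_unlock_bh(&bdi->wb_lock);
	}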
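
And a rough, untested sketch of the cpumask-based create helper being
discussed; kthread_create_on_cpumask() is the proposed primitive, not an
existing API, and the macro is only meant to illustrate how the node
variant could sit on top of it:

/*
 * Illustrative only: make the cpumask-based helper the core primitive,
 * and express the _on_node() case via cpumask_of_node().  NUMA_NO_NODE
 * would presumably need to map to cpu_all_mask instead.
 */
struct task_struct *kthread_create_on_cpumask(int (*threadfn)(void *data),
					      void *data,
					      const struct cpumask *mask,
					      const char namefmt[], ...);

#define kthread_create_on_node(threadfn, data, node, namefmt, arg...)	\
	kthread_create_on_cpumask(threadfn, data, cpumask_of_node(node),	\
				  namefmt, ##arg)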