Pekka Enberg <penberg@xxxxxxxxxxxxxx> writes:

> Hi Eric,
>
> On Tue, Jun 2, 2009 at 12:50 AM, Eric W. Biederman
> <ebiederm@xxxxxxxxxxxx> wrote:
>> +#ifdef CONFIG_FILE_HOTPLUG
>> +
>> +static bool file_in_use(struct file *file)
>> +{
>> +	struct task_struct *leader, *task;
>> +	bool in_use = false;
>> +	int i;
>> +
>> +	rcu_read_lock();
>> +	do_each_thread(leader, task) {
>> +		for (i = 0; i < MAX_FILE_HOTPLUG_LOCK_DEPTH; i++) {
>> +			if (task->file_hotplug_lock[i] == file) {
>> +				in_use = true;
>> +				goto found;
>> +			}
>> +		}
>> +	} while_each_thread(leader, task);
>> +found:
>> +	rcu_read_unlock();
>> +	return in_use;
>> +}
>
> This seems rather heavy-weight. If we're going to use this
> infrastructure for forced unmount, I think this will be a problem.
> Can't we do this in two stages: (1) mark a bit that forces
> file_hotplug_read_trylock to always fail and (2) block until the last
> remaining in-kernel file_hotplug_read_unlock() has executed?

Yes, there is room for more optimization in the slow path.  I haven't
noticed it being a problem yet, so I figured I would start with stupid
and simple.

I can easily see two passes.  The first setting the flag and calling
f_op->dead.  The second some kind of consolidated walk through the
task list, allowing checking on multiple files at once.

I'm not ready to consider anything that will add cost to the fast path
in the file descriptors, though.

Eric
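
A minimal sketch of the consolidated second pass Eric describes, not
part of the posted patch: it reuses task->file_hotplug_lock[] and
MAX_FILE_HOTPLUG_LOCK_DEPTH from the quoted code, and assumes pass one
(setting the dead flag and calling f_op->dead) has already run so no
new references can be taken.  files_in_use_batch() is a hypothetical
name introduced here for illustration.

    #include <linux/fs.h>
    #include <linux/sched.h>
    #include <linux/rcupdate.h>
    #include <linux/string.h>

    /*
     * One walk of the task list that tests all @nr candidate files,
     * instead of one full walk per file.  Sets in_use[i] = true if
     * files[i] is currently pinned by some task.
     */
    static void files_in_use_batch(struct file **files, bool *in_use, int nr)
    {
    	struct task_struct *leader, *task;
    	int slot, i;

    	memset(in_use, 0, nr * sizeof(*in_use));

    	rcu_read_lock();
    	do_each_thread(leader, task) {
    		for (slot = 0; slot < MAX_FILE_HOTPLUG_LOCK_DEPTH; slot++) {
    			struct file *held = task->file_hotplug_lock[slot];

    			if (!held)
    				continue;
    			/* Mark every candidate this task currently pins. */
    			for (i = 0; i < nr; i++) {
    				if (held == files[i])
    					in_use[i] = true;
    			}
    		}
    	} while_each_thread(leader, task);
    	rcu_read_unlock();
    }

The point of the batching is that tearing down many files costs one
task-list walk rather than one per file, while the per-descriptor fast
path (file_hotplug_read_trylock/unlock) is left untouched.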