Explicit locking in the fallback case provides a safe view of the table.
Getting rid of blocking semantics makes __fd_install usable again in
non-sleepable contexts, which eases backporting efforts.

A side effect is slightly nicer assembly for the common case, as
might_sleep() can now be removed.

Signed-off-by: Mateusz Guzik <mguzik@xxxxxxxxxx>
---
 Documentation/filesystems/porting |  4 ----
 fs/file.c                         | 11 +++++++----
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/Documentation/filesystems/porting b/Documentation/filesystems/porting
index 93e0a24..17bb4dc 100644
--- a/Documentation/filesystems/porting
+++ b/Documentation/filesystems/porting
@@ -502,10 +502,6 @@ in your dentry operations instead.
 	store it as cookie.
 --
 [mandatory]
-	__fd_install() & fd_install() can now sleep. Callers should not
-	hold a spinlock or other resources that do not allow a schedule.
---
-[mandatory]
 	any symlink that might use page_follow_link_light/page_put_link() must
 	have inode_nohighmem(inode) called before anything might start playing
 	with its pagecache. No highmem pages should end up in the pagecache of such
diff --git a/fs/file.c b/fs/file.c
index 9d047bd..4115503 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -592,13 +592,16 @@ void __fd_install(struct files_struct *files, unsigned int fd,
 {
 	struct fdtable *fdt;
 
-	might_sleep();
 	rcu_read_lock_sched();
 
-	while (unlikely(files->resize_in_progress)) {
+	if (unlikely(files->resize_in_progress)) {
 		rcu_read_unlock_sched();
-		wait_event(files->resize_wait, !files->resize_in_progress);
-		rcu_read_lock_sched();
+		spin_lock(&files->file_lock);
+		fdt = files_fdtable(files);
+		BUG_ON(fdt->fd[fd] != NULL);
+		rcu_assign_pointer(fdt->fd[fd], file);
+		spin_unlock(&files->file_lock);
+		return;
 	}
 	/* coupled with smp_wmb() in expand_fdtable() */
 	smp_rmb();
-- 
1.8.3.1
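
For context, the control flow after this patch is: detect a resize in
progress, drop the RCU sched read lock, and publish the descriptor under
files->file_lock, instead of sleeping on resize_wait until the resize
finishes. Below is a minimal user-space sketch of that pattern
(illustrative only: slot_table, slot_install and friends are made-up
names, and the RCU/barrier details the kernel relies on to close the
race between the flag check and the store are deliberately omitted):

/*
 * Sketch of "lockless fast path, locked fallback" using pthreads and
 * C11 atomics.  Not kernel code.
 */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

struct slot_table {
	pthread_mutex_t lock;		/* stands in for files->file_lock */
	atomic_bool resize_in_progress;	/* set by a resizer while it copies */
	_Atomic(void *) *slots;		/* current slot array */
};

static void slot_install(struct slot_table *t, size_t idx, void *obj)
{
	if (atomic_load(&t->resize_in_progress)) {
		/*
		 * Fallback: a resize is in flight.  Publish under the
		 * same lock a resizer would hold while copying, so the
		 * entry lands in whichever array ends up current.  No
		 * sleeping is involved, so this remains usable from
		 * non-sleepable contexts.
		 */
		pthread_mutex_lock(&t->lock);
		assert(atomic_load(&t->slots[idx]) == NULL);
		atomic_store(&t->slots[idx], obj);
		pthread_mutex_unlock(&t->lock);
		return;
	}

	/* Common case: no resize in flight, store into the current array. */
	assert(atomic_load(&t->slots[idx]) == NULL);
	atomic_store(&t->slots[idx], obj);
}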