2011/3/15 Jeff Layton <jlayton@xxxxxxxxxx>:
> On Fri, 25 Feb 2011 13:21:50 +0300
> Pavel Shilovsky <piastry@xxxxxxxxxxx> wrote:
>
>> Use invalidate_inode_pages2, which doesn't leave pages behind even if
>> shrink_page_list() has a temporary reference on them. This prevents a
>> data coherency problem on exclusive oplock opens.
>>
>> Signed-off-by: Pavel Shilovsky <piastry@xxxxxxxxxxx>
>> ---
>>  fs/cifs/inode.c |   16 +++++++++++-----
>>  1 files changed, 11 insertions(+), 5 deletions(-)
>>
>> diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
>> index 589f3e3..0011e95 100644
>> --- a/fs/cifs/inode.c
>> +++ b/fs/cifs/inode.c
>> @@ -1687,12 +1687,18 @@ cifs_invalidate_mapping(struct inode *inode)
>>
>>  	cifs_i->invalid_mapping = false;
>>
>> -	/* write back any cached data */
>> -	if (inode->i_mapping && inode->i_mapping->nrpages != 0) {
>> -		rc = filemap_write_and_wait(inode->i_mapping);
>> -		mapping_set_error(inode->i_mapping, rc);
>> +	if (inode->i_mapping) {
>> +		/* write back any cached data */
>> +		if (inode->i_mapping->nrpages != 0) {
>> +			rc = filemap_write_and_wait(inode->i_mapping);
>> +			mapping_set_error(inode->i_mapping, rc);
>> +		}
>> +		rc = invalidate_inode_pages2(inode->i_mapping);
>> +		if (rc)
>> +			cERROR(1, "%s: could not invalidate inode %p", __func__,
>> +			       inode);
>>  	}
>> -	invalidate_remote_inode(inode);
>> +
>>  	cifs_fscache_reset_inode_cookie(inode);
>>  }
>>
>
> I think using invalidate_inode_pages2 is the right thing to do. I'm not
> so keen however on simply popping a printk when that fails. The user is
> going to see that and say "huh?"
>
> I think we need to consider allowing EBUSY to bubble up to userspace
> appropriately. Otherwise we still risk data coherency problems, right?
> Perhaps cifs_invalidate_mapping should be changed to an int return and
> the callers could return errors from it?

I like this idea. I can recreate the patch if nobody objects.

-- 
Best regards,
Pavel Shilovsky.