On Sat, 25 Sep 2010 08:16:44 -0400
Jeff Layton <jlayton@xxxxxxxxxx> wrote:

> On Sat, 25 Sep 2010 00:23:30 -0400
> Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
>
> > On Fri, Sep 24, 2010 at 02:11:37PM -0400, Jeff Layton wrote:
> > > On Fri, 24 Sep 2010 12:58:55 -0500
> > > Steve French <smfrench@xxxxxxxxx> wrote:
> > >
> > > > We need to see the performance impact. As you say cifs_writepages is
> > > > synchronous so we should be ok without it. Any test results
> > > > before/after?
> > >
> > > No, I haven't tested this for performance. It is a correctness issue
> > > though. We absolutely can't put the last reference to the last open
> > > filehandle without flushing all of the data first.
> > >
> > > My expectation here though is that this may help performance in some
> > > cases, since this patch also has it skip the flush on files opened
> > > read-only.
> >
> > ->flush is called on every close call, ->release on the last close for a
> > given file pointer. Maybe you want a filemap_flush in ->flush and
> > filemap_write_and_wait in ->release?
>
> Hmm...there is one problem with this scheme. __fput ignores the error
> return from ->release; only errors from ->flush are returned to
> userspace. So if we only filemap_fdatawait in the ->release op, then we
> have the potential to miss returning writeback-related errors on a
> close call.
>
> On a side note, why does f_op->release return an int? Are there places
> in the kernel besides __fput that call it? If not, maybe we should
> consider changing it to a void function to make this clearer.

Now that I've had a chance to look over this code, I'm seeing some other
problems with it. I think the patch that fixes this problem is going to
have to be made part of a patchset to overhaul how open files are
managed in cifs. That's work that's long overdue, but it'll probably
take me some time to sort it all out. Stay tuned...
--
Jeff Layton <jlayton@xxxxxxxxx>