On Fri, Apr 24, 2009 at 01:00:48PM -0400, Trond Myklebust wrote:
> On Fri, 2009-04-24 at 16:52 +0200, Miklos Szeredi wrote:
> > On Fri, 24 Apr 2009, Robin Holt wrote:
> > > I am not sure how you came to this conclusion. The address_space has
> > > the vma's chained together and protected by the i_mmap_lock. That is
> > > acquired prior to the cleaning operation. Additionally, the cleaning
> > > operation walks the process's page tables and will remove/write-protect
> > > the page before releasing the i_mmap_lock.
> > >
> > > Maybe I misunderstand. I hope I have not added confusion.
> >
> > Looking more closely, I think you're right.
> >
> > I thought that detach_vmas_to_be_unmapped() also removed them from
> > mapping->i_mmap, but that is not the case, it only removes them from
> > the process's mm_struct. The vma is only removed from ->i_mmap in
> > unmap_region() _after_ zapping the pte's.
> >
> > This means that while the pte zapping is going on, any page faults
> > will fail but page_mkclean() (and all of rmap) will continue to work.
> >
> > But then I don't see how we get a dirty pte without also first getting
> > a page fault. Weird...
>
> You don't, but unless you unmap the page when you write it out, you will
> not get any further page faults. The VM will just redirty the page
> without calling page_mkwrite().

Why? It should call page_mkwrite...

> As I said, I think I can fix the NFS problem by simply unmapping the
> page inside ->writepage() whenever we know the write request was
> originally set up by a page fault.

The biggest outstanding problem we have remaining is get_user_pages.
Callers are only required to hold a ref on the page and then they can
call set_page_dirty at any point after that.

I have a half-done patch somewhere to add a put_user_pages, and then we
could probably go from there to pinning the fs metadata (whether by
using the page lock or something else, I don't quite know).
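To illustrate, the problematic pattern looks roughly like this (just a
sketch using the current get_user_pages() signature; the function name
example_dirty_user_page is made up for illustration):

	static int example_dirty_user_page(struct mm_struct *mm,
					   unsigned long addr)
	{
		struct page *page;
		int ret;

		down_read(&mm->mmap_sem);
		ret = get_user_pages(current, mm, addr, 1, 1 /* write */,
				     0 /* force */, &page, NULL);
		up_read(&mm->mmap_sem);
		if (ret < 1)
			return ret < 0 ? ret : -EFAULT;

		/*
		 * The caller (or its DMA engine) may write to the page
		 * here, possibly long after ->writepage() has cleaned
		 * it.  Nothing forces another page_mkwrite() first.
		 */

		set_page_dirty(page);	/* fs may think the page is clean */
		page_cache_release(page);	/* drop the gup reference */
		return 0;
	}

A put_user_pages() would replace the bare page_cache_release() here,
which would at least give us a place where the fs can be told that the
pin is going away.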