David,

On a somewhat related note: the header definition of fscache_maybe_release_page() doesn't seem quite correct. The comment states it should return true if the page can be freed ... yet when there's no cookie, or PG_fscache is not set, it returns false. This doesn't seem right; in fact the NFS code wraps this call in its own PageFsCache() check so that it still returns true in those cases. (I've put a rough sketch of what I mean at the bottom of this mail.)

661 /**
662  * fscache_maybe_release_page - Consider releasing a page, cancelling a store
663  * @cookie: The cookie representing the cache object
664  * @page: The netfs page that is being cached.
665  * @gfp: The gfp flags passed to releasepage()
666  *
667  * Consider releasing a page for the vmscan algorithm, on behalf of the netfs's
668  * releasepage() call.  A storage request on the page may be cancelled if it is
669  * not currently being processed.
670  *
671  * The function returns true if the page no longer has a storage request on it,
672  * and false if a storage request is left in place.  If true is returned, the
673  * page will have been passed to fscache_uncache_page().  If false is returned
674  * the page cannot be freed yet.
675  */
676 static inline
677 bool fscache_maybe_release_page(struct fscache_cookie *cookie,
678                                 struct page *page,
679                                 gfp_t gfp)
680 {
681         if (fscache_cookie_valid(cookie) && PageFsCache(page))
682                 return __fscache_maybe_release_page(cookie, page, gfp);
683         return false;
684 }

On Tue, Jul 9, 2013 at 8:46 AM, Milosz Tanski <milosz@xxxxxxxxx> wrote:
> On Tue, Jul 9, 2013 at 6:33 AM, David Howells <dhowells@xxxxxxxxxx> wrote:
>> Milosz Tanski <milosz@xxxxxxxxx> wrote:
>>
>>> It looks like both the cifs and NFS code do not bother with any
>>> locking around cifs_fscache_set_inode_cookie. Is there no concern over
>>> multiple open() calls racing to create the cookie in those
>>> filesystems?
>>
>> Yeah... That's probably wrong. AFS obviates the need for special locking by
>> doing it in afs_iget().
>
> I'm going to create a mutex around the enable cache / disable cache in
> the Ceph code since the spinlock is also racy right now.
>
>>
>> Hmmm... I think I've just spotted what might be the cause of pages getting
>> marked PG_fscache whilst belonging to the allocator.
>>
>> void nfs_fscache_set_inode_cookie(struct inode *inode, struct file *filp)
>> {
>>         if (NFS_FSCACHE(inode)) {
>>                 nfs_fscache_inode_lock(inode);
>>                 if ((filp->f_flags & O_ACCMODE) != O_RDONLY)
>>                         nfs_fscache_disable_inode_cookie(inode);
>>                 else
>>                         nfs_fscache_enable_inode_cookie(inode);
>>                 nfs_fscache_inode_unlock(inode);
>>         }
>> }
>>
>> can release the cookie whilst reads are in progress on it when an inode being
>> read suddenly changes to an inode being written. We need some sort of
>> synchronisation on that there.
>
> So far my experience has been that synchronization has been the most
> tricky part of implementing fscache for Ceph. Things work great when
> there's a simple shell accessing data and break down when you're doing
> an HPC kind of workload.
>
>>
>> David
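
P.S. To make the above concrete, here is roughly the shape of the wrapper I mean, modelled on what NFS does in its releasepage path. This is only a sketch; the name example_releasepage() and its exact form are mine for illustration, not code from any tree:

/*
 * Sketch of a netfs releasepage helper.  Because fscache_maybe_release_page()
 * returns false when the cookie is invalid or PG_fscache is clear, the caller
 * has to do its own PageFsCache() test first, otherwise it would wrongly tell
 * the VM that a page with no fscache involvement cannot be freed.
 */
static int example_releasepage(struct fscache_cookie *cookie,
                               struct page *page, gfp_t gfp)
{
        if (PageFsCache(page)) {
                /* A store may be in flight; let fscache decide. */
                if (!fscache_maybe_release_page(cookie, page, gfp))
                        return 0;       /* store still pending, can't free */
        }
        /* No fscache involvement (or the store was cancelled): freeable. */
        return 1;
}

Given the current behaviour of the header function, dropping that outer PageFsCache() test would make every releasepage() call on an uncached inode report the page as unfreeable, which is presumably not what the comment intends.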