On Wed, Jul 11, 2007 at 01:21:55PM +1000, Neil Brown wrote:
> And just by-the-way, the server doesn't really have the option of not
> sending the attribute.  If i_version isn't defined, it has to fake
> something using mtime, and hope that is good enough.

ctime, actually--the change attribute is also supposed to be updated on
attribute updates.

> Alternately we could mandate that i_version is always kept up-to-date
> and if a filesystem doesn't have anything to load from storage, it
> just sets it to the current time in nanoseconds.
>
> That would mean that a client would need to flush its cache whenever
> the inode fell out of cache on the server, but I don't think we can
> reliably do better than that.
>
> I think I like that approach.
>
> So my vote is to increment i_version in common code every time any
> change is made to the file, and alloc_inode should initialise it to
> current time, which might be changed by the filesystem before it calls
> unlock_new_inode.

So the client would be invalidating its cache more often than necessary,
rather than failing to invalidate it when it should.  I agree that
that's probably the better tradeoff, although I wish I had a better idea
of the downside.  I don't know, for example, whether users might see
unpleasant results if every client has to reread its cached data after a
server reboot.

The currently proposed change--just providing a model change-attribute
implementation for ext4 and leaving other filesystems untouched--is a
more conservative step.  So I'm inclined to do just the ext4 change
first, and then look into further change-attribute experiments next time
around....

--b.