On Tue, Nov 22, 2011 at 12:52:53PM -0500, Chuck Lever wrote:
> Servers have a finite amount of memory to store NFSv4 open and lock
> owners.  Moreover, servers may have a difficult time determining when
> they can reap their state owner table, thanks to gray areas in the
> NFSv4 protocol specification.

What's the gray area?

Reminding myself: OK, I guess it's the NFSv4.0 close-replay problem.
You have to keep around enough information about a closed stateid to
handle a replay.  If a client reuses the state owner then you can purge
that as soon as you bump the sequence number; otherwise you have to
keep it around a while (how long is unclear).

(Is that what you're referring to?)

> Thus clients should be careful to reuse state owners when possible.
>
> Currently Linux is not too careful.  When a user has closed all her
> files on one mount point, the state owner's reference count goes to
> zero, and it is released.  The next OPEN allocates a new one.  A
> workload that serially opens and closes files can run through a large
> number of open owners this way.
>
> When a state owner's reference count goes to zero, slap it onto a free
> list for that nfs_server, with an expiry time.  Garbage collect before
> looking for a state owner.  This makes state owners for active users
> available for re-use.

Makes sense to me.

> @@ -1739,6 +1745,7 @@ struct nfs_server *nfs4_create_server(const struct nfs_parsed_mount_data *data,
> 		goto error;
>
> 	dprintk("<-- nfs4_create_server() = %p\n", server);
> +	server->destroy = nfs4_destroy_server;
> 	return server;
>
> error:

> @@ -1792,6 +1799,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
> 		goto error;
>
> 	dprintk("<-- nfs_create_referral_server() = %p\n", server);
> +	server->destroy = nfs4_destroy_server;

Couldn't you avoid adding that line in two different places, if you put
it in nfs4_server_common_setup()?

--b.