On Thursday October 16, jlayton@xxxxxxxxxx wrote:
>
> Thanks for the info Neil, that helps clarify this...
>
> Using RLIMIT_NOFILE is an interesting idea. From a cursory look at the
> code, the default for RLIMIT_NOFILE looks like it's generally 1024.
> We'll have to assume that this limit will effectively act as a cap on
> the number of concurrent lockd clients. It's not too hard to imagine a
> server with more clients than this (think of a large compute cluster).

If all those clients used UDP, this would not be a problem. While I see
the value of TCP for NFS, it doesn't seem as convincing for NLM. But I
don't expect we have the luxury of insisting that clients use UDP for
locking :-(

> The problem, as you mention, is that that limit won't be easily tunable.
> I think we need some mechanism for an admin to tune this limit. It
> doesn't have to be tunable on the fly, but it shouldn't require a kernel
> rebuild. We could eliminate this check for single-threaded services
> entirely, but I suppose that leaves the door open for DoS attacks
> against those services.
>
> Maybe the best thing is to go with Bruce's idea and add a sv_maxconn
> field to the svc_serv struct. We could make that default to the larger
> of the RLIMIT_NOFILE rlim_cur value and the currently calculated value.
> Eventually we could add a mechanism to allow someone to tune that
> value. A module parameter would probably be fine for lockd. We might
> even want to set the limit lower for things like the nfsv4 callback
> thread.
>
> Thoughts?

A per-service setting that defaults to something reasonable like your
suggestions, and can be overridden by a module parameter, sounds like a
good idea. If you change the module parameter via
/sys/module/lockd/parameters/max_connections, it wouldn't take effect
until the service was stopped and restarted, but I expect that is
acceptable (and could probably be 'fixed' if really needed).

NeilBrown
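
For concreteness, here is a rough sketch (not an actual patch) of how a
per-service sv_maxconn cap could plug into the sunrpc connection-limit
check. Only the sv_maxconn field itself comes from the discussion above;
the svc_check_conn_limits() name, the sv_tmpcnt counter, and the
(nrthreads + 3) * 20 heuristic are assumptions about the current code:

	/* Sketch only -- field added to struct svc_serv in
	 * include/linux/sunrpc/svc.h:
	 *
	 *	unsigned int	sv_maxconn;	/* max connections allowed;
	 *					 * 0 == use default heuristic */
	 */

	/* net/sunrpc/svc_xprt.c (sketch): enforce the per-service limit
	 * when accepting new connections, falling back to the existing
	 * per-thread heuristic if the service set no explicit limit. */
	static void svc_check_conn_limits(struct svc_serv *serv)
	{
		unsigned int limit = serv->sv_maxconn ? serv->sv_maxconn :
					(serv->sv_nrthreads + 3) * 20;

		if (serv->sv_tmpcnt > limit) {
			/* Too many connections: print a rate-limited
			 * warning and pick an old temporary socket to
			 * close, as the code does today. */
		}
	}

Defaulting sv_maxconn to 0 would mean existing services keep their
current behaviour, and only services that opt in (lockd, the nfsv4
callback thread) get the new cap.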
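A matching sketch of the lockd side, assuming the parameter ends up
being called max_connections as in the sysfs path above. The fixed
default of 1024 merely stands in for whatever RLIMIT_NOFILE-derived
value is eventually chosen; because the value is only copied into
sv_maxconn when the service is created, writing the sysfs file later
would not take effect until lockd is restarted, which matches the
behaviour described above:

	/* fs/lockd/svc.c (sketch) */
	static unsigned int max_connections = 1024;
	module_param(max_connections, uint, 0644);
	MODULE_PARM_DESC(max_connections,
			 "Maximum number of concurrent client connections to lockd");

	/* ... in lockd_up(), once the svc_serv has been created ... */
		serv->sv_maxconn = max_connections;

Using 0644 permissions makes the parameter visible and writable under
/sys/module/lockd/parameters/, but the copy into sv_maxconn at service
start is what actually determines when a new value takes effect.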