On 21 Sep 2023, at 9:56, Charles Hedrick wrote:

> thanks. I can work with that info. Restarting the server isn't
> practical. This is a large-scale system serving hundreds of students.
> We generally keep it up uninterrupted for a whole semester. By web
> server, I mean the process, not the system.

Though it must have a lot of local state if you don't have it
load-balanced and redundant, so maybe even restarting the process is
impractical.

Without trying this, we're still guessing that it's the ACCESS cache.
You should be able to do something like "sudo su - webserveruser", and
that /should/ count as a login for that process, so that process
/should/ gain the access you need from the new membership. It's worth a
test to make sure we're not actually dealing with a different problem.

>> So, the NFS client will keep caching the result of previous calls to
>> unchanged inodes until it notices that the process' oldest parent
>> with the same user/credential has a task start_time that is older
>> than the currently cached entries.
>
> I trust you mean newer. This is JupyterHub, which likes to keep user
> processes around after logout and reattach when they log in. But as
> long as we know what's going on, there's a way for a user to kill
> their processes manually.

There have been some attempts to add an "fasc" or "nofasc" mount option
to the upstream NFS client, which would modify the behavior of the
client. That hasn't gained a lot of traction (I think because the patch
wants to change the default behavior again). It would be possible to
submit work to add a sysfs knob to flush the access cache; that could
look like a full cache flush for everyone, or maybe a flush of one
user's cached entries upon writing a uid to a sysfs file.

Have you tried talking to your NFS client vendor about this problem?

Ben
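
[A quick way to see the login-vs-membership distinction behind the
"sudo su -" suggestion above. This is just an illustrative sketch using
GNU coreutils `id`: a running process keeps the group list it was
handed at login, while the user database reflects current membership,
so the two can disagree until a fresh login. It doesn't touch the NFS
access cache itself.]

```shell
# Groups baked into this process's credential at login time:
id -nG

# Groups according to the user database right now (may include a
# newly added group that this already-running shell does not have):
id -nG "$(id -un)"
```

If the second list shows a group the first lacks, a fresh login (e.g.
"sudo su - webserveruser") is what picks it up.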