RE: Zombie / Orphan open files


 




> What you are describing sounds like a bug in a system (be it client or
> server). There is state that the client thought it closed but the
> server still keeping that state.

Hi Olga

Based on a simple test-script experiment, here is a summary of
what I believe is happening:

1. An interactive user starts a process that opens one or more files.

2. A disruption that prevents
   NFS-client <-> NFS-server communication
   occurs while the file is open.  This can happen because
   the file was held open for a long time, or because it was
   opened too close to the start of the disruption.

   ( I believe the most common "disruption" is
     credential expiration. )

3. The user's process terminates before the disruption
   is cleared.  ( Or, stated another way, the disruption is
   not cleared until after the user's process terminates. )

   At the time the user's process terminates, the process
   cannot tell the server to close the server-side file state.

   After the process terminates, nothing will ever tell the server
   to close those files.  The now-zombie open files continue to
   consume server-side resources.

   In environments with many users, the problem is significant.
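To make the scenario concrete, here is a rough sketch of the kind of test I mean. The path and timing below are placeholders; in a real reproduction the file would sit on a Kerberos-secured NFSv4 mount and the sleep would span the credential lifetime, so the CLOSE can no longer be sent when the process exits.

```python
import os
import time

def hold_open_and_exit(path, hold_seconds):
    """Simulate the user's process: open a file, wait through the
    disruption window, then exit without an explicit close."""
    pid = os.fork()
    if pid == 0:
        # Child = the interactive user's process (step 1).
        fd = os.open(path, os.O_CREAT | os.O_RDWR | os.O_TRUNC, 0o600)
        os.write(fd, b"held open\n")
        # Step 2: the disruption window.  In a real run this sleep
        # would span credential expiry (hours, not a fraction of a
        # second as here).
        time.sleep(hold_seconds)
        # Step 3: exit without close().  The kernel releases the fd
        # locally, but over NFS the CLOSE cannot reach the server
        # once credentials have expired.
        os._exit(0)
    os.waitpid(pid, 0)

# Placeholder local path; a real test would use a file on the NFS mount.
hold_open_and_exit("/tmp/zombie-open-test", 0.1)
```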

My reasons for posting:

- Are not to have your team help troubleshoot my specific issue.
  ( That would be quite rude. )

They are:

- To determine whether my NAS vendor might accidentally
  be failing to do something they should.
  ( I no longer really think this is the case. )

- To determine whether this is a known behavior common to all NFS
  implementations ( Linux, etc. ), and if so, to have your team decide
  whether it is a problem that should be addressed in the spec and in
  the implementations.



Andy








