Re: Zombie / Orphan open files

On Mon, Jan 30, 2023 at 5:44 PM Andrew J. Romero <romero@xxxxxxxx> wrote:
>
> Hi
>
> This is a quick general NFS server question.
>
> Does the NFSv4.x specification require or recommend that the NFS server, after some reasonable time,
> should or must close orphan / zombie open files?

Why should the server be responsible for a badly behaving client? It
seems like you are advocating for a world where the problem is hidden
rather than solved. But because bugs do occur and some customers want
a quick solution, some storage providers do have ways of releasing
resources (like open state) that the client will never ask for again.
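
For what it's worth, on a Linux knfsd server reasonably recent kernels
expose per-client state under /proc/fs/nfsd/clients/, which lets an
admin see which opens an absent client is still holding. A minimal
read-only sketch, assuming that layout (availability and the exact
file contents depend on your kernel, so verify before relying on it):

    #!/usr/bin/env python3
    # Sketch: walk /proc/fs/nfsd/clients and print each NFSv4 client's
    # identity and the state (opens, locks, delegations) it still holds.
    # Layout assumption: each client directory has "info" and "states".
    from pathlib import Path

    CLIENTS = Path("/proc/fs/nfsd/clients")

    def dump_clients():
        if not CLIENTS.is_dir():
            print("no /proc/fs/nfsd/clients on this kernel")
            return
        for client in sorted(CLIENTS.iterdir()):
            print(f"--- client {client.name} ---")
            for name in ("info", "states"):
                f = client / name
                if f.exists():
                    print(f.read_text(), end="")

    if __name__ == "__main__":
        dump_clients()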

Why should we excuse bad user behaviour? For things like long-running
jobs, users have to be educated that their credentials must stay valid
for the duration of the job.
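
To make that concrete: the usual fix for long-running jobs is to
re-acquire the ticket from a keytab before it expires (k5start-style
wrappers do exactly this). A rough sketch of the same idea, with a
made-up principal, keytab path and interval:

    #!/usr/bin/env python3
    # Sketch: keep a Kerberos TGT fresh for a long-running job by
    # re-running kinit from a keytab well before the ticket expires.
    # PRINCIPAL, KEYTAB and RENEW_EVERY are placeholders.
    import subprocess
    import time

    PRINCIPAL = "batchuser@EXAMPLE.COM"
    KEYTAB = "/etc/krb5.keytabs/batchuser.kt"
    RENEW_EVERY = 6 * 3600  # seconds between refreshes

    def refresh_ticket():
        # kinit -k -t <keytab> <principal> obtains a fresh TGT
        # without prompting for a password.
        subprocess.run(["kinit", "-k", "-t", KEYTAB, PRINCIPAL],
                       check=True)

    if __name__ == "__main__":
        while True:
            refresh_ticket()
            time.sleep(RENEW_EVERY)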

Why should we excuse poor application behaviour that doesn't close
files? But in a way we do: the OS will make sure that the file is
closed when the application exits without explicitly closing it. So
I'm curious how you end up in a state with zombie opens.
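
To illustrate that point about the OS closing files on exit, a trivial
Python sketch of both patterns (the path is hypothetical):

    # 1) Deterministic close while the process is running (preferred):
    with open("/mnt/nfs/scratch/output.log", "a") as f:
        f.write("done\n")
    # The with-block closes the file as soon as the block exits, which
    # lets the NFS client release the corresponding open state.

    # 2) Implicit close: a descriptor never closed explicitly is only
    # released when the process exits; until then the server keeps the
    # open state around.
    f = open("/mnt/nfs/scratch/output.log", "a")
    f.write("still open until the process exits\n")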

> On several NAS platforms I have seen large numbers of orphan / zombie open files "pile up"
> as a result of Kerberos credential expiration.
>
> Does the Red Hat NFS server "deal with" orphan / zombie open files ?
>
> Thanks
>
> Andy Romero
> Fermilab
>
>


