Thanks!
On Sun, Apr 26, 2020 at 10:13 AM Roger Heflin <rogerheflin@xxxxxxxxx> wrote:
A lazy unmount leaves every process that was accessing the nfs server
still accessing it, with its files and directories open. In reality
lazy unmount has very few valid uses and quite a few invalid ones,
because it does not do exactly what you think it does. It removes the
mount from the visible mount table, but everything that was using it
keeps using it. To clear up all of those accesses you either need to
fix the nfs server or kill every process on the client that is
accessing it (at which point you might as well reboot the client,
since everything important is usually using the nfs resources
anyway). The only use I have seen for it is on a client where only a
small, non-critical portion of the processes are using the nfs mount
while the critical ones are not. At best I view it as an option to
buy you a few hours so that the reboot can be a scheduled event.
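
For example, to see which client processes still hold files open under the
(now lazily unmounted) share, you can walk /proc. A minimal Python sketch,
assuming the share was mounted at /mnt/nfs (that path is an assumption;
substitute your actual mount point) and run as root so all fd tables are
readable:

#!/usr/bin/env python3
# Sketch: list processes that still hold open files under an NFS mount point.
# MOUNT is an assumption -- replace it with the path your share was mounted on.
import os

MOUNT = "/mnt/nfs"

for pid in filter(str.isdigit, os.listdir("/proc")):
    fd_dir = "/proc/%s/fd" % pid
    try:
        for fd in os.listdir(fd_dir):
            target = os.readlink(os.path.join(fd_dir, fd))
            if target.startswith(MOUNT):
                # /proc/<pid>/comm holds the short process name
                with open("/proc/%s/comm" % pid) as f:
                    comm = f.read().strip()
                print(pid, comm, target)
                break
    except (PermissionError, FileNotFoundError):
        continue  # process exited, or its fd table is not readable by us

fuser -vm <mountpoint> or lsof <mountpoint> give roughly the same
information when they are installed, but after a lazy unmount the old path
no longer resolves to the nfs filesystem, so scanning /proc directly tends
to be more reliable. Once you have the PIDs you can kill them and the
timeout messages should stop.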
On Sun, Apr 26, 2020 at 4:14 AM Javier Perez <pepebuho@xxxxxxxxx> wrote:
>
> Hi again.
> I sshed into the nfs server and restarted the nfs service.
> It seems to have solved the problem, at least as far as journalctl is concerned.
> Still, I would like to know if there is a way to stop whatever is still trying to access the nfs server from the client machine and avoid the error flood in journalctl.
>
> Thanks
>
> JP
>
> On Sun, Apr 26, 2020 at 3:56 AM Javier Perez <pepebuho@xxxxxxxxx> wrote:
>>
>> Hi
>> I had to unplug the ethernet cable from the nfs server.
>> After I plugged it back in, the client machine has been filling my journal with the following message:
>>
>> nfs: server "ipaddress" not responding, time out
>>
>> where ipaddress is the ip address of the nfs server.
>>
>> I did a lazy unmount of all the shared directories, but I am still getting this message.
>> If I try to use Thunar, it will not open up, and even the panel will not accept a click. I can still switch windows with ALT+TAB.
>>
>> How can I find out which processes are still trying to reach the nfs subdirectories, and kill them?
>>
>> Last time this happened I had to reboot the system. I do not want to do that again.
>>
>> Thanks
>>
>> JP
--
------------------------------
/\_/\
|O O| pepebuho@xxxxxxxxx
~~~~ Javier Perez
~~~~ While the night runs
~~~~ toward the day...
m m Pepebuho watches
from his high perch.
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx