Hi,
We are currently trying to debug and understand a problem with cephfs
and inotify watchers. A user is running Visual Studio Code with a
workspace on a cephfs mount. VSC uses inotify for monitoring files and
directories in the workspace:
root@cli:~# ./inotify-info
------------------------------------------------------------------------------
INotify Limits:
max_queued_events 16,384
max_user_instances 128
max_user_watches 1,048,576
------------------------------------------------------------------------------
Pid Uid App Watches Instances
3599940 1236 node 1,681 1
1 0 systemd 106 5
3600170 1236 node 54 1
874797 0 udevadm 17 1
3599118 0 systemd 7 3
3599707 1236 systemd 7 3
3599918 1236 node 6 1
2047 100 dbus-daemon 3 1
2054 0 sssd 2 1
2139 0 systemd-logind (deleted) 1 1
2446 0 agetty 1 1
3600001 1236 node 1 1
------------------------------------------------------------------------------
Total inotify Watches: 1886
Total inotify Instances: 20
------------------------------------------------------------------------------
root@cli:~# cat /sys/kernel/debug/ceph/XYZ.client354064780/caps | wc -l
1773083
root@cli:~# uname -a
Linux cli 6.1.0-23-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.99-1
(2024-07-15) x86_64 GNU/Linux
So roughly 1,700 watchers result in over 1.7 million caps (some of the
watchers might be for files on different filesystems). I've also
checked this on the MDS side; it also reports a very high number of
caps for that client. Running tools like lsof on the host as root
reports only very few open files (<50), so inotify seems to be
responsible for the massive caps build-up. Terminating VSC results in
a sharp drop in caps (just a few open files / directories left
afterwards).
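For anyone who wants to reproduce this without VSC, here is a minimal
sketch that mimics its behaviour: one inotify watch per directory in a
tree, added via the raw libc wrappers. The workspace path is a
placeholder; point it at a directory on the cephfs mount and then
compare the watch count against the kernel client's cap count.

```python
import ctypes
import os

# Call the raw inotify wrappers from libc (Python's stdlib has no binding).
libc = ctypes.CDLL("libc.so.6", use_errno=True)

# Event masks from <sys/inotify.h>
IN_MODIFY = 0x00000002
IN_CREATE = 0x00000100
IN_DELETE = 0x00000200

def add_recursive_watches(root):
    """Add one inotify watch per directory under root,
    similar to what VSC's file watcher does for a workspace."""
    fd = libc.inotify_init()
    if fd < 0:
        raise OSError(ctypes.get_errno(), "inotify_init failed")
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        wd = libc.inotify_add_watch(
            fd, dirpath.encode(), IN_CREATE | IN_DELETE | IN_MODIFY)
        if wd >= 0:
            count += 1
    return fd, count

if __name__ == "__main__":
    # Placeholder path -- replace with the workspace on the cephfs mount.
    fd, n = add_recursive_watches("/mnt/cephfs/workspace")
    print(f"watching {n} directories")
    # While this process is alive, check the cap count, e.g.:
    #   wc -l /sys/kernel/debug/ceph/<fsid>.client<id>/caps
    os.close(fd)
```

Closing the inotify fd (or exiting the process) drops the watches,
which matches the sharp cap drop observed when terminating VSC.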
Is this a known problem?
Best regards,
Burkhard Linke
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx