mds openfiles table shards

Hi all,

During rejoin, an MDS can sometimes go OOM if its openfiles table is too large.
The workaround described by Ceph devs has been "rados rm -p cephfs_metadata mds0_openfiles.0".

On our cluster we have several such objects for rank 0:

mds0_openfiles.0 exists with size: 199978
mds0_openfiles.1 exists with size: 153650
mds0_openfiles.2 exists with size: 40987
mds0_openfiles.3 exists with size: 7746
mds0_openfiles.4 exists with size: 413
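
(For reference, one way to enumerate the shards, assuming they are simply
numbered consecutively from .0 as they appear to be above; "rados stat"
exits nonzero once an object is missing, which ends the loop:)

    # Stat each rank-0 openfiles shard in cephfs_metadata until one is missing.
    i=0
    while rados -p cephfs_metadata stat "mds0_openfiles.$i"; do
        i=$((i+1))
    done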

If we suffer such an OOM, do we need to rm *all* of those objects or
only the `.0` object?
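
(In case the answer is "all of them", I imagine a loop like the following
would do it. This is just an untested sketch extending the single-object
workaround above, and presumably only safe while rank 0 is down:)

    # Remove every rank-0 openfiles shard from the metadata pool.
    for obj in $(rados -p cephfs_metadata ls | grep '^mds0_openfiles\.'); do
        rados -p cephfs_metadata rm "$obj"
    done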

Best Regards,

Dan