Many more dentries than inodes, is that normal?

On Tue, Mar 7, 2017 at 9:17 AM, Xiaoxi Chen <superdebuger at gmail.com> wrote:
> Hi,
>
>       From the admin socket of the MDS, I got the following data on our
> production CephFS environment: roughly 585K inodes and almost the same
> number of caps, but more than 2x as many dentries as inodes.
>
>       I am pretty sure we don't use hard links intensively (if at all),
> and the #ino matches the output of "rados ls --pool $my_data_pool".
>
>       Thanks for any explanations, appreciate it.
>
>
> "mds_mem": {
>         "ino": 584974,
>         "ino+": 1290944,
>         "ino-": 705970,
>         "dir": 25750,
>         "dir+": 25750,
>         "dir-": 0,
>         "dn": 1291393,
>         "dn+": 1997517,
>         "dn-": 706124,
>         "cap": 584560,
>         "cap+": 2657008,
>         "cap-": 2072448,
>         "rss": 24599976,
>         "heap": 166284,
>         "malloc": 18446744073708721289,
>         "buf": 0
>     },
>
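The counters above come from "ceph daemon mds.<name> perf dump" on the MDS
host. A minimal sketch for pulling the same mds_mem section and printing the
dentry-to-inode ratio -- the daemon name "mds.a" is only a placeholder for
your MDS, and the script must run where it can reach the admin socket:

import json
import subprocess

MDS_DAEMON = "mds.a"  # placeholder: replace with your MDS daemon name

# "perf dump" over the admin socket returns JSON that includes the
# mds_mem section quoted above.
raw = subprocess.check_output(["ceph", "daemon", MDS_DAEMON, "perf", "dump"])
mem = json.loads(raw)["mds_mem"]

ino, dn, cap = mem["ino"], mem["dn"], mem["cap"]
print(f"inodes:   {ino}")
print(f"dentries: {dn}")
print(f"caps:     {cap}")
print(f"dn/ino:   {dn / ino:.2f}")
print(f"dentries without a matching in-memory inode: {dn - ino}")
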

One possibility is that you have many "null" dentries, which are
created when we do a lookup and the file is not found -- we create a
special dentry to remember that the filename does not exist, so that
we can return ENOENT quickly the next time.  On pre-Kraken versions,
null dentries can also be left behind after file deletions when the
deletion is replayed on a standby-replay MDS
(http://tracker.ceph.com/issues/16919).
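
If you want to see this in action, here is a minimal sketch. The mount point
/mnt/cephfs and the daemon name mds.a are placeholders, and it assumes the
client lookup actually reaches the MDS rather than being answered from the
client's own cache; it stats a nonexistent path and watches dn move while
ino stays put:

import json
import os
import subprocess

MDS_DAEMON = "mds.a"     # placeholder MDS daemon name
MOUNT = "/mnt/cephfs"    # placeholder CephFS mount point

def mds_mem():
    # Same counters as the perf dump quoted earlier in the thread.
    raw = subprocess.check_output(["ceph", "daemon", MDS_DAEMON, "perf", "dump"])
    return json.loads(raw)["mds_mem"]

before = mds_mem()
try:
    # Looking up a name that does not exist can leave a null dentry in
    # the MDS cache so the next lookup gets ENOENT without another trip
    # to the MDS.
    os.stat(os.path.join(MOUNT, "no-such-file-for-testing"))
except FileNotFoundError:
    pass
after = mds_mem()

print(f"dn:  {before['dn']} -> {after['dn']}")
print(f"ino: {before['ino']} -> {after['ino']}")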

John



>
> Xiaoxi
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

