Re: Much more dentries than inodes, is that normal?

On Tue, Mar 7, 2017 at 5:17 PM, Xiaoxi Chen <superdebuger@xxxxxxxxx> wrote:
> Hi,
>
>       From the admin socket of the MDS, I got the following data on our
> production CephFS env. Roughly, we have 585K inodes and almost the same
> number of caps, but we have >2x as many dentries as inodes.
>
>       I am pretty sure we don't use hard links intensively (if at all),
> and the #ino matches what "rados ls --pool $my_data_pool" shows.
>
>       Thanks for any explanation; I appreciate it.
>
>
> "mds_mem": {
>         "ino": 584974,
>         "ino+": 1290944,
>         "ino-": 705970,
>         "dir": 25750,
>         "dir+": 25750,
>         "dir-": 0,
>         "dn": 1291393,
>         "dn+": 1997517,
>         "dn-": 706124,
>         "cap": 584560,
>         "cap+": 2657008,
>         "cap-": 2072448,
>         "rss": 24599976,
>         "heap": 166284,
>         "malloc": 18446744073708721289,
>         "buf": 0
>     },
>

Maybe they are dirty null dentries. A null dentry has no inode linked to it (e.g. one left behind by an unlink or rename), and dirty ones stay in the cache until they have been journaled, which would explain "dn" being larger than "ino". Try flushing the MDS's journal and check again.
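
For example, something along these lines should work (a rough sketch; "mds.a" is just a placeholder for your actual MDS daemon name, and the commands need to run where that daemon's admin socket is reachable):

    # flush the MDS journal so dirty dentries get written out and can be trimmed
    ceph daemon mds.a flush journal

    # then re-read the cache counters
    ceph daemon mds.a perf dump mds_mem

If "dn" drops back close to "ino" afterwards, the extra dentries were null ones.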

Regards
Yan, Zheng

>
>
> Xiaoxi