Re: [ceph-users] Much more dentries than inodes, is that normal?


 



Yeah, I checked the dump, and it is truly the known issue.

Thanks

2017-03-08 17:58 GMT+08:00 John Spray <jspray@xxxxxxxxxx>:
> On Tue, Mar 7, 2017 at 3:05 PM, Xiaoxi Chen <superdebuger@xxxxxxxxx> wrote:
>> Thanks John.
>>
>> Very likely. Note that mds_mem::ino + mds_cache::strays_created ~=
>> mds::inodes. Also, this MDS was previously the standby one and became
>> active a few days ago due to a failover.
>>
>> "mds": {
>>         "inodes": 1291393,
>> }
>> "mds_cache": {
>>         "num_strays": 3559,
>>         "strays_created": 706120,
>>         "strays_purged": 702561
>> }
>> "mds_mem": {
>>         "ino": 584974,
>> }
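
(For reference, the arithmetic from those counters: 584974 (mds_mem::ino)
+ 706120 (strays_created) = 1291094, which is within ~300 of the 1291393
inodes reported in the mds section, so the extra dentries are essentially
accounted for by strays.)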
>>
>> I do have a cache dump from the MDS via the admin socket.  Is there
>> anything I can check in it to make 100% sure?
>
> You could go through that dump and look for the dentries with no inode
> number set, but honestly if this is a previously-standby-replay daemon
> and you're running pre-Kraken code I'd be pretty sure it's the known
> issue.
>
> John
>
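
In case it helps anyone finding this thread later, a rough way to check a
dump like this is to write the cache to a file and count dentry lines that
have nothing linked.  This is only a sketch -- it assumes the text dump marks
unlinked dentries with a "NULL" token, which is what the builds I looked at
print, so adjust the pattern to whatever your version emits:

    ceph daemon mds.<id> dump cache /tmp/mds-cache.txt
    grep -c '\[dentry ' /tmp/mds-cache.txt                  # total dentries
    grep '\[dentry ' /tmp/mds-cache.txt | grep -c ' NULL'   # null dentries (format assumption)
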
>>
>> Xiaoxi
>>
>> 2017-03-07 22:20 GMT+08:00 John Spray <jspray@xxxxxxxxxx>:
>>> On Tue, Mar 7, 2017 at 9:17 AM, Xiaoxi Chen <superdebuger@xxxxxxxxx> wrote:
>>>> Hi,
>>>>
>>>>       From the admin socket of the MDS, I got the following data on our
>>>> production CephFS environment: roughly 585K inodes and almost the same
>>>> number of caps, but more than 2x as many dentries as inodes.
>>>>
>>>>       I am pretty sure we don't use hard links intensively (if at all),
>>>> and the #ino matches "rados ls --pool $my_data_pool".
>>>>
>>>>       Thanks for any explanations, appreciate it.
>>>>
>>>>
>>>> "mds_mem": {
>>>>         "ino": 584974,
>>>>         "ino+": 1290944,
>>>>         "ino-": 705970,
>>>>         "dir": 25750,
>>>>         "dir+": 25750,
>>>>         "dir-": 0,
>>>>         "dn": 1291393,
>>>>         "dn+": 1997517,
>>>>         "dn-": 706124,
>>>>         "cap": 584560,
>>>>         "cap+": 2657008,
>>>>         "cap-": 2072448,
>>>>         "rss": 24599976,
>>>>         "heap": 166284,
>>>>         "malloc": 18446744073708721289,
>>>>         "buf": 0
>>>>     },
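
(Side note: counters like these come straight off the MDS admin socket,
e.g. something along the lines of

    ceph daemon mds.<id> perf dump

and the mds / mds_cache / mds_mem sections quoted in this thread are part of
that output; the exact daemon id depends on your deployment.)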
>>>>
>>>
>>> One possibility is that you have many "null" dentries, which are
>>> created when we do a lookup and a file is not found -- we create a
>>> special dentry to remember that that filename does not exist, so that
>>> we can return ENOENT quickly next time.  On pre-Kraken versions, null
>>> dentries can also be left behind after file deletions when the
>>> deletion is replayed on a standby-replay MDS
>>> (http://tracker.ceph.com/issues/16919).
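
(To illustrate the first case: a simple negative lookup from a client, e.g.

    stat /mnt/cephfs/somedir/no-such-file    # hypothetical mount path; returns ENOENT

is enough for the MDS to instantiate a null dentry for that name, so the
next lookup of it can be answered from cache.)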
>>>
>>> John
>>>
>>>
>>>
>>>>
>>>> Xiaoxi