Re: [Warning Possible spam] Re: ceph mds dump tree - root inode is not in cache

Hi Weiwen,

I get the following results:

# ceph fs status
fs - 0 clients
==
RANK  STATE     MDS        ACTIVITY     DNS    INOS
 0    active  tceph-03  Reqs:    0 /s   997k   962k
  POOL      TYPE     USED  AVAIL
fs-meta1  metadata  6650M   780G
fs-meta2    data       0    780G
fs-data     data       0   1561G
STANDBY MDS
  tceph-01
  tceph-02
MDS version: ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)

# ceph tell mds.0 dump tree '~mds0/stray0'
2022-08-07T14:28:00.735+0200 7fb6827fc700  0 client.434599 ms_handle_reset on v2:10.41.24.15:6812/2903519715
2022-08-07T14:28:00.776+0200 7fb6837fe700  0 client.434605 ms_handle_reset on v2:10.41.24.15:6812/2903519715
root inode is not in cache

# ceph tell mds.0 dump tree '~mdsdir/stray0'
2022-08-07T14:30:07.370+0200 7f364d7fa700  0 client.434623 ms_handle_reset on v2:10.41.24.15:6812/2903519715
2022-08-07T14:30:07.411+0200 7f364e7fc700  0 client.434629 ms_handle_reset on v2:10.41.24.15:6812/2903519715
root inode is not in cache

Whatever I try, it says the same: "root inode is not in cache". Are the ms_handle_reset messages possibly hinting at a problem with my installation? The MDS is the only daemon type for which these appear when I use ceph tell commands.

This is a test cluster, so I can run all sorts of experiments on it. Please let me know if there is anything I should try to pull more information out.
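One idea, in case it makes sense: as far as I understand, rank 0's stray directories are stored in the metadata pool as objects 600.00000000 through 609.00000000, so I should be able to count the stray dentries directly with rados. A sketch against the fs-meta1 pool from the status output above, assuming the stray directories are not fragmented:

# for i in $(seq 0 9); do echo -n "stray$i: "; rados -p fs-meta1 listomapkeys 60$i.00000000 | wc -l; done

Each omap key in those objects should correspond to one stray dentry, so the per-directory counts ought to add up to the total stray count.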

With the default settings, this is all that's in today's log after trying a couple of times; the SIGHUP comes from logrotate:

2022-08-07T04:02:06.693+0200 7f7856b1c700 -1 received  signal: Hangup from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
2022-08-07T14:27:01.298+0200 7f785731d700  1 mds.tceph-03 asok_command: dump tree {prefix=dump tree,root=~mdsdir/stray0} (starting...)
2022-08-07T14:27:07.581+0200 7f785731d700  1 mds.tceph-03 asok_command: dump tree {prefix=dump tree,root=~mds0/stray0} (starting...)
2022-08-07T14:27:48.976+0200 7f785731d700  1 mds.tceph-03 asok_command: dump tree {prefix=dump tree,root=~mds0/stray0} (starting...)
2022-08-07T14:28:00.776+0200 7f785731d700  1 mds.tceph-03 asok_command: dump tree {prefix=dump tree,root=~mds0/stray0} (starting...)
2022-08-07T14:30:07.410+0200 7f785731d700  1 mds.tceph-03 asok_command: dump tree {prefix=dump tree,root=~mdsdir/stray0} (starting...)
2022-08-07T14:31:15.839+0200 7f785731d700  1 mds.tceph-03 asok_command: dump tree {prefix=dump tree,root=~mdsdir/stray0} (starting...)
2022-08-07T14:31:19.900+0200 7f785731d700  1 mds.tceph-03 asok_command: dump tree {prefix=dump tree,root=~mds0/stray0} (starting...)

Please let me know if/how I can provide more info.

Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: 胡 玮文 <huww98@xxxxxxxxxxx>
Sent: 05 August 2022 03:43:05
To: Frank Schilder
Cc: ceph-users@xxxxxxx
Subject: [Warning Possible spam] Re: ceph mds dump tree - root inode is not in cache

Hi Frank,

I have not experienced this before. Maybe mds.tceph-03 is in standby state? Could you show the output of “ceph fs status”?

You can also try “ceph tell mds.0 …” and let ceph find the correct daemon for you.

You may also try dumping “~mds0/stray0”.
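Concretely, that would be:

# ceph tell mds.0 dump tree '~mds0/stray0'

Here “mds.0” addresses rank 0, so the command is routed to whichever daemon currently holds that rank.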

Weiwen Hu

> On 4 August 2022, at 23:22, Frank Schilder <frans@xxxxxx> wrote:
>
> Hi all,
>
> I'm stuck on a very annoying problem with a Ceph Octopus test cluster (latest stable version). I need to investigate the contents of the MDS stray buckets, and something like this should work:
>
> [root@ceph-adm:tceph-03 ~]# ceph daemon mds.tceph-03 dump tree '~mdsdir' 3
> [root@ceph-adm:tceph-03 ~]# ceph tell mds.tceph-03 dump tree '~mdsdir/stray0'
> 2022-08-04T16:57:54.010+0200 7f3475ffb700  0 client.371437 ms_handle_reset on v2:10.41.24.15:6812/2903519715
> 2022-08-04T16:57:54.052+0200 7f3476ffd700  0 client.371443 ms_handle_reset on v2:10.41.24.15:6812/2903519715
> root inode is not in cache
>
> However, I either get nothing or an error message. Whatever I try, I cannot figure out how to pull the root inode into the MDS cache - if this is even the problem here. I also don't understand why the annoying ms_handle_reset messages are there. I found the second command in a script:
>
> Code line: https://gist.github.com/huww98/91cbff0782ad4f6673dcffccce731c05#file-cephfs-reintegrate-conda-stray-py-L11
>
> that came up in this conversation: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/4TDASTSWF4UIURKUN2P7PGZZ3V5SCCEE/
>
> The only place I can find "root inode is not in cache" is https://tracker.ceph.com/issues/53597#note-14, where it says that the above commands should return the tree. I have about 1 million stray entries and they must be somewhere. mds.tceph-03 is the only active MDS.
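>
> For reference, the stray count should also be visible in the MDS perf counters (num_strays in the mds_cache section):
>
> # ceph daemon mds.tceph-03 perf dump | grep num_strays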
>
> Can someone help me out here?
>
> Thanks and best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



