Re: Cephfs mds node already exists crashes mds

Hi,

What stands out is the handle_client_mkdir() frame. Does this mean the MDS
crashed while a client was creating a new directory or snapshot? Any idea
of the steps that led up to it?

Thank you,
Bogdan Velica
croit.io

On Tue, Aug 20, 2024 at 7:47 PM Tarrago, Eli (RIS-BCT) <
Eli.Tarrago@xxxxxxxxxxxxxxxxxx> wrote:

> Here is the backtrace from a ceph crash
>
> ceph crash info
> '2024-08-20T16:07:39.319197Z_8bcdf3df-f9b5-451a-b971-16f8190ab351'
> {
>     "assert_condition": "!p",
>     "assert_file": "/build/ceph-18.2.4/src/mds/MDCache.cc",
>     "assert_func": "void MDCache::add_inode(CInode*)",
>     "assert_line": 251,
>     "assert_msg": "/build/ceph-18.2.4/src/mds/MDCache.cc: In function
> 'void MDCache::add_inode(CInode*)' thread 7f0551248700 time
> 2024-08-20T12:07:39.313490-0400\n/build/ceph-18.2.4/src/mds/MDCache.cc:
> 251: FAILED ceph_assert(!p)\n",
>     "assert_thread_name": "ms_dispatch",
>     "backtrace": [
>         "/lib/x86_64-linux-gnu/libpthread.so.0(+0x14420) [0x7f055736a420]",
>         "gsignal()",
>         "abort()",
>         "(ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x182) [0x7f05576e12c9]",
>         "/usr/lib/ceph/libceph-common.so.2(+0x32d42b) [0x7f05576e142b]",
>         "(MDCache::add_inode(CInode*)+0x348) [0x55f4f007b7e8]",
>         "(Server::prepare_new_inode(boost::intrusive_ptr<MDRequestImpl>&,
> CDir*, inodeno_t, unsigned int, file_layout_t const*)+0x7b3)
> [0x55f4f00023a3]",
>
> "(Server::handle_client_mkdir(boost::intrusive_ptr<MDRequestImpl>&)+0x1cd)
> [0x55f4f000546d]",
>         "(MDSContext::complete(int)+0x5f) [0x55f4f02b72cf]",
>         "(void finish_contexts<std::vector<MDSContext*,
> std::allocator<MDSContext*> > >(ceph::common::CephContext*,
> std::vector<MDSContext*, std::allocator<MDSContext*> >&, int)+0x90)
> [0x55f4eff5c730]",
>         "(MDSCacheObject::finish_waiting(unsigned long, int)+0x5c)
> [0x55f4f02c7dbc]",
>         "(Locker::eval_gather(SimpleLock*, bool, bool*,
> std::vector<MDSContext*, std::allocator<MDSContext*> >*)+0x12dc)
> [0x55f4f016fe2c]",
>         "(Locker::dispatch(boost::intrusive_ptr<Message const>
> const&)+0x16c) [0x55f4f0183b1c]",
>         "(MDSRank::_dispatch(boost::intrusive_ptr<Message const> const&,
> bool)+0x5a6) [0x55f4eff6b1a6]",
>         "(MDSRankDispatcher::ms_dispatch(boost::intrusive_ptr<Message
> const> const&)+0x5c) [0x55f4eff6b79c]",
>         "(MDSDaemon::ms_dispatch2(boost::intrusive_ptr<Message>
> const&)+0x1bf) [0x55f4eff5530f]",
>         "(Messenger::ms_deliver_dispatch(boost::intrusive_ptr<Message>
> const&)+0x468) [0x7f055795c448]",
>         "(DispatchQueue::entry()+0x657) [0x7f05579599e7]",
>         "(DispatchQueue::DispatchThread::entry()+0x11) [0x7f0557a20b11]",
>         "/lib/x86_64-linux-gnu/libpthread.so.0(+0x8609) [0x7f055735e609]",
>         "clone()"
>     ],
>     "ceph_version": "18.2.4",
>     "crash_id":
> "2024-08-20T16:07:39.319197Z_8bcdf3df-f9b5-451a-b971-16f8190ab351",
>     "entity_name": "mds.mds0202",
>     "os_id": "ubuntu",
>     "os_name": "Ubuntu",
>     "os_version": "20.04.6 LTS (Focal Fossa)",
>     "os_version_id": "20.04",
>     "process_name": "ceph-mds",
>     "stack_sig":
> "e51039f21ed17c3220d021c313d5af656ee59d9ccfb3e6d6f991bcf245dbf000",
>     "timestamp": "2024-08-20T16:07:39.319197Z",
>     "utsname_hostname": "mds02",
>     "utsname_machine": "x86_64",
>     "utsname_release": "5.4.0-190-generic",
>     "utsname_sysname": "Linux",
>     "utsname_version": "#210-Ubuntu SMP Fri Jul 5 17:03:38 UTC 2024"
> }
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



