Hi Elians,
you might want to create a ticket in the Ceph bug tracker and attach the
failing OSD startup log with debug-bluefs set to 20. It can be pretty large
though...
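For reference - and assuming OSD id 30 from your log, adjust to your setup -
one way to raise that log level is via ceph.conf on the node:

    [osd]
        debug bluefs = 20/20

then restart the OSD and grab /var/log/ceph/ceph-osd.30.log, or run the OSD
manually in the foreground so it logs to stderr:

    # -d: run in foreground and log to stderr; -i: OSD id
    ceph-osd -d -i 30 --debug_bluefs 20/20 2>&1 | tee osd.30.bluefs.log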
Also, I'm wondering what preceded the first failure - maybe an unexpected
shutdown or something else?
Thanks,
Igor
On 10/30/2020 5:41 AM, Elians Wan wrote:
Can anyone help? BlueFS mount failed after a long time.
The error message:
2020-10-30 05:33:54.906725 7f1ad73f5e00 1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-30/block size 7.28TiB
2020-10-30 05:33:54.906758 7f1ad73f5e00 1 bluefs mount
2020-10-30 06:00:32.881850 7f1ad73f5e00 -1 *** Caught signal (Segmentation fault) **
 in thread 7f1ad73f5e00 thread_name:ceph-osd
 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
1: (()+0xaa2044) [0x5570d12af044]
2: (()+0x11390) [0x7f1ad56d2390]
3: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned long, unsigned long, ceph::buffer::list*, char*)+0xad4) [0x5570d125ea34]
4: (BlueFS::_replay(bool)+0x409) [0x5570d1267599]
5: (BlueFS::mount()+0x209) [0x5570d126b659]
6: (BlueStore::_open_db(bool)+0x169c) [0x5570d117acdc]
7: (BlueStore::_mount(bool)+0x3ad) [0x5570d11aeded]
8: (OSD::init()+0x3e2) [0x5570d0d00f12]
9: (main()+0x2f0a) [0x5570d0c0a0ca]
10: (__libc_start_main()+0xf0) [0x7f1ad4658830]
11: (_start()+0x29) [0x5570d0c97329]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx