Hi all,
I am facing a major issue: one of my OSDs (osd.8) is down and does not come back up after a reboot. These are the last OSD log lines:
2018-07-20 10:43:00.701904 7f02f1b53d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1532063580701900, "job": 1, "event": "recovery_finished"}
2018-07-20 10:43:00.735978 7f02f1b53d80 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.5/rpm/el7/BUILD/ceph-12.2.5/src/rocksdb/db/db_impl_open.cc:1063] DB pointer 0x5638bd336000
2018-07-20 10:43:00.736016 7f02f1b53d80 1 bluestore(/var/lib/ceph/osd/ceph-8) _open_db opened rocksdb path db options compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152
2018-07-20 10:43:00.741543 7f02f1b53d80 1 freelist init
2018-07-20 10:43:00.756919 7f02f1b53d80 1 bluestore(/var/lib/ceph/osd/ceph-8) _open_alloc opening allocation metadata
2018-07-20 10:43:00.950290 7f02f1b53d80 1 bluestore(/var/lib/ceph/osd/ceph-8) _open_alloc loaded 769 G in 13784 extents
2018-07-20 10:43:00.968909 7f02f1b53d80 -1 bluestore(/var/lib/ceph/osd/ceph-8) _verify_csum bad crc32c/0x1000 checksum at blob offset 0x0, got 0xbdb7a352, expected 0x70303e25, device location [0x10000~1000], logical extent 0x0~1000, object #-1:7b3f43c4:::osd_superblock:0#
2018-07-20 10:43:00.968941 7f02f1b53d80 -1 osd.8 0 OSD::init() : unable to read osd superblock
2018-07-20 10:43:00.968957 7f02f1b53d80 1 bluestore(/var/lib/ceph/osd/ceph-8) umount
2018-07-20 10:43:00.969461 7f02f1b53d80 1 stupidalloc 0x0x5638bd358700 shutdown
2018-07-20 10:43:00.969540 7f02f1b53d80 1 freelist shutdown
2018-07-20 10:43:00.969567 7f02f1b53d80 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.5/rpm/el7/BUILD/ceph-12.2.5/src/rocksdb/db/db_impl.cc:217] Shutdown: canceling all background work
2018-07-20 10:43:00.969770 7f02f1b53d80 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.5/rpm/el7/BUILD/ceph-12.2.5/src/rocksdb/db/db_impl.cc:343] Shutdown complete
2018-07-20 10:43:00.969983 7f02f1b53d80 1 bluefs umount
2018-07-20 10:43:00.976932 7f02f1b53d80 1 stupidalloc 0x0x5638bd053d50 shutdown
2018-07-20 10:43:00.976973 7f02f1b53d80 1 bdev(0x5638bd137c00 /var/lib/ceph/osd/ceph-8/block) close
2018-07-20 10:43:01.229274 7f02f1b53d80 1 bdev(0x5638bd137a00 /var/lib/ceph/osd/ceph-8/block) close
2018-07-20 10:43:01.265043 7f02f1b53d80 -1 ** ERROR: osd init failed: (22) Invalid argument
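For context on the _verify_csum line above: BlueStore stores a crc32c checksum per 4 KiB (0x1000) block and recomputes it on every read; "got 0xbdb7a352, expected 0x70303e25" means the data read back from disk no longer matches what was written. Here the mismatch is on the osd_superblock object itself, which is why OSD::init() fails with "unable to read osd superblock". A minimal illustrative sketch of that kind of check (a pure-Python CRC-32C; not Ceph's actual implementation, and `verify_block` is a hypothetical helper for illustration only):

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def verify_block(block: bytes, stored_csum: int) -> None:
    """Raise if the recomputed checksum disagrees with the stored one,
    mirroring the shape of BlueStore's _verify_csum error message."""
    got = crc32c(block)
    if got != stored_csum:
        raise IOError(
            f"bad crc32c/0x{len(block):x} checksum: "
            f"got 0x{got:08x}, expected 0x{stored_csum:08x}")

# Standard CRC-32C check value for the ASCII string "123456789":
assert crc32c(b"123456789") == 0xE3069283
```

Because the checksum is verified on read, a mismatch tells you the on-disk bytes changed after they were written (media error, cache loss on an unclean power cut, etc.) rather than pointing to a software logic bug in the OSD.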
I cannot figure out why this happened. Is there a workaround, and how can I prevent this in the future?
Thanks
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com