Sorry, here is the crash again with log level 20 turned on for bluestore / bluefs.
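For reference, a rough sketch of how this level of logging can be captured (assuming the standard debug_bluestore / debug_bluefs options and the OSD id 8 from this thread; the log file path is just an example) is to run the failing OSD in the foreground with the debug settings passed on the command line:

    # Hedged example: run the OSD in the foreground with verbose bluestore/bluefs
    # logging; ceph daemons accept config options as --option_name value.
    ceph-osd -f --cluster ceph --id 8 \
        --setuser ceph --setgroup ceph \
        --debug_bluestore 20 --debug_bluefs 20 --debug_bdev 20 2>&1 | tee /tmp/osd.8.debug.log

The same values can also be set as "debug bluestore = 20" etc. under the [osd] section of ceph.conf if the OSD is started through systemd. The tail of the resulting log, leading up to the crash: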
-31> 2019-02-25 15:07:27.842 7f2bfbd71240 10 bluestore(/var/lib/ceph/osd/ceph-8) _open_db initializing bluefs
-30> 2019-02-25 15:07:27.842 7f2bfbd71240 10 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-8/block.db
-29> 2019-02-25 15:07:27.842 7f2bfbd71240 1 bdev create path /var/lib/ceph/osd/ceph-8/block.db type kernel
-28> 2019-02-25 15:07:27.842 7f2bfbd71240 1 bdev(0x5651277e6a80 /var/lib/ceph/osd/ceph-8/block.db) open path /var/lib/ceph/osd/ceph-8/block.db
-27> 2019-02-25 15:07:27.842 7f2bfbd71240 1 bdev(0x5651277e6a80 /var/lib/ceph/osd/ceph-8/block.db) open size 107374182400 (0x1900000000, 100 GiB) block_size 4096 (4 KiB) rotational
-26> 2019-02-25 15:07:27.842 7f2bfbd71240 1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-8/block.db size 100 GiB
-25> 2019-02-25 15:07:27.842 7f2bfbd71240 10 bluestore(/var/lib/ceph/osd/ceph-8/block.db) _read_bdev_label
-24> 2019-02-25 15:07:27.842 7f2bfbd71240 10 bluestore(/var/lib/ceph/osd/ceph-8/block.db) _read_bdev_label got bdev(osd_uuid 77703c4b-eb1d-4fae-a0e8-d6a80e55cd6e, size 0x1900000000, btime 2018-11-10 10:10:39.072862, desc bluefs db, 0 meta)
-23> 2019-02-25 15:07:27.842 7f2bfbd71240 10 bluefs add_block_device bdev 2 path /var/lib/ceph/osd/ceph-8/block
-22> 2019-02-25 15:07:27.842 7f2bfbd71240 1 bdev create path /var/lib/ceph/osd/ceph-8/block type kernel
-21> 2019-02-25 15:07:27.842 7f2bfbd71240 1 bdev(0x5651277e6e00 /var/lib/ceph/osd/ceph-8/block) open path /var/lib/ceph/osd/ceph-8/block
-20> 2019-02-25 15:07:27.846 7f2bfbd71240 1 bdev(0x5651277e6e00 /var/lib/ceph/osd/ceph-8/block) open size 9834397171712 (0x8f1bfc00000, 8.9 TiB) block_size 4096 (4 KiB) rotational
-19> 2019-02-25 15:07:27.846 7f2bfbd71240 1 bluefs add_block_device bdev 2 path /var/lib/ceph/osd/ceph-8/block size 8.9 TiB
-18> 2019-02-25 15:07:27.846 7f2bfbd71240 1 bluefs mount
-17> 2019-02-25 15:07:27.846 7f2bfbd71240 10 bluefs _open_super
-16> 2019-02-25 15:07:27.846 7f2bfbd71240 10 bluefs _open_super superblock 54
-15> 2019-02-25 15:07:27.846 7f2bfbd71240 10 bluefs _open_super log_fnode file(ino 1 size 0x100000 mtime 0.000000 bdev 0 allocated 500000 extents [1:0x1062700000+100000,0:0xf500000+400000])
-14> 2019-02-25 15:07:27.846 7f2bfbd71240 20 bluefs _init_alloc
-13> 2019-02-25 15:07:27.846 7f2bfbd71240 10 bluefs _replay
-12> 2019-02-25 15:07:27.846 7f2bfbd71240 10 bluefs _replay log_fnode file(ino 1 size 0x100000 mtime 0.000000 bdev 0 allocated 500000 extents [1:0x1062700000+100000,0:0xf500000+400000])
-11> 2019-02-25 15:07:27.846 7f2bfbd71240 10 bluefs _read h 0x565127411c80 0x0~1000 from file(ino 1 size 0x100000 mtime 0.000000 bdev 0 allocated 500000 extents [1:0x1062700000+100000,0:0xf500000+400000])
-10> 2019-02-25 15:07:27.846 7f2bfbd71240 20 bluefs _read fetching 0x0~100000 of 1:0x1062700000+100000
-9> 2019-02-25 15:07:27.862 7f2bfbd71240 20 bluefs _read left 0x100000 len 0x1000
-8> 2019-02-25 15:07:27.862 7f2bfbd71240 20 bluefs _read got 4096
-7> 2019-02-25 15:07:27.862 7f2bfbd71240 20 bluefs _replay need 0x4000 more bytes
-6> 2019-02-25 15:07:27.862 7f2bfbd71240 10 bluefs _read h 0x565127411c80 0x1000~4000 from file(ino 1 size 0x100000 mtime 0.000000 bdev 0 allocated 500000 extents [1:0x1062700000+100000,0:0xf500000+400000])
-5> 2019-02-25 15:07:27.862 7f2bfbd71240 20 bluefs _read left 0xff000 len 0x4000
-4> 2019-02-25 15:07:27.862 7f2bfbd71240 20 bluefs _read got 16384
-3> 2019-02-25 15:07:27.862 7f2bfbd71240 10 bluefs _replay 0x0: txn(seq 1 len 0x4586 crc 0xfb7afd17)
-2> 2019-02-25 15:07:27.862 7f2bfbd71240 20 bluefs _replay 0x0: op_init
-1> 2019-02-25 15:07:27.862 7f2bfbd71240 20 bluefs _replay 0x0: op_alloc_add 0:0x1000~1ffff000
0> 2019-02-25 15:07:27.866 7f2bfbd71240 -1 *** Caught signal (Segmentation fault) **
in thread 7f2bfbd71240 thread_name:ceph-osd
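The segfault is inside BlueFS::_replay() while the bluefs log is being replayed during mount, so the problem is likely in the bluefs metadata on the DB device rather than in RocksDB itself. A hedged next step (assuming the Mimic ceph-bluestore-tool and that the OSD service is stopped) is to inspect the device labels and run an offline consistency check before trying anything destructive:

    # Hedged sketch: verify the device labels still look sane.
    ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-8/block
    ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-8/block.db

    # Offline consistency check of the OSD's bluestore/bluefs metadata.
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-8

    # Only if fsck reports errors it knows how to fix:
    # ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-8

If fsck crashes in the same place, that points even more strongly at damaged bluefs metadata on block.db.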
On Mon, Feb 25, 2019 at 11:06 PM Ashley Merrick <singapore@xxxxxxxxxxxxxx> wrote:
So I was able to change the perms using: chown -h ceph:ceph /var/lib/ceph/osd/ceph-6/block.db

However, now I get the following when starting the OSD, which then causes it to crash:

bluefs add_block_device bdev 2 path /var/lib/ceph/osd/ceph-8/block size 8.9 TiB
-1> 2019-02-25 15:03:51.990 7f26d4777240 1 bluefs mount
0> 2019-02-25 15:03:52.006 7f26d4777240 -1 *** Caught signal (Segmentation fault) **
in thread 7f26d4777240 thread_name:ceph-osd

ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)
1: (()+0x9414c0) [0x55a8fc0b54c0]
2: (()+0x12dd0) [0x7f26d5da8dd0]
3: (BlueFS::_replay(bool, bool)+0x11ce) [0x55a8fc079e6e]
4: (BlueFS::mount()+0xff) [0x55a8fc07d16f]
5: (BlueStore::_open_db(bool, bool)+0x81c) [0x55a8fbfa9c3c]
6: (BlueStore::_mount(bool, bool)+0x1a3) [0x55a8fbfd04a3]
7: (OSD::init()+0x27d) [0x55a8fbbc250d]
8: (main()+0x30a2) [0x55a8fba9ceb2]
9: (__libc_start_main()+0xeb) [0x7f26d567a09b]
10: (_start()+0x2a) [0x55a8fbb685aa]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

Not sure if this just means the bluefs is corrupt or if it is something I can try and repair.

On Mon, Feb 25, 2019 at 10:15 AM Ashley Merrick <singapore@xxxxxxxxxxxxxx> wrote:

After a reboot of a node I have one particular OSD that won't boot. (Latest Mimic)

When I run "ls -lsh" in /var/lib/ceph/osd/ceph-8 I get:

0 lrwxrwxrwx 1 root root 19 Feb 25 02:09 block.db -> '/dev/sda5 /dev/sdc5'

For some reason it is trying to link block.db to two disks. If I remove the block.db link and manually create the correct link, the OSD still fails to start because the perms on the block.db file are root:root.

If I run a chown it just goes back to root:root, and the following shows in the OSD logs:

2019-02-25 02:03:21.738 7f574b2a1240 -1 bluestore(/var/lib/ceph/osd/ceph-8) _open_db /var/lib/ceph/osd/ceph-8/block.db symlink exists but target unusable: (13) Permission denied
2019-02-25 02:03:21.738 7f574b2a1240 1 bdev(0x55dbf0a56700 /var/lib/ceph/osd/ceph-8/block) close
2019-02-25 02:03:22.034 7f574b2a1240 -1 osd.8 0 OSD:init: unable to mount object store
2019-02-25 02:03:22.034 7f574b2a1240 -1 ** ERROR: osd init failed: (13) Permission denied

Thanks
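On the permissions part of this: ownership of the partition snapping back to root:root after a chown is usually udev re-applying its default rules to the device node, so fixing the symlink alone is not enough and the device it resolves to has to be covered as well. A minimal sketch, assuming /dev/sda5 (the first of the two devices the broken symlink pointed at) is the correct block.db target:

    # Hedged example: fix both the symlink and the partition it points at.
    chown -h ceph:ceph /var/lib/ceph/osd/ceph-8/block.db
    chown ceph:ceph "$(readlink -f /var/lib/ceph/osd/ceph-8/block.db)"

    # Sanity-check what the symlink resolves to before restarting the OSD.
    ls -l /var/lib/ceph/osd/ceph-8/block.db
    ls -l /dev/sda5

For the ownership to survive reboots it normally has to come from a udev rule that matches the partition (ceph-disk, for instance, ships rules keyed on the partition type GUID); if the DB partition was created by hand, a small rule in /etc/udev/rules.d is one way to make it stick.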