OSD not starting after being mounted with ceph-objectstore-tool --op fuse

Hello,

I have a problem with an OSD that no longer starts after its data store was mounted offline using ceph-objectstore-tool --op fuse.
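
For reference, the fuse mount was done with a command along these lines (run as root with the OSD stopped, from a cephadm shell for that OSD; the mountpoint path here is just an example):

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fuse --mountpoint /mnt/osd.0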

ceph orch ps now shows the OSD in an error state:

osd.0                   storage1               error 2m ago   5h        -    4096M  <unknown> <unknown>     <unknown>


Checking the logs on the node, I can see the following messages in the system journal:

Sep 21 10:26:13 storage1 systemd[1]: Started Ceph osd.0 for 82eb0cee-583a-11ee-b10b-abe63a69ab28.
Sep 21 10:26:14 storage1 bash[50983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 21 10:26:14 storage1 bash[50983]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph--aac54f64--d2a7--42e6>
Sep 21 10:26:14 storage1 bash[50983]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph--aac54f64--d2a7--42e6--a09d--1373e3524414-osd--block--57cfd62d--ae4d--4cae--8c64--be255837>
Sep 21 10:26:14 storage1 bash[50983]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Sep 21 10:26:14 storage1 bash[50983]: Running command: /usr/bin/ln -s /dev/mapper/ceph--aac54f64--d2a7--42e6--a09d--1373e3524414-osd--block--57cfd62d--ae4d--4cae--8c64--be25583728fa /var/lib>
Sep 21 10:26:14 storage1 bash[50983]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 21 10:26:14 storage1 bash[50983]: --> ceph-volume raw activate successful for osd ID: 0
Sep 21 10:26:14 storage1 bash[51214]: debug 2023-09-21T10:26:14.607+0000 7f91c87cd540  0 set uid:gid to 167:167 (ceph:ceph)
Sep 21 10:26:14 storage1 bash[51214]: debug 2023-09-21T10:26:14.607+0000 7f91c87cd540  0 ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable), process ceph-osd, pid>
Sep 21 10:26:14 storage1 bash[51214]: debug 2023-09-21T10:26:14.607+0000 7f91c87cd540  0 pidfile_write: ignore empty --pid-file
Sep 21 10:26:14 storage1 bash[51214]: debug 2023-09-21T10:26:14.611+0000 7f91c87cd540  1 bdev(0x55d79b319400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 21 10:26:14 storage1 bash[51214]: debug 2023-09-21T10:26:14.611+0000 7f91c87cd540  1 bdev(0x55d79b319400 /var/lib/ceph/osd/ceph-0/block) open size 10733223936 (0x27fc00000, 10 GiB) block>
Sep 21 10:26:14 storage1 bash[51214]: debug 2023-09-21T10:26:14.611+0000 7f91c87cd540  1 bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Sep 21 10:26:14 storage1 bash[51214]: debug 2023-09-21T10:26:14.611+0000 7f91c87cd540  1 bdev(0x55d79b318c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 21 10:26:14 storage1 bash[51214]: debug 2023-09-21T10:26:14.611+0000 7f91c87cd540  1 bdev(0x55d79b318c00 /var/lib/ceph/osd/ceph-0/block) open size 10733223936 (0x27fc00000, 10 GiB) block>
Sep 21 10:26:14 storage1 bash[51214]: debug 2023-09-21T10:26:14.611+0000 7f91c87cd540  1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 10 GiB
Sep 21 10:26:14 storage1 bash[51214]: debug 2023-09-21T10:26:14.611+0000 7f91c87cd540  1 bdev(0x55d79b318c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 21 10:26:14 storage1 bash[51214]: debug 2023-09-21T10:26:14.899+0000 7f91c87cd540  1 bdev(0x55d79b319400 /var/lib/ceph/osd/ceph-0/block) close
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.143+0000 7f91c87cd540  0 starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.143+0000 7f91c87cd540 -1 Falling back to public interface
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.175+0000 7f91c87cd540  0 load: jerasure load: lrc
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.175+0000 7f91c87cd540  1 bdev(0x55d79c120000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.179+0000 7f91c87cd540 -1 bdev(0x55d79c120000 /var/lib/ceph/osd/ceph-0/block) open open got: (13) Permission denied
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.179+0000 7f91c87cd540  1 bdev(0x55d79c120000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.179+0000 7f91c87cd540 -1 bdev(0x55d79c120000 /var/lib/ceph/osd/ceph-0/block) open open got: (13) Permission denied
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.179+0000 7f91c87cd540  1 mClockScheduler: set_max_osd_capacity #op shards: 5 max osd capacity(iops) per shard: 63.00
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.179+0000 7f91c87cd540  1 mClockScheduler: set_osd_mclock_cost_per_io osd_mclock_cost_per_io: 0.0114000
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.179+0000 7f91c87cd540  1 mClockScheduler: set_osd_mclock_cost_per_byte osd_mclock_cost_per_byte: 0.0000026
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  1 mClockScheduler: set_mclock_profile mclock profile: high_client_ops
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  1 mClockScheduler: set_max_osd_capacity #op shards: 5 max osd capacity(iops) per shard: 63.00
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  0 osd.0:0.OSDShard using op scheduler mClockScheduler
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  1 bdev(0x55d79c120000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540 -1 bdev(0x55d79c120000 /var/lib/ceph/osd/ceph-0/block) open open got: (13) Permission denied
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  1 mClockScheduler: set_max_osd_capacity #op shards: 5 max osd capacity(iops) per shard: 63.00
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  1 mClockScheduler: set_osd_mclock_cost_per_io osd_mclock_cost_per_io: 0.0114000
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  1 mClockScheduler: set_osd_mclock_cost_per_byte osd_mclock_cost_per_byte: 0.0000026
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  1 mClockScheduler: set_mclock_profile mclock profile: high_client_ops
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  0 osd.0:1.OSDShard using op scheduler mClockScheduler
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  1 bdev(0x55d79c120000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540 -1 bdev(0x55d79c120000 /var/lib/ceph/osd/ceph-0/block) open open got: (13) Permission denied
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  1 mClockScheduler: set_max_osd_capacity #op shards: 5 max osd capacity(iops) per shard: 63.00
Sep 21 10:26:15 storage1 bash[51214]: debug 2023-09-21T10:26:15.183+0000 7f91c87cd540  1 mClockScheduler: set_osd_mclock_cost_per_io osd_mclock_cost_per_io: 0.0114000
Sep 21 10:26:15 storage1 systemd[1]: ceph-82eb0cee-583a-11ee-b10b-abe63a69ab28@osd.0.service: Main process exited, code=exited, status=1/FAILURE
Sep 21 10:26:16 storage1 systemd[1]: ceph-82eb0cee-583a-11ee-b10b-abe63a69ab28@osd.0.service: Failed with result 'exit-code'.
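
All of the failures above are "(13) Permission denied" when opening /var/lib/ceph/osd/ceph-0/block, so my guess is that running ceph-objectstore-tool as root changed the ownership of the block device, or that the fuse mount is still active, even though the ceph-volume activation above does chown the device back to ceph:ceph. For illustration, this is roughly what I would check on the node (paths as on my node; output omitted):

ls -l /dev/dm-0       # ownership of the device-mapper node backing the OSD
mount | grep fuse     # whether the ceph-objectstore-tool fuse mount is still mounted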

Any idea how to bring the OSD back into the cluster?

Thank you,
Laszlo
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



