Re: osd won't restart


 



I didn't get any reply on this issue, so I tried a few things:
- I removed AppArmor (Ubuntu, right...)
- I restarted the server
- I set the OSD service to unmanaged: 'ceph orch set-unmanaged osd.all-available-devices' (the service kept interfering when I wanted to create LVs)
- I created an LV on a disk
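
For reference, the steps above correspond roughly to the following (from memory; the service name and device are my setup's, adjust to yours):

```shell
# Stop the orchestrator from auto-claiming new devices
# (service name osd.all-available-devices as created by cephadm)
ceph orch set-unmanaged osd.all-available-devices

# Create a PV/VG/LV on the new disk (/dev/sdc is an example)
pvcreate /dev/sdc
vgcreate vgsdc /dev/sdc
lvcreate -l 100%FREE -n hvs005_sdc vgsdc
```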

Then, when creating the OSD, I get this:
ceph orch daemon add osd hvs005:/dev/vgsdc/hvs005_sdc
Created no osd(s) on host hvs005; already created?
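
As far as I understand, "already created?" usually means ceph-volume found leftover LVM/OSD metadata on the device or a stale entry in the cluster. A way to check and, if the data on the disk is disposable, wipe it (device name is an example, and zap is destructive!):

```shell
# List devices as the orchestrator sees them, with rejection reasons
ceph orch device ls hvs005 --wide

# Show leftover ceph-volume metadata on the host (run on hvs005)
cephadm ceph-volume lvm list

# If the disk holds no needed data, zap it so it becomes available again
# WARNING: destroys everything on the device
ceph orch device zap hvs005 /dev/sdc --force
```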

I removed the config keys of the first available OSD number, and the auth entry as well... No luck...
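
In case it helps, the cleanup I attempted looked roughly like this (OSD id 36 taken from the log quoted below; adjust to yours):

```shell
# Remove the auth key and the OSD entry left behind by the failed attempt
ceph auth rm osd.36
ceph osd crush rm osd.36
ceph osd rm 36

# Look for per-OSD config keys that may have been created
ceph config-key ls | grep osd.36
```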

Can someone give me some pointers on how to continue creating OSDs?

Note: mine is a simple setup deployed with cephadm and Docker...

> -----Oorspronkelijk bericht-----
> Van: Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>
> Verzonden: maandag 20 januari 2025 11:59
> Aan: ceph-users@xxxxxxx
> Onderwerp:  osd won't restart
>
> Hi,
>
> Strange thing just happened (ceph v19.2.0). I added two disks to a host.
> The kernel recognized the two disks nicely, and they appeared as available
> devices in Ceph.
>
> After 15 minutes no OSDs had been created, so I looked at the logs:
> /usr/bin/docker: stderr --> Creating keyring file for osd.36
> /usr/bin/docker: stderr Running command: /usr/bin/chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-36/keyring
> /usr/bin/docker: stderr Running command: /usr/bin/chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-36/
> /usr/bin/docker: stderr Running command: /usr/bin/ceph-osd --cluster ceph
> --osd-objectstore bluestore --mkfs -i 36 --monmap /var/lib/ceph/osd/ceph-
> 36/activate.monmap --keyfile - --osdspec-affinity all-available-devices --osd-
> data /var/lib/ceph/osd/ceph-36/ --osd-uuid 41675779-943d-4dca-baa3-
> 3a4f6ace004a --setuser ceph --setgroup ceph
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:01.979+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36//block) _read_bdev_label unable to
> decode label /var/lib/ceph/osd/ceph-36//block at offset 102: void
> bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator
> &) decode past end of struct encoding: Malformed input [buffer:3]
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:01.979+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36//block) _read_bdev_label unable to
> decode label /var/lib/ceph/osd/ceph-36//block at offset 102: void
> bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator
> &) decode past end of struct encoding: Malformed input [buffer:3]
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:01.980+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36//block) _read_bdev_label unable to
> decode label /var/lib/ceph/osd/ceph-36//block at offset 102: void
> bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator
> &) decode past end of struct encoding: Malformed input [buffer:3]
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:01.980+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36/) _read_fsid unparsable uuid
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:02.075+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36//block) _read_bdev_label failed to
> open /var/lib/ceph/osd/ceph-36//block: (13) Permission denied
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:02.076+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36//block) _read_bdev_label failed to
> open /var/lib/ceph/osd/ceph-36//block: (13) Permission denied
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:02.076+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36//block) _read_bdev_label failed to
> open /var/lib/ceph/osd/ceph-36//block: (13) Permission denied
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:02.076+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36//block) _read_bdev_label failed to
> open /var/lib/ceph/osd/ceph-36//block: (13) Permission denied
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:02.076+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36//block) _read_bdev_label failed to
> open /var/lib/ceph/osd/ceph-36//block: (13) Permission denied
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:02.076+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36//block) _read_bdev_label failed to
> open /var/lib/ceph/osd/ceph-36//block: (13) Permission denied
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:02.076+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36//block) _read_bdev_label failed to
> open /var/lib/ceph/osd/ceph-36//block: (13) Permission denied
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:02.506+0000 79e9fea34640
> -1 bluestore(/var/lib/ceph/osd/ceph-36//block) _read_bdev_label failed to
> open /var/lib/ceph/osd/ceph-36//block: (13) Permission denied
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:02.506+0000 79e9fea34640
> -1 bdev(0x566a21f14a80 /var/lib/ceph/osd/ceph-36//block) open open got:
> (13) Permission denied
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:02.506+0000 79e9fea34640
> -1 OSD::mkfs: ObjectStore::mkfs failed with error (13) Permission denied
> /usr/bin/docker: stderr  stderr: 2025-01-20T08:21:02.506+0000 79e9fea34640
> -1 [0;31m ** ERROR: error creating empty object store in
> /var/lib/ceph/osd/ceph-36/: (13) Permission denied
> /usr/bin/docker: stderr --> Was unable to complete a new OSD, will rollback
> changes
>
> As it said "Permission denied" and I already have OSDs running, I thought the
> issue might be that Docker had been updated but not restarted. So I ran
> 'systemctl restart docker.service'. Now none of the managed OSDs come
> back online!!!
> 'systemctl start ceph-dd4b0610-b4d2-11ec-bb58-
> d1b32ae31585@osd.18.service<mailto:ceph-dd4b0610-b4d2-11ec-bb58-
> d1b32ae31585@osd.18.service>' fails with not much explanation...
>
> Only the unmanaged OSDs have no issue...
>
> I didn't pay much attention to the log entry '_read_fsid unparsable uuid'...
> So I think there is more going on. "Permission denied" would be logical if the
> path were wrong... note the double slash in '_read_bdev_label failed to open
> /var/lib/ceph/osd/ceph-36//block'
> Is this a bug like https://github.com/rook/rook/issues/10219 ?
>
> Can I get around this without recreating these OSDs as unmanaged?
>
> Thanks in advance.
>
> Dominique.
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


