Re: restore failed ceph cluster

Sorry, please see below. The commands tried were:

cephadm shell
ceph status -> does not respond
ceph-volume lvm activate --all
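
A `ceph status` that never returns usually means the client cannot reach a
monitor in quorum, so it may be worth confirming that the cephadm-managed mon
container is actually running on the host before activating OSDs. A minimal
sketch, run on the host rather than inside `cephadm shell`, assuming the fsid
and the mon.ceph01 daemon shown in the transcript below:

# List the daemons cephadm knows about on this host and their state.
cephadm ls

# Check the mon's systemd unit directly (fsid taken from the transcript below).
systemctl status ceph-7131bb42-7f7a-11eb-9b5e-0c9d92c47572@mon.ceph01.service

# Look at the mon's recent log output.
cephadm logs --name mon.ceph01

The full transcript of the activation attempt follows:
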
root@ceph01 /usr/bin # cephadm shell
Inferring fsid 7131bb42-7f7a-11eb-9b5e-0c9d92c47572
Inferring config
/var/lib/ceph/7131bb42-7f7a-11eb-9b5e-0c9d92c47572/mon.ceph01/config
Using recent ceph image ceph/ceph@sha256:7bda5ef5bf4c06e8b720afefe24b22dd0fe2fdf7f3c34da265dc9238578563ff
root@ceph01:/# ceph status
^CCluster connection aborted
root@ceph01:/# ^C
root@ceph01:/# ceph-volume lvm activate --all
--> Activating OSD ID 10 FSID 36e617a4-f9f1-4f05-9ef2-6d0dc3249883
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-10
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-10
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-10aa546b-f286-48b4-a237-90ccae02fb69/osd-block-36e617a4-f9f1-4f05-9ef2-6d0dc3249883 --path /var/lib/ceph/osd/ceph-10 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-10aa546b-f286-48b4-a237-90ccae02fb69/osd-block-36e617a4-f9f1-4f05-9ef2-6d0dc3249883 /var/lib/ceph/osd/ceph-10/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-10/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-14
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-10
Running command: /usr/bin/systemctl enable ceph-volume@lvm-10-36e617a4-f9f1-4f05-9ef2-6d0dc3249883
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-10-36e617a4-f9f1-4f05-9ef2-6d0dc3249883.service -> /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@10
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@10.service -> /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@10
 stderr: Failed to connect to bus: No such file or directory
-->  RuntimeError:
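
The "Failed to connect to bus: No such file or directory" line is systemctl
failing because it is being run inside the `cephadm shell` container, where
there is no systemd instance to talk to, so `ceph-volume lvm activate --all`
aborts at the point where it tries to start ceph-osd@10. A minimal sketch of
one way around this, not a verified fix, assuming a cephadm-managed cluster
with the fsid and OSD 10 from the transcript above:

# Run the activation via cephadm from the host, and skip the systemd calls
# that cannot work from inside a container.
cephadm ceph-volume -- lvm activate --all --no-systemd

# Then start the cephadm-managed OSD unit on the host itself
# (assumes cephadm previously created this unit on this host).
systemctl start ceph-7131bb42-7f7a-11eb-9b5e-0c9d92c47572@osd.10.service

Even with the OSDs up, `ceph status` will keep hanging until a monitor quorum
is reachable again, so the mon side probably needs attention first.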





On Thu, Dec 9, 2021 at 3:56 PM Boris Behrens <bb@xxxxxxxxx> wrote:

> Hi Soan,
> does `ceph status` work?
>
> Did you use ceph-volume to initially create the OSDs (we only use this
> tool and create LVM OSDs)? If yes, you might bring the OSDs back up with
> `ceph-volume lvm activate --all`
>
> Cheers
>  Boris
>
> On Thu, Dec 9, 2021 at 13:48, Mini Serve <soanican@xxxxxxxxx> wrote:
>
>> Hi,
>> We have a 3-node Ceph cluster installation.
>>
>> One of them, node-3, had a system failure (the OS boot disk failed), so the
>> OS was reinstalled. The other physical drives, where the OSDs live, are
>> just fine. We also installed Ceph on node-3 and copied the SSH keys to
>> node-3 and vice versa.
>>
>> The GUI does not respond. On the master node, node-1, ceph-admin can be
>> started but does not respond to any command (a reboot did not help).
>>
>> How shall we proceed? Any assistance appreciated.
>>
>> Regards,
>> Soan
>
>
> --
> The "UTF-8 problems" self-help group will, as an exception, meet in the
> large hall this time.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



