Re: reinstalled node with OSD

Reading my mail again, I realize it may not be clear that what I did was
reinstall the OS of a node with OSDs.
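
On a side note: if I understand the cephadm docs correctly, there is a
wrapper for exactly this case (existing OSDs on a host whose OS was
reinstalled) that makes the orchestrator re-activate them. I have not
tried it yet, so this is only what I think it should look like, with
"hobro" being the hostname of the reinstalled node:

~# ceph cephadm osd activate hobro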

On Thu, 2021-12-09 at 18:10 +0100, bbk wrote:
> Hi,
> 
> The last time I reinstalled a node with OSDs, I added the disks back
> with the following command. Unfortunately, this time I ran into an
> error.
> 
> This time the command doesn't seem to create the container. I am able
> to run `cephadm shell`, and the other daemons (mon, mgr, mds) are
> running.
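
To double-check that the container really is missing, I would list the
podman containers belonging to this cluster, roughly like below (the
fsid is the one of my cluster; I have not pasted the output here):

~# podman ps -a --filter name=6d0ecf22-9155-4684-971a-2f6cde8628c8 --format '{{.Names}} {{.Status}}'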
> 
> Is this the right way to do it? I am not sure.
> 
> 
> ~# cephadm deploy --fsid 6d0ecf22-9155-4684-971a-2f6cde8628c8 --osd-fsid 941c6cb6-6898-4aa2-a33a-cec3b6a95cf1 --name osd.9
> 
> Non-zero exit code 125 from /usr/bin/podman container inspect --format {{.State.Status}} ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8-osd-9
> /usr/bin/podman: stderr Error: error inspecting object: no such container ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8-osd-9
> Non-zero exit code 125 from /usr/bin/podman container inspect --format {{.State.Status}} ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8-osd.9
> /usr/bin/podman: stderr Error: error inspecting object: no such container ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8-osd.9
> Deploy daemon osd.9 ...
> Non-zero exit code 1 from systemctl start ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8@osd.9
> systemctl: stderr Job for ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8@osd.9.service failed because the control process exited with error code.
> systemctl: stderr See "systemctl status ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8@osd.9.service" and "journalctl -xe" for details.
> Traceback (most recent call last):
>   File "/usr/sbin/cephadm", line 8571, in <module>
>     main()
>   File "/usr/sbin/cephadm", line 8559, in main
>     r = ctx.func(ctx)
>   File "/usr/sbin/cephadm", line 1787, in _default_image
>     return func(ctx)
>   File "/usr/sbin/cephadm", line 4549, in command_deploy
>     ports=daemon_ports)
>   File "/usr/sbin/cephadm", line 2677, in deploy_daemon
>     c, osd_fsid=osd_fsid, ports=ports)
>   File "/usr/sbin/cephadm", line 2906, in deploy_daemon_units
>     call_throws(ctx, ['systemctl', 'start', unit_name])
>   File "/usr/sbin/cephadm", line 1467, in call_throws
>     raise RuntimeError('Failed command: %s' % ' '.join(command))
> RuntimeError: Failed command: systemctl start ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8@osd.9
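
As the error output suggests, I guess my next step is to look at the
unit logs for that OSD. This is roughly what I plan to run (output not
shown here):

~# systemctl status ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8@osd.9.service
~# journalctl -xeu ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8@osd.9.service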
> 
> 
> ~# cephadm ceph-volume lvm list
> 
> ====== osd.9 =======
> 
>   [block]       /dev/ceph-07fa2bb7-628f-40c0-8725-0266926371c0/osd-block-941c6cb6-6898-4aa2-a33a-cec3b6a95cf1
> 
>       block device              /dev/ceph-07fa2bb7-628f-40c0-8725-0266926371c0/osd-block-941c6cb6-6898-4aa2-a33a-cec3b6a95cf1
>       block uuid                mVEhfF-LK4E-Dtmb-Jj23-tn8x-lpLy-KiUy1a
>       cephx lockbox secret      
>       cluster fsid              6d0ecf22-9155-4684-971a-2f6cde8628c8
>       cluster name              ceph
>       crush device class        None
>       encrypted                 0
>       osd fsid                  941c6cb6-6898-4aa2-a33a-cec3b6a95cf1
>       osd id                    9
>       type                      block
>       vdo                       0
>       devices                   /dev/sdd
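
Since ceph-volume still sees the OSD, I also want to compare that with
what cephadm itself knows about the daemons on this host. If I am not
mistaken, this should list them (output omitted):

~# cephadm ls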
> 
> 
> ~# podman --version
> podman version 3.2.3
> 
> 
> ~# cephadm version
> Using recent ceph image
> quay.io/ceph/ceph@sha256:2f7f0af8663e73a422f797de605e769ae44eb0297f2a79324739404cc1765728
> ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
> 
> 
> ~# lsb_release -a
> LSB Version:    :core-4.1-amd64:core-4.1-noarch
> Distributor ID: RedHatEnterprise
> Description:    Red Hat Enterprise Linux release 8.5 (Ootpa)
> Release:        8.5
> Codename:       Ootpa
> 
> 
> ~# cephadm shell
> Inferring fsid 6d0ecf22-9155-4684-971a-2f6cde8628c8
> Using recent ceph image
> quay.io/ceph/ceph@sha256:2f7f0af8663e73a422f797de605e769ae44eb0297f2a79324739404cc1765728
> [ceph: root@hobro /]# 
> 
> 
> Yours,
> bbk
> 


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



