Re: Can't add OSD id in manual deploy

I didn't notice that. Have you tried this multiple times with the same disk? Do you see any other error messages in syslog?
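
For example, the mon log and the ceph-volume log often say more than the client does. A rough way to look, assuming the default log locations and the mon name from your status output:

sudo journalctl -u ceph-mon@bllcloudceph01 --since "1 hour ago"
sudo tail -n 100 /var/log/ceph/ceph-mon.bllcloudceph01.log
sudo grep -iE "error|timeout" /var/log/ceph/ceph-volume.log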


Quoting Joshua Schaeffer <jschaeffer@xxxxxxxxxxxxxxx>:

The OSD node is the same as the monitor and manager node at the moment and has the ceph.conf file:

user@node1:~$ ls -l /etc/ceph/
total 15
-rw------- 1 root root 151 Aug 13 15:50 ceph.client.admin.keyring
-rw-r--r-- 1 root root 432 Aug 13 16:09 ceph.conf
-rw-r--r-- 1 root root  92 Jun 30 16:44 rbdmap

On 8/14/20 10:19 AM, Eugen Block wrote:
The OSD node also needs the ceph.conf; it seems that is not the case in your setup.
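
For reference, a minimal client-side ceph.conf on the OSD node usually only needs the fsid and the mon address. A sketch using the values from your status output below (adjust to your environment):

[global]
fsid = 6aacdfe1-42e4-4106-9503-3e16e9812195
mon host = 10.2.22.1

Anything beyond that (networks, auth options, etc.) shouldn't be required just for ceph-volume to reach the mon.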


Quoting Joshua Schaeffer <jschaeffer@xxxxxxxxxxxxxxx>:

Hey all,

I'm trying to deploy Ceph 15.2.4 on Ubuntu 20.04 and I am going through the manual deploy process [1]. I was able to successfully bootstrap the monitor and manager and am now trying to add the OSDs, but the `ceph-volume` command is hanging when running `ceph osd new`. It appears the command hangs for about 30 minutes and then times out with this error:

user@node1:~$ sudo ceph-volume lvm create --data /dev/sdc
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9a305232-6b0a-4fde-9685-68fe6150da0e
 stderr: [errno 110] RADOS timed out (error connecting to the cluster)
-->  RuntimeError: Unable to create a new OSD id
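
For reference, a direct attempt with the bootstrap-osd keyring (leaving the LVM parts out) looks roughly like this. It's a sketch following the bluestore steps in the guide [1]; `--connect-timeout 10` is only there to fail fast instead of waiting half an hour:

UUID=$(uuidgen)
OSD_SECRET=$(ceph-authtool --gen-print-key)
echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
    sudo ceph --connect-timeout 10 osd new $UUID -i - \
    -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring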

I have also tried to manually run the `ceph osd new` command (roughly as sketched above), but it results in the same error. My cluster shows as up (at least the monitor and manager):

user@node1:~$ sudo ceph -s
  cluster:
    id:     6aacdfe1-42e4-4106-9503-3e16e9812195
    health: HEALTH_WARN
            1 monitors have not enabled msgr2
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum bllcloudceph01 (age 23h)
    mgr: bllcloudceph01(active, since 23h)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown

And ceph appears to be listening:

user@node1:~$ sudo ss -tupanl | grep ceph
tcp    LISTEN  0       512       10.2.22.1:6789       0.0.0.0:*      users:(("ceph-mon",pid=136244,fd=25))
tcp    LISTEN  0       512       10.2.22.1:6800       0.0.0.0:*      users:(("ceph-mgr",pid=138125,fd=25))
tcp    LISTEN  0       512       10.2.22.1:6801       0.0.0.0:*      users:(("ceph-mgr",pid=138125,fd=26))
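
One thing I notice in that output: the mon only listens on the legacy v1 port 6789, and the msgr2 port (3300 by default) doesn't show up, which matches the HEALTH_WARN above. In case that's relevant, checking and enabling it should be roughly:

sudo ceph mon dump          # does the mon advertise a v2 address?
sudo ceph mon enable-msgr2  # enable msgr2 (listens on 3300 by default)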

It seems like I'm missing something simple, but I can't spot it. Can anybody point me in the right direction?

[1] https://docs.ceph.com/docs/master/install/manual-deployment/

--
Thanks,
Joshua Schaeffer

--
Thanks,
Joshua Schaeffer

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



