Hi,
I've recently deployed Ceph across four machines (one admin node and three OSD nodes), following the architecture from ceph.com/qsg and creating the OSD data directories in /var/local. Preparing the OSDs succeeds, but when I try to activate them the command fails, repeatedly printing .fault messages until it times out. It is worth noting that I followed the QSG to the letter. The commands I ran are summarized below, followed by the full log; I would appreciate any advice.
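For reference, these are (in essence) the commands I ran from the admin node, as given in the QSG; node2 and /var/local/osd0 are the same names that appear in the log:

  ssh node2 "sudo mkdir /var/local/osd0"          # create the OSD data directory on the node
  ceph-deploy osd prepare node2:/var/local/osd0   # this completes without errors
  ceph-deploy osd activate node2:/var/local/osd0  # this is the step that fails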
Thanks, James
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/admin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy osd prepare node2:/var/local/osd0
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('node2', '/var/local/osd0', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1150ab8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x1145668>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node2:/var/local/osd0:
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to node2
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node2][WARNING] osd keyring does not exist yet, creating one
[node2][DEBUG ] create a keyring file
[node2][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host node2 disk /var/local/osd0 journal None activate False
[node2][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type xfs -- /var/local/osd0
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[node2][WARNING] DEBUG:ceph-disk:Preparing osd data dir /var/local/osd0
[node2][INFO ] checking OSD status...
[node2][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/admin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy osd activate node2:/var/local/osd0
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0xfe5ab8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function osd at 0xfda668>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('node2', '/var/local/osd0', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node2:/var/local/osd0:
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] activating host node2 disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[node2][INFO ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /var/local/osd0
[node2][WARNING] DEBUG:ceph-disk:Cluster uuid is 35775c2a-c76d-461f-bf34-93b0aaa44f2b
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node2][WARNING] DEBUG:ceph-disk:Cluster name is ceph
[node2][WARNING] DEBUG:ceph-disk:OSD uuid is 15335aa1-81a1-49a7-b657-d657039f01fd
[node2][WARNING] DEBUG:ceph-disk:Allocating OSD id...
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 15335aa1-81a1-49a7-b657-d657039f01fd
[node2][WARNING] 2015-11-06 02:18:43.120617 7f819c226700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f8190000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8190004ef0).fault
[node2][WARNING] 2015-11-06 02:18:46.126432 7f819c327700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f81900081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f819000c450).fault
[node2][WARNING] 2015-11-06 02:18:49.132066 7f819c226700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f8190000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8190006610).fault
...
... this repeats itself
...
[node2][WARNING] 2015-11-06 02:18:52.138230 7f819c327700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f81900081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8190016fa0).fault
[node2][WARNING] 2015-11-06 02:23:34.971917 7f819c327700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f8190007c10 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f81900120e0).fault
[node2][WARNING] 2015-11-06 02:23:38.340389 7f819c226700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f81900008c0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8190016fa0).fault
[node2][WARNING] 2015-11-06 02:23:40.117327 7f819ea9c700 0 monclient(hunting): authenticate timed out after 300
[node2][WARNING] 2015-11-06 02:23:40.117766 7f819ea9c700 0 librados: client.bootstrap-osd authentication error (110) Connection timed out
[node2][WARNING] Error connecting to cluster: TimedOut
[node2][WARNING] ceph-disk: Error: ceph osd create failed: Command '/usr/bin/ceph' returned non-zero exit status 1:
[node2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /var/local/osd0