It can't talk to the monitor at 192.168.107.11. Every ".fault" line in
your log is node2 opening a connection to 192.168.107.11:6789 and
getting no answer, and after five minutes the client gives up
("authenticate timed out after 300"). That almost always means a
firewall is blocking the monitor port, the monitor isn't running, or
ceph.conf on node2 points at the wrong address.
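
A quick way to confirm, from node2 (a minimal sketch; it assumes the
monitor really is the host at 192.168.107.11, and it uses bash's
built-in /dev/tcp redirection, so run it under bash):

  # From node2: is the monitor's TCP port reachable at all?
  timeout 5 bash -c 'exec 3<>/dev/tcp/192.168.107.11/6789' \
      && echo "mon port reachable" || echo "mon port blocked or closed"

  # On the monitor host: is ceph-mon actually listening on 6789?
  sudo ss -tlnp | grep 6789

If the first command reports the port blocked while ss shows ceph-mon
listening, a firewall between the nodes is the likely culprit.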
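
CentOS 7 enables firewalld by default, and the default zone drops
inbound connections to 6789, which produces exactly this kind of hang.
If that is what you're hitting, open the Ceph ports (6789/tcp for
monitors, 6800-7300/tcp for OSDs, per the Ceph network configuration
docs) on each node; adjust --zone if you're not using the default
"public" zone:

  sudo firewall-cmd --zone=public --permanent --add-port=6789/tcp
  sudo firewall-cmd --zone=public --permanent --add-port=6800-7300/tcp
  sudo firewall-cmd --reload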
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

On Sat, Nov 7, 2015 at 1:23 PM, James Gallagher wrote:
> Hi,
>
> I've recently deployed Ceph across four machines, one admin node and
> three cluster nodes, following the architecture from the ceph.com/qsg
> and making the OSD directories in /var/local. However, when I try to
> activate the OSDs, activation fails even though preparing them
> succeeds. I have provided a log of the error messages I receive and
> would appreciate any advice. I have noticed that it keeps printing new
> ".fault" messages; it may be worth noting that I have followed the QSG
> to the letter.
>
> Thanks, James
>
>
> [ceph_deploy.conf][DEBUG ] found configuration file at: /home/admin/.cephdeploy.conf
> [ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy osd prepare node2:/var/local/osd0
> [ceph_deploy.cli][INFO ] ceph-deploy options:
> [ceph_deploy.cli][INFO ]  username        : None
> [ceph_deploy.cli][INFO ]  disk            : [('node2', '/var/local/osd0', None)]
> [ceph_deploy.cli][INFO ]  dmcrypt         : False
> [ceph_deploy.cli][INFO ]  verbose         : False
> [ceph_deploy.cli][INFO ]  overwrite_conf  : False
> [ceph_deploy.cli][INFO ]  subcommand      : prepare
> [ceph_deploy.cli][INFO ]  dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
> [ceph_deploy.cli][INFO ]  quiet           : False
> [ceph_deploy.cli][INFO ]  cd_conf         :
> [ceph_deploy.cli][INFO ]  cluster         : ceph
> [ceph_deploy.cli][INFO ]  fs_type         : xfs
> [ceph_deploy.cli][INFO ]  func            : <function osd at 0x1145668>
> [ceph_deploy.cli][INFO ]  ceph_conf       : None
> [ceph_deploy.cli][INFO ]  default_release : False
> [ceph_deploy.cli][INFO ]  zap_disk        : False
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node2:/var/local/osd0:
> [node2][DEBUG ] connection detected need for sudo
> [node2][DEBUG ] connected to host: node2
> [node2][DEBUG ] detect platform information from remote host
> [node2][DEBUG ] detect machine type
> [node2][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.1.1503 Core
> [ceph_deploy.osd][DEBUG ] Deploying osd to node2
> [node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [node2][WARNING] osd keyring does not exist yet, creating one
> [node2][DEBUG ] create a keyring file
> [node2][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
> [ceph_deploy.osd][DEBUG ] Preparing host node2 disk /var/local/osd0 journal None activate False
> [node2][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type xfs -- /var/local/osd0
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
> [node2][WARNING] DEBUG:ceph-disk:Preparing osd data dir /var/local/osd0
> [node2][INFO ] checking OSD status...
> [node2][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
> [ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.
> [ceph_deploy.conf][DEBUG ] found configuration file at: /home/admin/.cephdeploy.conf
> [ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy osd activate node2:/var/local/osd0
> [ceph_deploy.cli][INFO ] ceph-deploy options:
> [ceph_deploy.cli][INFO ]  username        : None
> [ceph_deploy.cli][INFO ]  verbose         : False
> [ceph_deploy.cli][INFO ]  overwrite_conf  : False
> [ceph_deploy.cli][INFO ]  subcommand      : activate
> [ceph_deploy.cli][INFO ]  quiet           : False
> [ceph_deploy.cli][INFO ]  cd_conf         :
> [ceph_deploy.cli][INFO ]  cluster         : ceph
> [ceph_deploy.cli][INFO ]  func            : <function osd at 0xfda668>
> [ceph_deploy.cli][INFO ]  ceph_conf       : None
> [ceph_deploy.cli][INFO ]  default_release : False
> [ceph_deploy.cli][INFO ]  disk            : [('node2', '/var/local/osd0', None)]
> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node2:/var/local/osd0:
> [node2][DEBUG ] connection detected need for sudo
> [node2][DEBUG ] connected to host: node2
> [node2][DEBUG ] detect platform information from remote host
> [node2][DEBUG ] detect machine type
> [node2][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.1.1503 Core
> [ceph_deploy.osd][DEBUG ] activating host node2 disk /var/local/osd0
> [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
> [node2][INFO ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /var/local/osd0
> [node2][WARNING] DEBUG:ceph-disk:Cluster uuid is 35775c2a-c76d-461f-bf34-93b0aaa44f2b
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
> [node2][WARNING] DEBUG:ceph-disk:Cluster name is ceph
> [node2][WARNING] DEBUG:ceph-disk:OSD uuid is 15335aa1-81a1-49a7-b657-d657039f01fd
> [node2][WARNING] DEBUG:ceph-disk:Allocating OSD id...
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 15335aa1-81a1-49a7-b657-d657039f01fd
> [node2][WARNING] 2015-11-06 02:18:43.120617 7f819c226700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f8190000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8190004ef0).fault
> [node2][WARNING] 2015-11-06 02:18:46.126432 7f819c327700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f81900081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f819000c450).fault
> [node2][WARNING] 2015-11-06 02:18:49.132066 7f819c226700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f8190000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8190006610).fault
> ...
> ... this repeats itself
> ...
> [node2][WARNING] 2015-11-06 02:18:52.138230 7f819c327700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f81900081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8190016fa0).fault
> [node2][WARNING] 2015-11-06 02:23:34.971917 7f819c327700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f8190007c10 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f81900120e0).fault
> [node2][WARNING] 2015-11-06 02:23:38.340389 7f819c226700 0 -- :/1020092 >> 192.168.107.11:6789/0 pipe(0x7f81900008c0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8190016fa0).fault
> [node2][WARNING] 2015-11-06 02:23:40.117327 7f819ea9c700 0 monclient(hunting): authenticate timed out after 300
> [node2][WARNING] 2015-11-06 02:23:40.117766 7f819ea9c700 0 librados: client.bootstrap-osd authentication error (110) Connection timed out
> [node2][WARNING] Error connecting to cluster: TimedOut
> [node2][WARNING] ceph-disk: Error: ceph osd create failed: Command '/usr/bin/ceph' returned non-zero exit status 1:
> [node2][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /var/local/osd0
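
One more note: you don't need to re-run ceph-deploy to test a fix,
since any ceph command run from node2 reproduces the timeout. For
example, assuming you pushed the admin keyring to the nodes with
"ceph-deploy admin" as the QSG does (and dropping --connect-timeout if
your ceph CLI version doesn't accept it):

  # Fails fast with a timeout while the monitor is unreachable;
  # prints the cluster status once the path is open.
  sudo ceph --connect-timeout 10 -s

Once that returns status instead of timing out, re-running
"ceph-deploy osd activate node2:/var/local/osd0" should get past this
point.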