Oops, I said CentOS 5 (old habit, ran it for years!). I meant CentOS 7. And
I'm running the following Ceph package versions from the ceph repo:

[root@ceph02 ~]# rpm -qa | grep -i ceph
libcephfs1-10.2.3-0.el7.x86_64
ceph-common-10.2.3-0.el7.x86_64
ceph-mon-10.2.3-0.el7.x86_64
ceph-release-1-1.el7.noarch
python-cephfs-10.2.3-0.el7.x86_64
ceph-selinux-10.2.3-0.el7.x86_64
ceph-osd-10.2.3-0.el7.x86_64
ceph-mds-10.2.3-0.el7.x86_64
ceph-radosgw-10.2.3-0.el7.x86_64
ceph-base-10.2.3-0.el7.x86_64
ceph-10.2.3-0.el7.x86_64

On Mon, Oct 03, 2016 at 03:34:50PM PDT, Tracy Reed spake thusly:
> Hello all,
>
> Over the past few weeks I've been trying to go through the Quick Ceph
> Deploy tutorial at:
>
> http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/
>
> just trying to get a basic 2 OSD ceph cluster up and running. Everything
> seems to go well until I get to the:
>
> ceph-deploy osd activate ceph02:/dev/sdc ceph03:/dev/sdc
>
> part. It never actually seems to activate the OSD and eventually times out:
>
> [ceph02][DEBUG ] connection detected need for sudo
> [ceph02][DEBUG ] connected to host: ceph02
> [ceph02][DEBUG ] detect platform information from remote host
> [ceph02][DEBUG ] detect machine type
> [ceph02][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
> [ceph_deploy.osd][DEBUG ] activating host ceph02 disk /dev/sdc
> [ceph_deploy.osd][DEBUG ] will use init type: systemd
> [ceph02][DEBUG ] find the location of an executable
> [ceph02][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc
> [ceph02][WARNIN] main_activate: path = /dev/sdc
> [ceph02][WARNIN] No data was received after 300 seconds, disconnecting...
> [ceph02][INFO ] checking OSD status...
> [ceph02][DEBUG ] find the location of an executable
> [ceph02][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
> [ceph02][INFO ] Running command: sudo systemctl enable ceph.target
> [ceph03][DEBUG ] connection detected need for sudo
> [ceph03][DEBUG ] connected to host: ceph03
> [ceph03][DEBUG ] detect platform information from remote host
> [ceph03][DEBUG ] detect machine type
> [ceph03][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
> [ceph_deploy.osd][DEBUG ] activating host ceph03 disk /dev/sdc
> [ceph_deploy.osd][DEBUG ] will use init type: systemd
> [ceph03][DEBUG ] find the location of an executable
> [ceph03][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc
> [ceph03][WARNIN] main_activate: path = /dev/sdc
> [ceph03][WARNIN] No data was received after 300 seconds, disconnecting...
> [ceph03][INFO ] checking OSD status...
> [ceph03][DEBUG ] find the location of an executable
> [ceph03][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
> [ceph03][INFO ] Running command: sudo systemctl enable ceph.target
>
> Machines involved are ceph-deploy (deploy server), ceph01 (monitor), ceph02
> and ceph03 (OSD servers).
>
> ceph log is here:
>
> http://pastebin.com/A2kP28c4
>
> This is CentOS 5. iptables and selinux are both off. When I first started
> doing this, the volume would be left mounted in the tmp location on the
> OSDs. But I have since upgraded my version of ceph, and now nothing is left
> mounted on the OSD but it still times out.
>
> Please let me know if there is any other info I can provide which might
> help. Any help you can offer is greatly appreciated! I've been stuck on
> this for weeks. Thanks!
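If it would help, I can run the same activate step by hand on one of the OSD hosts and post the output. Something like the following (device path taken from my setup above) should show where it hangs without ceph-deploy's 300-second disconnect cutting off the verbose output:

```shell
# Run the exact command ceph-deploy invokes, directly on ceph02:
sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc

# How ceph-disk sees the disk and its partitions:
sudo ceph-disk list

# Check whether anything got mounted or partially prepared:
mount | grep -i ceph
sudo sgdisk -p /dev/sdc
```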
>
> --
> Tracy Reed
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

--
Tracy Reed