Re: Can't activate OSD

Hi,

I ran into similar problems on CentOS 7 - it looks like a race
condition with parted.
Upgrading parted to 3.2 (from the 3.1 in the CentOS 7 base repo) solved it for me:

rpm -Uhv ftp://195.220.108.108/linux/fedora/linux/updates/22/x86_64/p/parted-3.2-16.fc22.x86_64.rpm
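If you want to double-check the installed parted version before retrying the activate step, something like the sketch below should work. Note the version_ge helper is my own, not part of any Ceph or parted tooling:

```shell
#!/bin/sh
# version_ge A B: succeeds when version A >= version B (relies on GNU sort -V,
# which is present on CentOS 7).
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Only query parted if it is installed on this host.
if command -v parted >/dev/null 2>&1; then
    ver=$(parted --version | awk 'NR==1{print $NF}')
    if version_ge "$ver" 3.2; then
        echo "parted $ver looks new enough"
    else
        echo "parted $ver is older than 3.2 - upgrade before retrying"
    fi
fi
```

After upgrading, re-run the activate from the admin node and see if it still hangs.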

Stas


On Mon, Oct 3, 2016 at 6:39 PM, Tracy Reed <treed@xxxxxxxxxxxxxxx> wrote:
> Oops, I said CentOS 5 (old habit, ran it for years!). I meant CentOS 7. And I'm
> running the following Ceph package versions from the ceph repo:
>
> [root@ceph02 ~]# rpm -qa |grep -i ceph
> libcephfs1-10.2.3-0.el7.x86_64
> ceph-common-10.2.3-0.el7.x86_64
> ceph-mon-10.2.3-0.el7.x86_64
> ceph-release-1-1.el7.noarch
> python-cephfs-10.2.3-0.el7.x86_64
> ceph-selinux-10.2.3-0.el7.x86_64
> ceph-osd-10.2.3-0.el7.x86_64
> ceph-mds-10.2.3-0.el7.x86_64
> ceph-radosgw-10.2.3-0.el7.x86_64
> ceph-base-10.2.3-0.el7.x86_64
> ceph-10.2.3-0.el7.x86_64
>
> On Mon, Oct 03, 2016 at 03:34:50PM PDT, Tracy Reed spake thusly:
>> Hello all,
>>
>> Over the past few weeks I've been trying to go through the Quick Ceph Deploy tutorial at:
>>
>> http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/
>>
>> just trying to get a basic 2 OSD ceph cluster up and running. Everything seems
>> to go well until I get to the:
>>
>> ceph-deploy osd activate ceph02:/dev/sdc ceph03:/dev/sdc
>>
>> part. It never actually seems to activate the OSD and eventually times out:
>>
>> [ceph02][DEBUG ] connection detected need for sudo
>> [ceph02][DEBUG ] connected to host: ceph02
>> [ceph02][DEBUG ] detect platform information from remote host
>> [ceph02][DEBUG ] detect machine type
>> [ceph02][DEBUG ] find the location of an executable
>> [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
>> [ceph_deploy.osd][DEBUG ] activating host ceph02 disk /dev/sdc
>> [ceph_deploy.osd][DEBUG ] will use init type: systemd
>> [ceph02][DEBUG ] find the location of an executable
>> [ceph02][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc
>> [ceph02][WARNIN] main_activate: path = /dev/sdc
>> [ceph02][WARNIN] No data was received after 300 seconds, disconnecting...
>> [ceph02][INFO  ] checking OSD status...
>> [ceph02][DEBUG ] find the location of an executable
>> [ceph02][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
>> [ceph02][INFO  ] Running command: sudo systemctl enable ceph.target
>> [ceph03][DEBUG ] connection detected need for sudo
>> [ceph03][DEBUG ] connected to host: ceph03
>> [ceph03][DEBUG ] detect platform information from remote host
>> [ceph03][DEBUG ] detect machine type
>> [ceph03][DEBUG ] find the location of an executable
>> [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
>> [ceph_deploy.osd][DEBUG ] activating host ceph03 disk /dev/sdc
>> [ceph_deploy.osd][DEBUG ] will use init type: systemd
>> [ceph03][DEBUG ] find the location of an executable
>> [ceph03][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc
>> [ceph03][WARNIN] main_activate: path = /dev/sdc
>> [ceph03][WARNIN] No data was received after 300 seconds, disconnecting...
>> [ceph03][INFO  ] checking OSD status...
>> [ceph03][DEBUG ] find the location of an executable
>> [ceph03][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
>> [ceph03][INFO  ] Running command: sudo systemctl enable ceph.target
>>
>> Machines involved are ceph-deploy (deploy server), ceph01 (monitor), ceph02 and
>> ceph03 (OSD servers).
>>
>> ceph log is here:
>>
>> http://pastebin.com/A2kP28c4
>>
>> This is CentOS 5. iptables and selinux are both off. When I first started doing
>> this the volume would be left mounted in the tmp location on the OSDs. But I
>> have since upgraded my version of ceph and now nothing is left mounted on the
>> OSD but it still times out.
>>
>> Please let me know if there is any other info I can provide which might help.
>> Any help you can offer is greatly appreciated! I've been stuck on this for
>> weeks. Thanks!
>>
>> --
>> Tracy Reed
>
>
>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> --
> Tracy Reed
>
>
